May 19, 2020
We’ve asked Joshua Perry, education technology expert and entrepreneur, to write a series of blogs about analytics and assessment. This is the second instalment, which looks at the type of data a classroom teacher needs. Joshua is on Twitter as @bringmoredata
Hopefully, my first blog in this series persuaded you that, while school analytics has occasionally earned itself a bad name, it can nonetheless play a valuable role at all levels of the school. In this blog, I’ll dive into the detail of the insights a classroom teacher can derive from data. I’ll also point out examples of analysis that are common, but not necessarily that helpful.
The questions data can answer
My starting assumption is that the main role of a teacher is to communicate a curriculum in such a way that it is learnt by students; that is to say, knowledge becomes encoded in the long-term memory. Of course, that’s not the only role of a class teacher – wellbeing matters enormously too, as does skills development, and both are things that can be measured and analysed. But if we accept that initial premise, it follows that a teacher’s biggest decisions relate to what to teach (or reteach), and when, so that’s my focus for this blog. The most useful analysis will therefore help to answer questions like:
- Have students understood the things I’ve just taught them?
- Have students understood the things I taught them a while ago?
- Do I need to know other things about students to plan their learning?
Have students understood the things I’ve just taught them?
The most actionable data comes from short, focused, formative quizzes, and the best ones may be designed by you to match exactly what you’ve taught. A high success rate isn’t a bad thing, provided the questions relate accurately to what was taught – if everyone gets 100% on a vocab test, that’s good! You can then move on to teaching other areas with confidence.
Have students understood the things I taught them a while ago?
For this, you still want those short, formative quizzes, but with questions spaced out over the weeks and months after the content was first taught (a technique known as spaced repetition). Daisy Christodoulou talks about this at length in her excellent new book Teachers vs Tech, which concludes chapter 4 as follows:
“We need digital quizzes that build up to a complex skill, making it easier for students to learn.”
–Daisy Christodoulou, ‘Teachers vs Tech’
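To make the idea of spacing concrete, here is a minimal sketch of how a quiz tool might schedule follow-up questions on a topic at expanding intervals. The interval values (2, 7, 21 and 60 days) are illustrative assumptions of my own, not figures taken from the book or from any particular product.

```python
from datetime import date, timedelta

# Hypothetical expanding gaps (in days) between reviews of a topic.
# Real spacing schedules vary; these numbers are purely illustrative.
REVIEW_INTERVALS = [2, 7, 21, 60]

def review_dates(taught_on: date, intervals=REVIEW_INTERVALS) -> list:
    """Return the dates on which quiz questions for a topic could reappear."""
    return [taught_on + timedelta(days=d) for d in intervals]

if __name__ == "__main__":
    # A topic taught on 4 May would be quizzed again on 6 May, 11 May, 25 May and 3 July.
    for d in review_dates(date(2020, 5, 4)):
        print(d.isoformat())
```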
Additionally, you’re now also interested in longer summative tests that assess a student’s understanding over a broader curriculum, as well as how concepts interlink. Summative assessments therefore have slightly different properties, like multi-mark questions and a wider spread of test scores to sample the full range of knowledge levels within a class. Well-designed summative tests incorporate knowledge from across the key stage – including prior years – so that cumulative knowledge is assessed, not just the topics covered in the most recent term. Standardised tests (such as Renaissance’s Star Assessments) can be a good way of achieving this.
“It is true that summative assessments have not always been used effectively by schools, but that doesn’t mean we should damn them.”
It’s important to add a word about the perception of summative assessments. There is sometimes a view that ‘summative = bad’ while ‘formative = good’, and that the two are somehow in opposition. I think this dichotomy is false and unhelpful – a good summative assessment contains tonnes of formative value for a classroom teacher if analysed properly. It is true that summative assessments have not always been used effectively by schools, but that doesn’t mean we should damn them because they’re underutilised in some settings!
Do I need to know other things about students to plan their learning?
This could be more of a minefield than you might imagine. One common strategy here is contextual group analysis: Pupil Premium vs non Pupil Premium; boys vs girls and so on. There’s clearly some value in a teacher knowing contextual data about a student – EAL can be a key input into differentiated reading strategies, for example – but it is also true that group analysis can be a time-consuming activity with no clear impact on decision-making. Professor Becky Allen’s excellent blog on Pupil Premium addresses the tendency of schools to analyse PP attainment gaps. It feels necessary, because schools are funded based on PP and have an obligation to report on PP allocation. Also, Ofsted used to ask about it, so there’s some muscle memory at work here too.
However, at least at the class level, scrutinising this kind of data has minimal value because, as Allen makes clear: “Pupil premium students do not have homogeneous needs.” Instead, she recommends categorisations that are educationally meaningful, such as “the group who do not read at home with their parents; the group who cannot write fluently; the group who are frequently late to school.”
“There’s clearly some value in a teacher knowing contextual data about a student, but group analysis can be time-consuming with no clear impact on decision-making.”
This last point is worth noting in particular, since attendance data is often thought of as a thing you look at in isolation, but at the class level it also has real value as a dimension by which to slice assessment data.
Similarly, it can be seductive to look at analysis which compares boys vs girls – there are genuine differences to be found! – but it’s questionable whether that should lead to gender-differentiated teaching strategies. We know that the average attainment of girls is significantly higher than that of boys in all areas of Key Stage 2, but that is a structural issue that can’t be taught into oblivion. Equally, plenty of boys will excel in ways that the average national ‘gap’ does a poor job of capturing. So we neither want to obsess about closing the gap, nor use it as an excuse to ignore the potential of individuals in nationally underperforming groups. As Rosalind Walker explains in this excellent blog:
“If a lesson isn’t bang on, some students will make up for that deficit, through prior knowledge, independent study, or some other means. But some other students won’t. Often the students who won’t fall more than proportionately in one group, like boys, or disadvantaged, or whatever. At my school, we don’t go fiddling round with the delivery of our subject to make it somehow match the interests of a pupil premium child. (What?) We make sure that first, our teaching is of great quality.”
–Rosalind Walker, ‘E. Coli and Quality First Teaching’
Things to consider
Now, let’s return to the first two questions, and consider the types of analyses that will help you find the answers you’re looking for. There are some common things to look for in both scenarios, illustrated with a short worked sketch after the list:
- Which students struggled overall? This helps you decide who needs focused intervention and reteaching.
- Which questions stumped a significant section of the class? This informs decisions about which concepts need reteaching to everyone.
- Which students didn’t finish the test? This matters because a non-answer may point towards a different action than an incorrect answer. For example, a non-answer could indicate that the student is struggling with exam technique, so you may decide to address that issue head-on.
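As a rough illustration of how those three checks might look in practice, here is a minimal sketch using Python and pandas over a toy marks grid. The student names, column layout and 50% thresholds are assumptions made for the example, not a format any particular system uses.

```python
import pandas as pd

# Illustrative results grid: one row per student, one column per question.
# 1 = correct, 0 = incorrect, NaN = not attempted.
results = pd.DataFrame(
    {"Q1": [1, 1, 0, 1], "Q2": [0, 1, 0, None], "Q3": [1, 0, 0, None]},
    index=["Asha", "Ben", "Cara", "Dev"],
)

# 1. Which students struggled overall? (below 50% on the questions they attempted)
scores = results.mean(axis=1)
struggling = scores[scores < 0.5]

# 2. Which questions stumped a significant section of the class?
question_success = results.mean(axis=0)
tricky_questions = question_success[question_success < 0.5]

# 3. Which students didn't finish? (one or more unanswered questions)
did_not_finish = results.isna().any(axis=1)

print(struggling, tricky_questions, results.index[did_not_finish].tolist(), sep="\n")
```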
For summative assessments in particular, there are a few additional things to consider:
- Which students struggled with specific strands? In other words, where is a child’s learning gap apparent across a whole curriculum area, and not just one specific question?
- Which strands were less well understood across the class? This will help you decide which areas require reteaching to everyone.
- Is the student’s overall grade in line with their target? This may affect how urgently you treat any of the issues you’ve discovered in previous questions – though of course a met target does not necessarily mean that a child is performing to their full potential.
Excess focus on overall grades
Out of everything we’ve considered, I find strand analysis to be the most underutilised technique – no doubt because it’s the hardest to do without assistance from technology. If you want to look at overall or question level grades, you can just bash together a spreadsheet and use conditional formatting to create a heatmap – and many do this already. Strand analysis, on the other hand, involves tagging questions with curriculum areas, and that’s more complex to achieve. Therefore, I think this is one area where classroom quizzing and testing systems can add real value with some simple, well-designed reports.
“I find strand analysis to be the most underutilised technique – no doubt because it’s the hardest to do without assistance from technology.”
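To give a flavour of what that tagging involves, here is a minimal sketch in Python (pandas again) that maps questions to curriculum strands and produces a student-by-strand table of the kind that could feed a conditional-formatting heatmap. The strand names, marks and 60% threshold are invented for the example.

```python
import pandas as pd

# Illustrative question-level marks (1 = correct, 0 = incorrect) and a mapping
# from each question to a curriculum strand.
marks = pd.DataFrame(
    {"Q1": [1, 0, 1], "Q2": [1, 1, 0], "Q3": [0, 0, 1], "Q4": [1, 0, 0]},
    index=["Asha", "Ben", "Cara"],
)
strand_of = {"Q1": "Fractions", "Q2": "Fractions", "Q3": "Geometry", "Q4": "Geometry"}

# Group the question columns by strand and average them, giving a
# student-by-strand table suitable for a heatmap-style report.
by_strand = marks.T.groupby(strand_of).mean().T

# Strands the whole class found hard (class average below an assumed 60%).
weak_strands = by_strand.mean().loc[lambda s: s < 0.6]

print(by_strand)
print(weak_strands)
```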
Conversely, the overall grade is perhaps looked at a little more than is necessary. To be clear, it’s not that the summative grade holds no value; it’s just that it isn’t worth looking at on its own very often. Overall grades can help with student motivation, with highlighting which students have the most urgent issues (at which point it’s time to drill down to the strand and question level), or with understanding how a whole term of teaching manifests itself as a student outcome. However, their inherent weakness is that a single grade can’t give any actionable information about what to do to fix a problem. In other words, you can’t just tell a child who’s on track for a 5 at GCSE that you think they should be able to get a 7; to borrow an oft-quoted line from assessment guru Dylan Wiliam, that’s “rather like telling an unsuccessful comedian to be funnier.”
I don’t think this excess focus on overall grades comes from teachers themselves. Instead, it’s once again a function of technological limitations (it’s easier to analyse overall grades than to look at the component parts), and also a consequence of the overall grade having value to school / MAT leadership (more on that in my next blog). Whatever the reason, I’d suggest that class teachers don’t need to spend too long looking at overall grades, since they’re unlikely to lead to many actionable insights.
A word on frequency
While you can definitely overdo summative assessments (three times a year is plenty), there is a clear benefit to frequent short, formative assessments. We know that students are more likely to commit things to long-term memory if they practise retrieving that information by answering questions (this is known as the testing effect). It is also self-evident that teachers have to make decisions every day about what content to prioritise in a tight timetable; so if students and teachers both benefit from the process, and if technology can help to minimise teacher workload, it’s perfectly possible to set and analyse formative quizzing data every day. This is particularly true right now, when good tech solutions can help to bridge the gap between home and school.
“While you can definitely overdo summative assessments, there is a clear benefit to frequent short, formative assessments.”
A final note
Finally, you may have noticed that I haven’t included anything here about the prevalent habit (particularly amongst primary schools) of tracking pupil performance against granular, teacher-assessed learning objectives. There is a case to be made for such things, but if overused this approach can lack formative value compared to quizzing and testing. Or, to put it another way, would you rather know that a student is ‘WORKING TOWARDS’ for fractions, or know which questions they got wrong, and what their incorrect answers were?
In conclusion, I think class teachers should focus as much of their analytical energy as possible on interrogating the answers to well-designed questions. Both formative and summative assessments have a role – with the greatest value found at the question and strand level. Group analysis should be based on groups to which a teaching strategy can meaningfully be applied. Most importantly, whatever the analysis, it’s vital to think through what decisions your insights can inform before looking at the data.
Joshua’s blog series can be read here. To see how we’re supporting students and teachers during school closures, click here. You can follow Joshua on Twitter at @bringmoredata and Renaissance at @RenLearnUK