Here is a question from a critical thinking paper:
OCR Critical Thinking Unit 2: What’s the problem with the logic used in the argument?
The claim is that the improved exam results can be attributed to the new arrangements. So why is this problematic?
The problem is that whilst the new arrangements MIGHT have contributed to the results, it is not logical to say they were fully responsible for them. In Critical Thinking this can be referred to as ‘false cause’ (they may be a contributing factor, but there are many more) or ‘post hoc’ (there is no proven causal link); the logic is flawed.
This is fairly serious when evaluating the impact of interventions in schools; in fact, of everything we do.
Can we be sure what has had a direct impact on results? How do we know if specific interventions were successful?
Hopefully schools, subjects and teachers will evaluate interventions in some way. However, this process has huge potential issues. If we take one student, in one subject, and their one grade, how is it possible to analyse what led to this grade? We can consider what we know happened in school, but we will never really know the extent of external factors on this grade. I’ve blogged before on whose grade it is. If a child has had a private tutor and the school doesn’t know, the school could be falsely crediting its interventions when the external support could have been a large contributor. We may also focus on just the exam year or the exam-course years, but can we evaluate the impact that previous key stages may have had on results?
Pitfalls of using flawed logic
- Some interventions may have little or no impact, yet because results have always been good when they’ve been done, it’s almost tempting fate not to keep doing them. (Irrelevant appeal to tradition: ‘we’ve always done it this way’.)
- Copying what other schools do because someone has publicly shared what they think got them good results. (Hasty/sweeping generalisation: going from one school where it worked to many where it might not.)
- Spending money/time on the wrong thing
- We want to believe that we (the teacher and the school) have had the most positive impact on the results; we might not have.
- The assumption that all actions/interventions are measurable.
- Assuming that there is some sort of panacea out there; other than decent teaching there probably isn’t.
- I’ve listened to many school leaders stand at conferences casually attributing their success to one or two things they’ve done. Their logic is flawed by the ‘cherry picking’ fallacy: they’re only telling you the ‘good’ bits they want you to hear. What did they not tell you about?
- Judging a teacher/school on the data. Can those who do this ‘prove’ the teacher or school is fully responsible for the exam result?
Possible ways to overcome flawed reasoning to make evaluations accurate
- Ask the students. Whilst this isn’t always reliable, if responses are anonymous it can be useful. How many schools deliberately ask students whether they’ve had professional external support out of school?
- Don’t credit one intervention more highly than another.
- Remember that excellent teaching through the key stages will generally be the best way to achieve good results; with this, maybe we don’t need spring-term interventions with Year 11. It should happen from the moment we first see the students.
In case you’re interested, here’s another one: