Life without levels in RE: what do we want to assess?


I’ve written three posts so far in this series that lead into this one, so you can see my thinking:

Five checks to see if you’ve replaced levels with levels

The foundations of assessment: Myths & challenges

A model for whole school life without levels

What do we want to assess in RE?

This comes down to purpose. As the RE community cannot agree, I decided to go with what I thought and came up with the following:

what RE does

However it is not possible to assess all of these. In fact it isn’t desirable to ‘measure’ some of these. How can you measure a child’s spirituality? And why would you want to? Can you or should you reward progression in spirituality?

So I decided to pick out those that I thought would be appropriate to include in our model and came up with this:


I don’t think we should try to assess or measure those in red. In fact, these don’t all belong solely in RE; most are SMSC outcomes that should be developed across the curriculum.

I initially planned to deal with those that were left, but I have not included the challenging questions; these should probably also be developed across the curriculum. This doesn’t mean I don’t use them in class, but I don’t want to include them in this model.

Over time I continued to read blogs and what other schools were doing.

New GCSE and A level requirements

At the same time, the GCSEs and A levels were beginning to be reformed. This seemed an ideal time to link in. I certainly didn’t want to just use KS4 content in KS3, but I wanted it to inform and complement it. We don’t have KS5 in my school, but I felt that our KS3 could help to support learning at KS5.

The DfE released the assessment objectives as follows (GCSE and A Level):

I wanted to unpick what these meant. I hate using the words ‘analyse’ and ‘evaluate’. Whilst it is important for students to understand these command words, I believe we should teach them what they mean by breaking them down, not teach them merely as command words. There are several skills needed to analyse and evaluate; I wanted to separate these out so students can see how each contributes to their writing overall.

I picked out (in red) the knowledge, understanding and skills needed at A level that could transfer down:

AO overall.png

I continued to keep in mind what other people had said about new models without levels. In particular I looked at what Michael Tidd had written here:

Whilst not in a Catholic school I was also inspired by the Catholic levels of attainment, in particular for reflection:


Look at level 1 and EP. This really made me think about keeping things as simple as possible but with meaning.

I then began to think about how all of this could be presented to students and could contribute overall to an assessment model.


A model for whole school life without levels


This is the third post on how my school has been working on life without levels. Here is post 1 and post 2. In post 4 I will share the specifics of what I have been doing for RE.

Whilst I am biased, I genuinely think that this is the best way to have handled it. I’ve heard and seen some awful things happening since schools have dropped levels, to the extent that I respect those that didn’t dive straight in and have kept using levels.

I won’t post specific examples of what I’ve seen, as it would be unfair to those who have probably spent many hours creating systems under some ridiculous demands from leaders. But it’s fair to say that some systems are not about the students and learning, and are more about the data and the spreadsheets.

The beginnings…

As far as I can recall, HODs were asked to start thinking about things in September 2014. No deadlines were given; nothing was imposed top down. I started the process of researching, reading blogs and seeing what others had already done. This generally gave me an idea of what I definitely didn’t want to do, and I picked out parts of other people’s models that I might consider.

To some extent the delay of the GCSE Religious Studies specification approval was a blessing. Many schools have just used this and brought it into key stage 3. I knew that I didn’t want to do this as I don’t think GCSE is challenging for all students and key stage 3 is a chance to learn some broader study skills.


In HODs meetings we often spent a few minutes sharing ideas and discussing. Everything was relaxed and there was no sense that this had to be done ASAP.

The plan was for all subjects to design their own way of assessing. It was up to them to decide what was best for their subject and students. This reflects my school leadership ethos; middle leaders are trusted to lead their own departments.

Every line management meeting, I brought new ideas to my boss and we discussed the pros and cons of those ideas. We get on very well and can happily critique ideas together. These discussions were invaluable. Luckily he is also the T&L lead so we could discuss things from all angles.

I think that some HODs stuck with systems very similar to levels, which is their judgement to make, but I wanted to make something more fit for purpose.

Whole school tracking

All our subject systems had to do was inform teachers of each student’s progress. We didn’t need to share any sort of attainment data. Our systems needed to show what we expect from an individual student and whether they were making expected, below expected or above expected progress.

The data team used key stage 2 data and CAT scores to create indicative banding based on the new GCSE 1–9 grades. All HODs needed to do was ensure they could say whether, for example, Billy in year 7 with an indicative banding of 6–8 is making expected progress.

We still don’t know what a 9 looks like or any other grade so it has all been professional judgement and some guessing. There has been a true understanding that no system is perfect and we’re all starting out with little knowledge of the future. There’s been no desperate rush to make sure that key stage 3 reflects the new GCSEs.

The beauty of this whole-school system is that it focuses on progress and not just attainment. Attainment leads to flight paths and unrealistic expectations. By looking at progress you can factor in the highs and lows of learning. My professional judgement is whether, overall, a student is making the progress they need to: what matters is whether they get from A to B, not how they do it.
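To make the idea concrete, here is a minimal sketch of what recording a progress judgement against an indicative band (rather than an attainment score) could look like. This is my own illustration, not the school's actual system; names like `Student`, `Progress` and `indicative_band` are invented for the example.

```python
from dataclasses import dataclass
from enum import Enum

class Progress(Enum):
    """A teacher's professional judgement, not an attainment grade."""
    BELOW = "below expected"
    EXPECTED = "expected"
    ABOVE = "above expected"

@dataclass
class Student:
    name: str
    indicative_band: tuple[int, int]  # e.g. (6, 8), derived from KS2 data and CAT scores

def record_judgement(student: Student, judgement: Progress) -> str:
    """Record progress relative to the student's own indicative band."""
    low, high = student.indicative_band
    return (f"{student.name} (indicative band {low}-{high}): "
            f"making {judgement.value} progress")

billy = Student("Billy", (6, 8))
print(record_judgement(billy, Progress.EXPECTED))
# prints: Billy (indicative band 6-8): making expected progress
```

The point of the design is that the only thing stored against the student is a judgement relative to their own expected trajectory, so no attainment data needs to be shared at all.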



The thing that has stood out throughout the entire process (that is still continuing) is that our leaders have been nothing but supportive and have always promoted development of ideas over a quick, possibly ineffective, system.

The model I’ve been working on has constantly evolved. No-one has ever given a deadline that it must be ‘complete’ by. I’m still trialling new aspects and will continue to tweak and change as needed.

We’ve been encouraged to collaborate and share amongst HODS. It’s overall been a positive and supported process.

On reflection, any school that pushed its staff to create systems with little time or support has done its students and teachers a disservice. The point is that, whilst all of this has been going on, children haven’t stopped learning or making progress just because there wasn’t something there to measure it. I’m doubtful that any school that cares so much about Ofsted that it ‘had’ to get a system in place would ever have been judged any differently just because it could show a ‘polished’ data system.

Please feel free to ask any questions about the model and I may adjust the post if I realise I’ve left anything important out.

The foundations of assessment: Myths & challenges


This is the second post in a mini-series on how we’ve been working on key stage 3 assessment in my school. The first post in the series outlines how, if levels look like levels, they’re probably levels. This post looks at what I considered before I started to design anything.

The myths & challenges of assessment

1. Myth: Learning is linear

Students don’t start as empty vessels in year 7 and aren’t gradually ‘filled’ with learning over key stages 3 & 4 at a steady, consistent rate.


2. Challenge: We cannot accurately assess learning from one piece of work… or two… or three.

Using one piece of work to assess where a student is with their learning is inaccurate and not reflective of what they have learnt. Who wants to be judged on one lesson observation?

Professional judgement is key; however, there’s not always a piece of evidence to back it up. Some leaders don’t like this.


3. Challenge: GCSE Religious Studies is easier than the challenges at key stage 3

Sad but true. The new GCSEs have gone some way to addressing this. Should we just use key stage 3 to prepare students to succeed at GCSE, or as a foundation for learning how to study religion/a subject?

4. Myth: Everything needs to be measured

Schools love spreadsheets and MISs. Recording, tracking, monitoring; the stuff of a data leader’s dreams. I love a good spreadsheet. Levels could do this nicely. The benefit of levels for leaders was that, without knowing the subject or the student, they could tell if a child was working at the ‘required’ level; the child’s little box would automatically turn red/yellow/green. Levels attempted to measure ‘learning’, or at best performance. If we need to monitor, what might be better to track?


5. Myth: A piece of work shows the limit of what a student has learnt (know, understand, can do)

If we mark a piece of student work against criteria, we cannot accurately say that just because they’ve done X they will always be able to do X, including in a different context. Equally, just because we cannot see Y in their work doesn’t mean they can’t do Y.

6. Challenge: RE teachers usually teach more students than teachers of most other subjects, and have the least time

If you teach 600 students and see them once a fortnight, you’ll barely know all their names by the end of the year. So for the first data entry point, what will you enter? Copy and paste? Put in enough to keep anyone off your back? Keep the boxes green? Who cares?

7. Challenge: Teachers feel compelled/are made to spend more time creating evidence and entering data than focusing on the students themselves.

Six data collections a year across 20 teaching groups: that’s a LOT of wasted time.


Five checks to see if you’ve replaced levels with levels


This is the first post in sharing the assessment system that my department has developed for ‘life without levels’.

I’ve already written about how, for many, the systems that have been created are essentially levels rehashed, in “Assessing without levels – A case of the emperor’s new clothes?”

I’ve also written about why I think that many teachers have struggled with creating new systems in “Teaching automatons? Looking at the what and why instead of the how“.

I spent a long time reading about what other people had done via blogs, Twitter, conferences and the official DfE case studies. I learnt a lot, both good and bad.

Firstly I looked at why levels were being dropped and considered how a new system may or may not avoid the previous issues. Consider these five questions to see if your ‘new’ system falls into the traps of the old levels:

  1. Do you have labels for children depending on how they’re working?

Calling a child a 4a or 6c was easy. It was a common language. However, lots of the systems I’ve looked at have essentially replaced the number/letter combo with something else. It may be more ‘sexy’, but it’s still a label. Children aren’t silly: they know a ‘gold’ is better than a ‘silver’, and that a ‘master’ is better than a ‘novice’. New labels, same problems. In fact, worse problems: if you come up to KS3 as a ‘lion’, I have no idea what that means; at least I had some sort of idea with a 3a. Sadly, ‘developing’ has now been reserved for students working at a certain level instead of being used for all students who are getting better at something.

2. Have you blocked different skills together in one box?

Expected progress

  • Can confidently add up
  • Can confidently subtract
  • Can confidently write their own name
  • Can confidently make their own bed
  • Can confidently hop and skip

What if they can confidently add up but fall over when they hop and skip? Are they still making expected progress?

If you’ve done this, it is unlikely that all students classified by this box can really do all the expected skills. Some of the skills may be so different that it is illogical to even put them together. The ‘best fit’ model was ditched for this reason.
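The problem can be sketched in a few lines: when ‘expected progress’ is an all-or-nothing box, a student secure in four of five skills is indistinguishable from one secure in none. This is purely my own illustration; the skill names are taken from the list above.

```python
# Hypothetical illustration: blocking unrelated skills into one
# "expected progress" box throws away the useful detail.
skills = {
    "add up": True,
    "subtract": True,
    "write own name": True,
    "make own bed": True,
    "hop and skip": False,  # falls over when hopping
}

# Blocked box: an all-or-nothing judgement.
expected_progress = all(skills.values())
print(expected_progress)  # prints: False -- the four secure skills are invisible

# Tracking each skill separately keeps the information a teacher can act on.
for skill, secure in skills.items():
    print(f"{skill}: {'secure' if secure else 'needs work'}")
```

One insecure skill flips the whole box, which is exactly why ‘best fit’ judgements against blocked criteria tell you so little about what a student can actually do.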

3. Have you used language that means very little to students (or teachers*), usually based on Bloom’s taxonomy?


Some development.

Partial identification.

Fully mastered.

What do these ACTUALLY mean?

4. Does it take just as long or even longer to use the system than levels did?

One of the main aims was to reduce teacher workload; if it doesn’t, something’s gone badly wrong.

5. Is there a large chance of disagreement and potential issue of inconsistency*?

Is it a 2a or a 3c? How can the key stage 2 teacher assessment be correct when he/she’s nowhere near a 4b? Vague language and subjective adjectives all cause issues with consistency. Think of the hours wasted with teachers arguing about levels. Ideally this needs to be reduced as far as possible.

I tried to keep all of these in the back of my mind as I’ve been developing our assessment system. You will see how far we’ve achieved this in the next couple of posts…

*Exam boards continue to do this….


Why teachers won’t be replaced by Google and teachers are the experts


Now the pressure is off with year 11, I have some time to think. If you know me, you’ll know I’m not that knowledgeable, but I have the motivation to learn, and luckily as a teacher I know some good strategies to support learning. I’ve always regretted not taking History at school, and as I live with a historian I thought my first area to research would be historical.

So off I set with some coloured pens, a shiny new pad and some topics to research online.

Off I went, straight to Google. And I hit a problem: I didn’t know whether what it came up with was what I actually needed. I could have spent hours going through pages trying to corroborate sources, decide whether they were reliable, and unpick the complex nature of interpretation in history. I’m relatively adept online. I generally ‘Google’ a lot for myself and for others. But I think that if I had just had the internet at that point, I would’ve given up.

However, I was lucky: I had a teacher. He could give me some key ideas, answer my silly questions, tell me when there were different interpretations, guide me down different lines of enquiry and, most importantly, show an interest in my learning. Google doesn’t do this. The teacher is the expert. Without the teacher I would never have known if I was learning the correct information.

This experience has reminded me of a few important things:

  1. Students need teachers. They are the experts. They can answer the obscure. They can direct students to finding the correct answers. They can give detailed, nuanced, accurate knowledge efficiently.  They can encourage. They can tell you when you’ve got something wrong.
  2. Books are hugely important. We don’t get students to use them enough.
  3. If we tell students to ‘research’ on the internet, how do we expect them to know what is correct/true/reliable? Even teaching them the ins and outs of source reliability and how to do internet searches will not negate the need for a teacher.