Anticipation is running high for young people across the country today as they are plunged back to reality from their summer holidays ahead of tomorrow's A-level results, with GCSEs following next week. All that hard work (or maybe not so hard) is about to become public as students find out whether they have got what they need to take their next steps into university, college, apprenticeships and more.
The media is more than likely braced to make its annual accusation of dumbing down if record numbers of passes are again revealed. However, this narrative plays out slightly differently in the case of the recent IGCSE results (which are published earlier and which, some may remember, caused huge controversy last year). Reports highlighted that results were significantly up as a result of rebalancing; one secondary school announced an astonishing increase of 26 percentage points. Such a jump can't help but make you question the exam system and how results are arrived at. How can wild differences occur in a system that relies on consistency and trust, so that colleges and universities can make admissions decisions and employers can decide who is best for a job?
Ofqual, the body that regulates qualifications, exams and assessments, explains how marking and grading work for GCSEs, alongside the importance of consistency from year to year, something referred to as 'standards'.
The process is two-fold: first, marking (with examiners monitored for consistency and mistakes); second, the setting of grade boundaries, known as awarding. All exam boards must follow Ofqual's rules, and statistical analysis plays an important part here. Ofqual states that "the basic principle is that if the group of students (the cohort) taking a qualification in one year is of similar ability to the cohort in the previous year then the overall results (outcomes) should be comparable". Each exam board's awarding committee looks at how the previous cohort performed at their prior key stage compared with the current year's cohort. So, for GCSE students, the starting position is how they and their older peers performed at key stage 2; for A-level, it is their GCSE results. A few additional factors are also taken into account, for example reports from examiners on how that year's questions worked in practice.
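To make the statistical side of this concrete, here is a minimal sketch in Python of the 'comparable outcomes' idea. It is not Ofqual's actual model; the band names and numbers are invented. The assumption it illustrates is simply that each prior-attainment band is expected to achieve roughly the grade shares last year's cohort achieved, weighted by how this year's cohort is spread across the bands.

```python
# Illustrative sketch of a prior-attainment-based prediction:
# (a) last year's grade shares within each prior-attainment band,
# (b) this year's cohort profile across the same bands.
# All figures are hypothetical.

last_year_outcomes = {
    "low":    {"A": 0.02, "B": 0.10, "C": 0.30, "D+": 0.58},
    "middle": {"A": 0.10, "B": 0.30, "C": 0.40, "D+": 0.20},
    "high":   {"A": 0.45, "B": 0.35, "C": 0.15, "D+": 0.05},
}

this_year_cohort = {"low": 2000, "middle": 5000, "high": 3000}

def predicted_distribution(outcomes, cohort):
    """Weight each band's historical grade shares by this year's cohort size."""
    total = sum(cohort.values())
    prediction = {}
    for band, count in cohort.items():
        for grade, share in outcomes[band].items():
            prediction[grade] = prediction.get(grade, 0) + share * count / total
    return prediction

print(predicted_distribution(last_year_outcomes, this_year_cohort))
# e.g. {'A': 0.189, 'B': 0.275, ...} -- the awarding committee would then
# set grade boundaries so that actual outcomes land close to this,
# adjusted by examiner judgement about how the papers performed.
```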
This is the official line. For a challenging perspective, read Brian Lightman, former general secretary of the Association of School and College Leaders, who has written an open letter to Cath Jadhav, associate director of standards and comparability at Ofqual, with some questions on how this works in practice. One question I would add is: why are predictions so important when you have the actual results in front of you? If you trust the examiners to have marked consistently (and have processes in place to moderate this), why not use the actual data to decide grade boundaries? It is not clear how a genuine increase in results, driven by improved teaching and learning, could ever be recognised under this approach.
An (unnecessary) layer of complication is being added for GCSEs, which, instead of being reported as grades A* to G, will be awarded on a 9 to 1 scale. The eagle-eyed will spot that there are 8 possible grades in the letter-based system and 9 in the numeric one, so one will not slot neatly into the other. And for the A-levels due tomorrow, there is the impact of a year of curriculum change to AS levels and new course materials, much of which has dented teachers' and head teachers' confidence in the forthcoming results. A recent YouGov poll showed that over half of all teachers do not trust this year's GCSE and A-level results. Heck.
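On the 9 to 1 point specifically, the only fixed anchors Ofqual has published are that the bottom of the new grade 7 sits at the bottom of the old grade A, the bottom of grade 4 at the bottom of grade C, and the bottom of grade 1 at the bottom of grade G. Anything finer-grained than that is approximation; the rough lookup below is purely illustrative, not an official conversion.

```python
# Rough, illustrative alignment between the old letter grades and the new
# 9-1 scale. Only the anchor points (7/A, 4/C, 1/G) are fixed by Ofqual;
# the in-between entries are approximations for illustration only.
approx_new_to_old = {
    9: "top of A*",
    8: "A*/A",
    7: "A",    # bottom of 7 aligned with bottom of A
    6: "B",
    5: "B/C",
    4: "C",    # bottom of 4 aligned with bottom of C
    3: "D/E",
    2: "E/F",
    1: "G",    # bottom of 1 aligned with bottom of G
}

for new_grade, old_grade in sorted(approx_new_to_old.items(), reverse=True):
    print(f"grade {new_grade} is roughly {old_grade}")
```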
Perhaps what is most concerning from the teaching profession's perspective is how the predicting of grades, once regarded as a reliable part of teaching, has been turned on its head, and how this is just one of many significant changes to the school system, normalising frequent change as something to get used to. This is even more frustrating because, as Russell Hobby, general secretary of the National Association of Head Teachers, explains, "people don't understand the rationale and the reasons behind a lot of the changes".
So what if you don't agree with your results? From a social justice perspective, the current arrangement of schools paying £50 per remark greatly favours those schools that can afford it. And from a cost perspective, it is the school that is penalised rather than the exam board, which is the party actually at fault. Laura McInerney makes this case clearly and has an alternative system for remarking in mind that might help rebalance these inequalities.
The other critical change happening this year is a remodelling of the accountability system that measures secondary schools. Instead of using five GCSE results at A* to C (English and maths plus three other top subjects), there are now two new headline measures called Progress 8 and Attainment 8. In short, pupils' results are compared with the actual achievements of other pupils nationally who had the same prior attainment. For the long version, you can read the Department for Education's note.
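As a rough worked example, with made-up numbers rather than the official DfE tables: Attainment 8 is essentially a pupil's points total across eight qualification slots (the real measure double-weights English and maths), and a pupil's Progress 8 is the gap between that total and the national average for pupils with the same key stage 2 prior attainment, with the school's score being the average across its pupils.

```python
# Illustrative sketch of Attainment 8 and Progress 8. Simplified: the real
# measure uses a defined basket of eight qualifications (English and maths
# double-weighted) and official DfE benchmark tables, not these invented
# numbers.

# Hypothetical national averages: the Attainment 8 score achieved, on
# average, by pupils in each key stage 2 prior-attainment band.
national_benchmark = {"low": 28, "middle": 41, "high": 55}

# Hypothetical pupils: (KS2 band, points scored in each of eight slots).
pupils = [
    ("middle", [6, 5, 5, 4, 5, 4, 6, 5]),
    ("high",   [8, 7, 7, 6, 8, 7, 7, 6]),
    ("low",    [4, 3, 4, 3, 3, 4, 3, 3]),
]

def attainment8(slot_scores):
    """Simplified Attainment 8: total points across the eight slots."""
    return sum(slot_scores)

def progress8(band, slot_scores):
    """Pupil-level Progress 8: the pupil's Attainment 8 minus the national
    benchmark for pupils with the same prior attainment, divided by ten to
    put it on a per-slot scale."""
    return (attainment8(slot_scores) - national_benchmark[band]) / 10

# A school's Progress 8 is the average of its pupils' scores.
school_p8 = sum(progress8(band, scores) for band, scores in pupils) / len(pupils)
print(f"School Progress 8 (illustrative): {school_p8:+.2f}")
# Positive: pupils did better, on average, than similar pupils nationally;
# negative: worse.
```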
Given the wash of data we are about to receive, if there is a way to simplify the information I would really like to see it. Everything has become a little esoteric, with only the most data-savvy able to wade their way through. What is the most useful information for parents, students or governors? Another thought from Laura McInerney is that, given schools now have a statutory responsibility to deliver information, advice and guidance on careers and continuing education beyond school, an alternative narrative (and the most reliable and easy to grasp) is around school leavers' destinations. Could the number of students going on to top universities, or into apprenticeships, become schools' new headline figures? That, at least, is arguably more of a measure of a purposeful education experience.
This chimes with a desire from businesses and from higher and further education institutions for metrics that go beyond what it is possible to retain for an exam. Work is taking place on how to measure the personal characteristics linked to employability: the ability to think creatively, to collaborate, to work to a schedule, to solve problems, to be resilient and gritty, to be imaginative and tenacious.
We know from working with the RSA academies that, in a competitive marketplace, being able to demonstrate the extra-curricular is vital too (and we don't mean playing Pokemon Go): volunteering and paid work experience, entrepreneurship and leadership, or other creative skills and talents; the transferable experiences that make you stand apart when the rest of your life is ahead and waiting for you.
Good luck to everyone expecting results tomorrow and next week. Fingers crossed.