Is PISA worth the hype?

The latest PISA rankings are due out early next month, and while they may struggle to grab the headlines amid all the electioneering and impeachment inquiries, they are sure once again to get educators talking. As policymakers nervously direct their gaze towards the updated league tables, seeking vindication for their agendas and reforms, and researchers watch to see whether Finland will continue its recent decline, whether Shanghai will keep its throne, or whether the UK and US can finally end their rut, we have a question of our own to ponder.

Is PISA really worth all the hype and attention?

The good: A shared language for global education

Not even the most ardent sceptic could deny the team at the OECD, spearheaded by Andreas Schleicher, credit for galvanising the education world into a shared conversation around educational practice. For arguably the first time in history, educators of all stripes, from all corners of the globe, can evaluate their ideas and assumptions against a consistent standard.

PISA has shone a light on what does and doesn’t work across a variety of schooling environments. More than just scaled scores, the rankings come with detailed analysis that seeks to wrap meaningful narratives around the numbers. The release of each round of findings is always followed by several years of reflection and analysis on a whole host of questions, from the minutiae of instructional strategies to systemic questions such as the merits of streaming.

PISA is not merely a thought exercise; it has spurred reform in education systems where progress might otherwise have stagnated. When other systems show demonstrable progress on the back of a raft of education reforms, the justification for preserving status quo practices becomes untenable. To an unprecedented degree, policymakers and researchers have had their assumptions tested and challenged by the relative successes of other nations.

The OECD intends PISA to serve as a global standard, and is transparent in declaring its own assumptions around curriculum and assessment. I attended the launch event of the PISA 2021 mathematics framework which, by all accounts, represents one of the most progressive visions of what a twenty-first century mathematics education should entail, with its emphasis on the real-world problem-solving skills expected of today’s students. As a standard bearer for international education in reading, mathematics and science, the OECD has done well to take a stand against outmoded teaching practices that privilege knowledge transmission over deeper conceptual understanding.

The bad: Technical and ethical critiques of PISA

PISA is not without its critics, however, and their objections are wide-ranging. Some concerns are technical, such as the statistical models PISA employs to infer complete profiles of students from partial responses. Others take aim at the escalation of standardised educational practice that PISA encourages. In 2014, a consortium of education academics signed a letter addressed to Dr Schleicher warning against the short-termist, reactionary thinking prompted by PISA’s three-year cycle, and the nourishment it gives to high-stakes testing regimes. The letter also attacked the narrow scope of PISA which, like any single-point assessment, can only ever capture a subset of students’ cognitive and non-cognitive skills.
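To make that first, technical critique concrete: no student sits every PISA item, so proficiency has to be inferred statistically from partial responses. The sketch below is a deliberately simplified illustration of that general idea, using a basic Rasch (one-parameter logistic) model with a hypothetical ten-item pool and invented difficulty values; it is not a description of PISA’s actual scaling machinery, which involves rotated test booklets, conditioning variables and plausible values.

```python
import math

# Toy illustration only (not PISA's actual methodology): estimate a student's
# ability under a simple Rasch model when they have answered just a subset of
# a hypothetical item pool, then predict their chances on the unseen items.

def rasch_prob(ability: float, difficulty: float) -> float:
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_ability(responses: dict, difficulties: list,
                     lo: float = -4.0, hi: float = 4.0, tol: float = 1e-6) -> float:
    """Maximum-likelihood ability estimate, found by bisection on the score equation.

    responses    -- item index -> 1 (correct) or 0 (incorrect), for seen items only
    difficulties -- difficulty parameter for every item in the pool
    """
    if len(set(responses.values())) < 2:
        raise ValueError("A perfect or zero score has no finite ML estimate")

    def score(theta: float) -> float:
        # Derivative of the log-likelihood with respect to ability.
        return sum(r - rasch_prob(theta, difficulties[i]) for i, r in responses.items())

    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if score(mid) > 0:
            lo = mid   # the estimate lies to the right of mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical ten-item pool; the student saw only four items.
difficulties = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
responses = {1: 1, 4: 1, 6: 0, 8: 0}

theta = estimate_ability(responses, difficulties)
print(f"Estimated ability: {theta:.2f}")
for i, d in enumerate(difficulties):
    if i not in responses:
        print(f"Predicted chance of success on unseen item {i}: {rasch_prob(theta, d):.2f}")
```

Even this toy version makes the critics’ point visible: the ‘complete profile’ over the unseen items is a model prediction, not an observation.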

Others have pointed out that the OECD has a bias towards economic development (the clue is in its name), which translates into a neoliberal framing of education. In particular, the academic standards expected by PISA are tied to the specific skills and competences that the future workforce will need, with almost no attention paid to citizenship, ethics, morality and all else that goes into a well-rounded education.

By design, PISA champions standardisation as the basis for comparative international education. That seems problematic in light of the sheer diversity among students worldwide, and from one education system to another. Where is the room to celebrate the sociocultural diversity that underpins the character of nations, and that surely implies variation in how they define and measure educational success? And why should we assume that the skills of the future workforce are even consistent across all systems and contexts?

This concern is especially pertinent in low-income countries, where the assumptions of PISA seem to fall apart. PISA is listed as one of the global indicators of the UN’s Sustainable Development Goal 4, which relates to ‘inclusive and equitable quality education for all’. Recognising that the cost and implementation demands of PISA may prove prohibitive, the OECD has devised PISA-D (PISA for Development) to extend coverage to middle- and low-income countries. Where it has been implemented, PISA-D has revealed ‘shockingly low learning levels’ among participating countries. But participation remains very limited: even with funding sourced from multiple partners, across all of Africa only Senegal and Zambia have so far taken part. Given that educational access and quality are markedly low in this region (for example, 84% of children and adolescents in sub-Saharan Africa lack proficiency in mathematics, compared to a global average of 56%), PISA hardly seems a representative indicator for SDG4.

Even with higher participation rates, the relevance of PISA as an assessment instrument would be questionable in low-income contexts. PISA targets fifteen-year-old students, which implicitly presumes that secondary education offers a reliable snapshot of the system as a whole. Yet net enrolment in secondary schooling in these countries is low, and many out-of-school adolescents fall outside the scope of such studies. Furthermore, as our own research at Whizz suggests, the most precipitous drops in academic attainment in these contexts often occur in the primary years. That PISA-D does not engage primary students is a glaring oversight.

So is PISA worth the hype?

At Whizz, we see tantalising potential in comparison studies like PISA because many of their aims resonate with our own. We see merit in defining and measuring learning outcomes against a global standard, with the important caveat that no such initiative should be allowed to suppress the agency of individual nations to interpret and localise that standard within their own sociocultural context. There should be room to contest and refine what constitutes that standard, and the OECD, at the very least, has put its cards on the table to reveal some of the most forward-thinking ideas on twenty-first century education. That is a standard all policymakers can aspire to.

Where PISA and its ilk fall short is in their blunt use of comparison. League tables reinforce the very worst tendency of policymakers to battle for rank and position, when instead the focus should be on sharing best practices and lessons learned. Comparison data should be a connective tissue between education systems, an instrument of co-operation rather than competition, with more focus on the story behind the numbers.

The OECD’s visionary ideas on curriculum are undermined by the archaic assessment practices on which its PISA data relies. PISA’s fixed three-year cycle seems an arbitrary constraint when one considers the affordances of digital technologies. Virtual tutoring technologies deliver a steady flow of insights that enable a far more iterative approach to education programme design. Why wait three years for the next tranche of learning data when it can be gathered – and acted upon – on a continuous basis?

Real-time data collection forms a tight coupling between students’ learning interactions and the assessment data that arises as a by-product. When learning data is collected in real time, it serves not only to support analysis and evaluation of educational practices, but also to inform the very teaching and learning from which the data is derived. The investment thesis behind large-scale assessment studies earns more credibility when students, parents and teachers themselves stand to benefit from the data as it is being generated. Such an assessment framework would hardly be limited to a single age group – after all, why should primary educators lose out on the insights available through this model of assessment?
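To illustrate what assessment as a by-product of learning might look like, here is a toy sketch rather than a description of any particular product, our own included: each answer given during a tutoring session nudges a running proficiency estimate, so the evidence base updates continuously instead of every three years. The update rule, scale and numbers are invented purely for illustration.

```python
import math
from dataclasses import dataclass, field

@dataclass
class RunningEstimate:
    """A continuously updated proficiency estimate, refreshed after every answer."""
    proficiency: float = 0.0   # running estimate on an arbitrary logit-like scale
    step: float = 0.5          # how strongly each new answer moves the estimate
    history: list = field(default_factory=list)

    def record_answer(self, item_difficulty: float, correct: bool) -> None:
        # Expected chance of success given the current estimate (logistic link).
        expected = 1.0 / (1.0 + math.exp(-(self.proficiency - item_difficulty)))
        # Nudge the estimate towards the observed outcome (a crude online update).
        self.proficiency += self.step * ((1.0 if correct else 0.0) - expected)
        self.history.append(self.proficiency)

# Simulated tutoring session: five answers, each refining the estimate immediately.
student = RunningEstimate()
session = [(-1.0, True), (0.0, True), (0.5, False), (1.0, True), (1.5, False)]
for difficulty, correct in session:
    student.record_answer(difficulty, correct)
    print(f"after item (difficulty {difficulty:+.1f}, "
          f"{'correct' if correct else 'incorrect'}): "
          f"estimate = {student.proficiency:+.2f}")
```

Aggregated across a cohort, the same stream of events could feed the kind of system-level analysis PISA offers, without waiting for the next testing cycle.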

Of course, large-scale adoption of any technology presents its own cost-benefit calculus. But while we share many of the criticisms levelled at PISA, we can’t help but feel that most could be overcome with a more enlightened understanding of what technology-based assessment can achieve. Perhaps then, PISA will be worth the hype.