Bias in the Algorithms

The following is a transcript of Dr Junaid Mubeen’s talk ‘Bias in the Algorithms’ delivered at the NewSchools Summit, Oakland, California, May 2019.

A school Principal once challenged me to analyse the learning profile of one of his students, whom I’m going to call Joshua. In front of me, I had a treasure trove of learning analytics on Joshua that our virtual math tutoring system had collected. And as I scanned through Joshua’s profile, I noticed a curious trend. Joshua’s usage levels had been relatively low in the past, but recently they’d shot up and his improvement was strides ahead of his peers. I could see that Joshua’s strongest topic in math was fractions and that he was struggling a little in geometry.

The Principal then called Joshua in to meet us in his office and, lo and behold, my analysis held up. It turns out the Principal had just awarded Joshua a prize for his recent efforts, and that fractions was indeed the topic he considered his best. So I was feeling rather proud of myself. After Joshua left, the Principal mentioned something else about him that I'd missed, and that the data would never reveal. And that's that Joshua is homeless. Well, I was a little stunned.

It turns out that Joshua lives in a hostel, where he shares a bedroom with his mother and four siblings. Joshua has no computer to call his own, which explains why his usage was low for so long. But a few months ago, Joshua's mother committed to taking him to the community library twice a week, where he was able to access our virtual tutoring program, Math-Whizz, online.

Learning about Joshua's life story was an exercise in humility for me. Because I had the learning analytics, sliced and diced every which way. But what the Principal had gifted me with was the human story behind those numbers. He had provided me with the context.

Now, our virtual tutoring algorithms have all of that data at hand, but none of the context. If we place any premium on students' privacy, we'll surely agree that no algorithm should be privy to the most intimate details of a child's life, like whether or not they're homeless. And that presents a challenge for algorithmic education: the very systems we entrust to decide what students should learn, and when and how they should learn it, lack some of the most essential insights into their everyday lives.

Machine learning algorithms are not immune to this. For all their sophistication, they just don’t care very much for context. Those algorithms learn from historical data, they find patterns, and they make predictions about the future based on what they see. It’s often said that those who don’t learn from history are doomed to repeat it. Well, machines do learn from the past, but their goal is to repeat it. And the risk, of course, is that if our history is steeped in biases and prejudices, then our algorithms may well amplify them in the future.

If a student’s learning profile signals that they are struggling, then an algorithm may unwittingly infer that that’s all the student is capable of. Algorithms that defer solely to data have a kind of fixed mindset because they often can’t see beyond the horizon of a student’s current attainment level. Imagine what algorithmic judgements could have been made of Joshua in the past: he’s unmotivated, incapable, unteachable. No engineer would program in these judgements so explicitly of course, but they arise in the most subtle ways. It’s not inconceivable that an algorithm, left unchecked, would condemn Joshua to a diet of lower-grade content because that’s all he’s deemed capable of. The only way to offset the cold, often cruel judgements of algorithms is to ground them in some context. And that’s a job for us humans.
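To make that failure mode concrete, here is a deliberately simplified sketch. It is not Math-Whizz's actual logic; the function names, thresholds and scores below are all hypothetical. It shows how a rule that defers only to a student's historical average can lock a Joshua-like profile into lower-grade content, while one that also weighs the recent trajectory does not.

```python
# Hypothetical sketch: an attainment-only placement rule versus one that also
# looks at recent trajectory. Names, thresholds and data are invented for
# illustration; nothing here reflects Math-Whizz's real models.

def place_by_history(weekly_scores: list[float]) -> str:
    """Assign content purely from the historical average score."""
    avg = sum(weekly_scores) / len(weekly_scores)
    return "lower-grade content" if avg < 60 else "grade-level content"

def place_with_trend(weekly_scores: list[float]) -> str:
    """Same data, but let a strong recent trajectory outweigh old averages."""
    recent = weekly_scores[-4:]
    if sum(recent) / len(recent) >= 70:
        return "grade-level content"
    return place_by_history(weekly_scores)

# A Joshua-like profile: a long stretch of low scores, then a sharp rise.
joshua = [35, 40, 38, 42, 45, 44, 72, 78, 81, 85]
print(place_by_history(joshua))   # -> lower-grade content (the 'fixed mindset')
print(place_with_trend(joshua))   # -> grade-level content
```

Even the second rule is no substitute for context; the point is simply that what an algorithm is allowed to conclude depends entirely on what we decide it should look at.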

At Whizz I’ve had the privilege of working in learning environments the world over. And what I’ve found, invariably, is that every environment presents a unique context. When we first rolled out our program in rural Kenya, we soon discovered that we were working with students who were, on average, over four years behind grade level in math. We found that when they got onto our program, they were learning at half the rate of their peers in other regions.

So that’s the data.

Now the context: the students live on the fringes of society, especially the girls. They walk ten kilometres to school each day, both ways, usually without breakfast. Many of them are refugees of conflict; others work several hours each day tending cattle, and their parents literally can’t afford to lose them to school. The teachers have the best will in the world, but they typically have the math knowledge of a fourth-grade student. They have to contend with class sizes of over a hundred. Many had never set eyes on anything as expensive as a computer and were terrified even to put their hands on one for fear of breaking it. The schools struggled to pay their electricity bills, and even when the lights were on they could scarcely rely on a stable internet connection.

That’s just a snapshot of the story we found out there in Kenya, but it was clear that some of our previous assumptions about how to support students and teachers just weren’t going to hold up here. And that’s another challenge for machine learning algorithms, which make judgements on individual users based on the behaviours of previous cohorts. That approach breaks down precisely when we’re faced with new groups of students who have little in common with those who came before. So if algorithms are to be the enabler of personalised learning, we have to protect against their impulse to make aggregate judgements.
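As a toy illustration of that breakdown, with figures I've invented purely for this example, consider a model that sets its expectations for new students from the average progress rate of every previous cohort. Nothing in that average tells it when it is looking at a group whose starting point is unlike anything it has seen before.

```python
# Hypothetical sketch of an aggregate, cohort-based expectation.
# All names and numbers are invented for illustration.

# Observed weekly progress rates of previous cohorts.
previous_cohort_rates = {
    "cohort_2016": 1.0,
    "cohort_2017": 0.9,
    "cohort_2018": 1.1,
}

def expected_rate(cohort_rates: dict[str, float]) -> float:
    """Naive aggregate judgement: assume new students will progress at the
    average rate of everyone who came before."""
    return sum(cohort_rates.values()) / len(cohort_rates)

baseline = expected_rate(previous_cohort_rates)
print(f"Expected weekly progress: {baseline:.2f}")

# A new cohort, starting over four years behind grade level and learning in a
# completely different environment, progresses at half that rate. Without
# context the model can only read this as 'underperforming students', not as
# 'students whose circumstances the model has never seen before'.
observed_rate = 0.5 * baseline
print(f"Observed weekly progress: {observed_rate:.2f}")
```

The numbers are made up, but the shape of the problem is not: an aggregate prior carries no signal about the context it was built in, so someone has to notice when that context no longer applies.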

Those students in Kenya are now, five years later, learning at rates that are in line with our international standards. The credit doesn’t go to the algorithms, which, on their own, would probably have left students trapped in a vicious cycle of low achievement and low expectations. No, the credit goes to the human effort of engaging students and teachers, capturing their realities, feeding back to our developers and challenging them to continually refine and even uproot the very models that those algorithms are premised on.

Our philosophy at Whizz, and really our operational model, is that the only way to serve the individual needs of our users is to have proximity to them: to immerse ourselves in their environment. That’s why we always have people on the ground to extract the context behind our data and to sense-check the behaviours of our algorithms.

We’ve learned that the cutting edge of technology doesn’t always give users what they need. So while we use machine learning because it can make learning more efficient, we’ll gladly retreat to less fashionable rule-based models to ground those same algorithms in common sense. And while we recognise that adaptive, automated tutoring can massively reduce teacher workload, we’re adamant that teachers themselves must have the controls to direct, and even override, the decisions of our virtual tutor. Expert human judgement is our safeguard against the inherent blind spots of algorithms.
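In practice, that kind of grounding can be as simple as wrapping rule-based limits and a teacher override around whatever the model recommends. The sketch below is hypothetical; the function name, the grade-band rule and the `teacher_override` parameter are mine for illustration, not Whizz's API, but they show the shape of the safeguard.

```python
# Hypothetical sketch: rule-based common sense and a teacher override layered
# on top of a model's recommendation. Not Whizz's actual API.
from typing import Optional

def choose_next_topic(model_suggestion: str,
                      current_grade_band: int,
                      teacher_override: Optional[str] = None) -> str:
    """Pick the next topic, letting expert human judgement outrank the model."""
    # 1. A teacher's explicit decision always wins.
    if teacher_override is not None:
        return teacher_override

    # 2. Rule-based guardrail: don't let the model push content far below the
    #    student's current band just because the data looks discouraging.
    #    (The one-band window is an invented rule, purely for illustration.)
    allowed = {f"grade_{g}" for g in range(current_grade_band - 1,
                                           current_grade_band + 2)}
    if model_suggestion in allowed:
        return model_suggestion

    # 3. Otherwise fall back to grade-level work and leave the final call
    #    to a human reviewer.
    return f"grade_{current_grade_band}"

print(choose_next_topic("grade_2", current_grade_band=5))            # -> grade_5
print(choose_next_topic("grade_2", 5, teacher_override="grade_4"))   # -> grade_4
```

The design choice the sketch is meant to convey is the ordering: the human decision sits above the rule, and the rule sits above the model.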

These design choices are very much a reflection of our values. I think the biggest lie to come out of Silicon Valley is that technology is neutral. That’s a fiction, especially for technology products whose economic logic is premised on velocity, efficiency, scale and surveillance, which by default leaves space for bias and prejudice to creep in.

It doesn’t matter how much technical sophistication we pack into algorithms. When they’re left unchecked, they pose huge threats to our ideals of equity and inclusiveness. So I would suggest that EdTech needs instead to be developed on the values of community and context, transparency and privacy, and proximity. There’s only so much an algorithm ought to know about its users, and only so much it can infer about their learning potential. But that’s okay, because the most cutting-edge feature of all in EdTech is our own human judgement, so let’s not surrender it so willingly.