
  • There is a moment, the first time you use it, that feels like witnessing a miracle. You type a clumsy phrase – a lost city in the Amazon jungle, overgrown with impossible flowers, in the style of a 19th-century botanical illustration – and the machine obeys. In seconds, an image blooms on the screen, more perfect than you had imagined. The script you were struggling with writes itself. The code you were stuck on is untangled. The feeling is one of immense, frictionless power. But then, a quiet dissonance begins to creep in. You look at the image, and something is off in the shadows. You read the script, and a strange, hollow repetition echoes in its rhythm. You stare at the work, this polished and plausible artefact, and a chilling question surfaces: did I do that? This anxiety is not, as is often claimed, about telling “real” from “fake.” That is a shallow problem of verification.

    The true crisis of the generative age is a quiet, creeping atrophy of our capacity for judgment, the hollowing out of a uniquely human skill. This is not a threat to what we know, but to how we know. This essay is an argument that human judgment is a muscle, a form of skilled, practical wisdom that is built only through the friction of a resistant world. The seamless, effortless outputs of generative AI are the cushioned walls of a sensory deprivation tank for the human will, and by embracing them, we are choosing to forget how to walk.

    I. Judgment Forged in Resistance

    To understand the limb we are so eager to place in a cast, we must first understand how it works. What is this faculty we call judgment? It is not merely information. You can read every book on naval architecture—memorize the tensile strength of steel, the principles of hydrodynamics, the history of shipbuilding—and you will possess a vast store of propositional knowledge, of Ryle’s ‘knowing that.’ But this will not, by itself, grant you the ability to guide a ship through a storm. That ability, that intuitive, adaptive, and intelligent practice, is a form of ‘knowing how.’ It is a skill woven into the mind through repetition, failure, and correction.

    You know how to ride a bicycle not because you can recite all the physics, but because you carry the memory of falling in your bones. Judgment is the highest order of this skill. It is the seasoned physician’s ability to diagnose a rare disease from a constellation of ambiguous symptoms, the jurist’s capacity to apply legal precedent to a case unlike any seen before, the artist’s eye for when a single brushstroke is needed to bring a canvas to life. In every instance, a body of ‘knowing that’ is present, but the judgment itself is in the masterful, uncodifiable application: the how.

    So, if judgment is a skill, how do we learn it? Where is the gymnasium for the soul? I found my answer in the frustrating glow of a computer screen late at night, years ago, trying to learn how to code. No textbook could teach me the intuitive feel for why a program was breaking. That knowledge came only from the agonizing, formative hours spent hunting for a single misplaced semicolon that was crashing the entire system. It was the stubborn, infuriating resistance of the code itself, its absolute refusal to bend to my intentions, that taught me the deep logic of the machine. The triumph was not in getting the program to run, but in the person I had to become to make it happen: more patient, more meticulous, more attuned to the subtle interplay of cause and effect. This is the pedagogy of resistance.

    Matthew B. Crawford argues that this struggle against a non-compliant reality is the primary mechanism for cultivating intelligence. The novice carpenter learns the soul of wood not from a manual, but from the visceral feedback of the grain splitting under a poorly aimed chisel. The scientist learns the shape of the truth not from a hypothesis, but from the data that stubbornly refuses to fit. This struggle is the entire point. It is the whetstone that sharpens the blade of our discernment. Generative AI, in its very essence, is engineered to eliminate this sacred resistance. It offers a world without grain, a world where the wood never splits, where the code never breaks, where a plausible answer is always a click away. It presents itself as a liberator from the struggle, but what it is actually offering is liberation from the self we might have become.

    This is where we are met with the most seductive argument in favor of this new world: that AI just reallocates effort from low-value drudgery to high-value creativity. Let the machine summarize the tedious articles, write the boilerplate code, check the grammar, so that we, the human agents, can focus on grand strategy and visionary synthesis. This is a powerful and comforting lie. It rests on a fatal misunderstanding of how mastery is built. It imagines that a complex skill can be neatly disassembled, its “drudgery” outsourced, leaving behind a pure, refined core of “creativity.”

    But in any meaningful craft, the two are inextricably linked. The “tedious” process of a historian meticulously cross-referencing primary sources is the very activity through which that judgment is forged. The “drudgery” of a programmer debugging line-by-line is not an obstacle to be removed from the act of engineering; it is the primary means by which they develop a deep, intuitive feel for the system. By outsourcing the parts of a process that demand sustained, focused, punishing effort, we remove the formative practice. We are repositioned from practitioners, our hands dirty with the substance of our work, to spectators, or at best, managers of a black box. The effort is indeed reallocated, but it is a reallocation away from the foundational struggles that build ‘knowing how,’ toward the superficial curation of a machine’s vast, un-owned repository of ‘knowing that.’

    II. The Ghost in My Machine

    The precise mechanism of this deskilling can be understood through C. Thi Nguyen’s haunting concept of “value capture.” Nguyen describes how our engagement with a rich, complex, and often ineffable value can be hijacked by a simplified, legible, and quantifiable proxy. The deep, intrinsic value of “getting an education,” for instance, is captured by the simple metric of a grade point average. Our behavior then shifts to optimize for the proxy (cramming for the test) rather than engaging with the original, richer value of genuine understanding. Generative AI is the most powerful engine for value capture ever invented. It takes the profound, intrinsically valuable, and deeply personal process of “developing judgment on a topic” (a ‘knowing how’) and replaces it with a simpler, clearer, more efficient proxy goal: “producing a document that has the formal properties of a well-judged analysis.” Our values are captured by the affordances of the tool. The goal is no longer to become a better thinker, but to produce a better-looking thought-product, instantly.

    This initiates a subtle but devastating alienation from our own minds. When I use a generative tool to help write an argument, an ambiguity infects the process. Is this sentence, this turn of phrase, this intellectual connection, genuinely my own, birthed from my unique history and effort? Or is it a clever, statistically probable echo from the machine’s training data, a ghost from the vast digital graveyard of other people’s thoughts? This ambiguity corrodes the sense of earned intellectual ownership that is the primary reward for difficult cognitive work. It robs us of the irreplaceable feeling of making something with our own minds. The skill we begin to cultivate is the thin, procedural ‘knowing how’ of manipulating a particular piece of software. We become expert prompt engineers instead of expert thinkers.

    This leads directly to the functionalist’s cold, soul-crushing objection: who cares? If the final output is indistinguishable from, or even superior to, what a person could produce through struggle, what is the loss? If a student produces a more coherent essay on Kant using an AI assistant, isn’t that a better educational outcome? This argument sees all human activity as a problem of production. It views the human mind as a clumsy, inefficient factory, and the essay as just another product on the assembly line. It is a logic that is blind to formation.

    The entire purpose of asking a student to write that essay is not to add another mediocre document to the world’s infinite library of Kantian commentary. It is to force that student through the crucible of a cognitive process: the struggle of research, the frustration of synthesis, the terror of the blank page, and the slow, dawning clarity of articulation. The essay is merely the scar tissue from that formative wound. To outsource the process is to abandon the entire pedagogical and humanistic project. To value the AI-generated text as equal to the human-authored one is to commit a category error of immense proportions. It is to say that a perfectly replicated, 3D-printed Stradivarius has the same value as the original, ignoring the history, the craft, the soul embedded in the wood by a human hand. One is an artifact of pattern-matching; the other is an index of a mind’s intentional, effortful, and beautiful struggle to bring order to the world.

    III. The Politics of Passivity

    This degradation of individual judgment is a political catastrophe in the making. The subject being cultivated in this new environment of frictionless, plausible outputs is a subject uniquely ill-equipped for the brutal, practical demands of democratic citizenship. A healthy democracy is a messy, grinding, collective practice of self-governance. And citizenship, at its core, is a form of ‘knowing how.’ It is the practiced skill of listening to those with whom you passionately disagree, the hard-won ability to evaluate the credibility of a source, the wisdom to weigh competing arguments, and the fortitude to hold power accountable. These are not innate virtues; they are skills that atrophy from disuse. The environment of generative AI pours a corrosive acid on the very foundations of this practice.

    First, it creates an unsustainable cognitive burden that leads to an epidemic of exhaustion. When any email, news report, social media post, or video can be a frictionless, photorealistic fabrication, the task of verification becomes a constant, draining, full-time job. The rational human response is not to become a hyper-vigilant forensic detective, but to simply disengage. We retreat into a shell of generalized cynicism, a state of learned helplessness where the effort required to distinguish truth from fiction becomes too costly to bear. This is the death of public reason. A functioning democracy depends on citizens exercising the skill of distinguishing serious argument from bad-faith noise. An environment that makes this practice prohibitively difficult dissolves the shared ground of reality necessary for debate, leaving only the raw, tribal allegiance of the informationally shell-shocked.

    Second, the constant habituation to frictionless, optimized, and personalized solutions cultivates a deep and dangerous intolerance for the messy reality of democratic processes. Democracy is, by design, full of friction. Its core components – deliberation, checks and balances, compromise, procedural justice – are intended to be slow, difficult, and frustrating. They are features, not bugs. They are the institutional brakes designed to prevent a society from careening off a cliff of popular passion. But a citizenry being technologically conditioned, like a rat in a cage, to expect instant, seamless solutions will inevitably view the hard work of democracy as an intolerable system error. This procedural intolerance creates fertile ground for the authoritarian strongman who promises to sweep away the gridlock and the endless debate to just “get things done.” The difficult ‘knowing how’ of politics is replaced with the seductive, smooth efficiency of executive command, and we cheer for our own disenfranchisement because it feels so much more convenient.

    This is where the final objection from the techno-optimists appears: that AI will not create passive subjects but will empower a new generation of hyper-informed citizens, armed with tools to analyze legislation, fact-check politicians in real time, and level the informational playing field. This vision is a dangerous illusion. It ignores the brutal logic of the political economy that drives this technology. While such empowering tools might exist in niche corners, the dominant commercial applications of generative AI are overwhelmingly engineered for passive consumption.

    The business models of Silicon Valley depend on capturing and holding our attention, and that is most easily achieved through frictionless, entertaining, validating, and pacifying content. The informational weather system will be structured to foster passivity. More importantly, this vision makes the cardinal error of confusing access to information with the possession of skill. You can give a person a library of every great book on surgery, but that mountain of ‘knowing that’ will not grant them the ‘knowing how’ to perform an appendectomy. The skill of political judgment, like surgery, requires practice. An environment that systematically disincentivizes this difficult practice by offering easy, plausible answers will cause that skill to wither on the vine, no matter how much information is theoretically at our fingertips.

    Conclusion

    The crisis of our generative age, then, is not that we will be tricked by fakes, but that we will willingly de-skill ourselves into obsolescence. We are becoming like machines: processors of input, generators of output, with no struggle, no soul, and no judgment in between. The threat is the erosion of the human capacity to skilfully and intentionally engage with a resistant world. I have argued that this occurs through a systematic substitution, a form of value capture where the difficult, formative process of building ‘knowing how’ is replaced by the frictionless production of artefacts that merely simulate ‘knowing that.’

    What, then, must we do? The resistance cannot be primarily technological; it must be ethical, pedagogical, and deeply personal. It is the conscious, deliberate, and sometimes painful choice to re-valorize difficulty. It is the decision to seek out friction, to embrace the resistant mediums that forge our skills, whether that medium is a block of wood, a line of code, a difficult text, or a conversation with an adversary. In an age offering the seductive ease of infinite, disembodied knowledge, the most vital and rebellious human act is to insist on the difficult, embodied, and effortful practice that constitutes real understanding. To surrender that struggle is to surrender the very thing that makes us agents in our own lives.

    References

    Crawford, M. B. (2009). Shop Class as Soulcraft: An Inquiry into the Value of Work. Penguin Press.

    Nguyen, C. T. (2020). Games: Agency as Art. Oxford University Press.

    Ryle, G. (1949). The Concept of Mind. Hutchinson.

  • There is a very particular kind of quiet that exists only in an Oxford tutorial room. It’s a dense, expensive quiet, woven from the silence of centuries, the rustle of turning pages, and the soft sigh of an old sofa as you sink into it. The world outside – the frantic clang of a Deliveroo cyclist’s bike, the insistent buzz of a phone with a message from home – recedes, muffled by thick stone and leaded glass. For two hours a week, for four years, this room, or one very much like it, becomes your universe. And in this universe, you are given the most peculiar and profound luxury: the permission to think about thinking itself.

    My tutors, I must say, were wonderful. They were sharp, patient, and deeply empathetic people who treated my often-fumbling thoughts with a seriousness I had rarely encountered before. They nurtured a genuine love for the intellectual chase. But love, and even gratitude, does not blind you to absurdity. And there is a profound, almost hallucinatory absurdity in sitting on a plush sofa, a cup of coffee growing cold beside you, discussing whether moral facts exist in the fabric of the universe, when back home both of your parents are really grafting. 

    This is the central paradox of studying philosophy at a place like Oxford. It is both the most rigorous intellectual training imaginable and, simultaneously, an elaborate, beautiful, and deeply insulated fantasy. It’s a four-year step into what Adam Curtis calls the “Land of Make Believe,” a place where powerful, elegant stories are constructed to explain the world, but which feel increasingly flimsy and shifty when held up against the messy, contradictory, and often brutal texture of lived experience.

    Take metaethics. Here, you are invited to climb to the highest possible peak of abstraction and look down upon the entire landscape of morality. You read Mackie’s Error Theory and entertain the notion that every moral statement you’ve ever made – that kindness is good, that cruelty is bad – is systematically false. There are no objective moral properties in the world, the theory goes; we are merely projecting our feelings onto a silent, value-free universe.

    You debate these ideas. You write essays on them, honing your arguments, dissecting the logic. It’s a fantastic mental gym. But the whole time, a second, more urgent reality is running on a parallel track. I would sit there, parsing the finer points of expressivism, and my mind would flash to my mum. To the years she spent not just surviving domestic abuse, but then going on to run a Women’s Aid, a place built on the unwavering, non-negotiable premise that hurting a woman is *wrong*. Not a subjective preference, not a mere emotive utterance, but a deep, foundational, and objective truth that motivated every action and every funding application. To sit in a tutorial and treat that conviction as a philosophical token to be coolly analysed felt like betrayal. A weird, intellectualised violence against the very real struggles that made my presence in that room possible. The privilege wasn’t just being at Oxford; it was the privilege of being able to afford, even for an hour, the belief that morality might just be a fiction.

    Then there was Knowledge and Reality, the chief source of Oxonian philosophical vertigo. You grapple with the brain-in-a-vat hypothesis. How do you *know* you’re not just a disembodied brain being fed electrical signals to simulate this very experience? The existential dread it’s supposed to induce feels academic, bloodless. My problem was never a lack of belief in the external world. The external world was my most pressing and undeniable reality. It was the warehouse where my stepdad sacrificed his own ambitions, a place of tangible, back-aching solidity. It was the cold dread that coiled in my stomach during my second year, when the reality of being stalked by my birth father was more terrifying and inescapable than any philosophical demon.

    Perhaps nowhere was this feeling of unreality more acute, more “shifty,” than in the discussions around the Ethics of AI. This should have been the nexus, the point where my two disciplines met. But the conversation rarely touched the ground. Instead, we floated in the stratosphere of longtermism and existential risk. We discussed the alignment problem: how to ensure a future superintelligence shares human values. We theorised about digital consciousness and the ethics of creating synthetic beings that could suffer. Cosmic and apocalyptic.

    And here, the fantasy revealed another layer of its own precariousness. I would sit there, a scholarship kid working to pay bills, being taught by a brilliant early-career tutor who could dance circles around complex arguments. And I’d know, because we all knew, that this magnificent mind was likely on a short-term, insecure contract. They were a part of the academic precariat, inhabiting the same hallowed spaces as the tenured dons but without any of the security.

    This is the shifty genius of the system. The Land of Make Believe isn’t just for the students; the staff are often forced to perform within it, too. They must project an aura of timeless, unhurried scholarship, of pure intellectual pursuit, while navigating the brutal realities of the modern academic job market. The institution itself runs on this disparity: its image built on an ancient model of stability and contemplation, its engine increasingly powered by insecure labour. We were two people, on opposite sides of the desk but on the same plush sofa, suspended in a bubble of abstract thought, both tethered to precarious economic realities that the content of our discussion was designed to ignore. It all weirdly felt like a grand act of misdirection from the small, personal, and systemic problems happening right now, even within the walls of the university itself.

    This is the essence of the Curtis critique. Powerful institutions create simplified, coherent narratives to manage a chaotic world. These narratives are often beautiful and compelling, but they fail to capture the jagged edges of individual experience. For four years, I was steeped in one of the most powerful narrative-creation engines on the planet. Philosophy, as taught at Oxford, is the ultimate training in building and dismantling these elegant systems of thought. You learn to construct a flawless argument, then to pivot and dismantle it with a critique. You build a castle of logic, then you learn precisely where the foundations are weakest.

    It’s an incredible skill. But you begin to realise that you’re living inside the very system you’re studying. The entire experience – the sandstone colleges, the formal dinners, the hushed libraries, the abstract debates guided by brilliant minds on temporary contracts – is a meticulously crafted story. It’s a story about the pure pursuit of knowledge, a story that says, “Here, in this protected space, we can solve the puzzle of existence.”

    For a while, you believe it. You have to. You sink into the sofa and let the Land of Make Believe wash over you, because the alternative is to be constantly, painfully aware of the chasm between this world and the one you came from. But the real world has a habit of intruding. A phone call. A bank balance. A memory. And the shifty feeling returns. The feeling that the map you are being taught to draw does not match the territory you actually have to navigate.

    And yet, here is the peculiar conclusion. I am deeply, profoundly grateful for my time in the Land of Make Believe. It was an illusion, but the gifts it bestowed are real. I am a product of it. The ability to write this, to deconstruct these narratives, to see the hidden assumptions behind a statement, to question the structures of power that present themselves as natural and inevitable. These skills were forged on those old sofas, honed in those dense, expensive silences.

    The fantasy was the training ground. It gave me the language and the tools to articulate the dissonance I felt. It taught me how to dismantle the very arguments that are used to justify the inequalities that shaped my life. It gave me a framework for understanding the power of stories, and the danger of believing them too readily. You cannot effectively critique a system without first understanding its logic, its language, its deepest-held beliefs. Oxford taught me that. My life teaches me why it matters. 

    Leaving has been like waking from a strange and vivid dream, but finding my pockets full of strange and useful tools from the dream-world. The gratitude I feel is not for the illusion, but for the training I received inside it. The great challenge is to hold both realities at once: to appreciate the profound privilege of the fantasy while honouring the brutal reality it obscures. The real work begins now, trying to use the tools forged in that quiet, panelled room to make a small dent in the loud, chaotic world outside. It’s about taking the lessons from the Land of Make Believe and trying, with clear eyes and a full heart, to apply them to the land of the real.

  • You arrive in Oxford, and the first thing that hits you isn’t the beauty, exactly. It’s the weight. The sheer, crushing tonnage of history, privilege, and expectation bearing down on honeyed stone. It’s a city built on dreaming, but whose dreams, you start to wonder? And paid for by whom? For someone like me, fresh (or not so fresh) from Bradford, carrying the psychic clutter of a life lived closer to the bone, it feels less like arriving at the seat of learning and more like infiltrating an exquisitely preserved, impossibly intricate museum where the exhibits occasionally condescend to speak.

    My Bradford isn’t the Brontë moors or the curry capital marketing spiel. It’s the grit under your fingernails that never quite washes off. It’s my mum, a teaching assistant who held kids’ hands sticky with paint while navigating the wreckage of domestic abuse, who then somehow found the strength not just to survive but to lead Bradford Women’s Aid, fighting battles far more immediate and less theoretical than any debated in Oxford’s Union. It’s my stepdad, who shelved dreams of spreadsheets and balances – the quiet dignity of an accountant – for the relentless rhythm of a warehouse floor, the sacrifice etched into the lines around his eyes, all so his stepkids could get citizenship, could get a chance, the kind of chance that felt like a lottery ticket cashed in just by getting through Oxford’s gates. It’s the phone calls home, the tight knot in my stomach when my dad – the one who actually raised me – got laid off, and suddenly my Master’s year isn’t just about deciphering Kripke or debugging code, it’s about clocking 15, sometimes 20, hours a week at a tech job after a summer internship, remotely dialling into meetings while trying to ignore the simmering anxiety about whether the bills got paid back home. It’s the shadow of my birth father, a stalker whose unwanted presence bled into the supposed sanctuary of my degree, a private horror played out against the backdrop of tutorials and formals. It’s the quiet desperation heard through the phone lines late at night while coordinating Nightline, the raw, unvarnished pain of people cracking under pressures this city seems designed to both generate and ignore.

    You carry this baggage – not with pride, not with shame, just as a matter of fact, like another set of textbooks – into the rarefied air. And you look around, trying to find your bearings, trying to find allies, trying to make sense of how this place proposes to engage with the world you know, the world that feels simultaneously a million miles away and breathing down your neck. And broadly, you find two distinct tribes attempting to chart a course towards a ‘better future’, both convinced they hold the map, both operating within the peculiar ecosystem of Oxford privilege, and both, from my vantage point, profoundly, maddeningly, flawed.

    The Earnest Vanguard: Socialists, Activists, and the Well-Meaning Throng

    First, there are the socialists, the activists, the constellation of groups orbiting the idea of social justice. Oxford Action for Palestine, XR rebels gluing themselves to things, the Labour Club, the anti-casualisation campaigners, the 93% Club trying to carve out space, Class Act fighting the good fight against unspoken assumptions. On paper, these are my people. The language is familiar – inequality, exploitation, solidarity, the need for systemic change. Their hearts, overwhelmingly, seem to be in the right place. They want a fairer world. They are angry about injustice, often articulate, sometimes even inspiring. You go to the meetings, initially with a sense of relief – ah, here we are.

    But then the unease creeps in. It’s subtle at first, then less so. It’s the whiff of performance, the stylised rebellion. It’s the meetings held in ancient college rooms, wood-panelled affairs where Marx is debated with the same intellectual fervour as medieval poetry, discussions often led by voices honed by elocution lessons, by people whose understanding of the ‘working class’ feels… curated. It’s theoretical, abstracted. Poverty is a dataset, oppression a structural diagram. The visceral, grinding reality of choosing between heating and eating, the gnawing fear of the bailiffs, the sheer, soul-crushing exhaustion of working two jobs while juggling childcare – these things are understood intellectually, perhaps empathetically, but not, it often feels, known.

    You sit there, listening to impassioned speeches about divestment or the evils of a particular corporation, and your mind drifts to the practicalities. Fine, divest. But what happens tomorrow to the families whose pensions are tied up in those funds? What’s the plan, the messy, complicated, real-world plan? You hear debates about the precise ideological shade of red one should adhere to, the endless splintering into factions, the purity tests, the denunciations. It feels like rearranging deckchairs on the Titanic, while back home, people are already in the freezing water.

    There’s a peculiar kind of class tourism that sometimes permeates these spaces. The adoption of working-class aesthetics or argot by people whose safety net is woven from inherited wealth or connections. The genuine shock on someone’s face when you explain you can’t make the protest because you have to work for a corporation, not as a political statement, but because rent needs paying, now. The well-meaning pity that feels more alienating than outright hostility.

    I coordinated Nightline. The calls weren’t about the nuances of intersectional theory or the legacy of colonialism, important though those are. They were about loneliness. Crippling anxiety. The terror of failure. Suicidal thoughts. Raw, immediate human suffering, often exacerbated by the pressures of this very institution. And you’d finish a shift, drained and heavy, and walk past posters urging attendance at a rally about something happening thousands of miles away, or a complex geopolitical issue distilled into a snappy, righteous slogan. The disconnect felt jarring, almost violent.

    The focus often seems to be on symbolic victories within the Oxford bubble. Toppling a statue, changing a name, getting the university to issue a statement. These things might matter, they might be necessary steps, but they feel profoundly insufficient when stacked against the scale of the problems outside these walls. It’s tilting at windmills made of sandstone. There’s a righteousness, an unshakeable certainty, that can feel brittle. It assumes a moral clarity that the messy compromises of life, the kind my mum and stepdad navigated daily, rarely afford. They talk of solidarity, but the lived experience often feels leagues apart. You end up feeling like a specimen, the ‘working-class voice’ wheeled out to add authenticity, while the fundamental dynamics remain unchanged. They are fighting for people like me, ostensibly, but rarely with us in a way that feels truly level, truly comprehending the immediate, unglamorous struggles. It’s the difference between reading about hunger and actually being hungry.

    This isn’t to dismiss their efforts entirely. Groups like Class Act and the 93% Club are vital, carving out spaces for students who feel perpetually out of sync. They articulate the microaggressions, the financial barriers, the cultural clashes that are exhausting and demoralising. They are doing necessary work. But their very existence highlights the absurdity of the situation – needing support groups simply to navigate an institution that claims to be a meritocracy. And even within these groups, the focus can sometimes feel internal – fixing Oxford, making this space more tolerable – rather than fundamentally challenging the structures outside Oxford that create the inequalities in the first place. The energy is immense, the passion undeniable. But it often feels like it’s spinning its wheels in the beautiful, ancient mud of Oxford itself, generating heat and noise but little traction on the real-world terrain where people like my family live.

    The Rational Redeemers: Effective Altruism and AI Safety

    Then there’s the other tribe, often overlapping with the Comp Sci corridors I haunt for my degree and my part-time job. The Effective Altruists. This is a different beast altogether, though no less convinced of its own righteousness. If the activist left operates on passionate conviction and historical critique, this world runs on cold, hard rationality, on utilitarian calculus, on the hypnotic allure of saving billions of future lives.

    You walk into their talks, often funded by tech billionaires or organisations spun off from that world, and the atmosphere is… clean. Cerebral. Lots of bright young men (and some women), overwhelmingly STEM-inclined, talking about existential risk, expected value calculations, longtermism. The problems they focus on are vast, abstract, often terrifying: rogue artificial intelligence wiping out humanity, catastrophic pandemics, the heat death of the universe. The scale is cosmic, the timeframe geological.

    There’s an undeniable intellectual appeal. It feels serious, grown-up, untainted by messy emotions or ideological squabbles (or so it presents itself). They wield statistics like weapons, crafting arguments of elegant, ruthless logic. Why donate to a local food bank (low impact, addresses symptoms not causes) when you could fund mosquito nets in Africa (high impact, saves quantifiable lives per pound spent)? Why worry about climate change refugees now when an unaligned superintelligence could theoretically negate all future value, forever?

    As someone juggling code and Kant, I understand the attraction. There’s a seductive tidiness to it. But sitting there, listening to discussions about maximising Quality-Adjusted Life Years (QALYs) across millennia, the dissonance becomes deafening. My phone buzzes. It’s a text about the bills back home. Another unexpected expense. Another reason the hours I log for that tech company, a cog in the very machine some EAs might analyse for its potential future risks, are utterly non-negotiable.

    The sheer abstraction of it feels like a different kind of privilege. The ability to focus on hypothetical future apocalypses while ignoring the rolling, present-day apocalypse of poverty, precarity, and despair that grinds people down right now. It’s a luxury afforded by distance, by security. It’s easy to calculate expected value when your own basic needs are met, when you aren’t haunted by the immediate spectre of eviction or the memory of a birth father’s unwanted attention making your own city feel unsafe.

    There’s a subtle, sometimes overt, dismissal of other forms of caring, other forms of struggle. The work my mum did at Women’s Aid? Probably not ‘cost-effective’ by EA metrics. Hard to quantify the impact of offering a cup of tea and a safe space to a terrified woman. My stepdad’s sacrifice? Meaningless in the grand utilitarian calculus. The small acts of kindness, the community solidarity, the messy business of loving and supporting people through immediate hardship – these things don’t fit neatly into the spreadsheets. They are deemed inefficient, perhaps even irrational.

    And the AI Safety focus sometimes feels misplaced. Here we are, wrestling with the potential dangers of powerful algorithms, while the current impacts of technology – job automation displacing workers like my dad, algorithmic bias reinforcing existing inequalities, the gig economy creating new forms of exploitation – are often treated as secondary concerns, mere bumps on the road to Artificial General Intelligence. We’re discussing the ethics of sentient code while people are struggling to afford the broadband needed to apply for jobs online.

    There’s a cultural element too. It can feel like a club, exclusive and slightly self-congratulatory. The jargon, the shared texts, the implicit hierarchy based on who can perform the most rigorous rationalist dissection. It often feels deeply disconnected from the texture of ordinary life, from the emotional landscape where most human decisions are actually made. It’s a view from 30,000 feet, meticulously charting the terrain below while oblivious to the people struggling in the mud. The focus on long-term, potentially infinite, future lives can feel like an excuse to disengage from the difficult, intractable problems of the present. It’s cleaner, intellectually stimulating, and carries the flattering implication that you, with your superior reasoning, are one of the vital few safeguarding humanity’s entire future. It’s a powerful narrative, especially potent in a place like Oxford, which has always fancied itself as shaping the destiny of the world.

    Caught Between Two Castles

    So here I stand, a Bradfordian socialist by instinct and upbringing, doing philosophy and computer science at Oxford, working a tech job to keep the lights on back home, haunted by personal ghosts, and watching these two distinct currents of world-saving swirl around me. And the overwhelming feeling is one of profound disillusionment. Not with the desire to make things better, but with the ways Oxford attempts it.

    Both the activist left and the EA/AI Safety bubble, in their different ways, seem to suffer from the Oxford condition: a detachment from the messy, compromised, often unglamorous reality lived by the vast majority. One side romanticises or theorises the struggle from a comfortable distance, often getting lost in ideological purity spirals or symbolic gestures. The other attempts to transcend the mess entirely, applying abstract logic to cosmic scales, potentially overlooking the immediate suffering right under its nose. Both seem convinced that the solutions lie in intellectual frameworks conceived within these privileged walls, whether it’s the correct reading of Marx or the most efficient QALY calculation.

    Orwell, in The Road to Wigan Pier, turned his unflinching gaze on the middle-class socialists of his day, puncturing their pretensions, their detachment, their peculiar habits that alienated the very people they claimed to champion. He saw the gap between the Hampstead intellectual advocating revolution and the miner coughing his lungs out underground. I see a similar gap today. Between the seminar room debate on post-capitalist futures and the frantic juggling of bills and shifts. Between the white paper on existential risk and the quiet desperation in a Nightline caller’s voice.

    The experience feels like being perpetually bilingual, code-switching between the language of survival and the languages of Oxford theory. You nod along in the AI Safety pizza party, understanding the logic gates and the probability curves, while mentally calculating whether your paycheque will cover the bill you were texted about earlier. You stand at the rally, chanting the slogans, while feeling a pang of guilt for not being able to afford the train fare home for the weekend. You try to explain the sheer, bone-deep weariness of it all, the constant low-level hum of anxiety, the weight of family responsibility, and you see the flicker of incomprehension in well-meaning eyes.

    Perhaps this is the inevitable fate of the infiltrator. You see the machine from the inside, but you still feel the grinding gears of the world you came from. You appreciate the intellectual firepower, the genuine desire for change in many quarters, but you can’t shake the feeling that it’s all happening in a hermetically sealed environment, disconnected from the urgent, messy reality of places like Bradford.

    My socialism hasn’t evaporated. If anything, the proximity to extreme wealth and the intellectual justifications for inequality have sharpened it. But my faith in these particular expressions of change-making, as incubated in the Oxford pressure cooker, has been sorely tested. The road from Bradford to Oxford is paved with sacrifices, anxieties, and contradictions. The road out, the road towards genuine, grounded change that acknowledges the immediate, visceral needs of people alongside the grander visions, feels harder to find. It’s certainly not clearly signposted from the dreaming spires. It feels like it needs to be built from different materials altogether, starting not with theory or calculus, but with the foundations of lived experience, with the grit and the grief and the stubborn, unglamorous hope forged in the places Oxford prefers to study rather than truly understand. The weight of this place is immense, but the weight of reality back home is heavier still. And finding a way to reconcile the two, to make one truly serve the other, feels like the hardest problem of all.