Eurasia Review Interviews: Adrian Monck on AI, Journalism and the Erosion of Shared Reality
A conversation with Adrian Monck, Editor of 7 Things and former Managing Director and Managing Board Member at the World Economic Forum.
14 May 2026
By Aritra Banerjee
The crisis in today’s information environment goes beyond misinformation, disinformation and propaganda. As artificial intelligence (AI) systems increasingly shape visibility, interpretation and public discourse, the deeper challenge may lie in the erosion of shared reality itself.
Questions surrounding trust, legitimacy, institutional authority and the boundaries of public consensus are becoming more central as synthetic content, algorithmic amplification and AI-mediated systems reshape how societies process information and construct meaning.
At the same time, traditional mechanisms that once mediated public understanding – legacy journalism, editorial gatekeeping, academic expertise and institutional authority – are increasingly under strain. In their place has emerged a far more fragmented and accelerated information ecosystem, where influence is shaped not only by states and media organisations but also by platforms, recommendation systems, private technology firms and increasingly autonomous AI-driven processes.
Adrian Monck has worked across journalism, academia, institutional communications and geopolitical analysis. From reporting on conflict and global events to helping shape public engagement and communications at the World Economic Forum, and now through his newsletter 7 Things, Monck has witnessed the evolution of information from the broadcast era to increasingly AI-mediated environments.
In this interview with Eurasia Review, Monck discusses the collapse of traditional gatekeeping, synthetic consensus, AI-driven influence systems, institutional legitimacy, cognitive security, and the growing contest over who – or what – shapes public understanding today.
Q1. You have worked across several very different versions of the information ecosystem – broadcast journalism, conflict reporting, journalism education, global institutional communications, and now 7 Things. Looking back across that arc, what has changed most in the way societies understand world events?
Edward Watts’ The Final Pagan Generation tells the story of Rome’s liberal patrician elite as their world collapsed. They faced a movement of intolerant, populist upstarts disrupting their traditional polytheism and its rites.
By the end, Rome’s gods and temples and rituals had all been swept away. Christianity prevailed. The empire’s relaxed approach to worship gave way to heresy and schism and holy war. Christianity became the prism through which centuries of Western thought were made to pass. Villages and towns still paraded their idols, but now they were saints. There was some subsumed continuity, but no one today worships Jupiter or Mithras. These gods are dead.
The elites themselves adapted. Elites do. Their younger sons became bishops, popes and saints. Emperors became holy and Roman.
I feel as if the same thing is happening to the old establishment of the West. A mass erasure of an enlightenment liberalism that lasted centuries – a quiet intellectual faith in progress and human improvement that linked hands with political complacency and refused to overturn or abandon the status quo.
The people who embodied that for me were my fellow journalists in the old Western media elite. The best of them were sympathetic, smart and well-intentioned. But their abiding faith was in the power of information to deliver a kind of moral transformation. That faith led them to extraordinary, award-winning reporting. But it was like pouring water onto stone. Nothing could grow from it.
Reporting from Bosnia in the early nineties, I thought I was serving a single national audience that would absorb the reporting and draw broadly the same conclusions I drew.
What actually happened was that coverage of Srebrenica, the siege of Sarajevo and the camps did not produce, among some young British Muslims, a desire to lobby peacefully for intervention.
Instead, it convinced them that Europe was apathetic in the face of their co-religionists being massacred. It didn’t matter that Bosnia’s Muslims were European converts, with a fondness for beer and a relaxed – almost Anglican – attitude to religious practice. Same footage, different audiences, different lessons. I thought we were stirring a national conversation.
I had been raised to believe that my society's defining division was class, and that its common history might be an instrument for a shared re-imagining of the future by shaping how we told the present (yes, the curse of radical journalism).
The myths of national identity and popular history we relied on to tell stories had already collapsed – and we hadn’t noticed we were trapped inside them.
Britain didn’t dwell on the cruelties of empire, the humiliations of post-colonialism, or the brutality of its own industrial and rural past.
We were producing the news from our privileged pulpit and watching it travel into communities that advertisers did not care about. We gave ourselves plaudits for the results.
In the meantime, we missed what executives at multinational companies could see, and what anyone who travelled and observed could pick up on: the hollowing out of the Western worldview. If you look at successful popular histories like Peter Frankopan's and William Dalrymple's, they re-centre the world east. Europe – the West – is once more a patchwork archipelago on the far tip of Asia.
The interpreters of global events I am most interested in now are those connected to those other emergent places.
Q2. 7 Things is built around separating signal from noise. In an age where not only content but also engagement, consensus and public reaction can be generated or distorted, what do you now regard as a reliable signal?
The discipline I try to apply at 7 Things is to privilege the signals of open source intelligence (OSINT). The things that survive scepticism: customs data, ship movements, satellite imagery, regulatory filings, court documents, central bank balance sheets, money in motion.
And to interpret it, I try to bring both an understanding of how organisations form views and cultures, and how things in the real world work. That cover-ups incriminate more often than crimes. That conspiracies do occur, but more often cock-ups. That silence is easier to buy than truth is to bend. That the most awful things happen for the most banal of reasons.
Behavioural data over declarative data. Which is a fancy way of saying: watch the gap between what you say and what you do.
Q3. You have written and spoken for many years about trust in media and public understanding. Do you think the central problem today is misinformation, disinformation and malinformation (MDM) itself, or something deeper – a loss of shared interpretation and trust in authority?
The ‘crisis of trust’ framing has always been partly a confidence trick. What surveys actually show, when you look at them honestly, is that people will tell pollsters they don’t trust the media and then spend the rest of the day consuming exactly what they claim to despise.
The behavioural evidence and the declarative evidence point in opposite directions. As Onora O’Neill argued many years ago, we may not have a crisis of trust so much as a culture of suspicion – and the two are very different things.
So MDM is a symptom, not the disease. The deeper thing is structural. We built an information economy where the producers – once newspapers and broadcasters, now platforms – are paid for attention, not accuracy.
In Can You Trust the Media? I argued that editors and proprietors want your time, attention and money. Like medieval preachers deploying scripture, editors treat ‘the truth’ as a fabric to be woven, stretched and dyed to fit the day.
Platforms took Hunter S. Thompson's Fear and Loathing and the Daily Mail's 'daily hate' emotional logic, then removed the editorial friction that occasionally slowed it down and the professional gatekeeping that occasionally kept it honest. What's changed since I wrote the book is the scale, the speed and the near-total absence of liability.
The mainstream press has spent years complaining about misinformation as if the problem were external to journalism – bad actors, foreign agents, conspiracy theorists.
But the public correctly noticed that legacy institutions lied too, repeatedly, and rarely faced consequences. And that they protected their own. The Wall Street Journal is a brilliant newspaper, but its reporters will never investigate their owners.
Britain’s Hutton inquiry, America’s Iraq WMD coverage, the early Covid interventions: the issue isn’t that the public was misled by Russians. The authority-conferring process – the way a contested claim used to get settled through an editorial chain, peer review, regulatory findings, or the courts – has been visibly broken by the institutions that oversaw it.
The liberal answer to this is to fund accountability journalism harder and trust that an informed public will do the rest. If you want a clear example, look at the Baltimore Banner – a non-profit bankrolled by a Democratic mega-donor. Partnered with the New York Times, it won a Pulitzer for an investigation into the city's fentanyl crisis and its toll on older Black men. The reporting is truly excellent – behavioural data, structural causation, a statistical model the Banner shared with other newsrooms so its method could be checked and extended.
It is also, politically, a desperate restatement of the old faith – that the problem is informational, that neutral reporting plus civic-minded money equals accountability, that the status quo basically works if only you resource it properly.
I’m not sure that even the public the Banner reaches accepts that faith any more, even when the journalism is exemplary. Which is rather the point. The genteel paganism of journalism is being swept away.
That journalism now relies so heavily on philanthropy tells you everything about it as a political project. Newsrooms are alms-houses for today’s concerned billionaires.
The honest reconstruction project is a bigger problem than journalism.
We don’t need more fact-checking.
We need accountable publishing, and we need to extend the duties that have always governed publishers – libel, fraud, incitement – to the algorithmic systems that have so far escaped them.
The misinformation frame lets institutions off the hook by pretending the problem is the public’s gullibility. It isn’t. The public, as it usually does, has noticed something real and moved.
Q4. Having worked both inside newsrooms and within a major global institution, what did that transition teach you about the difference between reporting events and helping institutions explain themselves to a sceptical public?
Each world produces a particular kind of self-deception, and the people who move between them can easily import the worst habits of both rather than the best. It’s comforting to think of yourself switching environments and bringing innovation when perhaps you’re just bringing pollution. No one wants to think of themselves as an invasive species.
Reporting an event, you tell yourself you are an adversarial check on authority. The mythology is built around this – the fourth estate, speaking truth to power, the journalist as outsider. The reality, certainly in the American and British television news I came up through, was that we were largely an insider operation talking to other insiders, with a small pejorative vocabulary deployed selectively against those we found embarrassing but rarely against the institutions we depended on for access. Our scepticism was real and cultivated but narrowly deployed. Those habits don’t feel like complicity from the inside. They feel like professional judgement.
Going to work inside an institution, I expected to encounter the opposite – suppression, message discipline, communications control. What I found, at the World Economic Forum and I think more broadly, was something stranger and harder to argue with.
The institution genuinely did not see itself the way its critics saw it. The people inside were not cynically constructing a façade; they were sincere and well-meaning. The convening logic, the public-private partnership language, the multistakeholder framing – these weren’t talking points; they were the actual operating worldview of the people in the building. That makes institutional communications more difficult to do honestly than people outside imagine, because any ‘dishonesty’ is not strategic. It’s epistemic and emergent.
What crossing the line actually teaches you is how much of what passes for analysis in both worlds is, in fact, sociological positioning.
Journalists distrust institutions and organisations partly because they’re disposed to and partly because suspicion is the cheapest possible form of critique – you can produce a great deal of sceptical copy that costs you nothing and risks nothing, and most newsrooms reward it.
Institutional communicators must trust their own institution partly because they have to and partly because the alternative is unemployment.
Neither side is doing the harder work, partly because both the work and its results are often dull.
The skill, in the end, is being sceptical enough to be credible outside, candid enough to be useful inside.
Q5. Recent research has warned that AI swarms could create ‘synthetic consensus’ – the appearance of broad public agreement generated by coordinated artificial personas. If public opinion can be partly manufactured, does journalism’s role shift from verifying facts to also authenticating the public conversation itself?
Public opinion has never been a pristine input that journalism faithfully describes. Lippmann understood this a hundred years ago, and every working journalist understands it from about week three on the job.
The question of whose voices get in, whose framings define an issue, whose silence is taken to mean assent – these were always editorial choices, made by a small group of people whose collective view of the world bore only an intermittent relationship to the views of the population they claimed to be informing. Manufactured consent didn’t arrive with bots. Bots just turned human production processes into an infernal machinery for always-on outrage.
So when policy people ask whether journalism’s role now expands to authenticating the public conversation itself, I’d push back on the premise.
Journalism never authenticated the public conversation. It curated a version of it – a version assembled from quotes that were on hand, sources who were willing to talk, communities already formed in the minds of the people doing the reporting.
What’s actually new is not the authentication problem but the scale and speed of fabrication. A determined actor can now generate, at marginal cost, the appearance of public sentiment on any question, in any language, attributable to any demographic, and deliverable through platforms whose systems are built to encourage it. That is a different order of problem. It’s also one journalism is not equipped to solve, because its own publics have shrunk. Doing due diligence on the civic virtue of the republic is no longer much of a business model, unless you’re in the business of political campaigning.
The deeper problem the question gestures toward is real, but it is not journalism’s alone to solve. Synthetic consensus works because our old civic life has been hollowed out – the parties, unions, congregations, local press and voluntary associations that used to produce slow, accountable opinion-formation.
Journalism cannot replace civic infrastructure, and we should stop pretending it can.
Q6. We usually think of influence operations as messages aimed at people. But increasingly, content may also be created for machines – search systems, recommender algorithms, crawlers and future AI training data. How should journalists and institutions think about manipulation that targets machines as much as human audiences?
The audience is now bifurcated: humans who read and machines that ingest. Increasingly, the second audience determines what the first ever sees. This is genuinely new in a way the misinformation debate has been slow to absorb.
Pre-2023, you optimised for one chokepoint – search engine optimisation (SEO). Now you’re writing for an unknown number of models that will ingest, paraphrase and re-serve your content in ways you can’t see or audit. Bylines are just bytes.
The space available for manipulation is enormous and largely undefended. Training-data poisoning – the deliberate insertion of false content into the substrates from which models learn – is a research field, not a hypothetical. Retrieval-augmented spoofing, where bad actors craft pages designed to be ingested by search-augmented large language models (LLMs), is happening at scale. Citation laundering through synthetic intermediaries – fake papers, fake outlets, fake think tanks – produces evidentiary trails a model treats as multiple sources.
The adversarial threat grabs the most attention, but most institutions face a less exotic problem: good-faith ingestion that distorts like telephone whispers. A model paraphrasing a careful policy paper for a reader who will never see the original isn't poisoning anything. It is still routing your specific claims through a system that strips provenance, flattens hedges and substitutes its own confidence for yours. The defences look similar from the writing end, which is why they are worth treating together – but they are not the same problem, and institutions that conflate them will misallocate their defences.
For years, this is what popular journalism was accused of doing. But that was at least embodied and individualised.
For institutions, assume every public document you produce will be gobbled up, paraphrased and re-served as an authoritative summary by a system that will not credit you and which the reader will treat as the source.
Write so your specific factual claims survive that journey. Be quotable in a way that resists distortion – short, declarative, falsifiable. Make your provenance machine-readable. Watermark where you can, sign where you can, and accept that the reader will not see the seal, but the next-generation model might.
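The advice above – short, falsifiable claims with machine-readable provenance – can be read as a metadata discipline: every public claim ships with a verifiable record. Here is a minimal sketch in Python; the schema and the shared-secret HMAC are illustrative assumptions only, since real deployments would follow a standard such as C2PA and use public-key signatures.

```python
import hashlib
import hmac

def provenance_record(claim: str, publisher: str, key: bytes) -> dict:
    """Attach a hypothetical machine-readable provenance record to a claim.

    Real systems would use a published standard (e.g. C2PA) and
    public-key signatures rather than a shared-secret HMAC.
    """
    # Hash the claim so any ingesting system can detect alteration.
    digest = hashlib.sha256(claim.encode("utf-8")).hexdigest()
    # Sign the hash so downstream systems can verify who stands behind it.
    signature = hmac.new(key, digest.encode("utf-8"), hashlib.sha256).hexdigest()
    return {
        "claim": claim,            # short, declarative, falsifiable
        "publisher": publisher,    # who takes responsibility for it
        "sha256": digest,          # tamper-evidence for the exact wording
        "signature": signature,    # the 'seal' a future model might check
    }

record = provenance_record(
    "Exports fell 12% year-on-year in Q1, per customs filings.",
    publisher="example.org",
    key=b"demo-secret",
)
```

A human reader never sees the seal; the point is that the record travels in the same substrate a model ingests, so the next generation of systems can verify the claim against its signer even after the prose has been paraphrased away.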
For journalists, the implication is that provenance becomes the product. The byline matters less than the auditable chain, and the auditable chain matters more than the prose.
Not a romantic conclusion, but it’s where we’re headed.
Q7. Policymakers now speak of ‘cognitive security’ – the protection of public attention, trust, judgement and decision-making in a manipulated information environment. Does that concept make sense to you, and what would a democratic version of cognitive security look like in practice?
The concept makes sense as a description of the problem; the framing is dangerous as a prescription.
‘Security’ implies a securitising actor – usually a state – deciding which forms of attention deserve protection and which do not. You can see early drafts in the cognitive-warfare doctrines now circulating in NATO papers and in the more enthusiastic European resilience programmes.
It’s not hard to imagine ‘cognitive security’ becoming the legal basis for content rules, platform mandates and citizen monitoring that look indistinguishable from censorship.
But the libertarian alternative – leave it to the platforms, trust the marketplace of ideas – is precisely what produced the problem. The platforms have better lawyers, bigger lobbying budgets and larger market capitalisations.
So we are choosing between a state-led regime that risks putting soft authoritarianism in the hands of rising populists and a market-led non-regime that has already delivered visible, documented harms. Ouch.
A democratic version of cognitive security would mostly involve re-applying existing legal categories to new defendants. Extend the libel, fraud, incitement and consumer-protection laws that have always governed publishers to the algorithmic systems that have largely escaped them.
The US Supreme Court ducked the question in Gonzalez v. Google, deciding on narrower Anti-Terrorism Act grounds and leaving the algorithmic-amplification carve-out exactly where the Ninth Circuit left it. That ambiguity is exploited every day. And the platforms want to globalise their freedom from accountability.
The upstream problem – what models ingest and how – needs a separate regime, probably closer to product liability than to libel. Karen Hao’s Empire of AI has a horrifying tale of training data fetched up from the bowels of the dark net poisoning the models it was supposed to help.
Mandate model transparency. Food labelling standards for algorithms. Guarantee public-interest data access for researchers – the access that has been progressively withdrawn over the last five years as the evidence got embarrassing. Give citizens the right to know when they are being targeted, by whom, and on the basis of what data.
Speed is their weapon; slowness is the public’s defence.
Q8. Many institutions still communicate as though clear facts will eventually defeat bad narratives. But today’s information environment and platform architecture often reward emotion, identity and resonance, even when proven false. How can institutions communicate with emotional force without becoming manipulative, while countering the first narrative advantage?
The question contains a flattering assumption: that institutions communicate honestly and are simply outpaced by the emotional sophistication of bad actors.
Institutions are usually outpaced because their preferred register avoids saying anything that could move anyone, and that is mostly a self-inflicted wound.
In the old days, it was circumlocution. These days, now that every LLM has absorbed Politics and the English Language, it's pithy and pointless.
I’ve made the demand-side version of this argument earlier – that audiences read institutional vagueness as evasion. The supply-side version is that the platforms didn’t invent anger, fear, shame and contempt. They didn’t even industrialise them; the rotary printing press and the tabloids did that. But they removed the checks that had prevented the most poisonous eruptions of the public sphere, and they did it under the libertarian banner of free speech when all they wanted was the age-old monopolist’s licence to escape responsibility.
And into this toxified swamp wades the standard institutional response of the old order. Reshape bad narratives with a corrective communications strategy. More clarity. Better messaging. A sharper line. Perhaps … a video.
This almost never works, and the reason it doesn’t work is not that the audience has been brainwashed by the algorithm. The reason is the one I gave at the start: there is a bigger problem being avoided, and the audience knows it.
The obvious test is accountability: if you can sign your name to a claim, take the professional consequences of having argued it, and stand the cross-examination that follows, you are engaged in rhetoric, not propaganda.
Counter-narratives don’t fail because they are late. They fail because they are abstract, defensive and written by committee for an audience that has already left the room.
Q9. In your recent Al Jazeera discussion, the question was framed around who controls AI. Is ‘control’ the right way to think about AI governance, or are we dealing with a more distributed ecosystem of states, companies, platforms, models and users?
The discussion about ‘who controls AI’ is the question politicians and commentators ask when they want to sound serious without committing to a position.
It's the AI-policy equivalent of 'we need a national conversation about X' – a phrase that performs political concern whilst avoiding actually meaning anything.
Control implies a controller, singular, and there isn’t one. There are some six frontier labs, four states with serious AI leverage, several thousand different deployment contexts, and a highly concentrated cluster of equipment makers and cloud providers, each with incompatible interests and incompatible regulatory exposure.
What we should be asking is: who and what do we want to protect? And from whom, and from what?
Few companies – or states – are interested in answering these questions.
Q10. Do you see AI companies and major platforms as geopolitical actors in their own right? If so, what responsibilities follow when private systems increasingly shape what is visible, credible and memorable?
Not in the sense of the old East India Companies, with armies and fleets. But the companies have an interest in shaping the policies of their hosts – principally Washington and Beijing. This shaping is almost impossible to see in an opaque political environment like China’s, but in the US and Europe you can measure it in the size of lobbying budgets and satellite HQs.
Geopolitics was invented by the energy industry, which needed horizons measured in decades to pay off its investments. Energy still underpins the global economy, and AI is desperate for it, but AI is also now the engine for future investment and strategic decision-making. It has its own geopolitical needs from a globalised supply chain.
The more interesting question is why the polite fiction that they are ‘just companies’ has lasted as long as it has. The answer, I think, is that the fiction suits everyone with power. It suits the companies, who get the prerogatives of statecraft without the duties. It suits the states, which can use private firms as instruments of policy while retaining deniability. It suits the financial press, which would rather cover earnings than sovereignty – or vulnerability. And it suits a particular kind of tech-libertarian who genuinely believes the firms operate outside the political domain, despite the daily evidence that they do not.
The responsibilities that follow are uncomfortable, particularly for executives who would prefer the prerogatives without the duties:
Due process for when accounts are suspended, models deprecated, capabilities throttled or geo-restricted.
Transparency about recommender weighting: the editorial decisions made by ranking systems are now of greater consequence than any proprietor’s editorial decisions ever were, and they remain mysterious by design.
Predictability of model behaviour – enterprise customers in regulated industries need to know what their tools will do, and so do the regulators who oversee them.
Non-arbitrariness: when private systems exercise functions that resemble the state’s, the rule-of-law expectations attached to states should follow.
The sovereignty implications are real and underdiscussed. A model trained on American or Chinese text, deployed globally, embeds a worldview the way the BBC World Service once did – but without the editorial accountability, the public-service remit, or the international law framework that grew up around state-backed broadcasters.
Other countries are noticing, and the Indian, French and Gulf efforts to build sovereign capabilities are an early attempt to address a real asymmetry.
The ugly category we need is ‘geopolitically significant private infrastructure’. We had it for Standard Oil, the railroads, AT&T, and the steel industry. We invented antitrust, common-carrier obligations and structural separation to address it. The lesson is not that the same instruments will work for AI – they probably won’t, not without significant adaptation.
The lesson is that previous generations recognised the problem and built solutions to address it.
The current generation is still pretending the problem doesn’t exist, which is the most reliable indicator that the firms are winning the battle.
Q11. Much of the AI governance debate is still framed around the United States, China and Europe. What does that framing miss about the Global South, including countries such as India, Gulf states, and regions whose languages, data, histories and political contexts are often underrepresented in AI systems?
Roughly five out of every six people on the planet, plus their languages, datasets, historical archives and political traditions, aren’t represented in current frontier models.
The great-power framing of strategic studies has migrated to AI policy without anyone asking whether it works, and it doesn't. It also suits the people doing the framing, which is why these things persist: the US-China-Europe story is the one Western think tanks know how to tell, with the personnel they already have, citing the literature they already read.
The Gulf is the analytically most interesting omission, and the one I am closest to professionally. It is a capital actor whose decisions can shape which models get built, where compute physically lands, and what kinds of national-strategy bets become viable for everyone else.
India has linguistic diversity at a scale that breaks current model assumptions about coverage and quality. It has digital public infrastructure – Aadhaar, UPI, the DPI stack – that the West hasn't matched and doesn't really understand. And it has a regulatory position that is neither American (let the firms run) nor Chinese (let the state direct) but something in between, oriented around public infrastructure rather than private platforms. To treat India as a 'developing market' for foreign frontier models, as much AI-policy literature still does, is to misread its actual position. India is not waiting to be regulated by Brussels. It has its own model.
Africa is probably the omission the framing flatters itself by ignoring, because Africa is where the demographic weight of this century sits and where the biggest decisions about whose languages, whose archives and whose institutional knowledge make it into frontier models will be ratified, or not, over the next twenty years.
Southeast Asia features in mainstream discussion only as an extension of US-China competition rather than as a region with its own interests, which, of course, it has.
And the 'Global South' framing, which is meant to address all of this, is itself a soothing, smoothing label. Indonesia, Brazil, Nigeria, the UAE, Mexico and South Africa do not share the same problems and do not have the same interests. Bundling them together is the kind of analytical laziness that produces the very Western-centric AI conversation we're meant to be transcending.
The deeper issue is that frontier model performance in non-English, non-Mandarin contexts remains visibly worse, and the gap is not closing as fast as the field’s marketing suggests. That is a structural inequity that will compound. The countries best placed to do something about it are the ones currently absent from the governance conversation – and the governance conversation is constructed, partly deliberately, partly by inertia, to keep them absent.
Q12. You describe 7 Things as an AI-assisted attempt to make sense of a world that increasingly defies easy explanation. In your own practice, where does AI genuinely help with sense-making, and where does human judgement remain irreplaceable?
I hate writing and love thinking. I have wanted AI to write since the days of Narrative Science, a company that helped to automate the writing of sports play-by-play commentary and financial results. Journalism is formulaic writing, and AI is good at writing formulaically. But sadly its process of writing is currently poor.
Let me give you an example.
It likes the short declarative sentence. The follow-up will be the lengthy and discursive middle paragraph – a parade of paradiastoles, a litany of listicles, interminable triads – wheeled out to pad and substantiate points that sound by prose progression to be rhetorically related, but which lack argument and logic. It’s not real writing; it’s re-writing what’s real.
Did that capture it? Punchiness and prolixity, or burstiness and perplexity, as they would say in the business.
But what AI can do is sort through hundreds of feeds. It can identify patterns I would not catch unaided and translate across a dozen languages I do not read. First-pass synthesis. Matching patterns across book references I half-remembered. Challenge assumptions. Find weaknesses. My 7 Things workflow uses all these capabilities heavily, and it would be impossible without them.
But the decisions about what the readership actually needs to see, and what I am prepared to be wrong about in front of tens of thousands of people, are mine, and so they should be – not because the models can’t generate them, but because the questions behind them can’t be delegated.
That’s the part of journalism that doesn’t get automated. The curiosity. Not because it’s irreplaceable, but because it comes from – and is shaped by – human beings. At least for now.
Anyone who claims AI is useless for sense-making has either not tried it seriously or puts professional pride ahead of technological engagement. It can edit and critique. It can fill a blank page where there was only emptiness, which is every writer’s dream – and nightmare. AI is bad at knowing what’s missing. It is brilliant at recreating gaps from its base.
Meanwhile, the prissiness of the temple of journalism towards AI will only continue. Like the watch industry after the invention of quartz, it will become more about brand and signalling and less about keeping time. If AI helps a Korean analyst communicate in ways that were previously impossible, what’s not to be grateful for? If I enjoy reading something, I don’t care what tools you used in its creation.
And besides, language is the pool in which humans swim and LLMs doggy paddle, and twentieth-century philosophy taught us that it is, at best, a pond.
Even the sea of mathematics, with its symbolic roar, is bounded. Perhaps the most exciting prospect of AI is that in science it takes us out into the ocean, beyond the limits of our current capacity to generate knowledge.
That is the actual prospect worth thinking about, and it makes the chatbot-as-amanuensis debate look like exactly what it is: a parlour game of the kind Priestley played with his mice and bell jars, diverting an audience in the front room whilst something more consequential is being built in the next.


