“No Technology Is Going To Take Over All Of Society”

Exploring LLMs, Generative AI, and their Societal Implications

Felix M. Simon
Jun 17, 2023
Copyright: DALL·E

How do Large Language Models function and what are the key components that make them work? How and where are they used, including in the media? Can they contribute to the promotion of democratic values?

These were some of the key questions we explored at the recent symposium “Automating Democracy: Generative AI, Journalism, and the Future of Democracy”, held on June 16th, 2023, at Balliol College, University of Oxford. Supported by the Balliol Interdisciplinary Institute, the Institute for Ethics in AI, and the Oxford Internet Institute, the symposium brought participants together to delve into the technical intricacies and practical applications of these emerging technologies, with the aim of exploring how they may (or may not) affect our media landscape and democratic culture.

Organised jointly by Dr Linda Eggert, an Early Career Fellow in Philosophy, and Felix M. Simon, a communication researcher and DPhil student at the Oxford Internet Institute, the symposium provided a platform for scholars, experts, practitioners and students to engage in robust and open debates about LLMs, Generative AI, and their implications for our world.

Felix Simon and Linda Eggert introducing the first panel

No Ghost in the Machine

The first panel focused on how Large Language Models (do not) work. ChatGPT’s impact has been so large primarily because of its user-friendly nature. “Current AI is often just re-branding. The progress of this technology has been more gradual than commonly assumed,” said Hal Hodson, a technology journalist at The Economist. Much of this progress remained obscured from public view, not because it was impossible to learn more about it, but because the broader public paid little attention.

Hodson and Hannah Kirk, an AI researcher and DPhil student at the Oxford Internet Institute, concurred that Large Language Models (LLMs) are fundamentally intricate statistical machines, drawing on extensive datasets to generate output. Hodson, who has extensively covered the technology, argued that the vast majority of experts in the field view LLMs as machines, devoid of human-like attributes, contrary to what some recent news coverage might suggest. He also cautioned against overblown assumptions about their impact: “No technology is going to take over all of society.”

Kirk delved into a less explored aspect: the refinement of these systems beyond their initial training. She emphasised the significance of reinforcement learning from human feedback (RLHF), a technique widely adopted by industry labs. However, Kirk expressed concern about the limited representation of these feedback providers and how their values profoundly shape AI systems. “We need to ask: Who are these people and how do their values end up shaping these systems?”

During the discussion, another important point emerged: the question of who reaps the rewards from these systems. As much of the pre-training relies on web-scraped data without compensating its creators, issues of copyright and remuneration claims loom large. Hodson anticipates an ongoing battle, yet believes that a concord between copyright holders and technologists will eventually be reached.

One common issue, recognised by both Kirk and Hodson, is the tendency to anthropomorphise these systems. “You cannot avoid anthropomorphisation but you should try,” said Kirk. They both strive to avoid this inclination in their work. Kirk, however, also noted that some companies have little incentive to discourage it, as it enhances the perception of their systems’ power beyond reality.

Linda Eggert, Hannah Kirk, Hal Hodson

AI is Not A Quick Fix For the Media

One area where LLMs are increasingly used is the media and the news. “We began training individuals to use ChatGPT in 2018,” shared Laura Ellis from the BBC, highlighting the broadcaster’s early engagement with the technology. Since then, the organisation has experienced the typical upheavals that accompany any new technological advancement: enthusiasm mixed with concerns about job displacement and worries about how the BBC’s work might be transformed. These days, the broadcaster takes a cautious approach, actively weighing safety, potential harms, and copyright questions, among other concerns, according to Ellis.

Gary Rogers, a media consultant and founder of news agency RADAR, emphasised the longstanding presence of AI in the news industry. “It is not a novel concept. Many people were simply unaware because it was primarily used downstream, for instance, in recommendations. This has changed with the advent of ChatGPT.” RADAR extensively uses natural language processing (NLP) and natural language generation (NLG) techniques, producing approximately 150,000 stories annually based on local and national data sources, covering topics such as local elections.

While a growing number of media organisations admit to exploring and using LLMs in their work, for example to write summaries or create new products, few have policies in place which guide their use. Rogers urged publishers to rectify this: “Organisations need to put policies in place so people can work with this safely, because they are already using this.”

Gemma Newlands, a sociologist and lecturer at the Oxford Internet Institute, provided a broader perspective on the adoption of new technologies within organisations and the accompanying challenges. She underscored the inherent tensions between the individual and organisational motivations for the use of any technology, emphasising that questions of power and agency cannot be avoided.

“Using ChatGPT can be individually beneficial for a journalist, but detrimental for the organisation if it introduces, for example, errors into journalistic copy,” Newlands argued. Conversely, organisations may devalue journalists’ work by relying excessively on such technologies, resulting in a situation beneficial to the organisation but detrimental to the individual. Newlands also contemplated how the adoption of AI could shape the social perception of journalism and how people value and trust it. She added: “Finding a way to use AI is easy. Finding a way that makes sense for your business model is really hard.”

Linda Eggert, Gemma Newlands, Gary Rogers, Laura Ellis

AI and Democracy or: Who Gets To Decide The Direction

The final panel considered the impact of AI on democracy and the governance of the technology. Hélène Landemore of Yale University stressed the need to democratically build and regulate AI systems. “We have all the models to build a global assembly to think about regulating something like AI,” she argued, pointing to the success of citizens’ assemblies in thinking about ways to address issues such as climate change or assisted dying. “We need a global demos that goes beyond nation states to shape the development of this technology.”

John Tasioulas of the Institute for Ethics in AI warned that AI cannot be left to progress without adequate regulation. “The assumption that a technology will automatically be democratic in nature is a pipe-dream.” He sees AI as part of a larger tug-of-war between technocratic and more populist forces, with a compromise of some form needed. He was also critical of the goals towards which AI systems are currently optimised and developed. “The focus is often only on economic growth, but that is often not in the common interest.” AI should be developed for the common good, not the interests of a set of companies or technocratic elites.

Polly Curtis of the London-based think tank Demos shared some of these worries. “AI at the moment is an investment arms race powered by greed,” she argued. For Curtis, there is a chance to use AI to repair the relationship between citizens and the state in countries such as the UK, by making information and services more accessible and fostering a better connection between the state and its citizens. “But it could also go the other way, by baking in the biases and discrimination we already have. The lesson from the last 20 years of technological development is this: The worst is not going to happen but change is inevitable and will not necessarily be positive.”


Felix M. Simon

Research Fellow AI & News, Reuters Institute for the Study of Journalism, Uni of Oxford | DPhil, Oxford Internet Institute