It's time we involve citizens in the AI revolution

By Vincent Straub, Graduate Student at the Oxford Internet Institute

With intelligent machines increasingly playing a role in our daily lives, the public has a right to be informed of the social implications of new technologies.

As the ongoing revolution in robotics and artificial intelligence (AI) disrupts society, it has reignited debate about which principles should guide the development of new technologies, and how we should use them. But although topics like automation and algorithmic bias are now under the public spotlight, there is not enough focus on ensuring that citizens understand how intelligent machines could shape us, and even change what it means to be human altogether. This risks only getting worse if we continue to let industry and academia steer the debate, leaving the public out of assessing the social implications of new technology. In response, some are pushing for more transparency in AI research, but that’s not the only measure we should be taking.

The field of AI aims to develop computer systems that can perform tasks we normally associate with human thinking. For example, a program that translates text from one language to another, or a model that identifies diseases from radiologic images, can both be viewed as ‘possessing’ some form of artificial intelligence. Robots are often the containers for these systems (AI is the brain and the robot is its body, if you will).

The pace at which these technologies are transforming our economy and everyday lives is impressive. But often we don’t stop to ask how these systems actually work; in some cases, they still depend on a largely invisible (often female) data labeling workforce. 

More alarmingly, we give little thought to the social consequences of adopting such technologies. Previous technological innovations, like steam power and electricity, have modified the way we live, of course. But so far they have not fundamentally altered what makes us human and what differentiates us from machines—our capacity for love or, more generally, connection, friendship, and empathy. In the age of intelligent machines, this could change. 

Now that AI systems are mastering the ability to personalize individual experiences, and with ‘emotional’ companion robots learning to recognize human feelings, our need for human-to-human social interaction may be reduced. 

Yet in times of political polarization, it is exactly such interaction that is crucial for fostering love, mutual understanding, and building a cohesive society. As Kai-Fu Lee, the acclaimed AI scientist, has pointed out, for all of AI’s promise, ‘the one thing that only humans can provide turns out to be exactly what is most needed in our lives: love’.

A new public-private initiative to involve citizens in understanding the social implications of AI could unite society under the banner of safeguarding core human values whilst improving AI literacy. But what would this look like in practice? To begin, the government could partner with tech companies to develop an educational curriculum that teaches the technical basics and social implications of AI to all citizens. At the same time, public and private funders of AI research could adopt an agenda that views AI not just as a technological but as a social challenge. Both approaches would ensure we develop a stronger grasp of the upsides and potential pitfalls of using new technologies like AI.

This may sound costly and far-fetched, but there are examples that show it is possible. Last year saw the launch of Elements of AI in Finland, a first-of-its-kind online course, accessible to all, that teaches some of the core technical aspects and social implications of AI. Developed by the publicly funded University of Helsinki and the tech company Reaktor, the course has already attracted over 130,000 sign-ups.

The UK has also begun to make headway in this area. The Royal Society, for example, last year launched a ‘You and AI’ public debate series to build a greater understanding of the ways AI affects our lives, and how it may affect them in the future. Similarly, the RSA brought together a citizens’ jury to deliberate about the ethical use of AI, and earlier this year, innovation foundation Nesta showed how government support and public funding could be used to advance the use of AI tools in schools and colleges. At the University of Oxford, the announcement of a new Institute for Ethics in AI also means students from the arts and humanities will soon be able to study the social implications of AI (although the way this initiative is being funded has drawn significant criticism).

But these are still just small drops in the ocean when compared to the funding flowing into developing better AI technology. Regardless of the form any initiative to understand the social implications of AI takes, what matters now above all is that we put the issue center stage in the AI debate.

Half a century ago, when AI and robots were still largely the purview of science fiction, the consequences for society were small. Now that both increasingly play a role in our daily lives, every citizen has a stake in the matter. At the start of a new decade, it’s time we demand policymakers think about how the AI revolution can not only grow our economy but also strengthen our social bonds and consolidate our democracy.

About

Vincent Straub is a Graduate Student at the Oxford Internet Institute studying social data science. Previously he worked at the innovation foundation Nesta. You can connect with Vincent on Twitter @Vincent_Straub.
