How can artificial intelligence help us augment our collective intelligence?

Nesta launched the Centre for Collective Intelligence Design back in 2018 at an event jointly hosted by SAGE Publishing. The event featured talks, workshops and discussions exploring the development of collective intelligence as a field in its own right, bridging the worlds of academia and industry to create a new domain. October 2019 saw the return of this one-day event, jam-packed with interactive sessions and an array of attendees from tech to the arts, data science to critical thinking and beyond.

Collective intelligence is not a new concept, but as new technology changes the world around us and society becomes ever more reliant on data and algorithms to make decisions, the combination of human and machine intelligence becomes ever more important. We spoke to attendees of the event about the future of artificial intelligence, the opportunities and challenges that collective intelligence presents, and why the arts are interested in collective intelligence.

The first video is on the future of artificial intelligence. Aleks Berditchevskaia, a senior researcher at the Centre for Collective Intelligence Design at Nesta, explained why she thinks the conversation around artificial intelligence has the wrong focus:

A lot of the conversation or the hype around AI has focused on technological development, or maybe hyperbole about replacement or existential threat, and we think that's the wrong focus. It's about mobilizing the different resources of intelligence that we have in society. Specifically, at the Centre for Collective Intelligence Design, we think about how we can use artificial intelligence as one of the many tools in your toolbox to help make better decisions or better predictions, or to generate new types of solutions when you're trying to solve problems. We're interested in thinking about how machines can be used to enhance and scale collective human efforts.

This video features insights from:

Carly Kind, Director, Ada Lovelace Institute
Karina Vold, Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence
Julien Cornebise, Director of Research AI for Good, Element AI
Rosana Ardila, Senior Open Innovation Manager, Mozilla Common Voice project

Read the transcript below or watch the video.

What are the opportunities for artificial intelligence and collective intelligence?

Carly Kind, Director, Ada Lovelace Institute

I think the biggest opportunity is the ability to sift through the noise. AI can help to augment collective intelligence and human endeavours by helping us work out what is and isn't important, crunching through the seemingly endless amount of data we have these days and topping up human capability rather than replacing it. I think that AI really presents opportunities to uncover patterns and correlations that we currently can't, given our lack of capacity to handle the velocity and sheer amount of data.

Karina Vold, Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

The main opportunity is to think about how individual human social interactions and social capacities can be improved, and by that I mean things like mind modelling: how a system can help us better predict the emotions, beliefs, decisions and intentions of the people we interact with in our social networks. These systems might also improve our emotional intelligence, helping us detect the emotions of other people so that we can improve how we communicate with them.

Julien Cornebise, Director of Research AI for Good, Element AI

One aspect is crowdsourcing, and there is a lot of that already in citizen science. That provides data for training algorithms in projects where there isn't the gigantic amount of data you would otherwise find if you were a tech giant. But there is also another aspect around the impact of AI. The impact of AI, and of any tech, is driven by human intentions, by how it's being used by society. That's where collective intelligence, and how we design the incentives around human actions, will really condition whether AI will be a catastrophe or a really useful tool and a success.

Rosana Ardila, Senior Open Innovation Manager, Mozilla Common Voice project

There are a lot of really boring tasks that one has to perform on a daily basis, or tasks that are quite repetitive. These tasks don't really use the full potential of the human being. If you're able to hand these tasks over to a machine, you free up the person's full potential for other tasks, creating amazing possibilities.

What are the challenges of AI and CI?

Carly Kind

The biggest challenge is avoiding homogeneity through AI. The way most AI systems are built, they maximize the majoritarian view and present a homogenous view of a particular issue or a particular community, so building diversity and inclusivity into AI is really the challenge. That comes in part through a diverse workforce, which doesn't exist at the moment. The AI pipeline is heavily dominated by a homogenous community itself, and that means diverse perspectives are missing from the technical development of AI, as well as from its deployment.

Karina Vold

The obvious one is privacy. Even what I was describing as an opportunity involves taking a lot of personal data about both you and the people in your social network, and how that data is stored, how it gets used, or how it might be used, poses a lot of risks. Besides that, I think an area of concern that gets overlooked is what philosophers call cognitive atrophy: the idea that by continuing to rely on our devices as ways of extending our own cognitive capacities, we will start to lose some of our own. The clearest example of this is memory: we now tend to offload onto our devices a lot of information that we used to have to remember internally.

Julien Cornebise

Well, the tech saviour syndrome, which is: "Hey, stand back, we're going to save the world with AI." Then you end up making tools that are technically fantastic and scientifically fascinating but have side effects that really weren't expected.

Rosana Ardila

Is the data neutral, and what is the data? Algorithms and models become decision-making machines, and often they're just a black box, so it depends on what data you use to train them. A lot of people have been pointing out how these systems are starting to be used, for example, to judge court cases, using data that is biased. When we don't think about the consequences of automation, we might reach a point where the decisions being made just reproduce injustices. In this way, technology is not neutral, and that's a big, big problem.

How can social science impact the work that AI is doing?

Like any tech, if you want to create something that's useful, you want to understand the user and clearly understand the context in which it is going to be used, and social scientists have the skills in analysing human systems and in field reporting to really see the bigger picture. Another broader aspect is social scientists looking at AI not just to help it from within, but to examine it and catch when its applications are starting to have nefarious or unintended side effects.

Tell us about a cool project you’re currently working on?

Rosana Ardila

Mozilla has a team working on machine learning, mostly around voice and speech, so what we have are models for speech-to-text and text-to-speech, to recognize and synthesize speech. That is where data plays an important role, because your model can only be as good as the data you use to train it.

This is why we have an initiative called Common Voice, a platform to crowdsource voice data from the public. You can go to the website and donate your voice by reading a sentence. As well as recording your own voice, we ask you to listen to other people's recordings and confirm that what you hear matches what you see. By doing this fairly simple task, you're helping us create a dataset that we will be able to use to train a model.
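The contribution loop described above (record a sentence, then have other listeners confirm the recording matches the text) can be sketched in a few lines. This is only an illustrative model of crowdsourced validation, not Common Voice's actual implementation: the `Clip` class, the function names, and the two-vote acceptance threshold are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """A recorded sentence awaiting peer validation (hypothetical model)."""
    sentence: str
    up_votes: int = 0
    down_votes: int = 0

def vote(clip: Clip, heard_matches_text: bool) -> None:
    """Record one listener's judgement of whether the audio matches the text."""
    if heard_matches_text:
        clip.up_votes += 1
    else:
        clip.down_votes += 1

def is_validated(clip: Clip, threshold: int = 2) -> bool:
    """A clip enters the training dataset once enough listeners agree it is
    correct and agreement outweighs disagreement (threshold is an assumption)."""
    return clip.up_votes >= threshold and clip.up_votes > clip.down_votes

clip = Clip("The quick brown fox jumps over the lazy dog.")
vote(clip, True)   # first listener confirms the recording
vote(clip, True)   # second listener confirms
print(is_validated(clip))  # True: two listeners agreed, clip can join the dataset
```

The point of the peer-listening step is quality control: a single contributor's recording only reaches the dataset after independent listeners verify it, which is what keeps a crowdsourced corpus usable for training.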

Karina Vold

The core team I'm with at the Leverhulme Centre for the Future of Intelligence is working on a program called Kinds of Intelligence. We have an interdisciplinary team: some machine learners, some comparative psychologists and some philosophers like myself, and we're working to figure out ways to compare notions of intelligence across all sorts of animals, all the way up to humans and machines. For example, when you hear AI described as creative, what does that mean, and how does it compare to the creativity we see in monkeys, apes and dolphins? We also look at things like memory, generality and other core concepts.

