Instead of seeing criticisms of AI as a threat to innovation, can we see them as a strength?

*We’ve amended the original version of this article: CogX partnered with Vision 2030, not the UN Sustainable Development Goals as previously stated. Vision 2030 uses the publicly licensed SDG imagery to support the delivery of the United Nations Sustainable Development Goals (Global Goals).

By Eve Kraicer, MSc Candidate at The London School of Economics and Political Science

At CogX, the Festival of AI and Emergent Technology, two icons appeared over and over across the King’s Cross site. The first was the logo of the festival itself: a brain whose lobes are made up of wires. The second was used by Vision 2030, a partner of the festival. To promote the festival, Vision 2030 used the publicly licensed icon of the UN Sustainable Development Goals (SDGs): a circle split into 17 differently colored segments, each representing a goal for 2030, such as zero hunger and no poverty. Although CogX is not affiliated with the SDGs, the idea behind the partnership with Vision 2030 was to encourage the festival’s participants (speakers, presenters, expo attendees) to think about how their products and innovations could help achieve these global goals.

In practice, the relationship between AI development and the SDGs is more complicated. This tension was summed up in a session called ‘Provably Beneficial AI’, given by Professor Stuart Russell. In it, he explained that if machines achieve intelligence, then given our current approach to AI design and deployment, they will act to meet their own programmed goals, not ours. What does it mean to innovate and research in AI if its ‘end point is our own destruction’? Can AI be sustainable? Where does the AI brain of CogX fit within the circle of the SDGs?

As I ran between stages (clutching my umbrella), the story that emerged at CogX was not simply one of AI leading us towards or away from these goals. Rather, it was a blend of possibility, caution, awe and fear. On the one hand, speakers shared research and products that could improve mental health diagnostics, reduce energy emissions and make theatre more accessible. On the other, as Professor Russell and others pointed out, AI is also being used to spread disinformation, perpetuate inequalities and erode privacy.

For me, this contradiction did not feel unproductive or hypocritical. Rather, I think it reflects the reality of developing tools powerful enough to scale both problems and solutions at a global level. What causes clear harm, I’d argue, is denying that tension in order to pursue innovation at any and every opportunity.

I heard some of that denial at CogX. And while the conference brought together opposing views, it’s unclear whether they ever crossed paths. There were over 500 speakers, on ten stages, across five zones; how self-selecting was finding ambivalence about AI? Did the conference bring critics and advocates together, or did it just put them in proximity?

Carly Kind, the newly appointed director of the Ada Lovelace Institute, asked whether there should even be an ‘Ethics Stage’ at all. Its existence seems to suggest that those at the ‘Cutting Edge Stage’ or the ‘Economy Stage’ are not necessarily unethical, but do not need to be ethically focused. Yet even simple AI technologies, developed for benign tasks, can have multiple unintended impacts (see, for example, here, here or here). Acknowledging this, acknowledging that what you’re building with AI has ethical consequences even when it is not overtly built for harm, is key to beginning to imagine sustainable AI.

I don’t have an answer as to what the end, or even the middle, of that kind of AI would look like. And I heard varied opinions on this at CogX: AI is the best way to achieve sustainability; don’t let the ideal of perfectly sustainable AI stop good AI; redefine intelligence; use AI to ensure AI is ethical; bring diverse voices to the table; ask whether the technology you want to build needs to be built; and, sometimes, just stop building.

I personally find some of these solutions more compelling than others, but I wonder if what matters most about the list is its variety. AI, in its impacts, design and application, is varied too. Instead of seeing criticisms of AI as a threat to innovation, can we see them as a strength?

CogX already has the researchers, entrepreneurs, educators, journalists and policy makers in its network to integrate ethics and sustainability with AI. What’s left is making sure they share the stage, not just the poster. 

About

Eve Kraicer is an MSc Candidate at The London School of Economics and Political Science (LSE). Her research uses data science methodologies to study the intersection of gendered violence and digital media. She has previously worked as a Research Assistant at McGill University and the LSE, and as an Intern at Penguin Random House Canada. You can find her on Twitter @helaeve.




