Artificial Intelligence raises ethical, policy challenges – UN expert

Digital solutions are transforming lives: Think of robots that help the elderly, a mobile phone app that identifies crop pests, or surgical robots in hospitals. These advances are all due to Artificial Intelligence and an extraordinary new era of machine learning.

While these advances bring tremendous benefits, AI also raises concerns, ranging from security to human rights abuses. Speaking in Paris last weekend, Secretary-General António Guterres praised AI but cautioned that “technology should empower not overpower us” and that the world needs to set policies that contain unintended consequences or malicious use of frontier technologies.

So what is AI?

UN News asked Eleonore Pauwels, Research Fellow on Emerging Cybertechnologies at United Nations University (UNU), about AI – what it is, how it works, and what she sees happening in the next few years.

In its current form, known as “deep learning”, AI is a growing set of autonomous, self-learning algorithms, she told us, capable of performing tasks that were commonly thought to require the human brain. At its core, AI produces powerful predictive reasoning while minimizing the noise from unpredictable and complex human behaviour.

In some invisible and other more visible ways, AI is transforming our lives, from reshaping our intimate and networked interactions, to monitoring our bodies, moods, and emotions.

At an individual level, AI has already begun to shift our understanding of agency, identity, and privacy. The all-encompassing capture and optimization of our personal information – the quirks that help define who we are and trace the shape of our lives – will increasingly be used for various purposes without our direct knowledge or consent.

How to protect independent human thought in an increasingly algorithm-driven world goes beyond the philosophical and is now an urgent, practical dilemma, said Ms. Pauwels.

AI is already ubiquitous, but will affect people differently, depending on where they live, how much they earn, and what they do for a living. Scholars from civil society have started raising concerns about how algorithmic tools could increasingly profile, police, and even punish the poor.

On the global and political stage, where corporations and states interact, AI will influence how these actors set the rules of the game. It will shape how they administer and exert power on our societies’ collective body. These new forms of control raise urgent policy challenges for the international community.

Where are we heading?

The evolution of AI is occurring in parallel with technical advances in other fields, such as genomics, epidemiology, and neuroscience. That means that not only is your coffee maker sending information to cloud computers, but so are wearable sensors like Fitbits, intelligent implants inside and outside our bodies, brain-computer interfaces, and even portable DNA sequencers, points out the UNU Research Fellow.

When optimized using AI, this trove of data can yield truly life-saving innovations. Consider research studies conducted by Apple and Google: the former’s Heart Study app “uses data from Apple Watch to identify irregular heart rhythms, including those from potentially serious heart conditions such as atrial fibrillation,” while the Google-powered Project Baseline declares: “We’ve mapped the world. Now let’s map human health.” Never before has our species been equipped to monitor and sift through human behaviors and physiology on such a grand scale. We might call this set of networks the “Internet of Bodies.”

What are your fears about AI?

There is great promise born out of the AI revolution, but also great peril, especially when it comes to ownership and control of our most intimate data. When computer codes analyze not only shopping patterns and dating preferences, but our genes, cells, and vital signs, the entire story of you takes its place within an array of fast-growing and increasingly interconnected databases of faces, genomes, biometrics, and behaviors.

The digital representation of your characteristic data could help create the world’s largest precision medicine dataset – or it could render everyone more vulnerable to exploitation and intrusion than ever before.

What might governments seek to do with such information and capabilities? How might large corporations, using their vast computing and machine-learning platforms, try to commodify these streams of information about humans and ecosystems? Indeed, behavioral and biological features are beginning to acquire a new life on the internet, often with uncertain ownership and an uncertain future.

Equally troubling, rising tech platforms are often our last line of defense to ensure the security of the massive, precious datasets that fuel our e-commerce, and soon, our smart cities and much more. That is, the same multinationals that reign over data and its liquidity are also charged with cybersecurity – creating potential conflicts of interest on a global scale.

That tension, of course, comes with the fact that the private tech sector is also enabling most of the positive benefits that AI can and will usher in, for individuals and societies, from helping to predict natural disasters to finding new warning signs for disease outbreaks. Thinking about how to ensure data liquidity and security will become ever more important as governments aim to reap such benefits.

What can be done?

The reflections above reveal an entanglement of ethical and policy challenges that needs to be mapped, unveiled, and analysed to nurture an inclusive foresight discussion on the global governance of AI.

There are some innovative ways in which the UN can help build the kind of collaborative, transparent networks that may foster strategic foresight dialogues.

Spurred on by a mandate given to UNU in the Secretary-General’s Strategy on New Technologies, the Centre for Policy Research at United Nations University has created the AI and Global Governance Platform as an inclusive space for researchers, policy actors, and corporate and thought leaders to explore the global policy challenges raised by artificial intelligence.

From global submissions by leaders in the field, the Platform aims to foster unique cross-disciplinary insights to inform existing debates through the lens of multilateralism, coupled with examples on the ground. These insights will support UN Member States, multilateral agencies, funds, programmes and other stakeholders as they consider both their own and their collective roles in shaping the governance of AI.