Learning to Live with Artificial Intelligence: “A Virtuous Circle or a Vicious One?”

“How do we ensure that this transformative new technology delivers us the future we want rather than a future we never signed up for?” asked James Cockayne, Director of UNU’s Centre for Policy Research, introducing an all-day policy seminar on Governing Artificial Intelligence (AI) on June 22nd at IPI.

He cited widespread concern over whether developers will produce an AI that provides ways of ameliorating problems like inequality, climate change, disease, and hunger or one that will serve instead to “entrench” them. “The question for those of us on the outside looking in,” he said, “is whether that is going to be a virtuous circle or a vicious one?”

A similarly stark duality was advanced by Fabrizio Hochschild, Assistant Secretary-General for Strategic Coordination in the Executive Office of the UN Secretary-General. “Some see the promise of elevating mankind above the shackles of tedious or menial labor, the opportunity for a new utopia,” he said, “while others see a potential dystopia, one where killer robots evade human control and where states control our every thought.” His view was that in fact there is “a little bit of both: states’ use of AI to further control over their citizens is a reality, and, of course, the use of AI to give medical treatment to people who would otherwise never see it is also a reality.”

He said that the options were so consequential and far-reaching for so many people globally that the only effective path to the responsible development of AI would be “genuine international consensus, one that brings together governments, the scientists, industry leaders, and civil society representatives, the thinkers, ethicists and others who design, manage, use or are affected by AI.”

Accordingly, he said, the Secretary-General’s office was focused on four “basic requirements”: being inclusive and people-centered; avoiding harmful, unintended consequences or “malicious use”; making sure that AI “enhances our global collective values, as enshrined in the Charter”; and being prepared to “deal with AI in all of its technological, economic, and ethical dimensions.”

Asked to address the scale of the possible menace of AI to international peace and security, Seán ÓhÉigeartaigh, Executive Director of the Cambridge Centre for the Study of Existential Risk, said, “The biggest threat will be to the economies of countries, their scientific progress, their infrastructure, and how we make sure that everyone who’s going to be affected by AI has a voice at the table.” He added that security, while a key concern, “is only one of the questions alongside distribution, empowerment, and justice.”

Amandeep Gill, India’s ambassador to the UN Conference on Disarmament in Geneva, said that AI, for all its benefits to medical science, could at the same time enhance target-acquisition accuracy and create “even more deadly instruments for warfare.”

Izumi Nakamitsu, Under-Secretary-General and High Representative for Disarmament Affairs, who heads the UN Office for Disarmament Affairs (UNODA), said the dismaying notion of machines controlling weaponry had prompted the Secretary-General to make it “categorically clear that when it comes to the use of force, lethal force, humans should be in control at all times.” At the same time, she cautioned against being too “alarmist” and ignoring the “hugely positive aspect of the technological scientific developments.”

A number of participants spoke of the interaction of man and machine embodied in AI and warned against confusing the two. “Artificial intelligence is many things to many people and even to our collective imaginations and fears, but there is one thing that it is not, and that is artificial humanity,” said Nicolas Economou, Chairman and CEO of H5. Ambassador Gill reminded the audience that the risk does not come from the technology itself but from “what happens around the technology. Technology is always socially constructed. The risk is human, the risk is us.”

John C. Havens, Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, invoked the truism that machines lack the values, culture, and judgment of humans, but added, “How will machines know what we value if we don’t know ourselves?” Speaking along the same lines, Urs Gasser, Executive Director of the Berkman Klein Center for Internet & Society at Harvard, said, “Many of the questions now amplified by AI are really not questions about technology but questions about what society we want to live in. And we haven’t figured out issues such as bias and discrimination in many contexts, and as long as we don’t figure out these questions, it will be very difficult for the machines to come up with the answers.”

Francesca Rossi, Global AI Ethics Leader at IBM and a professor at the University of Padova, suggested combining human and machine talents: “There are many things machines can do even better than us, and in the future much better than us; there have been studies about that. So the best results come from doing things together, in a collaborative way.” Picking up on that theme, Lana Zaki Nusseibeh, Permanent Representative of the United Arab Emirates to the UN, said, “The international community has a window of opportunity to ensure that necessary information and ethical parameters are built into the design of artificial intelligence for sustainable development.” She proposed three areas of focus for AI governance: addressing digital divides, establishing an appropriate regulatory environment with all relevant stakeholders, and developing partnerships.

Eleonore Pauwels, Research Fellow on Emerging Cybertechnologies at the UN University office in New York, drew attention to the convergence of AI and biology and its potential benefits for public health. “One of the biggest challenges of our time is biocoding our genomes,” she said. “Applying deep learning to our genomes could help us understand which genes are most important to particular functions. Such insight would help us understand how a particular disease occurs, why a virus has high transmissibility, why certain populations are more susceptible to certain viruses.”
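
Pauwels’s point lends itself to a toy illustration. The sketch below is not something presented at the event: it trains a plain logistic-regression model on synthetic DNA sequences and recovers a planted “functional” motif from k-mer count features. The motif, the data, and every name in the code are invented for the example; real genomic deep learning operates at vastly larger scale, but the underlying idea, inspecting which learned features predict function, is the same.

```python
# Toy sketch (invented for illustration, not from the event): learn which
# sequence features predict a "functional" label in synthetic DNA.
from itertools import product

import numpy as np

rng = np.random.default_rng(0)
BASES = "ACGT"
MOTIF = "TATA"          # hypothetical functional motif planted in the data
SEQ_LEN = 20
KMERS = ["".join(p) for p in product(BASES, repeat=4)]
IDX = {k: i for i, k in enumerate(KMERS)}

def featurize(seq):
    """Count every overlapping 4-mer in the sequence."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - 3):
        v[IDX[seq[i:i + 4]]] += 1.0
    return v

# Build random sequences, planting the motif in roughly half of them.
seqs = []
for _ in range(2000):
    s = "".join(rng.choice(list(BASES), SEQ_LEN))
    if rng.random() < 0.5:
        pos = int(rng.integers(0, SEQ_LEN - len(MOTIF)))
        s = s[:pos] + MOTIF + s[pos + len(MOTIF):]
    seqs.append(s)

X = np.array([featurize(s) for s in seqs])
y = np.array([1.0 if MOTIF in s else 0.0 for s in seqs])

# Plain logistic regression fitted by gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

p = 1 / (1 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == (y == 1)))
print("top-weighted 4-mer:", KMERS[int(np.argmax(w))])  # typically TATA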

Gary Marcus, Professor of Psychology and Neural Science at New York University, urged people not to be overly concerned about the dangers of AI. “I want you not to worry about superintelligence,” he said. “We are using statistics as a substitute for common sense, and it works most of the time, but it doesn’t work all the time, which means we can’t trust the machines.” What he worried about instead, he said, was “moderate intelligence with a lot of power.”

Greg Corrado, Principal Scientist and Director of Augmented Intelligence Research at Google, said that moving forward, people would have to “tackle directly” the issues of the safety and fairness of AI systems. “The good news is that technologists are actually able to help policymakers, so there has been real, concrete, scholarly work being done on how we study safety, how we can make safety in AI as completely unsexy as safety in anti-lock braking systems, and also how we can deal with issues of bias. Because if machines learn by imitating,” he concluded, “they will learn the biases in the data they are fed.”
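
Corrado’s closing point about imitation can be made concrete with a deliberately contrived sketch (an invented illustration, not anything shown at the event). A standard logistic-regression model is fitted to synthetic “hiring” decisions in which one group historically had to clear a higher bar; the fitted model then reproduces the same disparity for new, equally skilled candidates. All variable names and numbers are made up.

```python
# Toy sketch (invented for illustration, not from the event): a model
# trained on biased historical decisions reproduces that bias.
import numpy as np

rng = np.random.default_rng(1)
n = 4000

skill = rng.normal(0.0, 1.0, n)      # true qualification
group = rng.integers(0, 2, n)        # protected attribute: 0 or 1

# Biased history: group 1 had to clear a higher bar to be hired.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.3, n)) > 0

X = np.column_stack([skill, group]).astype(float)
y = hired.astype(float)

# Plain logistic regression fitted by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / n
    b -= 0.1 * np.mean(p - y)

# Score fresh candidates with identical skill from both groups.
test_skill = rng.normal(0.0, 1.0, 2000)
for g in (0, 1):
    Xt = np.column_stack([test_skill, np.full(2000, float(g))])
    rate = np.mean(1 / (1 + np.exp(-(Xt @ w + b))) > 0.5)
    print(f"predicted hire rate, group {g}: {rate:.2f}")
# Group 1 scores lower despite identical skill: the model has
# faithfully imitated the bias in its training data.
```

Nothing about the model is malicious; it simply imitates the labels it is given, which is the mechanism Corrado describes.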

The event, cosponsored by the International Peace Institute (IPI) and the UN University Centre for Policy Research (UNU-CPR), featured five panels with expert moderators. Other participants included:

  • Kai Sauer, Permanent Representative of Finland to the UN
  • Jing de Jong-Chen, Partner and General Manager of Global Cybersecurity Strategy, Microsoft
  • Konstantinos Karachalios, Managing Director, IEEE Standards Association
  • Michael Sung, Chairman, CarbonBlue Innovation and Professor, Fudan University
  • David Li, Founder, Shenzhen Open Innovation Lab
  • Juan Sandoval-Mendiolea, Deputy Permanent Representative of Mexico to the UN
  • Craig Campbell, Special Adviser, New York City Mayor’s Office of Data Analytics
  • Andrew Gilmour, Assistant Secretary-General for Human Rights, Office of the UN High Commissioner for Human Rights
  • Dinah PoKempner, General Counsel, Human Rights Watch
  • Mark Nitzberg, Executive Director, UC Berkeley Center for Human-Compatible AI
  • Joy Buolamwini, Founder, Algorithmic Justice League and MIT Media Lab
  • Sheila Jasanoff, Pforzheimer Professor of Science and Technology Studies, Harvard Kennedy School
  • Danil Kerimi, Head of Information Technology and Electronics Industries, World Economic Forum
  • Ursula Wynhoven, ITU Representative to the United Nations
  • Adam Lupel, Vice President, IPI
  • Jon Benitez, Events Manager, IPI