'The conscience of AI': Why this U of T expert created a forum for AI researchers and entrepreneurs to discuss ethics
Markus Dubber’s first brush with artificial intelligence, or AI, occurred in an unlikely place: a performance of his daughter’s choir.
The director of the University of Toronto’s Centre for Ethics bumped into Ajay Agrawal, another proud parent who happens to be a professor at U of T’s Rotman School of Management and the founder of U of T’s Creative Destruction Lab – a seed-stage accelerator that specializes in scaling startups that employ AI technologies.
It wasn’t long before Dubber was invited to attend the lab’s events to get a sense of the difficult ethical questions that inevitably arise when machines are asked to make decisions and exercise judgment.
“That’s how I realized, ‘My God, this would be a great opportunity for the centre, too,’” says Dubber, who is also a professor at U of T’s Faculty of Law.
“My sense is people in computer science want to talk about ethical issues, but they don’t always have the experience or framework to think about it in some kind of context.”
In a bid to broaden the discussion, Dubber kicked off the ethics centre’s fall speaker series with a talk about the “Ethics of AI in Context.” The presentation was delivered by Joe Halpern, a computer science professor at Cornell University who is also a U of T alumnus.
The talk drew about 80 people – more than double the usual attendance for such events.
So Dubber turned the one-off talk into a series of five and counting. He was even forced to create a wait list. So far there have been presentations from Mark Kingwell, a professor of philosophy in U of T’s Faculty of Arts & Science, and Dr. Sunit Das, a neurosurgeon at St. Michael’s Hospital and an assistant professor in the Faculty of Medicine.
Brian Cantwell Smith, a professor in U of T’s department of philosophy and a former dean of the Faculty of Information, is scheduled to speak this evening about AI and the difference between reckoning and judgment.
“My idea was to make it truly interdisciplinary,” Dubber says. “I just opened it up to anyone who wants to talk about it – and they’re coming.”
The popularity of the centre’s AI programming shouldn’t come as a surprise. U of T has emerged as a leading centre for AI research, producing such stars as University Professor Emeritus Geoffrey Hinton, a deep learning pioneer who also works at Google, and Raquel Urtasun, an associate professor of computer science who now heads up Uber’s self-driving vehicle lab in Toronto.
At the same time, technologies like deep learning, which allow computers to learn in much the same way as the human brain, tend to spawn a host of difficult questions. Should machine learning algorithms be trusted to make life-and-death medical decisions? What will be the social impacts of AI solutions that eliminate the need for human workers? Are machines that learn like humans deserving of rights?
“You are trying to recreate human judgment,” Dubber says. “It’s not just that you’re replacing humans with a machine, but that you’re replacing them with one that looks and behaves and thinks like a human – or potentially even better than humans.
“That’s what gets the philosophers interested.”
One oft-discussed AI ethics problem focuses on self-driving cars: If faced with an imminent collision on a busy city street, should they be programmed to protect the lives of their occupants even if it means swerving and mowing down a crowd of innocent pedestrians? In a study published in the journal Science last year, researchers from the U.S. and France found that respondents favoured sacrificing the driver in such a situation. However, respondents also said they would be unlikely to purchase such a car – a clear conundrum for Silicon Valley giants rushing to develop a driverless world.
Yet, while the technologies are new, the same can’t always be said of the underlying ethical issues. The self-driving car dilemma, for example, is merely a modern version of what’s known in philosophy circles as “the trolley problem,” in which a bystander can pull a lever to divert a runaway trolley from a track where it would kill five people onto a track where it would kill only one.
“As far as I can tell, people rediscover the same questions – fundamental questions about the nature of judgment, for example,” Dubber says, adding that there are no easy answers when it comes to ethics.
While some have called for strict regulations and codes of conduct surrounding AI development, Dubber says most rules will become outdated long before they’re put into force and probably won’t be easily applicable to real-world problems anyway.
Instead, he favours a professional ethics approach in which AI builders and users exercise their judgment as they go about their day-to-day business.
“It ultimately comes down to questions of empathy and moral judgment, as well as the idea of being a professional and the duties you owe others,” Dubber says.
“It’s very disquieting in some ways to learn that, in the end, what’s left is you and your attempt to do the right thing.”
So how can we prepare AI creators and users to make good decisions? Dubber says providing a space for AI researchers, entrepreneurs and others to come together – as U of T’s Centre for Ethics is doing – to talk about issues and expose themselves to different perspectives is a good start. Down the road, he envisions adding AI ethics into U of T’s curriculum, perhaps even creating a standalone course.
Dubber says it’s more important than ever for public institutions like U of T to be involved in the development of a technology poised to revolutionize the world.
Moreover, he argues U of T is uniquely positioned to offer leadership in AI ethics. In addition to centres like the Vector Institute for AI research, U of T boasts deep expertise in medicine, finance, the humanities and law. The university is also across the street from the home of Ontario’s government, which is hoping to establish the province as a centre of AI research and innovation that attracts big foreign companies and spins out dozens of homegrown ones.
“We have everything we need to have this conversation,” Dubber says. “U of T could become the conscience of AI.”