U of T

'Making uncertainty visible': U of T researcher says AI could help avoid improper denial of refugee claims

Avi Goldfarb is a professor at the University of Toronto's Rotman School of Management and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society (photo courtesy of the Rotman School of Management)

Avi Goldfarb is an economist and data scientist specializing in marketing. So how is it that he came to publish a paper on reducing false denials of refugee claims through artificial intelligence?

Goldfarb, a professor at the Rotman School of Management at the University of Toronto and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society, read Refugee Law's Fact-Finding Crisis: Truth, Risk, and the Wrong Mistake, a 2019 book by Hilary Evans Cameron, a U of T alumna and assistant professor at the Ryerson University Faculty of Law.

He found some remarkable overlaps with his own work, particularly the methodology he employs in his 2018 book, Prediction Machines: The Simple Economics of Artificial Intelligence.

It just so happened that Evans Cameron had read Goldfarb's book, too.

"It turned out we effectively had the same classic decision-theoretic framework," says Goldfarb, "although hers applied to refugee law and problems with fact-finding in the Canadian refugee system, and mine applied to implementing AI in business."

Decision theory is a methodology often used in economics and in some corners of philosophy, particularly the branch known as formal epistemology. Its concern is figuring out how and why an agent (usually a person) evaluates and makes certain choices.

The main idea around which Evans Cameron's and Goldfarb's thoughts coalesced was this: human decision-makers who approve or deny refugee claims are, as Goldfarb noted in his research presentation at the Schwartz Reisman weekly seminar on Oct. 7, often unjustifiably certain in their beliefs.

In other words: people who make decisions about claimants seeking refugee status are more confident about the accuracy of their decisions than they should be.

Why? "Because refugee claims are inherently uncertain," says Goldfarb. "If you're a decision-maker in a refugee case, you have no real way of knowing whether your decision was the right one."

If a refugee claim is denied and the refugee is sent back to their home country where they may face persecution, there is often no monitoring or recording of that information.

Goldfarb was particularly struck by the opening lines of Evans Cameron's book: Which mistake is worse? That is, denying a legitimate refugee claim or approving an unjustified one?

In Goldfarb's view, the answer is clear: sending a legitimate refugee back to their home country is a much greater harm than granting refugee status to someone who may not be eligible for it. This is what Goldfarb refers to as "the wrong mistake."
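The classic decision-theoretic framing behind this point can be sketched in a few lines of code. The cost values below are illustrative assumptions for this article, not figures from Goldfarb's or Evans Cameron's work; the point is only that when one error is far costlier than the other, the rational decision threshold shifts dramatically.

```python
# Illustrative sketch of asymmetric-loss decision theory.
# The cost numbers are assumptions chosen for the example, not from the research.

COST_FALSE_DENIAL = 100.0   # denying a legitimate claim -- the "wrong mistake"
COST_FALSE_APPROVAL = 1.0   # approving an unjustified claim

def expected_loss(p_legitimate: float, decision: str) -> float:
    """Expected loss of a decision, given the probability the claim is legitimate."""
    if decision == "deny":
        return p_legitimate * COST_FALSE_DENIAL
    return (1.0 - p_legitimate) * COST_FALSE_APPROVAL

def decide(p_legitimate: float) -> str:
    """Pick whichever decision minimizes expected loss."""
    if expected_loss(p_legitimate, "deny") < expected_loss(p_legitimate, "approve"):
        return "deny"
    return "approve"
```

With these example costs, denial minimizes expected loss only when the claim is very unlikely to be legitimate (roughly p below 1/101), which captures why a false denial being the "wrong mistake" should make decision-makers reluctant to deny under uncertainty.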

So, as an economist and data scientist specializing in machine learning (ML), a type of artificial intelligence, Goldfarb started to wonder: could ML's well-known ability to reduce uncertainty help reduce incidences of the "wrong mistake"?

Goldfarb's collaboration with Evans Cameron reflects the intersections between the four conversations that guide the Schwartz Reisman Institute's mission and vision. Their work asks not only how information is generated, but also whom it benefits, and to what extent it aligns or fails to align with human norms and values.

"ML has the ability to make uncertainty visible," says Goldfarb. "Human refugee claim adjudicators may think they know the right answer, but if you can communicate the level of uncertainty [to them], it might reduce their overconfidence."

Refugee law expert Hilary Evans Cameron is a U of T alumna and an assistant professor at Ryerson University's Faculty of Law (photo courtesy of Ryerson University)

Goldfarb is careful to note that shedding light on the "wrong mistake" is only part of the battle. "Using AI to reduce confidence would only work in the way described if accompanied by the changes to the law and legal reasoning that Evans Cameron recommends," he says.

Large uncertainty, in other words, is not an excuse to be callous, nor to avoid making a decision at all. Uncertainty should lead to a better-informed decision by helping the decision-maker recognize that all sorts of things could happen as a result.

So, what can AI do to help people realize the vast and varied consequences of their decisions, reducing their overconfidence and helping them make better decisions?

"AI prediction technology already provides decision support in all sorts of applications, from health to entertainment," says Goldfarb. But he's careful to outline AI's limitations: it lacks transparency and can introduce and perpetuate bias, among other things.

Goldfarb and Evans Cameron advocate for AI to play an assistive role, one in which the statistical predictions involved in evaluating refugee claims could be improved.

"Fundamentally, this is a point about data science and stats," says Goldfarb. "Yes, we're talking about AI, but really the point is that statistical prediction tools can give us the ability to recognize uncertainty, reduce human overconfidence and increase protection of vulnerable populations."

So, how would AI work in this context? Goldfarb is careful to specify that this doesn't mean an individual decision-maker would immediately be informed whether they made a poor decision and given the chance to reverse it. That level of precision and individual-level insight is not possible, he says. So, while we may not solve the "wrong mistake" overnight, he says AI could at least help us understand what shortfalls and data gaps we're working with.

There are many challenges to implementing the researchers' ideas. It would involve designing an effective user interface, changing legal infrastructure to conform with the information these new tools produce, ensuring accurate data-gathering and processing, and firing up the political mechanisms necessary for incorporating these processes into existing refugee claim assessment frameworks.

While we may be far from implementing AI to reduce incidences of the "wrong mistake" in refugee claim decisions, Goldfarb highlights the interdisciplinary collaboration with Evans Cameron as a promising start to exploring what the future could bring.

"It was really a fun process to work with someone in another field," he says. "That's something the Schwartz Reisman Institute is really working hard to facilitate between academic disciplines, and which will be crucial for solving the kinds of complex and tough problems we face in today's world."
