While artificial intelligence could help bring greater effectiveness and fairness to social programs such as disability and health benefits, it could also be used to unjustly deny assistance to people in need, experts told a United Nations special rapporteur during a conference at Princeton University last week.
“The promise of these new technologies is very much shaped by power,” said Virginia Eubanks, a conference panelist and associate professor at the State University of New York at Albany. Eubanks, who has written on the development of technology and social welfare policies in the United States, said the introduction of technology must be understood in a broader context. She said that transparency in automated systems is important, and that people need to understand how the systems work. But she said deciding whether and how to deploy the technology remains a political decision.
The conference, “Social Protection by Artificial Intelligence: Decoding Human Rights in a Digital Age,” gathered social scientists, human rights activists, policy experts and computer scientists to discuss the impact of technology, and particularly algorithmic decision-making tools, on social protection programs across the world. Hosted by Princeton’s Center for Information Technology Policy, the April 12 meeting also served to offer expert opinions to Philip Alston, the UN special rapporteur on extreme poverty and human rights, who is preparing a report to the United Nations General Assembly on technologies used in social protection systems and their impact on the protection of human rights. Alston, the John Norton Pomeroy Professor of Law at New York University, is scheduled to deliver the report in October.
In discussions during two lengthy panel sessions and interaction with audience members, experts hashed out the potential benefits and pitfalls of the new technologies in relation to social protection programs. The stakes are high. Such decisions include some of the most fundamental societal questions: who receives disability benefits; what neighborhoods receive additional police protection; who loses custody of children; and who is eligible for housing assistance.
Some panelists pointed out that computer systems could correct injustices that currently mark many social protection programs, such as widely varied decisions among administrative judges who rule on Social Security disability applications. Artificial intelligence could also better answer questions about the effectiveness of various programs and allow policymakers to focus resources where they would do the most good.
But several social scientists on the panels noted that many existing aid systems are already overly burdensome and intrusive for recipients. Frequently, they said, the systems seek to restrict and deny benefits rather than genuinely assist people in need. Artificial intelligence could either lessen those problems or reinforce them, depending on how the systems are designed and implemented.
Alston, the UN rapporteur, said in his address to the conference that the track record for artificial intelligence in social protection has not been promising.
“Human rights is almost always acknowledged when we start talking about the principles that should govern AI,” he said. “But it’s acknowledged as a veneer to provide some legitimacy and not as a framework.”
Alston said new technology is too often used, in the name of greater efficiency and economy, to create burdensome requirements for recipients or to raise hurdles to receiving assistance. He said the instinct is to narrow access to programs rather than to ask how best to serve the interests of people who need help.
“There is an overwhelming emphasis on the negative,” he said.
One troubling development, Alston said, is the implicit transfer of authority from the public sphere to private industry.
“Instead of the state being responsible to protect your human rights, suddenly it’s large tech companies,” he said.
Alston said the potential automation of social protection programs is a vital question for all members of society because the programs touch nearly all citizens. He said the risk is leaving behind the respect for human rights and basic principles of justice that were fought for in the 20th century.
“Now is the time to be scrutinizing much more carefully the sort of values that we are putting in place and the systems designed to promote those,” he said.
Edward Felten, the director of the Center for Information Technology Policy, acknowledged concerns about the potential risks of artificial intelligence but said the systems could be managed to do great good rather than harm. Felten, who advised the Obama administration on artificial intelligence and who has investigated computer voting systems and copyright plans, said the key was close and open cooperation between technical experts and policymakers.
“It is not only by working energetically on both sides of that line but also by thinking deeply and creatively about how to integrate those things that we will be able to make the kind of progress that we can make,” said Felten, the Robert E. Kahn Professor of Computer Science and Public Affairs. “Whether you are on the technical or policy side of this increasingly thin border, it’s important to be thinking about what you can do to enable practitioners on the other side of the line to do their job more effectively.”
Felten, who delivered the opening address of the conference, gave a quick but thorough primer on the development of AI, from a seminal 1943 paper laying out the first steps toward a thinking machine to the deployment of modern systems. He said that rapid progress in many areas has led to a sense “that there is a fundamental change in our technology that is occurring now.”
Artificial intelligence offers great promise to assist social scientists in better understanding the causes and the possible solutions to poverty and related social ills, Felten said. Particularly by allowing policymakers to distinguish coincidence from causality, the systems can be extremely valuable in targeting social programs and evaluating their effectiveness.
“It can be common in these discussions to focus all of the attention on the negative implications, the risks and dangers of adopting AI in all sorts of programs, and we really must not lose the fact that there are tremendous gains to be had if we do this wisely,” Felten said.
But Felten said technical experts need guidance from policymakers to properly frame these systems and ensure they benefit society.
“And indeed it is only by fitting together the capabilities, the strengths and weaknesses, of these different modes of work that we will be able to do the best job,” he said. “That starts with getting people with these same interests into the same room and talking about the challenges, which is what we are trying to do today.”