Citing the enormous power of artificial intelligence to benefit but also disrupt society, Microsoft President Brad Smith on Thursday, March 1, called for standards of accountability and a "Hippocratic oath" among technologists to do no harm with the emerging tools.
"Will we ensure that machines remain accountable to people? Will we ensure that the people who design these machines remain accountable to other people?" asked Smith, a Princeton University alumnus and trustee, speaking to a packed audience of more than 500 people in McCosh Hall.
In a talk that tempered optimism with sober warnings about unintended consequences, Smith offered glimpses of artificial intelligence's potential to rapidly advance medicine, aid people with disabilities, slow climate change, and help feed the world's burgeoning populations and secure clean water for them. He also outlined, while emphasizing that "there is no crystal ball," which jobs are most likely to be lost to AI-driven automation and which are likely to remain secure.
AI systems are now beginning to excel at processing visual information and analyzing speech and language, two skills previously the sole province of humans, Smith said. These abilities open the way to automation of a wide range of jobs, from truck drivers and fast-food attendants to medical radiologists and translators. Jobs less likely to be automated are those that depend on human empathy, such as nursing, therapy and teaching, he said. Jobs in computer science and data science will, of course, be in demand, but so will positions in ethics and technology policy.
In any case, the disruptions are likely to be large and to favor those with more advanced education, Smith said. In response to a question from the audience about how the economy could cope with massive job loss, Smith said that he did not personally favor the idea of a universal basic income but emphasized that "we will have to think about the social contract, the social safety net."
Smith put the coming changes in historical context by noting the timeline and scale of the disruptions brought on by the automobile. He recounted how Bertha Benz, wife and financier of automobile pioneer Karl Benz, made headlines and caused great excitement about cars in 1888 when she drove 66 miles to visit her mother. Smith showed a photo of Manhattan 20 years later, in 1908, full of horses and buggies but no cars, followed by a photo of the same spot another 20 years later showing all cars and no horses.
"What it shows you is that people start talking about something before it necessarily explodes," Smith said. "Sometimes we wind up waiting longer than we expect but once the technology diffuses, it moves very quickly. And it's like a tsunami that takes over and impacts everything."
Beyond the large-scale shift from horse-and-buggy jobs to automobile jobs, an unexpected result of the car was the growth of consumer credit, which was needed to finance purchases of the expensive machines, he said.
Eventually, though, the global economy reeled. Smith cited research showing that a significant contributor to the Great Depression was the decline of the horse. In 1905, 25 percent of U.S. agricultural output went toward feeding horses, and as farmers rapidly switched their fields from hay to wheat and corn, prices for those crops plummeted. Farmers began defaulting on loans to rural banks, which started to fail, and those failures cascaded into widespread financial collapse.
"We need to be prepared for the unexpected," Smith said, "and we may need to be agile as we think about what we may need to call on governments to do, which I think is sobering because in some ways we don't live in a time with the most agile government in history in terms of politics, regardless of what side of the political spectrum you might be on."
A key question to ask, Smith said, is "How can AI empower people in new ways?" One of the greatest examples, he said, is helping people with disabilities, including blindness, autism and dyslexia. "A computer can look at the world and advise people about what's going on," he said, demonstrating Seeing AI, a Microsoft smartphone app for people with visual impairment.
Smith also cited the work of the Princeton Geniza Lab, which is using artificial intelligence to reconstruct ancient texts.
Smith's talk was part of the University's G. S. Beckwith Gilbert ’63 Lecture series. He was introduced by Ed Felten, who directs the University's Center for Information Technology Policy, a joint venture of the School of Engineering and Applied Science and the Woodrow Wilson School of Public and International Affairs.
Smith drew much of the talk from "The Future Computed: Artificial Intelligence and its Role in Society," a book recently published by Microsoft.
"One of the things we thought about when we put this book together was that ultimately the question is not what computers can do but what computers should do," he said. "That is really one of the great questions of our time."
Important principles for governing technology arose with the automobile (for example, laws around reliability and safety) and with computers (principles of privacy and security), all of which will be important for artificial intelligence. But one of the biggest questions on the horizon is fairness, Smith said.
"If we are going to rely on computers to make decisions, how do we ensure they are made fairly?" he asked. He cited research showing how biases can creep into artificial intelligence algorithms and called for people of diverse backgrounds to participate in the field.
Another key principle, he said, will be transparency. Even though the mathematics behind artificial intelligence might not be accessible to everyone, the algorithms at least have to be explainable. Most important, Smith said, is accountability — ensuring that machines and the people who design them remain accountable to broadly held public values.
Beyond the six principles of reliability, safety, privacy, security, transparency and accountability, Smith called for a Hippocratic oath for artificial intelligence developers. "What will it take to ensure AI will do no harm? What does it take to train a new generation so they can work on AI with that kind of commitment and principle in mind?" The proliferation of do-it-yourself artificial intelligence tools "means that AI ethics are not just for a few people; they are for almost every part of society."
Working toward shared principles, he said, will help ensure "we don't wake up and find we have entrusted machines to make decisions that, frankly, horrify us."
For these reasons, the humanities and social sciences must play critical roles in technology revolutions, he said. "Many more people working in tech companies and many more people working on technology will come out of the liberal arts," he said, noting that he himself had majored in the Woodrow Wilson School.
"Engineering isn't only for engineers anymore," he said. "Everyone who is in the liberal arts could benefit from learning a little bit of computer science, some data science, and every engineer is probably going to need some liberal arts in their future."
Smith closed with an observation Albert Einstein made in the 1930s: that between the mid-1800s and the mid-1900s, humanity's organizing ability had not kept pace with its technological ingenuity. "So the most sobering question for all of us is: Can we do better in the 21st century than the world did a century ago? Or will we have to go through some other calamity to wake up and do what the world requires? … We do need to wake up. And I don't think we are quite fully awake."