In his new book, Wharton professor Kartik Hosanagar wants to start a new conversation on AI that is focused on solutions rather than fear.
Hosanagar, a professor of Technology and Digital Business, examines in the book the growing influence of algorithms over daily decisions, from shopping to dating. The work, titled “A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control,” lays out the mechanics and dangers behind this trend and explains how consumers can push companies to create algorithms that are fair and transparent. Hosanagar’s book also calls for additional checks and balances on machine intelligence, including external audits of algorithm design and the creation of regulatory boards to develop algorithm safety guidelines.
Hosanagar said his new book, released March 12, is designed to address the disconnect between AI researchers and academics studying the social implications of technology. In recent years, social scientists and philosophers have highlighted the risks of machine intelligence, discussing doomsday scenarios of robots overtaking humanity and using phrases like “algorithms of oppression.” Engineers, on the other hand, have continued building systems and technologies without always considering their broader implications.
“There isn’t that acknowledgement of the other,” Hosanagar said, adding that this leads to extreme fear on one side and sometimes unchecked development on the other.
At Wharton, Hosanagar is a ten-time recipient of Wharton undergraduate and MBA teaching awards, and he has been recognized as one of the top 40 business professors under 40. He is known for providing mentorship to student entrepreneurs and has supported student startups such as Yodle, a small-business advertising company, and Milo, an online market for local products that was acquired by eBay for $75 million.
Among the general public, fear of AI usually dominates, reinforced by countless movies and novels about robots run amok and state surveillance. And while algorithms can helpfully recommend products and information based on user data, Wharton Marketing professor Raghuram Iyengar said, they can also create echo chambers that reinforce bias and lies.
The fears are many, but “at its heart, it’s about lack of control,” Hosanagar said. His book focuses on how individuals and consumers can regain this control.
Hosanagar said that if consumers understand how algorithms work, they can demand change. Facebook, for instance, recently changed its privacy policies as a result of user pushback.
“When we have people say things like ‘I’m going to uninstall Uber,’ that is action that companies respond to,” Hosanagar said. “People can vote with their wallets.”
However, Hosanagar emphasized that the responsibility for demanding change should not fall solely on consumers. He said he has been advocating for companies to adopt audit processes in which external teams check algorithms for factors like privacy and fairness to society.
“Companies should do that and be proactive rather than wait for user pushback,” Hosanagar said.
In response to the threat of state surveillance and collection of personal data, Hosanagar recommended the creation of national algorithm safety boards with experts to oversee developments and advise regulators on policy-making. This could be extended to the international level, with countries creating a set of international standards for how AI can be used and taking collective action against rogue states.
“If we have the right checks and balances in place, we don't need to worry as much,” he said.
However, Hosanagar, Iyengar, and Marketing professor Z. John Zhang cautioned against treating the entire landscape of machine intelligence as a monolith. Zhang explained that when businesses make decisions that are supported, rather than controlled, by AI, both businesses and consumers can benefit. Drawing on user information, for instance, allows businesses to better cater to consumers and increases competition.
Similarly, on the debate over the ethics of driverless cars, Hosanagar said it is “almost unethical” to let humans cause accidents through texting and alcohol consumption when driverless cars will soon be safer than any human driver.
“Anytime we say no to technology, we should ask two things,” Hosanagar said. “One is, what is the alternative without the technology, and the second is, is there a way to fix the technology and improve it? And I found almost always the answers to the questions are: one, the alternative is very poor, and two, yes, we can fix the problems.”