While the artificial intelligence revolution holds great promise, what happens when the now-ubiquitous algorithms wreak havoc in our lives? Iowa law professors on the cutting edge of this field offer insights.
Wednesday, November 30, 2022

In this age of artificial intelligence, Isaac Asimov’s first law of robotics is receding in the rearview mirror. “A robot may not injure a human being ...” begins the directive laid out in the sci-fi visionary’s work I, Robot.

They may not look like the humanoid machines portrayed in classic science fiction, but independently operating robots surround us, working away in nearly every industry. They’re called artificial intelligence algorithms. Algorithms are the brains behind “robots” we can see, like self-driving cars and assembly-line robotic arms, but they more often work invisibly: approving or denying loans, setting insurance rates, matching mugshots with security camera footage, cranking out basic media articles.

Sometimes, those algorithms hurt people.

Just ask Robert Julian-Borchak Williams, wrongfully arrested when a facial recognition algorithm incorrectly matched his face with that of a shoplifter. Or middle school teacher Will Johnson, who was incorrectly identified as a white nationalist. Or ask the Black loan applicants who, along with countless other members of marginalized groups, missed out on the chance to buy homes or attend college because algorithms unfairly discriminated against them.

You can’t ask Elaine Herzberg. The 49-year-old died when an algorithm-driven Uber struck her in Tempe, Arizona, in 2018.

In fact, legal scholars now believe it’s impossible to enforce Asimov’s first law. We can’t completely prevent artificial intelligence from ever harming anyone. Instead, four Iowa College of Law professors tell us, we must create a legal framework where victims of algorithm harm can find justice, and where corporations are compelled to use care when deploying their AI bots.

You might be surprised to find out how little regulation or law currently constrains the use of algorithms, even those with the clear potential to harm.

 

ALGORITHM HARM CAN BE INVISIBLE

When an algorithm harms someone, the victim is often at a loss to get compensation or justice. In fact, victims might not even know they were harmed by an algorithm.

For instance, life insurance companies are banned from considering race when setting policy rates. But if an algorithm tasked with assigning risk levels to applicants absorbs enough data about life expectancy across races, it might find ways to flag Black applicants and charge them more, even though it never explicitly takes the applicant’s race into account. Such proxy discrimination could easily be missed by the insurance company, its customers and regulators.
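A minimal sketch of how that can happen, using entirely made-up numbers and a hypothetical ZIP-code feature: the model below is never given race as an input, yet it reproduces a racial price gap because a correlated feature stands in for it.

```python
# Hypothetical illustration of proxy discrimination: race is never an
# input to the model, but a correlated feature (here, ZIP code) lets
# the model act as if it were.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: ZIP code is correlated with race, and the
# historical premiums the model learns from were (unfairly) higher
# for one group.
race = rng.integers(0, 2, n)                  # 0 or 1, never shown to the model
zip_code = race + rng.normal(0, 0.3, n)       # proxy feature correlated with race
age = rng.normal(45, 10, n)
historical_premium = 100 + 2 * age + 30 * race + rng.normal(0, 5, n)

# Train on "race-blind" features only.
X = np.column_stack([zip_code, age])
model = LinearRegression().fit(X, historical_premium)

pred = model.predict(X)
print("mean predicted premium, group 0:", round(pred[race == 0].mean(), 1))
print("mean predicted premium, group 1:", round(pred[race == 1].mean(), 1))
# The gap persists even though race was never an input: the model
# recovered it through the correlated ZIP-code feature.
```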

“So many AI decisions are ‘black box,’ meaning that even the company using [them] might not know that anything is happening,” said Iowa College of Law Professor Anya Prince, whose research explores the privacy and discrimination issues surrounding genomic testing and big data, including the role of algorithms.

Algorithms may even pose invisible threats to the economy as a whole. In his paper “Securitizing Digital Debts,” Christopher Odinet, professor of law and the Michael and Brenda Sandler Fellow in Corporate Law, points out that lenders are using algorithms to qualify borrowers—despite the fact that these algorithms don’t have a long track record and that the people deploying them may not fully understand how the algorithms work.

Beyond the risk of accidental discrimination, Odinet points to an ominous fact: The startup lenders that tout their use of algorithms tend to finance their operations by selling bundles of consumer debt to investors as securities. This means that if the algorithms making loan decisions aren’t as good as the lenders think they are, the consequences could reach far beyond these companies.

“The broader concern is when major nodes of the economy become exposed to these complex products that are backed by inscrutable or difficult-to-scrutinize algorithms—and then they fail,” said Odinet, who studies commercial/consumer finance and property law.

Think 2008 and the securitization of mortgage loans.

 

WHO’S IN CHARGE OF THIS THING?

When an apparently malfunctioning algorithm-driven robot killed mechanic Wanda Holbrook at work in 2015, the harm was obvious—so obvious that the funeral home recommended a closed casket.

Less obvious: Who was responsible for Holbrook’s death? Of course, widower Bill Holbrook could not sue the algorithm controlling the robotic arm. Unlike a corporation, an algorithm is not legally considered a person. Nor could the police arrest the algorithm.

The solution to this roadblock might seem simple: Find the person who wrote an algorithm without including Asimov’s first law: “Don’t hurt anyone.” But in reality, finding a human responsible for an algorithm’s actions is rarely simple.

Holbrook’s case is an example of the “many hands problem” that Iowa Law Professor Mihailis Diamantis describes in his paper “Employed Algorithms: A Labor Model of Corporate Liability for AI.” A single algorithm might be designed by “distributed teams of hundreds or thousands of employees” within a single corporation, and multiple corporations could have a hand in creating, marketing and using various parts of a single algorithm-driven machine, like an industrial robot, explains Diamantis, who researches corporate crime and criminal theory.

Indeed, Bill Holbrook sued five robotics companies involved in the rogue machine—but seven years later, he has not been successful in obtaining compensation from any of them.

 

WHAT IF EVERYONE DID WHAT THEY WERE SUPPOSED TO DO?

Another challenge in legally confronting harm by algorithms: Tort law as used today generally requires evidence of negligence or intent in order to award compensation. But it’s possible that every human involved with an algorithm made every reasonable effort to create and operate it safely—yet something still went wrong.

“In the vast majority of cases, algorithms are designed to operate without causing injury. If it operates correctly 99 out of 100 times, the choice to use that algorithm looks pretty reasonable,” said Iowa College of Law Professor Cristina Tilley, who focuses on tort and media law. “But that’s cold comfort to the 1 out of 100 who was injured.”

Programmers shouldn’t be considered negligent simply for designing algorithms that sometimes act unpredictably, scholars agree, because intelligent algorithms’ unpredictability is the heart of their value.

“That’s why we use them—because they can do things better than we can in ways that we can’t anticipate,” Diamantis said. “But that also means that machine-learning algorithms are sometimes going to do things that we don’t want them to do and that we couldn’t anticipate. When you place too many barriers on an algorithm’s action, you end up limiting the power of the algorithm.”

"If it operate correctly 99 out of 100 times, the choice to use that algorithm looks pretty reasonable. But that’s cold comfort to the 1 out of 100 who was injured.”— Professor Cristina Tilley

 

WHAT TO DO?

If the conversation ended with the frustrated loan applicants, accident victims and the wrongly accused, the solution to harm by algorithm would be crystal clear: Get rid of the algorithms.

However, algorithms hold great promise for society. If self-driving technology can be perfected, traffic deaths should plummet. Machine-learning algorithms could help humans wrap our brains around huge, complex problems such as climate change, pandemics and maintaining healthy economies.

That promise makes a return to Asimov’s zero-tolerance policy a no-go.

Instead, Iowa Law’s scholars are exploring ways our current system could adapt to minimize the potential for algorithmic harm and to bring justice to victims when harm does occur.

 

SOLUTION: VICARIOUS CORPORATE LIABILITY

Diamantis proposes a solution that would require no new legislation or regulation: vicarious corporate liability. That is, the law should hold corporations liable for the faulty actions of algorithms they use, much as they are held liable for the faulty actions of their human employees.

This does not mean, Diamantis stresses, that algorithms should be seen as employees. He calls them “employed algorithms.”

“It’s important to respect the human beings we have in our workplaces. They have rights that I don’t think algorithms have,” he explained.

 

SOLUTION: STRICT LIABILITY

Another way that courts could address algorithmic harm: Embrace strict liability, an age-old concept little used in today’s courtrooms. Unlike typical negligence-based liability, the strict liability standard would let a judge hold a company that uses an algorithm responsible for harm even if no one at the company acted carelessly. Tilley suggests that judges consider a corporation liable simply because it chose to outsource a task to a computer program, which is incapable of exercising human care.

Tilley reasons that deploying an algorithm resembles another situation where strict liability has been applied in the past: igniting dynamite.

But what do dynamite and algorithms have in common?

“Once they’re unleashed, they can’t be clawed back,” Tilley explained. She raises the example of a crew taking down a brick building when a toddler wanders near the site.

“If I’m taking them down by hand, I can call the workers off. But once I press the button on the dynamite, there’s nothing I can do to help that toddler.”

 

SOLUTION: TECH-SAVVY REGULATORS

In the realm of lending policy, Odinet thinks regulators such as the Consumer Financial Protection Bureau and the Treasury Department’s Financial Stability Oversight Council (FSOC) could help safeguard borrowers and the economy at large—if they had staff with the technical knowledge to understand what algorithms are doing and what they could do next.

“FSOC needs to become more apprised of the use of algorithms in the financial market. To do that, they need computer scientists who have the necessary expertise,” Odinet said.

 

SOLUTION: HARD-CODING FAIRNESS

Companies could address algorithm harm in-house by adjusting how they design and use the programs, Prince suggests.

Take the example of insurance algorithms discriminating against customers based on race. Remember that AI algorithms may independently identify and discriminate against applicants in ways that disproportionately impact protected groups, such as Black applicants, even when applicant race isn’t provided to the algorithm.

One way to prevent discrimination in this scenario, Prince says, is to proactively add the protected trait to the data set. Users would provide the algorithm with applicant race along with other information and allow the algorithm to calculate the rate using race as a factor. Then, they can remove that specific price difference.

“You control for the protected trait in your algorithm. And then at the back end, you take it out,” Prince explained.
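Continuing the hypothetical insurance data from the earlier sketch, here is a rough illustration of the approach Prince describes: include the protected trait while fitting the model so its effect lands in one explicit coefficient, then strip that contribution back out when setting the price. This is a simplified linear-model sketch, not a description of any insurer’s actual system.

```python
# Sketch of "control for the protected trait, then take it out at the
# back end" on the same hypothetical data as before.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000
race = rng.integers(0, 2, n)
zip_code = race + rng.normal(0, 0.3, n)
age = rng.normal(45, 10, n)
historical_premium = 100 + 2 * age + 30 * race + rng.normal(0, 5, n)

# Fit WITH the protected trait so the proxy feature (zip_code) no
# longer has to carry race's effect.
X = np.column_stack([zip_code, age, race])
model = LinearRegression().fit(X, historical_premium)

# "Back end": price applicants as if everyone belonged to the same
# group, which removes the race-specific price difference.
X_neutral = X.copy()
X_neutral[:, 2] = 0
raw_premium = model.predict(X)
fair_premium = model.predict(X_neutral)

print("gap before adjustment:",
      round(raw_premium[race == 1].mean() - raw_premium[race == 0].mean(), 1))
print("gap after adjustment:",
      round(fair_premium[race == 1].mean() - fair_premium[race == 0].mean(), 1))
```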

For that to happen, industry groups might have to adopt new guidelines, or regulators might need to step in. Some laws would need to change.

“In many states right now, insurance companies can’t collect information about protected traits. You would have to flip the model on its head and change the law in those states to allow insurers to collect that data so that they can test the models,” Prince said.

 

CONCLUSION

Whether through the courts, laws and regulations, or improved industry standards, it’s clear that some things must change if intelligent algorithms are to deliver on their promise.

If the legal system holds back, customers and victims could be increasingly at risk as corporations transfer more and more tasks to algorithms, which can’t be sued or prosecuted. Yet if authorities are too heavy-handed, they could rob our future of the many benefits algorithms could bring.

“We need to encourage corporations to invest in algorithms, but to do so responsibly,” Diamantis said.

 

WHAT IS AN ALGORITHM?

At its most basic, an algorithm is a set of instructions designed to accomplish something. A recipe is an algorithm; so is the process we all learned in school for long division. But today’s AI algorithms can be far more complex, learning and making decisions without human guidance. Algorithms used in many industries today have abilities surpassing those of the human mind: They can take in vast amounts of data and come up with solutions we could not predict. Even though humans create algorithms, the bots’ “thought” processes can be opaque to us. The result: Increasingly, algorithms can surprise us with their behavior.
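The contrast is easier to see with a toy example. A hand-written, recipe-style algorithm, sketched here in Python purely for illustration, spells out every step, so its behavior can be predicted and audited; a machine-learning algorithm instead learns its decision rules from data, which is where the opacity comes from.

```python
# A "recipe" algorithm: explicit, human-written steps with no learning
# involved. Assumes both inputs are positive integers.
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Return (quotient, remainder) by repeated subtraction."""
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor
        quotient += 1
    return quotient, dividend

print(long_division(47, 5))  # (9, 2)
```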