The Daily Pennsylvanian is a student-run nonprofit.



Statistics professor James Johndrow has helped develop a method to remove algorithmic biases.

Predictive modeling, or the use of data mining and probability to forecast outcomes, is often used to eliminate human bias when making important decisions. However, recent research has revealed that many algorithms contain the same biases they were created to remove. 

Statistics professor James Johndrow and his wife, statistician Kristian Lum, have developed a method to remove these biases, particularly in the field of criminology. Earlier this year, the two published a research report titled “Algorithm for Removing Sensitive Information: Application to Race-Independent Recidivism Prediction” through the Institute of Mathematical Statistics. The paper proposes both a probabilistic notion of algorithm bias as well as a method for removing this bias.

In criminal justice, algorithms are used to assess the risk posed by a defendant when making decisions about sentencing, bail, and parole. Johndrow and Lum’s paper argues that models currently in use in the criminal justice system, deemed “race-neutral” because they omit race itself as a variable, are still inherently biased and result in “racially disparate predictions.” 

The new model proposed by Lum and Johndrow focuses on recidivism prediction, or the likelihood that a person will commit another crime after being released from prison. The method uses statistical analysis to strip the data of all information about the sensitive variable before that data is handed to those responsible for training the algorithm. 

In a podcast with Knowledge@Wharton, Johndrow said, “What we want, what our notion of fairness is, is that the predictions of the algorithm don’t differ by race or sex. What that means statistically is that when you make the predictions, there is no information on race left in the prediction.”
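The idea in the quote above can be illustrated with a small sketch. The example below shows one common, simplified approach to removing a sensitive variable's information from a predictor: "residualizing" a feature on the sensitive attribute so that what remains is uncorrelated with it. This is a minimal illustration with synthetic data, not the authors' actual method or code; all variable names and the setup are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a binary sensitive attribute and a feature that "leaks" it.
# (Illustrative only -- not the authors' data or model.)
n = 1000
sensitive = rng.integers(0, 2, n).astype(float)   # protected group label
feature = 2.0 * sensitive + rng.normal(size=n)    # feature correlated with it

# Residualize: remove the component of the feature that is linearly
# predictable from the sensitive attribute, keeping only the residual.
X = np.column_stack([np.ones(n), sensitive])
beta, *_ = np.linalg.lstsq(X, feature, rcond=None)
adjusted = feature - X @ beta

# The adjusted feature is now linearly uncorrelated with the attribute,
# so a model trained on it cannot recover that information linearly.
print(abs(np.corrcoef(sensitive, feature)[0, 1]))   # large
print(abs(np.corrcoef(sensitive, adjusted)[0, 1]))  # near zero
```

A model trained on `adjusted` rather than `feature` produces predictions that carry no linear trace of the sensitive attribute, which is the statistical intuition behind "no information on race left in the prediction."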

Along with Johndrow, other researchers are studying algorithms at Penn. This past March, Wharton professor of Technology and Digital Business Kartik Hosanagar published a book called “A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control.” The book looks at how consumers can advocate for big businesses to create fairer, more transparent algorithms.