Headshot of Alex Chohlas-Wood in a blue blazer against a red background.

I’m an assistant professor of computational social science at NYU Steinhardt’s Department of Applied Statistics, Social Science, and Humanities. I also co-direct the Computational Policy Lab (CPL) at Harvard Kennedy School.

I study how new computational methods can improve public policy, often by working directly with government agency partners.

Here’s what I’ve been up to recently:

  • Pretrial nudges

    Updated October 1, 2025 · Originally posted June 21, 2023

    An illustration of a text message reminder that encourages a fictional client to attend court.

    Failing to appear in court can land a person in jail and cause them to lose their job or housing. But many people fail to appear (FTA) simply because they forget about their court date.

    In a randomized controlled trial at the Santa Clara Public Defender’s Office, we found that text message reminders reduced FTA-related jail involvement by over 20%. Our findings extend previous studies showing that text message reminders can help people show up to court. Our study was published in Science Advances in 2025.

    We’re now testing whether the standard consequences-focused reminder hurts or helps certain clients, and whether monetary assistance can help clients overcome financial barriers to court attendance.
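
    For intuition on the arithmetic behind an RCT estimate like the 20% figure above, here is a minimal sketch of a two-proportion comparison. All counts are invented placeholders, not data from our study.

    ```python
    # Sketch: estimating a relative reduction in a two-arm RCT with a
    # two-proportion z-test. All counts below are illustrative placeholders,
    # not data from our study.
    from statsmodels.stats.proportion import proportions_ztest

    events = [120, 150]  # hypothetical FTA-related jail events: treatment, control
    n = [1000, 1000]     # clients randomized to each arm

    stat, pval = proportions_ztest(count=events, nobs=n)
    reduction = 1 - (events[0] / n[0]) / (events[1] / n[1])
    print(f"Relative reduction: {reduction:.0%}, z = {stat:.2f}, p = {pval:.3f}")
    ```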

  • Blind charging

    Updated January 1, 2025 · Originally posted June 1, 2019

    A screenshot from a nightly newscast showing Alex Chohlas-Wood presenting race-blind charging at a press conference next to George Gascón.

    With colleagues at CPL, I designed an algorithm that uses computer vision models and LLMs to automatically mask race-related information in police reports. Prosecutors then review these redacted reports and make a race-blind decision to charge or dismiss each case.

    After we ran pilots at the San Francisco and Yolo District Attorney’s offices, California passed a law requiring prosecutors across the state to adopt our intervention.

    We are now studying the impacts of blind charging with a randomized controlled trial. Blind charging has been covered in numerous press articles. Learn more at blindcharging.org.
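
    The production system is more involved, but the core masking step can be sketched simply. The word lists and placeholder scheme below are simplified stand-ins; the real system relies on trained models rather than hand-written dictionaries.

    ```python
    import re

    # Simplified sketch of the masking step in blind charging: replace
    # race-correlated terms in a report narrative with neutral placeholders.
    # Term lists here are toy stand-ins for model-driven detection.
    RACE_TERMS = r"\b(white|black|hispanic|latino|asian)\b"
    PLACEHOLDERS = {
        "Bayview": "[NEIGHBORHOOD]",  # neighborhoods can proxy for race
        "Smith": "[PERSON 1]",        # names are masked consistently per person
        "Jones": "[PERSON 2]",
    }

    def mask_report(text: str) -> str:
        text = re.sub(RACE_TERMS, "[RACE]", text, flags=re.IGNORECASE)
        for term, placeholder in PLACEHOLDERS.items():
            text = text.replace(term, placeholder)
        return text

    report = "Officer observed a white male, Smith, in the Bayview district."
    print(mask_report(report))
    # Officer observed a [RACE] male, [PERSON 1], in the [NEIGHBORHOOD] district.
    ```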

  • Learning to be fair

    December 18, 2024

    A plot of a Pareto curve showing an inherent tradeoff in a ride assistance program: one axis is the number of new court appearances; the other is the average spending per Black client. The downward-sloping curve illustrates that it is not possible to simultaneously maximize both quantities, and four plotted allocations show that several common algorithmic approaches fail to maximize a stakeholder’s assumed utility.

    Many studies have framed algorithmic fairness as a mathematical problem, proposing axiomatic constraints without fully considering the objectives of an intervention.

    My coauthors and I devised a new approach that uses contextual bandits and convex optimization to achieve outcomes that align with policymakers’ preferences for how to make difficult tradeoffs. We demonstrate the advantages of this approach using data from the Santa Clara Public Defender’s Office in a paper in Management Science.
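
    The Pareto curve in the figure reflects an optimization of roughly the following shape, sketched here in a drastically simplified form with synthetic data: allocate ride assistance to maximize a weighted combination of expected court appearances and spending on Black clients, subject to a budget, then sweep the weight to trace the frontier.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n = 200
    benefit = rng.uniform(0.05, 0.30, n)  # est. gain in appearance probability
    cost = rng.uniform(20, 80, n)         # cost of assistance per client
    is_black = rng.random(n) < 0.4        # synthetic group indicator
    budget = 3000.0

    for lam in np.linspace(0, 1, 5):
        # Maximize (1 - lam) * appearances + lam * spending on Black clients,
        # i.e., minimize the negative, subject to the budget constraint.
        c = -((1 - lam) * benefit + lam * cost * is_black)
        res = linprog(c, A_ub=cost[None, :], b_ub=[budget], bounds=[(0, 1)] * n)
        x = res.x
        appearances = benefit @ x
        black_spend = (cost * is_black) @ x
        print(f"lam={lam:.2f}: expected new appearances {appearances:6.1f}, "
              f"spending on Black clients ${black_spend:7.0f}")
    ```

    Sweeping lam traces out allocations along the frontier. Our actual approach combines contextual bandits with convex optimization; this sketch only illustrates the tradeoff being navigated.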

  • Equitable algorithms

    July 24, 2023

    Two plots of data side-by-side. On the left, two panes showing diverging and overlapping lines for risk assessment ratings for patients of different races or ethnicities. On the right, a stacked pair of histograms showing which patients would be referred to a diabetes exam.

    The last few years have seen an explosion in research on how to constrain algorithms to avoid inequitable decision-making.

    My colleagues and I wrote a short guide for Nature Computational Science that synthesizes this research, illustrates drawbacks to several widely cited approaches, and outlines practical steps people can take in their quest for equitable algorithms.
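
    One recurring theme in the guide is that a single risk score can behave differently across groups with different base rates, so popular fairness criteria cannot all hold at once. A toy illustration with synthetic data (not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two groups with identical score accuracy but different base rates of
    # the outcome (e.g., diabetes). Synthetic data for illustration only.
    def simulate(n, base_rate):
        y = rng.random(n) < base_rate
        score = np.clip(0.5 * y + rng.normal(0.3, 0.15, n), 0, 1)
        return score, y

    threshold = 0.5  # refer everyone above this score for an exam
    for group, base_rate in [("A", 0.10), ("B", 0.25)]:
        score, y = simulate(10_000, base_rate)
        referred = score >= threshold
        fnr = ((~referred) & y).sum() / y.sum()      # cases missed
        ppv = (referred & y).sum() / referred.sum()  # precision of referrals
        print(f"group {group}: referral rate {referred.mean():.1%}, "
              f"FNR {fnr:.1%}, PPV {ppv:.1%}")
    ```

    Even though the score is equally accurate for both groups, referral rates and precision diverge because base rates differ, which is one reason blanket parity constraints can misfire.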

  • Assessing police stop policies

    March 1, 2022

    A chart showing changing rates of police stops and criminal activity in Nashville, Tennessee.

    My colleagues and I described how data analysis can assess the quality of police stop policies, complementing other research which investigates individual stop decisions.

    We gave applied examples from a handful of major cities across the U.S., including Nashville, New York, Chicago, and Philadelphia. Our paper was published in the University of Chicago Law Review, and I summarized it in a Twitter thread.

    In related work, we partnered with the Policing Project and the city of Nashville to demonstrate that traffic stops were an ineffective tool for fighting crime. After the release of our report, the city’s police department reduced its use of traffic stops by 70%.

  • Patternizr

    March 10, 2019

    A thumbnail from a WSJ story linked to in the post.

    When I was the director of analytics at the NYPD, I designed and deployed a tool called Patternizr that automatically suggests potentially related crimes (“candidates”) to detectives trying to identify emerging crime patterns. (Detectives still have to submit these candidates for review and official approval before they are formally designated as patterns.)

    My colleague and I described our approach in a paper in the INFORMS Journal on Applied Analytics. Our study included a first-of-its-kind analysis from a police department demonstrating that the tool does not disproportionately recommend individuals of any particular racial group. Patternizr was featured in several articles.
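
    At its core, a tool like this scores pairs of crime complaints for similarity and surfaces the highest-scoring candidates. The sketch below shows that general shape with invented features; the production feature set and model are far more elaborate.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)

    # Pairwise similarity sketch: featurize pairs of complaints, train on
    # historical pattern labels, then rank candidates for a seed complaint.
    # All features and labels below are synthetic.
    n_pairs = 5_000
    X = np.column_stack([
        rng.exponential(2.0, n_pairs),   # distance between incidents (km)
        rng.exponential(10.0, n_pairs),  # time gap (days)
        rng.integers(0, 2, n_pairs),     # same premise type?
        rng.random(n_pairs),             # M.O. text overlap score
    ])
    # Synthetic labels: nearby, similar pairs are more likely true matches.
    logit = -1 - 0.8 * X[:, 0] - 0.05 * X[:, 1] + 1.5 * X[:, 2] + 2 * X[:, 3]
    y = rng.random(n_pairs) < 1 / (1 + np.exp(-logit))

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Score candidate pairs involving a new seed complaint; top hits go to
    # a detective for review, not automatic designation as a pattern.
    scores = model.predict_proba(X[:20])[:, 1]
    print("top candidates:", np.argsort(scores)[::-1][:3])
    ```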

  • Auditron

    March 21, 2016

    A video still of the conference panel with Alex Chohlas-Wood as one of the presenters.

    I designed an algorithm for the NYPD that searched for crimes that had been misclassified as felonies or misdemeanors. Likely misclassifications were sent to an internal team for auditing and correction.

    I presented my approach at NYU’s Tyranny of the Algorithm? Predictive Analytics & Human Rights conference.
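
    The general recipe can be sketched as follows: train a text classifier on complaint narratives with their recorded felony/misdemeanor labels, then flag records where the model confidently disagrees with the recorded label. The narratives below are invented toy examples, not NYPD data.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy audit model: learn the narrative -> classification mapping, then
    # surface records where the model disagrees with the recorded label.
    narratives = [
        "suspect forcibly removed property from victim's person",
        "suspect took unattended phone from table",
        "suspect displayed a weapon and demanded money",
        "suspect shoplifted items valued under fifty dollars",
    ] * 25
    labels = ["felony", "misdemeanor", "felony", "misdemeanor"] * 25

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(narratives, labels)

    # A record logged as a misdemeanor whose narrative reads like a felony.
    narrative, recorded = (
        "suspect displayed a weapon and forcibly removed property from victim",
        "misdemeanor",
    )
    proba = model.predict_proba([narrative])[0]
    predicted = model.classes_[proba.argmax()]
    print(f"recorded: {recorded}, model: {predicted} ({proba.max():.0%})")
    if predicted != recorded:
        print("-> send to audit team")
    ```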