People of ACM - Sorelle Friedler
December 2, 2025
Your PhD thesis explored geometric algorithms for objects in motion. What led to your current work in the fairness and transparency of AI algorithms?
After completing my PhD at the University of Maryland, I moved across the country to work for Google. There, I was part of a team working to solve the problem of locating individuals indoors, where GPS signals don’t reach. We used machine learning to locate phones based on the available WiFi signals and their strengths. When I returned to academia as a professor, I continued to work on machine learning projects. At a machine learning conference, I attended a panel on the ethical implications of data mining, and a lunchtime conversation afterwards with Suresh Venkatasubramanian and Carlos Scheidegger began what would become a long-running collaboration focused on the fairness and interpretability of machine learning.
In one of your most cited papers, “Fairness and Abstraction in Sociotechnical Systems,” you (along with co-authors Andrew D. Selbst, Suresh Venkatasubramanian, danah boyd, and Janet Vertesi) contend that researchers working to develop fair ML systems can “abstract away the social context in which these systems will be deployed.” Will you briefly discuss this problem? What remedies do you propose in your paper?
We teach our students the importance of abstraction as a basic building block of computer science—it is one of the elegant and powerful key concepts of the field! However, abstraction also presents a trap that is easy to fall into when designing systems that need to work in a real-world context, especially systems like those in fair machine learning that must take complex societal contexts into account. The trap is abstracting away some of the societal context that actually matters to deploying a system thoughtfully, or to deciding whether it makes sense to deploy a machine learning system at all. This might mean assuming that human discretion will correct a system’s mistaken predictions, without accounting for automation bias—the human practice of deferring to a system’s predictions. Or it might mean assuming that introducing a machine learning system to make predictions is appropriate in a scenario when it’s not, by not accounting for the importance we all place on being seen and receiving human consideration from others. Appropriately engaging such questions about human involvement in a system’s deployment depends on not falling into the trap of abstracting away important societal context.
In 2018, why did you and your colleagues initiate the ACM FAccT conference? What has surprised you the most about FAccT’s development over the past 7 years?
FAccT grew out of a series of workshops, beginning in 2014, called FAT/ML. I was at the first FAT/ML in Montreal, a satellite event of NeurIPS 2014, and it was very small, with perhaps just 30 or so people attending. It's been astonishing to watch the subfield grow since then; FAccT is now a standalone conference with between 500 and 800 attendees yearly. One of the things I've appreciated about how the field has grown is its interdisciplinarity, with contributors from computer science as well as law, philosophy, sociology, and many other fields. Now that the field is more than a decade old, there are also researchers who identify FAccT as their home domain and conference, and whose work is characterized by this interdisciplinarity. Additionally, it's been wonderful to see the policy impact of the field and the way FAccT has helped shift the narrative from an assumed neutrality and acceptance of technology (and especially AI) to a broader public understanding that we need to proactively take steps to ensure these technologies work as we want them to.
Emerging AI technologies, including intelligent chatbots, have burst onto the scene. How will these more recent technologies present new challenges to AI responsibility efforts?
I believe that most of the earlier work on responsible AI carries over to today's chatbots and other large AI systems. The same needs for fairness, accountability, and transparency apply, as do many of the same difficulties posed by the opacity of these systems. One thing that is somewhat different is that it's harder for academic researchers to intervene directly in the training process of these systems. Instead, there's been a growth in important work on auditing, as well as on post-training modifications to open models.
As the recently appointed Chair of USTPC, what are a few pressing policy issues you anticipate working on with your fellow committee members?
AI is of course on everyone's mind lately, and it's a useful place for USTPC members to lend our expertise. But there are important and current issues across the full range of committee members' interests and expertise. For example, government use of technology has been changing rapidly in recent years, and it's important that those changes are grounded in the reality of what technology can and cannot do. Privacy and surveillance concerns are also on the rise with the increased prevalence of security cameras and facial recognition, and cybersecurity needs are ever increasing. In addition to continuing committee work on these topics of long-term interest, I hope we can expand our scope to also consider the environmental impact of computing. As data centers are built across the country and their energy and water needs strain local communities, it's important for ACM to weigh in on these real-world impacts.
Sorelle Friedler is the Shibulal Family Professor of Computer Science at Haverford College and a Nonresident Senior Fellow at the Brookings Institution. The core focus of her work is the fairness, accountability, and transparency of machine learning. During her tenure as Assistant Director for Data and Democracy in the White House Office of Science and Technology Policy, she co-authored the Blueprint for an AI Bill of Rights.
Friedler was also a co-founder of the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), and was recently named Chair of the ACM US Technology Policy Committee (USTPC). USTPC comprises more than 170 members and serves as the focal point for ACM's interaction with the US government, the computing community, and the public on policy matters.