Ro Encarnación, a Penn Engineering doctoral student, studies how algorithms can be held accountable and incorporate community concerns. (Eric Sucar)

By Ian Scheffler

For Ro Encarnación, a doctoral student in Computer and Information Science (CIS) at Penn Engineering, computers have always been a source of wonder. She still remembers seeing one for the first time at the home of a family friend. “I would just stand by that room,” Encarnación recalls, “wanting to use the computer so badly.” In-Tech Academy, a combined middle and high school in her native Bronx, New York, introduced her to web design. “Creating websites using computing languages was like magic to me,” Encarnación says. 

Today, Encarnación is one of the first doctoral students in Penn Engineering’s Human-Computer Interaction Lab, co-founded by Danaë Metaxa, Raj and Neera Singh Term Assistant Professor in CIS and Encarnación’s advisor, and Andrew Head, Assistant Professor in CIS. “Ro is an ideal founding lab member: driven, sharp and caring, with a powerful vision for her career,” says Metaxa. “In fact, she was just awarded the prestigious Graduate Research Fellowship from the National Science Foundation, a very well-deserved honor.”

Encarnación’s research focuses, broadly speaking, on algorithmic justice — the idea that people affected by algorithms in their day-to-day lives should have a say in how those algorithms are designed and used. “The goal is to make sure that these systems are designed in a way that adapts to community concerns,” says Encarnación.

Prior to matriculating at Penn Engineering, Encarnación worked for a year on Capitol Hill as a TechCongress fellow for U.S. Representative Yvette D. Clarke. “Going through that experience helped me see how much research is necessary to provide evidence for tech policy,” says Encarnación, “especially when there is limited understanding of the underlying technology.” 

After working for Rep. Clarke, Encarnación decided to pursue a Ph.D. at Penn Engineering. “I saw a disconnect,” she says. “Researchers aren’t always reaching out to Congress to offer their expertise in generating policy.” As she saw it, conducting research could ultimately have a greater impact than staying in Congress. “Academics can really fill the gap.” 

In a forthcoming paper, Encarnación explores biases in TikTok’s AI Manga filter, which redraws users as characters in the style of Japanese comics. Perhaps unsurprisingly, the algorithm struggles to depict non-white skin tones, frequently hypersexualizes users, occasionally misgenders them and often renders objects or empty space as thin, white women.

However, rather than simply cataloging harms, Encarnación and her coauthors set out to see how users themselves were responding to the algorithm’s biased results. The researchers coined a new term, “emergent auditing,” to describe this entirely user-led approach to analyzing algorithms, in which insight into a system’s biases comes not from researchers but directly from the system’s users.

In this case, a majority of the user videos in the study showed users engaging with the algorithm’s biased outputs in a surprisingly lighthearted manner. “A lot of it was very inappropriate,” says Encarnación, “but they were having fun with it, and uncovering how far you can go to make the algorithm do what you want, which honestly should be a goal for any system designer.”

In the paper, Encarnación and her coauthors describe users’ findings as a form of “collective knowledge construction,” in which research on TikTok’s AI Manga filter was essentially carried out by the users themselves, albeit in an entirely uncoordinated fashion.

The concept of collective knowledge construction originally comes from the education literature, where it describes knowledge that students build together. “We found that the concept translated to the behavior we saw in this study,” says Encarnación. “Given no prior guidance on what to do with this filter, users constructed their own ideas and disseminated them by posting their theories on the platform.”

In the future, Encarnación hopes to see more research that puts users at the center of studying algorithmic bias. “Marginalized communities are not a monolith,” she says. “Not everybody’s going to be the same — but that should not deter engineers from trying to create systems that are still accountable to users.”

To learn more about Encarnación’s research, please visit her website. To learn more about the Penn Engineering Human-Computer Interaction Lab, please visit the HCI website.