Improving our Digital Environments with Citizen Science: J. Nathan Matias at the MIT Data + Feminism Lab
Last week I had the pleasure of hosting J. Nathan Matias, assistant professor of communication at Cornell University and director of the Citizens and Technology Lab, for a conversation at the Berkman Klein Center at Harvard. I also attended his public talk later that day at MIT DUSP, hosted by the Data + Feminism Lab. I'm sharing this writeup of his talk as a resource for those who weren't able to attend.
Nathan opened with a quote from John Dewey's The Public and Its Problems, on the question of technology and democracy: "Man, a child in understanding of himself, has placed in his hands … tools of incalculable power. Whether they work harm or good is largely a matter of accident." Dewey's view was that people are much better at building things than they are at understanding themselves and the world. Dewey was thinking about WWI, the media and technology of his time, and the problem of democracy more broadly. But the mismatch he observed still creates dilemmas for managing the technology we put into the world.
The things people have built and put into the digital environment have created outcomes for harm and good that are largely accidental. According to Pew, a growing proportion of Americans experience severe forms of harassment online, including physical threats, stalking, sexual harassment, and sustained harassment. These behaviors are propagated by algorithms, which reproduce the ethnic and gender biases that are common patterns in society. Many people have questions and concerns about the effects of these technologies.
Nathan’s driving question as a scientist is: how can science make digital environments reliably safer and fairer, and can we make the impacts of platforms and AI less accidental and more deliberately directed toward the common good? Questions about whether the digital environment could be safer unite the interests of different groups of people. People who worry about technology and care about their communities desperately need answers that apply to their immediate context. Policymakers need answers and evidence that they can apply to policy and regulation. And scientists who care about creating theory from generalizable, causal knowledge are invested in finding answers.
Despite the pressing need for research, tech companies have fewer and fewer incentives to engage in the kind of science that would give the public assurances about the safety of their technologies. Worried that competitors will scrape their data to train generative AI, companies are restricting data access more and more, which also curtails accountability research. For instance, last year Reddit restricted the API access that volunteer moderators and independent researchers rely on to work toward online safety. All of this is happening amid government inaction, as when the Supreme Court reversed a conviction against a perpetrator of sustained online harassment.
We’ve been here before: participatory science in food testing and product safety
This isn't the first time the public has been concerned about new technologies while the science to address safety concerns has been hard to produce. We've been here before with public health, auto safety, and environmental justice, just to name a few. Nathan leads the Citizens and Technology Lab (CAT Lab), a community/citizen science lab that works with millions of people on research about our digital environments, independently from the tech industry. At CAT Lab, researchers and community partners study safety in our digital environments by learning from what community science has already achieved for our planet's biodiversity, air, water, food and drug safety, and other issues for the common good. Their work takes inspiration from scientists like Ellen Swallow Richards, Hugh DeHaven, and Elinor Ostrom, who pioneered independent testing and citizen action as ways to advance science and build power and accountability in these fields.
Consider Ellen Swallow Richards, a pioneer of pollution tests, food labels, and citizen science. In the late 19th century, before the FDA existed, food producers cut costs and increased revenue by watering down milk, adding alum to bread, and lacing food and drugs with opioids to hook people on their products. Richards' tests held polluters accountable and motivated the first US water treatment plants. Richards was one of the first women to study at MIT and eventually became its first woman faculty member. At the time, women were not allowed in the chemistry lab, so she founded the MIT Women's Lab, open not only to women at MIT but to all women in Boston. The lab opened chemistry to Boston women who cared about food safety, on the idea that women trained in chemistry could do scientific research that informed public health.
Public health became a big part of the women's movement in the late 19th century and informed the establishment of the FDA, which now regulates the safety of our food supply. At the time, people pilloried the idea that everyone would become a scientist testing their own food, while also lamenting the state of food safety: people were so unsure about their food that they were seriously considering building the capacity for at-home food testing. Good Housekeeping, a home economics membership organization and magazine founded in 1885, crowdsourced a chemistry lab where subscribers paid for testing. Companies were routinely producing food that poisoned people, and without enough testing they weren't even sure they were doing it. Six years later, when the US Congress created the Food and Drug Administration, it appointed Harvey Wiley, head of the Good Housekeeping labs, as its first commissioner. Similar models for product safety exist today, like Consumer Reports: when governments decline to regulate, membership organizations can still provide trusted information to the public.
CAT Lab: Understanding social impacts and testing ideas for change
This brings us back to the question: How do we learn from this history to hold digital power accountable? CAT Lab does this by organizing citizen science to test the social impacts of digital technologies and discover effective ideas for change.
CAT Lab takes inspiration from these wider histories in which the public collaborated to collect information and produce systematic knowledge about their concerns and hopes for technology. This involves two parts: understanding impacts, and discovering and testing effective ideas for change. If all you do is find that all the food is poisoned or all the tech is harmful and stop there, you're back at square one. Many of the communities CAT Lab works with care deeply about testing ideas for change.
Understanding impacts
For instance, in 2015 Nathan and his collaborators were approached by Women, Action & the Media (WAM!), which organized hundreds of women who routinely faced online harassment. At the time, harassment campaigns like GamerGate were representative of the high volume of harassment women experienced online. Companies and law enforcement were unresponsive to direct threats of violence against women, and people like Anita Sarkeesian documented the large volume of harassing messages they and others frequently faced.
WAM! made reports of harassment to Twitter and collected data about how Twitter responded. These women felt their stories were not being listened to, and they also wanted to understand the problem better themselves, so they asked researchers to analyze the data they were collecting. Together, they produced a report describing the harassment women experienced and how little the company responded. At the time, companies were not producing reports like this internally, so the work had to happen from the outside. The report influenced policy as people tried to close the enforcement gap.
But by the time they finished the report, the people at Twitter had refused to meet with them about it or even to receive the document, even though Twitter had encouraged the project early on. We don't know why, but it made Nathan realize how important it would be to do work that did not rely on the decisions of people inside tech companies for change to happen. The refusal also suggested that the internal environment may have made this kind of work even harder to do from the inside.
Testing ideas for change
What do people do when they receive harassment on Reddit? If the comment isn’t illegal, and it doesn’t violate Reddit’s policies, the site’s professional staff are unlikely to do anything. Other readers could downvote the comment, making it less prominent. Or someone could report it to community moderators, who may remove it. But you can’t undo the harm once the comment has been read.
Since 2015, Nathan has been doing ethnographic fieldwork with Reddit moderators and Twitter harassment reporters to understand the nature of the governance work they do, how they organize online, and how they relate to platforms and the law. It was during that ethnographic research that volunteer moderators of a discussion community asked him how they could test ideas for preventing harassment, not just responding to it.
To do this, they turned to the idea of norms: human behavior is guided by subjective perceptions of what is common or acceptable. Together, Nathan and the moderators asked: how does normative information influence newcomer behavior? Reddit is very algorithm-driven: people can get dropped into conversations without knowing anything about a community's norms or context. Informed by social psychology theories of social norms, their intervention established norms about what behavior was acceptable by naming the rules and their consequences. They developed software that coordinated the study with the community's consent, and they found that posting the rules increased rule compliance by more than 8% and increased newcomer participation by 70%.
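To make the design concrete, here is a minimal sketch of the logic of a thread-level norms experiment like the one described above. It is a hypothetical illustration, not CAT Lab's actual CivilServant code: the `Thread` fields, the `post_rules_sticky` stub, and the 50/50 assignment are all assumptions, and a real study would also involve pre-registration, statistical power analysis, and community consent.

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class Thread:
    thread_id: str
    treated: bool = False          # did the bot post the rules at the top?
    newcomer_comments: int = 0     # comments from first-time participants
    compliant_comments: int = 0    # newcomer comments that follow the rules

def post_rules_sticky(thread_id: str) -> None:
    """Stub for the moderation bot that posts the community rules."""
    print(f"Posting rules at the top of thread {thread_id}")

def assign_treatment(threads: List[Thread], seed: int = 42) -> None:
    """Randomly assign each new thread to treatment or control."""
    rng = random.Random(seed)
    for t in threads:
        t.treated = rng.random() < 0.5
        if t.treated:
            post_rules_sticky(t.thread_id)

def compliance_rate(threads: List[Thread]) -> float:
    """Share of newcomer comments that follow the rules."""
    newcomers = sum(t.newcomer_comments for t in threads)
    compliant = sum(t.compliant_comments for t in threads)
    return compliant / newcomers if newcomers else 0.0

def estimated_effect(threads: List[Thread]) -> float:
    """Difference in newcomer rule compliance, treated minus control."""
    treated = [t for t in threads if t.treated]
    control = [t for t in threads if not t.treated]
    return compliance_rate(treated) - compliance_rate(control)
```

Randomizing at the thread level rather than the user level matters here: the intervention is visible to everyone in a thread, so the thread is the natural unit of assignment and analysis.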
Because CAT Lab had reusable software infrastructure for running experiments, they were able to support other moderators in replicating the study in other communities and on other platforms. They were also able to ask questions people cared about beyond preventing harassment, including limiting misinformation, managing conflict, moderating large Q&As, creating networks of appreciation, and supporting newcomers. In the last seven years, they've run roughly 20 field experiments with the CivilServant software: so many that they are now bottlenecked on writing up the results. It's easier to produce experiments than it is to produce papers.
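One way to picture that reusable infrastructure: if the experiment logic is driven by a declarative configuration, another community can replicate a study by swapping parameters rather than writing new software. The sketch below is a hypothetical configuration in that spirit; it is not CivilServant's actual format, and every field name is an assumption.

```python
# Hypothetical experiment configuration for a reusable experimentation
# platform. Field names and values are illustrative assumptions, not
# CivilServant's actual configuration format.
norms_experiment = {
    "platform": "reddit",
    "community": "r/example_community",    # hypothetical community name
    "unit_of_assignment": "thread",        # randomize threads, not users
    "arms": {
        "control": None,                   # no intervention
        "treatment": "post_rules_sticky",  # intervention action to run
    },
    "assignment_probability": 0.5,
    "outcomes": ["newcomer_rule_compliance", "newcomer_participation"],
    "requires_community_consent": True,    # consent gates the whole study
}
```

A replication on another community or platform would change the `platform`, `community`, and intervention fields while keeping the assignment and analysis machinery the same.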
Organizing independent tech researchers for mutual aid and support
In addition to CAT Lab, Nathan helps organize the Coalition for Independent Technology Research. Companies and governments have gone after researchers doing accountability work: examples include Facebook's actions to shut down the NYU Ad Observatory, and the Texas state legislature's TikTok ban, which required University of Texas researchers to shut down their research on the platform. The Coalition supports and defends independent research on tech and society. It has organized lawsuits to challenge the TikTok ban (which they unfortunately lost), filed amicus briefs, and collected evidence that helped defeat attempts by X to threaten researchers who study online hate. The Coalition also organizes peer support and pressure campaigns; external pressure helps people inside companies create better transparency and data-sharing regimes for industry accountability (e.g., the RIP CrowdTangle campaign, responding to Meta's shutdown of a widely used transparency tool).