Wrote Doughty, “Defendants ‘significantly encouraged’ the social-media companies to such extent that the decisions (of the companies) should be deemed to be the decisions of the government.”
Doughty’s injunction, now on hold while the White House appeals, attempts to set the bounds of acceptable conduct for government IRUs. It carves out an exemption allowing officials to continue notifying social media companies about illegal activity or national security issues. Emma Llansó, director of the Free Expression Project at the Center for Democracy & Technology in Washington, DC, says that leaves much unsettled, because the line between thoughtful protection of public safety and unfair suppression of critics can be thin.
The EU’s new approach to IRUs also strikes some activists as compromised. The Digital Services Act (DSA) requires each EU member state to designate a national regulator by February that will take applications from government agencies, nonprofits, industry associations, or companies seeking to become trusted flaggers, entities whose reports of illegal content go directly to Meta and other medium-to-large platforms. Reports from trusted flaggers must be reviewed “without undue delay,” on pain of fines of up to 6 percent of a company’s global annual sales.
The law is intended to make IRU requests more accurate by limiting trusted-flagger status to organizations with expertise in particular categories of illegal content, such as racist hate speech, counterfeit goods, or copyright violations. Those organizations will also have to disclose annually how many reports they filed, to whom, and with what results.
But the disclosures will have significant gaps, because they will cover only requests concerning content that is illegal in an EU state, so reports of content flagged solely for violating terms of service will go unseen. Tech companies are not required to give priority to reports of rule-breaking content, but nothing stops them from doing so. And platforms can still work with unregistered trusted flaggers, essentially preserving today’s opaque practices. The DSA does require companies to publish all their content moderation decisions to an EU database “without undue delay,” but the identity of the flagger can be omitted.
“The DSA creates a new, parallel structure for trusted flaggers without directly addressing the ongoing concerns with actually existing flaggers like IRUs,” says Paddy Leerssen, a postdoctoral researcher at the University of Amsterdam who is involved in a project providing ongoing analysis of the DSA.
Two EU officials working on DSA enforcement, speaking on condition of anonymity because they were not authorized to speak to media, say the new law is intended to ensure that all 450 million EU residents benefit from the ability of trusted flaggers to send fast-track notices to companies that might not cooperate with them otherwise. Although the new trusted-flagger designation was not designed for government agencies and law enforcement authorities, nothing blocks them from applying, and the DSA specifically mentions internet referral units as possible candidates.
Rights groups are concerned that if governments participate in the trusted flagger program, it could be used to stifle legitimate speech under some of the bloc’s more draconian laws, such as Hungary’s ban (currently under court challenge) on promoting same-sex relationships in educational materials. Eliška Pírková, global freedom of expression lead at Access Now, says it will be difficult for tech companies to stand up to the pressure, even though national coordinators can suspend trusted flaggers deemed to be acting improperly. “It’s the total lack of independent safeguards,” she says. “It’s quite worrisome.”
Twitter barred at least one human rights organization from its highest-priority reporting queue a couple of years ago because it filed too many erroneous reports, the former Twitter employee says. But dropping a government flagger could prove more difficult. Hungary’s embassy in Washington, DC, did not respond to a request for comment.
Tamás Berecz, general manager of INACH, a global coalition of nongovernmental groups fighting hate online, says some of its 24 EU members are contemplating applying for official trusted flagger status. But they have concerns, including whether coordinators in some countries will approve applications from organizations whose values don’t align with the government’s, such as a group monitoring anti-gay hate speech in Hungary, where same-sex marriage is forbidden. “We don’t really know what’s going to happen,” says Berecz, though he leaves room for some optimism. “For now, they are happy being in an unofficial trusted program.”