Electronic Frontier Foundation

IRS-ICE Immigrant Data Sharing Agreement Betrays Data Privacy and Taxpayers’ Trust

EFF - Fri, 04/25/2025 - 1:22pm

In an unprecedented move, the U.S. Department of the Treasury and the U.S. Department of Homeland Security (DHS) recently reached an agreement allowing the IRS to share certain immigrants’ taxpayer information with Immigration and Customs Enforcement (ICE). The redacted 15-page memorandum of understanding (MOU) was exposed in a court case, Centro de Trabajadores Unidos v. Bessent, which seeks to prevent the IRS from making unauthorized disclosures of taxpayer information for immigration enforcement purposes. Weaponizing government data vital to the functioning and funding of public goods and services by repurposing it for law enforcement and surveillance is an affront to a democratic society. In addition to the human rights abuses this data-sharing agreement empowers, this move threatens to erode trust in public institutions in ways that could bear consequences for decades. 

Specifically, the government justifies the MOU by citing Executive Order 14161, which was issued on January 20, 2025. The Executive Order directs the heads of several agencies, including DHS, to identify and remove individuals unlawfully present in the country. Making several leaps, the MOU states that DHS has identified “numerous” individuals who are unlawfully present and have final orders of removal, and that each of these individuals is “under criminal investigation” for violation of federal law—namely, “failure to depart” the country under 8 U.S.C. § 1253(a)(1). The MOU uses this as the basis for the IRS to disclose to ICE taxpayer information that is otherwise confidential under the tax code.  

In practice, the new data-sharing process works like this: ICE submits a request containing an individual’s name and address, the taxable periods to which the return information pertains, the federal criminal statute being investigated, and the reasons why disclosure of this information is relevant to the criminal investigation. Once the IRS receives this request from ICE, the agency reviews it to determine whether it falls under an exception to the statutory confidentiality requirement, and provides an explanation if the request cannot be processed. 
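
Reduced to its mechanics, the MOU describes a simple request-and-review protocol. Here is a schematic sketch of that flow as the memorandum describes it; the field and function names are our own illustration, not anything in the government’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class DisclosureRequest:
    # Fields the MOU says ICE must supply; names here are ours.
    name: str
    address: str
    taxable_periods: list[str]     # periods the return information covers
    criminal_statute: str          # e.g., "8 U.S.C. § 1253(a)(1)"
    relevance_explanation: str     # why disclosure aids the investigation

def irs_review(req: DisclosureRequest) -> tuple[bool, str]:
    """Sketch of the IRS gatekeeping step: check whether the request fits
    an exception to taxpayer confidentiality, and explain if it does not."""
    if not all([req.name, req.address, req.taxable_periods,
                req.criminal_statute, req.relevance_explanation]):
        return False, "incomplete request: cannot be processed"
    # The MOU leaves the substance of this check unspecified -- which is
    # precisely the verification gap discussed below.
    return True, "forwarded for a disclosure determination"
```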

But there are two big reasons why this MOU fails to pass muster. 

First, as the NYU Tax Law Center identified:

“While the MOU references criminal investigations, DHS recently reportedly told IRS officials that ‘they would hope to use tax information to help deport as many as seven million people.’ That is far more people than the government could plausibly investigate, or who are plausibly subject to criminal immigration penalties, and suggests DHS’s actual reason for pursuing the tax data is to locate people for civil deportation, making any ‘criminal investigation’ a false pretext to get around the law.” 

Second, it’s unclear how the IRS would verify the accuracy of ICE’s requests. Recent events have demonstrated that ICE’s deportation mandate trumps all else—with ICE obfuscating, ignoring, or outright lying about how they conduct their operations and who they target. While ICE has fueled narratives about deporting “criminals” to a notorious El Salvador prison, reports have repeatedly shown that most of those deported had no criminal histories. ICE has even arrested U.S. citizens based on erroneous information and blatant racial profiling. But ICE’s lack of accuracy isn’t new—in fact, a recent settlement in the case Gonzalez v. ICE bars ICE from relying on its network of erroneous databases to issue detainer requests. In that case, EFF filed an amicus brief identifying the dizzying array of ICE’s interconnected databases, many of which were out of date and incomplete and yet were still relied upon to deprive people of their liberty. 

In the wake of the MOU’s signing, several top IRS officials have resigned. For decades, the agency maintained that its sole interest was collecting tax revenue, and it promised to keep taxpayers’ information confidential. Undocumented immigrants were encouraged to file taxes, despite being unable to reap benefits like Social Security because of their status. Many did, often because any promise of a future pathway to legalizing their immigration status hinged on having fulfilled their tax obligations. Others did because, as part of mixed-status families, they were able to claim certain tax benefits for their U.S. citizen children. The MOU weaponizes that trust and puts immigrants in an impossible situation—either fail to comply with tax law or risk facing deportation if their tax data ends up in ICE’s clutches. 

This MOU is also sure to have a financial impact. In 2023, it was estimated that undocumented immigrants contributed $66 billion in federal and payroll taxes alone. Experts anticipate that due to the data-sharing agreement, fewer undocumented immigrants will file taxes, resulting in over $313 billion in lost tax revenue over 10 years. 

This move by the federal government not only betrays taxpayers and erodes vital trust in necessary civic institutions—it also reminds us of how little we have learned from U.S. history. After all, it was a piece of legislation passed in a time of emergency, the Second War Powers Act, that included the provision that allowed once-protected census data to assist in the incarceration of Japanese Americans during World War II. As the White House wrote in a report on big data in 2014, “At its core, public-sector use of big data heightens concerns about the balance of power between government and the individual. Once information about citizens is compiled for a defined purpose, the temptation to use it for other purposes can be considerable.” Rather than heeding this caution, this data-sharing agreement seeks to exploit it. This is yet another attempt by the current administration to sweep up and disclose large amounts of sensitive and confidential data. Courts must put a stop to these efforts to destroy data privacy, especially for vulnerable groups.

Leaders Must Do All They Can to Bring Alaa Home

EFF - Fri, 04/25/2025 - 4:24am

It has now been nearly two months since UK Prime Minister Starmer spoke with Egyptian President Abdel Fattah el-Sisi, yet there has been no tangible progress in the case of Alaa Abd El Fattah, the British-Egyptian writer, activist, and technologist who remains imprisoned in Egypt.

In yet another blow to his family and supporters, who have been tirelessly advocating for his release, we’ve now learned that Alaa has fallen ill while on a sustained hunger strike protesting his incarceration. Alaa’s sentence was due to end last September.

Alaa’s mother, Laila Soueif, initiated a hunger strike beginning on his intended release date to amplify demands for her son’s release. Soueif, too, is facing deteriorating health. After being hospitalized in London, and following Starmer’s subsequent call with el-Sisi, she shifted from a full hunger strike to a partial strike allowing for 300 liquid calories a day. Today marks the 208th day of her hunger strike in protest of her son’s continued imprisonment in Egypt, and she is risking serious complications. Calling for her son’s freedom, Soueif has warned that she will resume a full hunger strike if progress is not made soon on Alaa’s case.

As of April 24, Alaa is on Day 55 of a hunger strike that he began on March 1. He is surviving on a strict ration of herbal tea, black coffee, and rehydration salts, and is now being treated in Wadi El-Natrun prison for severe stomach pains. In a letter to his family on April 20, Alaa described worsening conditions and side effects from medications administered by prison doctors: “the truth is the inflammation is getting worse … all these medicines are making me dizzy and yesterday my vision was hazy and I saw distant objects double.”

Responding to Alaa’s illness in prison, Alaa’s sister Sanaa Seif stated in a press release: “We are all so exhausted. My mum and my brother are literally putting their bodies on the line, just to give Alaa the freedom he deserves. Their health is so precarious, I’m always afraid that we are on the verge of a tragedy. We need Keir Starmer to do all he can to bring Alaa home to us.”

Alaa’s case has galvanized support from across the UK political spectrum, with more than 50 parliamentarians urging immediate action. Prime Minister Starmer has publicly committed to pressing for Alaa’s release, but these words must now be matched by action. As Alaa’s health deteriorates, and his family’s ordeal drags on, the need for decisive intervention has never been more urgent. The time to secure Alaa’s freedom—and prevent further tragedy—is now.

EFF continues to work with the campaign to free Alaa: his case is a critical test of digital rights, free expression, and international justice. 

Digital Identities and the Future of Age Verification in Europe

EFF - Wed, 04/23/2025 - 4:48am

This is the first part of a three-part series about age verification in the European Union. In this blog post, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks. 

As governments across the world pass laws to “keep children safe online,” more often than not, their notions of safety rest on the ability of platforms, websites, and other online entities to discern users by age. This legislative trend has also arrived in the European Union, where online child safety is becoming one of the issues that will define European tech policy for years to come. 

Like many policymakers elsewhere, European regulators are increasingly focused on a range of online harms they believe are associated with online platforms, such as compulsive design and the effects of social media consumption on children’s and teenagers’ mental health. Many of these concerns lack robust scientific evidence; studies have drawn a far more complex and nuanced picture of how social media and young people’s mental health interact. Still, calls for mandatory age verification have become as ubiquitous as they are trendy. Heads of state in France and Denmark have recently called for banning children under 15 from social media Europe-wide, while Germany, Greece, and Spain are working on their own age verification pilots. 

EFF has been fighting age verification mandates because they undermine the free expression rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security. We do not think that requiring service providers to verify users’ age is the right approach to protecting people online. 

Policymakers frame age verification as a necessary tool to prevent children from accessing content deemed unsuitable, to design online services appropriate for children and teenagers, and to enable minors to participate online in age-appropriate ways. Rarely is it acknowledged that age verification undermines the privacy and free expression rights of all users, routinely blocks access to resources that can be life-saving, and undermines the development of media literacy. Rare, too, are critical conversations about the specific rights of young users: the UN Convention on the Rights of the Child clearly expresses that minors have rights to freedom of expression and access to information online, as well as the right to privacy. These rights are reflected in the European Charter of Fundamental Rights, which establishes the rights to privacy, data protection, and free expression for all European citizens, including children. These rights would be steamrolled by age verification requirements. And rarer still are policy discussions of ways to improve these rights for young people.

Implicitly Mandatory Age Verification

Currently, there is no legal obligation to verify users’ age in the EU. However, different European legal acts that recently entered into force or are being discussed implicitly require providers to know users’ ages or suggest age assessments as a measure to mitigate risks for minors online. At EFF, we consider these proposals akin to mandates because there is often no alternative method to comply except to introduce age verification. 

Under the General Data Protection Regulation (GDPR), in practice, providers will often need to implement some form of age verification or age assurance (depending on the type of service and risks involved): Article 8 stipulates that the processing of personal data of children under the age of 16 requires parental consent. Thus, service providers are implicitly required to make reasonable efforts to assess users’ ages – although the law doesn’t specify what “reasonable efforts” entails. 
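
In practice, complying with Article 8 forces providers into logic along these lines. This is a minimal sketch under our own naming; the 16-year default, and the GDPR’s allowance for member states to set a lower threshold (no lower than 13), come from Article 8 itself.

```python
def needs_parental_consent(user_age: int, member_state_threshold: int = 16) -> bool:
    # GDPR Article 8: the consent threshold defaults to 16; member states
    # may lower it by law, but not below 13.
    if not 13 <= member_state_threshold <= 16:
        raise ValueError("Article 8 thresholds must fall between 13 and 16")
    return user_age < member_state_threshold

# The catch: a provider cannot even call this function without first
# determining user_age -- and that is the implicit verification mandate.
```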

Another example is the child safety article (Article 28) of the Digital Services Act (DSA), the EU’s recently adopted new legal framework for online platforms. It requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. The article also prohibits targeting minors with personalized ads. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy, and taking measures to protect minors specifically, but it's presently unclear which measures providers must take to comply with these obligations. Recital 71 of the DSA states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage. The European Commission is currently working on guidelines for the implementation of Article 28 and may come up with criteria for what they believe would be effective and privacy-preserving age verification. 

The DSA does explicitly name age verification as one measure the largest platforms – so-called Very Large Online Platforms (VLOPs), which have more than 45 million monthly users in the EU – can choose to mitigate systemic risks related to their services. Those risks, while poorly defined, include negative impacts on the protection of minors and on users’ physical and mental wellbeing. While this is also not an explicit obligation, the European Commission seems to expect adult content platforms to adopt age verification to comply with their risk mitigation obligations under the DSA. 

Adding another layer of complexity, age verification is a major element of the dangerous European Commission proposal to fight child sexual abuse material through mandatory scanning of private and encrypted communication. While the negotiations of this bill have largely stalled, the Commission’s original proposal puts an obligation on app stores and interpersonal communication services (think messaging apps or email) to implement age verification. While the European Parliament has followed the advice of civil society organizations and experts and has rejected the notion of mandatory age verification in its position on the proposal, the Council, the institution representing member states, is still considering mandatory age verification. 

Digital Identities and Age Verification 

Leaving aside the various policy work streams that implicitly or explicitly consider whether age verification should be introduced across the EU, the European Commission seems to have decided on the how: Digital identities.

In 2024, the EU adopted the updated version of the so-called eIDAS Regulation, which sets out a legal framework for digital identities and authentication in Europe. Member States are now working on national identity wallets, with the goal of rolling out digital identities across the EU by 2026.

Despite the imminent rollout of digital identities in 2026, which could facilitate age verification, the European Commission clearly felt pressure to act sooner. That’s why, in the fall of 2024, the Commission published a tender for a “mini-ID wallet,” offering four million euros for the development of an “age verification solution” by the second quarter of 2025, to appease Member States anxious to introduce age verification now. 
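
The tender does not fix a protocol, but wallet-based age verification generally reduces to presenting a cryptographically signed attestation. Here is a minimal sketch of that idea; the issuer, the claim format, and all names are our own assumptions, not the Commission’s design.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A hypothetical wallet issuer (e.g., a national eID backend) signs a
# bare "over 18" claim -- no name or birthdate disclosed.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

claim = json.dumps({"over_18": True}).encode()
signature = issuer_key.sign(claim)

# A website verifies the attestation against the issuer's public key.
try:
    issuer_public.verify(signature, claim)
    print("age attestation accepted")
except InvalidSignature:
    print("attestation rejected")
```

Even this best case reveals to the website which issuer vouched for the user, one of several linkability and equity concerns with identity-based age checks.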

Favoring digital identities for age verification follows an overarching trend of pushing the obligation to conduct age assessments ever further down the stack – from apps to app stores to operating system providers. Dealing with age verification at the app store, device, or operating system level is also a demand long made by providers of social media and dating apps seeking to avoid liability for insufficient age verification. Embedding age verification at the device level will make it more ubiquitous and harder to avoid. This is a dangerous direction; digital identity systems raise serious concerns about privacy and equity.

This approach will likely also lead to mission creep: while the Commission limits its tender to age verification for 18+ services (specifically adult content websites), the tender makes abundantly clear that once available, age verification could be extended to “allow age-appropriate access whatever the age-restriction (13 or over, 16 or over, 65 or over, under 18 etc)”. Extending age verification is even more likely when digital identity wallets don’t come in the shape of an app, but are baked into operating systems. 

In the next post of this series, we will be taking a closer look at the age verification app the European Commission has been working on.

Florida’s Anti-Encryption Bill Is a Wrecking Ball to Privacy. There's Still Time to Stop It.

EFF - Tue, 04/22/2025 - 6:34pm

We've seen plenty of bad tech bills in recent years, often cloaked in vague language about "online safety." But Florida’s SB 868 doesn’t even pretend to be subtle: the state wants a backdoor into encrypted platforms if minors use them, and for law enforcement to have easy access to your messages.

This bill should set off serious alarm bells for anyone who cares about digital rights, secure communication, or simply the ability to message someone privately without the government listening. Florida lawmakers aren’t just chipping away at digital privacy—they're aiming a wrecking ball straight at it.

TAKE ACTION

SB 868 is a blatant attack on encrypted communication. Since we last wrote about the bill, the situation has gotten worse. The bill and its House companion have both sailed through their committees and are headed to a full vote. That means, if passed, SB 868 would:

  • Force social media platforms to decrypt teens’ private messages, breaking end-to-end encryption
  • Ban “disappearing” messages, a common privacy feature that helps users—especially teens—control their digital footprint
  • Allow unrestricted parental access to private messages, overriding Florida’s own two-party consent laws for surveillance
  • Likely pressure platforms to remove encryption for all minors, which also puts everyone they talk to at risk

In short: if your kid loses their right to encrypted communication, so does everyone they talk to. 

There Is No Safe Backdoor

If this all sounds impossible to do safely, that’s because it is. There’s no way to create a “just for law enforcement” access point into encrypted messages. Every backdoor is a vulnerability. It's only a matter of time before someone else—whether a hacker, abuser, or foreign government—finds it. Massive breaches like Salt Typhoon have already proven that surveillance tools don’t stay in the right hands for long. Encryption either protects everyone—or it protects no one. We must protect it.
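
A toy model makes the structural problem concrete. In this sketch (our own construction; no real platform works this way), the same conversation key is wrapped once for the user and once for an escrow holder, so compromising the escrow key silently unlocks everything.

```python
from cryptography.fernet import Fernet

user_key = Fernet.generate_key()
escrow_key = Fernet.generate_key()   # the "lawful access" backdoor key

# Each conversation gets its own data key; messages are encrypted with it.
data_key = Fernet.generate_key()
message = Fernet(data_key).encrypt(b"meet me after the support group")

# The data key is wrapped for the user AND for the escrow holder.
wrapped_for_user = Fernet(user_key).encrypt(data_key)
wrapped_for_escrow = Fernet(escrow_key).encrypt(data_key)

# Whoever obtains escrow_key -- hacker, insider, foreign government --
# recovers every data key, and with it every message, undetected.
stolen_data_key = Fernet(escrow_key).decrypt(wrapped_for_escrow)
print(Fernet(stolen_data_key).decrypt(message))
```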

Encryption Matters—Especially for Teens

Encryption isn’t optional in today’s internet—it’s essential. It protects your banking info, your health data, your personal chats, and yes, your kids' safety online. 

SB 868 pretends to “protect children,” but does the opposite. Teens often need encrypted messaging to talk to trusted adults, friends, and family—sometimes in high-stakes situations like abuse, mental health crises, or discrimination. Stripping away those safeguards makes them more vulnerable, not less.

Investigators already have powerful tools to pursue serious crimes, including the ability to access device-level data and rely on user reports. In fact, studies show user reporting is more effective at catching online abuse than mass surveillance. So why push a bill that makes everyone less safe, weakens encryption, and invites lawsuits? That’s a question we all deserve an answer to.

It’s Time to Speak Up

Florida’s SB 868 isn’t just a bad bill—it’s a dangerous blueprint for mass surveillance. Tell Florida Legislators: SB 868 is unsafe, unworkable, and unacceptable.

If you live in Florida, contact your lawmakers and demand they reject this attack on encryption.

TAKE ACTION

If you're outside the state, you can still speak out—public pressure matters, and the more people who call out how egregious this bill is, the harder it becomes for lawmakers to quietly push it forward. Make sure you follow us on social media to track the bills’ progress and help amplify the message.

Privacy is worth fighting for. Let’s stop SB 868 before it becomes law.

Why the FTC v. Meta Trial Matters: Competition Gaps and Civil Liberties Opportunities

EFF - Mon, 04/21/2025 - 4:00pm

We’re in the midst of a long-overdue resurgence in antitrust litigation. In the past 12 months alone, there have been three landmark rulings against Google/Alphabet (in search, advertising, and payments). Then there’s the long-running FTC v. Meta case, which went to trial last week. Plenty of people are cheering these cases on, seeing them as victories over the tech broligarchy (who doesn’t love to see a broligarch get their comeuppance?).

But we’re cautiously cheering for another, more fundamental reason: the Big Tech antitrust cases could and should lead to enforceable changes that will foster more vibrant online expression and more meaningful user privacy protections.

Antitrust doctrine isn’t just about prices – it’s about power. The cases are nothing less than a fight over who will control the future of the internet, and what that future will look like. Will social media platforms continue to consolidate and enshittify? Or will the courts create breathing room for new ways of connecting to emerge and thrive?

Take FTC v. Meta: The FTC argues that Meta’s control over Facebook, WhatsApp and Instagram – the latter two being companies Facebook acquired in order to neutralize them as competitors—gives it unfair monopoly power in personal social media, i.e. communications with friends and family. Meta disputes that, of course, but even if you take Meta at its word, there’s no denying that this case is directly concerned with online expression. If the FTC succeeds, Meta could be broken up and forced to compete. More important than competition for its own sake is what competition can deliver: openings in the canopy that allow green shoots to sprout – new systems for talking with one another and forming communities under different and more transparent moderation policies, a break from the content moderation monoculture that serves no one well (except for corporate shareholders).

These antitrust cases aren’t the sole purview of government enforcers. Private companies have also brought significant cases with real implications for user rights.

Take Epic Games v. Google, in which Google insists that the court order to open up its app store to competition will lead to massive security risks. This is a common refrain from tech giants like Google, who benefit from the system of “feudal security” in which users must depend on the whims of a monopolist to guarantee their safety. Google claims that its app store security measures keep its users safe – reprising the long-discredited theory of “security through obscurity.” As the eminent cryptographer (and EFF board member) Bruce Schneier says, “Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.”

It’s true that Google often does a good job securing its users against external threats, but Google does a much worse job securing users against Google itself – for example, there’s no way to comprehensively block tracking for Google’s apps on Android. Competition might make Google clean up its act here, but only if they start worrying that there’s a chance you’ll switch to an upstart with a better privacy posture. Enabling competition—as these cases are trying to do—means we don’t have to rely on Google to get privacy religion. We can just switch to an independently vetted rival. Of course, you can only vote with your feet if you have somewhere else to go.

Related Cases: Epic Games v. Google

EFF to Congress: Here’s What A Strong Privacy Law Looks Like

EFF - Mon, 04/21/2025 - 1:45pm

Enacting strong federal consumer data privacy laws is among EFF’s highest priorities. For decades, EFF has advocated for federal privacy law that is concrete, ambitious, and fully protective of all Americans’ data privacy.

That’s why, when the House Committee on Energy and Commerce recently established a Privacy Working Group and asked for comments on what we’d like to see from a Data Security and Privacy Framework, EFF was pleased to offer our thoughts.

Our comments highlight several key points. For one, we urge Congress not to weaken current federal privacy law or create new policy that supplants stronger state laws. A law that overrides strong state protections would hurt consumers and prevent states from protecting their constituents. 

We also urge Congress to include the most important tool to ensure that privacy laws have real bite: the individual right to sue over privacy violations. As we say in our comments:

It is not enough for the government to pass laws that protect consumers from corporations that harvest and monetize their personal data. It is also necessary to ensure companies do not ignore them. The best way to do so is to empower consumers to bring their own lawsuits against the companies that violate their privacy rights. Strong “private rights of action” are among EFF’s highest priorities in any data privacy legislation.

Additionally, we reiterate that any strong privacy law must include these components:

  • No online behavioral ads.
  • Data minimization.
  • Opt-in consent.
  • User rights to access, port, correct, and delete information.
  • No preemption of stronger state laws.
  • Strong enforcement with a private right of action.
  • No pay-for-privacy schemes.
  • No deceptive design.

As we have said in our Privacy First white paper, a strong privacy law would also help us address online harms, protect children, support journalism, protect access to health care, foster digital justice, limit private data collection to train generative AI, limit foreign government surveillance, and strengthen competition.

EFF thanks the committee for the opportunity to weigh in. We invite further conversation to develop strong, comprehensive law that affirms the privacy and civil rights of all American consumers. You can read our full comments here: 

  • EFF Comments to the House Committee on Energy & Commerce - Privacy Working Group

Six Years of Dangerous Misconceptions Targeting Ola Bini and Digital Rights in Ecuador

EFF - Fri, 04/18/2025 - 1:55pm

Ola Bini was first detained in Quito’s airport six years ago, called a “Russian hacker,” and accused of “alleged participation in the crime of assault on the integrity of computer systems.” It wouldn't take long for Ecuadorean authorities to find out that he was Swedish and an internationally respected free software developer and computer expert. 

Lacking evidence, authorities rapidly changed the criminal offense underpinning the accusation against Bini and struggled to build a case based on a mere image that shows no wrongdoing. Yet, Bini remained arbitrarily detained for 70 days in 2019 and outrageously remains under criminal prosecution.

This week, the Observation Mission monitoring Ola Bini’s case is again calling out the prosecution’s inaccuracies and abuses that weaponize misunderstandings about computer security, undermining both Bini’s rights and digital security more broadly. The Observation Mission is composed of digital and human rights organizations, including EFF. Specifically, we highlight how Ecuadorean law enforcement authorities have tried to associate the use of Tor, a crucial privacy protection tool, with inherently suspicious activity. 

Following a RightsCon 2025 session about the flaws and risks of such an interpretation, this week we are releasing a technical statement (see below) pointing out why Ecuadorean courts must reaffirm Bini’s innocence and repudiate misconceptions about technology and technical knowledge that only disguise the prosecutor’s lack of evidence supporting the accusations against Bini. 

Let’s not forget that Bini was unanimously acquitted in early 2023. Nonetheless, the Prosecutor’s Office appealed, and the majority of the appeals court found him guilty of attempted unauthorized access to a telecommunications system. The reasoning leading to this conclusion has many problems, including mixing up the concepts of private and public IP addresses and disregarding key elements of the acquittal ruling.  

The ruling also refers to the use of Tor. Among other issues, the prosecution argued that Tor is a tool known only to technical experts, since its purpose is to hide your identity on the internet while leaving no trace you’re using it. As we stressed at RightsCon, this argument turns the use of a privacy-protective, security-enhancing technology into an indication of suspicious criminal activity, which is a dangerous extrapolation of the “nothing-to-hide argument.” 

The prosecutor’s logic, which the majority appeal ruling endorses, is that if you’re keeping your online activities private, you’re most likely doing something wrong; it ignores that we all have privacy rights, and so are entitled to use technologies that ensure privacy and security by default. 

Backing such an understanding in a court ruling sets an extremely worrying precedent for privacy and security online. The use of Tor must not be up for grabs when a prosecutor lacks actual evidence to sustain a criminal case.

Bini’s defense has appealed the unfounded conviction. We remain vigilant, hoping that the Ecuadorean judicial system will correct course in keeping with basic tenets of the right to a fair trial, recognizing the weakness of the case rather than surrendering to pressure and prejudice. It's past time for justice to prevail in this case. Six years of a lingering, flimsy prosecution, coupled with the undue restriction of Bini’s fundamental rights, is already far too long.

Read the English translation of the statement below (see here the original one in Spanish):

TECHNICAL STATEMENT
Ola Bini’s innocence must be reaffirmed 

In the context of RightsCon Taipei 2025, the Observation Mission of the Ola Bini case and the Tor Project organized a virtual session to analyze the legal proceedings against the digital security expert in Ecuador and to discuss to what extent, and with what implications, the use of the Tor digital tool is criminalized.[1] In that session, which included civil society organizations and speakers from different countries, we reached the following conclusions and technical consensuses: 

  1. The criminal case against Bini was initiated by political actors for political motives, and has been marked by dozens of irregularities and illegalities that undermine its legal legitimacy and technical viability. Rather than a criminal case, this is a persecution. 
  2. The way the elements of conviction of the case were established sets a dangerous precedent for the protection of digital rights and expert knowledge in the digital realm in Ecuador and the region. 
  3. The construction of the case and the elements presented as evidence by the Ecuadorian Attorney General’s Office (EAG) are riddled with serious procedural distortions and/or significant technical errors.[2] 
  4. Furthermore, to substantiate the crime supposedly under investigation, the EAG has not even required a digital forensic examination that demonstrates whether any kind of system (be it computer, telematic, or telecommunications) was accessed without authorization. 
  5. The reasoning used by the Appeals Court to justify its guilty verdict lacks sufficient elements to prove that Ola Bini committed the alleged crime. This not only violates the rights of the digital expert but also creates precedents of arbitrariness that are dangerous for the rule of law.[3] 
  6. More specifically, because of the conviction, part of the Ecuadorian judiciary is creating a concerning precedent for the exercise of the rights to online security and privacy, by holding that the mere use of the Tor tool is sufficient indication of the commission of a criminal act. 
  7. Furthermore, contrary to the global trend that should prevail, this ruling could even inspire courts to criminalize the use of other digital tools used for the defense of human rights online, such as VPNs, which are particularly useful for key actors—like journalists, human rights defenders, academics, and others—in authoritarian political contexts. 
  8. Around the world, millions of people, including state security agencies, use Tor to carry out their activities. In this context, although the use of Tor is not the central focus of analysis in the present case, the current conviction—part of a politically motivated process lacking technical grounding—constitutes a judicial interpretation that could negatively impact the exercise of the aforementioned rights.

For these reasons, and six years after the beginning of Ola Bini’s criminal case, the undersigned civil society organizations call on the relevant Ecuadorian judicial authorities to reaffirm Bini’s presumption of innocence at the appropriate procedural stage, as the first-instance ruling demonstrated.

The Observation Mission will continue monitoring the development of the case until its conclusion, to ensure compliance with due process guarantees and to raise awareness of the case’s implications for the protection of digital rights.

[1] RightsCon is the leading global summit on human rights in the digital age, organized by Access Now.

[2] See https://www.accessnow.org/wp-content/uploads/2022/05/Informe-final-Caso-Ola-Bini.pdf 

[3] The Tribunal is composed of Maritza Romero, Fabián Fabara, and Narcisa Pacheco. The majority decision is from Fabara and Pacheco. 

Congress Moves Closer to Risky Internet Takedown Law | EFFector 37.4

EFF - Wed, 04/16/2025 - 1:21pm

Sorry, EFF doesn't hand out candy like the Easter Bunny, but we are here to keep you updated on the latest digital rights news with our EFFector newsletter!

This edition of EFFector explains how you can help us push back against the TAKE IT DOWN Act, an internet censorship law; why we oppose site-blocking legislation, found in two upcoming bills; and how to delete your data from 23andMe. 

You can read the full newsletter here, and even get future editions directly to your inbox when you subscribe! Additionally, we've got an audio edition of EFFector on the Internet Archive, or you can view it by clicking the button below:

LISTEN ON YouTube

EFFECTOR 37.4 - Congress Moves Closer to Risky Internet Takedown Law

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

EFF Urges Court to Avoid Fair Use Shortcuts in Kadrey v. Meta Platforms

EFF - Tue, 04/15/2025 - 2:19pm

EFF has filed an amicus brief in Kadrey v. Meta, one of the many ongoing copyright lawsuits against AI developers. Most of the AI copyright cases raise an important new issue: whether the copying necessary to train a generative AI model is a non-infringing fair use.

Kadrey, however, attempts to side-step fair use. The plaintiffs—including Sarah Silverman and other authors—sued Meta for allegedly using BitTorrent to download “pirated” copies of their books to train Llama, a large language model. In other words, their legal claims challenge how Meta obtained the training materials, not what it did with them.

But some of the plaintiffs’ arguments, if successful, could harm AI developers’ defenses in other cases, where fair use is directly at issue.

How courts decide this issue will profoundly shape the future of this transformative technology, including its capabilities, its costs, and whether its evolution is driven by the democratizing forces of the open market or the whims of an oligopoly.

A question this important deserves careful consideration on a full record—not the hyperbolic cries of “piracy” and the legal shortcuts that the plaintiffs in this case are seeking. As EFF explained to the court, the question of whether fair use applies to training generative AI is far too important to decide based on Kadrey’s back-door challenge.

And, as EFF explained, whether a developer can legally train an AI on a wide variety of creative works shouldn’t turn on which technology they used to obtain those materials. As we wrote in our brief, the “Court should not allow the tail of Meta’s alleged BitTorrent use to wag the dog of the important legal questions this case presents. Nor should it accept Plaintiffs’ invitation to let hyperbole about BitTorrent and 'unmitigated piracy' derail the thoughtful and fact-specific fair use analysis the law requires.”

We also urged the court to reject the plaintiffs’ attempt to create a carve out in copyright law for copies obtained using “BitTorrent.”

This dangerous argument seeks to categorically foreclose the possibility that even the most transformative, socially beneficial uses—such as AI training—could be fair use.

As EFF explained in its brief, adopting an exemption from the flexible, fact-specific fair use analysis for “BitTorrent,” “internet piracy,” “P2P downloading,” or something else would defeat the purpose of the fair use doctrine as a safeguard for the application of copyright law to new technologies.

Privacy on the Map: How States Are Fighting Location Surveillance

EFF - Tue, 04/15/2025 - 12:01pm

Your location data isn't just a pin on a map—it's a powerful tool that reveals far more than most people realize. It can expose where you work, where you pray, who you spend time with, and, sometimes dangerously, where you seek healthcare. In today’s world, your most private movements are harvested, aggregated, and sold to anyone with a credit card. For those seeking reproductive or gender-affirming care, or visiting a protest or an immigration law clinic, this data is a ticking time bomb.

Last year, we sounded the alarm, urging lawmakers to protect individuals from the growing threats of location tracking tools—tools that are increasingly being used to target and criminalize people seeking essential reproductive healthcare.

The good news? Lawmakers in California, Massachusetts, Illinois and elsewhere are stepping up, leading the way to protect privacy and ensure that healthcare access and other exercise of our rights remain safe from invasive surveillance.

The Dangers of Location Data

Imagine this: you leave your home in Alabama, drop your kids off at daycare, and then drive across state lines to visit an abortion clinic in Florida. You spend two hours there before driving back home. Along the way, you used your phone’s GPS app to navigate or a free radio app to listen to the news. Unbeknownst to you, this “free” app tracked your entire route and sold it to a data broker. That broker then mapped your journey and made it available to anyone who would pay for it. This is exactly what happened when privacy advocates used a tool called Locate X, developed by Babel Street, to track a person’s device as they traveled from Alabama—where abortion is completely banned—to Florida, where abortion access is severely restricted but still available.

Despite this tool being marketed as solely for law enforcement use, private investigators were able to access it by falsely claiming they would work with law enforcement, revealing a major flaw in our data privacy system. In a time when government surveillance of private personal decisions is on the rise, the fact that law enforcement (and adversaries pretending to be law enforcement) can access these tools puts our personal privacy in serious danger.

The unregulated market for location data enables anyone, from law enforcement to anti-abortion groups, to access and misuse this sensitive information. For example, a data broker called Near Intelligence sold location data of people visiting Planned Parenthood clinics to an anti-abortion group. Likewise, law enforcement in Idaho used cell phone location data to charge a mother and her son with “aiding and abetting” abortion, a clear example of how this information can be weaponized to enforce abortion restrictions for patients and anyone else in their orbit. 

States Taking Action

As we’ve seen time and time again, the collection and sale of location data can be weaponized to target many vulnerable groups—immigrants, the LGBTQ+ community, and anyone seeking reproductive healthcare. In response to these growing threats, states like California, Massachusetts, and Illinois are leading the charge by introducing bills aimed at regulating the collection and use of location data. 

These bills are a powerful response to the growing threat. The bills are grounded in well-established principles of privacy law, including informed consent and data minimization, and they ensure that only essential data is collected, and that it’s kept secure. Importantly, they give residents—whether they reside in the state or are traveling from other states—the confidence to exercise their rights (such as seeking health care) without fear of surveillance or retaliation. 

This post outlines some of the key features of these location data privacy laws, to show authors and advocates of legislative proposals how best to protect their communities. Specifically, we recommend: 

  • Strong definitions,
  • Clear rules,
  • Affirmation that all location data is sensitive,
  • Empowerment of consumers through a strong private right of action,
  • Prohibition of “pay-for-privacy” schemes, and
  • Transparency through clear privacy policies.

Strong Definitions

Effective location privacy legislation starts with clear definitions. Without them, courts may interpret key terms too narrowly—weakening the law's intent. And in the absence of clear judicial guidance, regulated entities may exploit ambiguity to sidestep compliance altogether.

The following are some good definitions from the recent bills:

  • In the Massachusetts bill, "consent" must be “freely given, specific, informed, unambiguous, [and] opt-in.” Further, it must be free from dark patterns—ensuring people truly understand what they’re agreeing to. 
  • In the Illinois bill, a “covered entity” includes all manner of private actors, including individuals, corporations, and associations, exempting only individuals acting in noncommercial contexts. 
  • "Location information" must clearly refer to data derived from a device that reveals the past or present location of a person or device. The Massachusetts bill sets a common radius in defining protected location data: 1,850 feet (about one-third of a mile). The California bill goes much bigger: five miles. EFF has supported both radiuses.
  • A “permissible purpose” (which is key to the minimization rule) should be narrowly defined to include only: (1) delivering a product or service that the data subject asked for, (2) fulfilling an order, (3) complying with federal or state law, or (4) responding to an imminent threat to life.

Clear Rules

“Data minimization” is the privacy principle that corporations and other private actors must not process a person’s data except as necessary to give them what they asked for, with narrow exceptions. A virtue of this rule is that a person does not need to do anything in order to enjoy their statutory privacy rights; the burden is on the data processor to process less data. Together, these definitions and rules create a framework that ensures privacy is the default, not the exception.

One key data minimization rule, as in the Massachusetts bill, is: “It shall be unlawful for a covered entity to collect or process an individual’s location data except for a permissible purpose.” Read along with the definition above, this across-the-board rule means a covered entity can only collect or process someone’s location data to fulfill their request (with exceptions for emergencies and compliance with federal and state law).
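
As a sketch of how this default-deny structure works (our own naming, not statutory text): a processor must map every use of location data onto the closed list of permissible purposes, and everything off the list is simply unlawful.

```python
PERMISSIBLE_PURPOSES = {
    "requested_service",        # deliver what the person asked for
    "fulfill_order",
    "comply_with_law",          # federal or state law
    "imminent_threat_to_life",
}

def may_process_location(purpose: str) -> bool:
    # Default-deny: the burden is on the processor, not the person.
    return purpose in PERMISSIBLE_PURPOSES

assert may_process_location("requested_service")
assert not may_process_location("ad_targeting")   # never qualifies
assert not may_process_location("resale")         # never qualifies
```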

Additional data minimization rules, as in the Illinois bill, back this up by restraining particular data practices:

  • Covered entities cannot collect more precise data than strictly necessary, or use location data to make inferences beyond what is needed to provide the service. 
  • Data must be deleted once it’s no longer necessary for the permissible purpose. 
  • No selling, renting, trading, or leasing location data – full stop.
  • No disclosure of location data to government, except with a warrant, as required by state or federal law, at the request of the data subject, or in an emergency involving a threat of serious bodily injury or death (defined to not include abortion). 
  • No other disclosure of location data, except as required for a permissible purpose or when requested by the individual. 

The California bill rests largely on data minimization rules like these. The Illinois and Massachusetts bills place an additional limit: no collection or processing of location data absent opt-in consent from the data subject. Critically, consent in these two bills is not an exception to the minimization rule, but rather an added requirement. EFF has supported both models of data privacy legislation: just a minimization requirement; and paired minimization and consent requirements. 

All Location Data is Sensitive

To best safeguard against invasive location tracking, it’s essential to place legal restrictions on the collection and use of all location data—not just data associated with sensitive places like reproductive health clinics. Narrow protections may offer partial help, but they fall short of full privacy.

Consider the example at the beginning of the blog: if someone travels from Alabama to Florida for abortion care, and the law only shields data at sensitive sites, law enforcement in Alabama could still trace their route from home up to near the clinic. Once the person enters a protected “healthcare” zone, their device would vanish from view temporarily, only to reappear shortly after they leave. This gap in the tracking data could make it relatively easy to deduce where they were during that time, essentially revealing their clinic visit.
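
The inference is trivial to automate. A toy sketch (invented data, our own code) shows how the silence itself, bracketed by the last ping before a shielded zone and the first ping after, gives the visit away.

```python
from datetime import datetime, timedelta

# Illustrative pings only; coordinates and times are invented.
pings = [
    ("2025-04-01 09:02", (31.10, -85.40)),  # leaves home in Alabama
    ("2025-04-01 11:58", (30.62, -85.12)),  # last ping before the zone
    # -- two hours of silence while inside the shielded zone --
    ("2025-04-01 14:10", (30.62, -85.11)),  # reappears just outside it
]

def find_gaps(pings, threshold=timedelta(hours=1)):
    """Flag silences long enough to imply a visit to a shielded site."""
    fmt = "%Y-%m-%d %H:%M"
    gaps = []
    for (t1, loc1), (t2, loc2) in zip(pings, pings[1:]):
        a, b = datetime.strptime(t1, fmt), datetime.strptime(t2, fmt)
        if b - a > threshold:
            gaps.append((b - a, loc1, loc2))  # the gap brackets the visit
    return gaps

for duration, before, after in find_gaps(pings):
    print(f"{duration} gap between {before} and {after}: likely shielded visit")
```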

To avoid this kind of loophole, the most effective approach is to limit the collection and retention of all location data—no exceptions. This is the approach in all three of the bills highlighted in this post: California, Illinois, and Massachusetts.

Empowering Consumers Through a Strong PRA

To truly protect people’s location privacy, legislation must include a strong private right of action (PRA)—giving individuals the power to sue companies that violate their rights. A private right of action ensures companies can’t ignore the law and empowers people to seek justice directly when their sensitive data is misused. This is a top priority for EFF in any data privacy legislation.

The bills in Illinois and Massachusetts offer strong models. They make clear that any violation of the law is an injury and allow individuals to bring civil suits: “A violation of this [law] … regarding an individual’s location information constitutes an injury to that individual. … Any individual alleging a violation of this [law] … may bring a civil action …” Further, these bills provide a baseline amount of damages (sometimes called “liquidated” or “statutory” damages), because an invasion of statutory privacy rights is a real injury, even if it is hard for the injured party to prove out-of-pocket expenses from theft, bodily harm, or the like. Absent this kind of statutory language, some victims of privacy violations will lose their day in court.

These bills also override mandatory arbitration clauses that limit access to court. Corporations should not be able to avoid being sued by forcing their customers to sign lengthy contracts that nobody reads.

Other remedies include actual damages, punitive damages, injunctive relief, and attorney’s fees. These provisions give the law real teeth and ensure accountability can’t be signed away in fine print.

No Pay-for-Privacy Schemes

Strong location data privacy laws must protect everyone equally—and that means rejecting “pay-for-privacy” schemes that allow companies to charge users for basic privacy protections. Privacy is a fundamental right, not a luxury add-on or subscription perk. Allowing companies to offer privacy only to those who can afford to pay creates a two-tiered system where low-income individuals are forced to trade away their sensitive location data in exchange for access to essential services. These schemes also incentivize everyone to abandon privacy.

Legislation should make clear that companies cannot condition privacy protections on payment, loyalty programs, or any other exchange of value. This ensures that everyone—regardless of income—has equal protection from surveillance and data exploitation. Privacy rights shouldn’t come with a price tag.

We commend this language from the Illinois and Massachusetts bills: 

A covered entity may not take adverse action against an individual because the individual exercised or refused to waive any of such individual’s rights under [this law], unless location data is essential to the provision of the good, service, or service feature that the individual requests, and then only to the extent that this data is essential. This prohibition includes, but is not limited to: (1) refusing to provide a good or service to the individual; (2) charging different prices or rates for goods or services, including through the use of discounts or other benefits or imposing penalties; or (3) providing a different level of quality of goods or services to the individual.

Transparency Through Clear Privacy Policies

It is helpful for data privacy laws to require covered entities to be transparent about their data practices. All three bills discussed in this post require covered entities to make available a privacy policy to the data subject—a solid baseline. This ensures that people aren’t left in the dark about how their location data is being collected, used, or shared. Clear, accessible policies are a foundational element of informed consent and give individuals the information they need to protect themselves and assert their rights.

It is also helpful for privacy laws like these to require covered entities to prominently publish their privacy policies on their websites. This allows all members of the public – as well as privacy advocates and government enforcement agencies – to track whether data processors are living up to their promises.

Next Steps: More States Must Join

The bottom line is clear: location data is highly sensitive, and without proper protections, it can be used to harm those who are already vulnerable. The digital trail we leave behind can reveal far more than we think, and without laws in place to protect us, we are all at risk. 

While some states are making progress, much more needs to be done. More states need to follow suit by introducing and passing legislation that protects location data privacy. We cannot allow location tracking to be used as a tool for harassment, surveillance, or criminalization.

To help protect your digital privacy while we wait for stronger privacy protection laws, we’ve published a guide specifically for how to minimize intrusion from Locate X, and have additional tips on EFF’s Surveillance Self-Defense site. Many general privacy practices also offer strong protection against location tracking.

If you live in California, Illinois, Massachusetts – or any state that has yet to address location data privacy – now is the time to act. Contact your lawmakers and urge them to introduce or support bills that protect our sensitive data from exploitation. Demand stronger privacy protections for all, and call for more transparency and accountability from companies that collect and sell location data. Together, we can create a future where individuals are free to travel without the threat of surveillance and retaliation.

Florida’s New Social Media Bill Says the Quiet Part Out Loud and Demands an Encryption Backdoor

EFF - Fri, 04/11/2025 - 3:16pm

At least Florida’s SB 868/HB 743, the “Social Media Use By Minors” bill, isn’t beating around the bush: it states that it would require “social media platforms to provide a mechanism to decrypt end-to-end encryption when law enforcement obtains a subpoena.” Usually these sorts of sweeping mandates are hidden behind smoke and mirrors, but this time it’s out in the open: Florida wants a backdoor into any end-to-end encrypted social media platform that allows accounts for minors. This would likely lead to companies not offering end-to-end encryption to minors at all, making them less safe online.

Encryption is the best tool we have to protect our communication online. It’s just as important for young people as it is for everyone else, and the idea that Florida can “protect” minors by making them less safe is dangerous and dumb.

The bill is not only privacy-invasive, it’s also asking for the impossible. As breaches like Salt Typhoon demonstrate, you cannot provide a backdoor for just the “good guys,” and you certainly cannot do so for just a subset of users under a specific age. After all, minors are likely speaking to their parents and other family members and friends, and they deserve the same sorts of privacy for those conversations as anyone else. Whether social media companies provide “a mechanism to decrypt end-to-end encryption” or choose not to provide end-to-end encryption to minors at all, there’s no way that doesn’t harm the privacy of everyone.

If this all sounds familiar, that’s because we saw a similar attempt from an Attorney General in Nevada last year. Then, like now, the reasoning is that law enforcement needs access to these messages during criminal investigations. But this doesn’t hold true in practice.

In our amicus brief in Nevada, we point out that there are solid arguments that “content oblivious” investigation methods—like user reporting—are “considered more useful than monitoring the contents of users’ communications when it comes to detecting nearly every kind of online abuse.” That remains just as true in Florida today.

Law enforcement can and does already conduct plenty of investigations involving encrypted messages, and even with end-to-end encryption, law enforcement can potentially access the contents of most messages on the sender or receiver’s devices, particularly when they have access to the physical device. The bill also includes measures prohibiting minors from accessing any sort of ephemeral messaging features, like view once options or disappearing messages. But even with those features, users can still report messages or save them. Targeting specific features does nothing to protect the security of minors, but it would potentially harm the privacy of everyone.

SB 868/HB 743 radically expands the scope of Florida’s social media law HB 3, which passed last year and itself has not yet been fully implemented as it currently faces lawsuits challenging its constitutionality. The state was immediately sued after the law’s passage, with challengers arguing the law is an unconstitutional restriction of protected free speech. That lawsuit is ongoing—and it should be a warning sign. Florida should stop coming up with bad ideas that can't be implemented.

Weakening encryption to the point of being useless is not an option. Minors, as well as those around them, deserve the right to speak privately without law enforcement listening in. Florida lawmakers must reject this bill. Instead of playing politics with kids' privacy, they should focus on real, workable protections—like improving consumer privacy laws to protect young people and adults alike, and improving digital literacy in schools.

Cybersecurity Community Must Not Remain Silent On Executive Order Attacking Former CISA Director

EFF - Fri, 04/11/2025 - 2:31pm

Cybersecurity professionals and the infosec community have essential roles to play in protecting our democracy, securing our elections, and building, testing, and safeguarding government infrastructure. It is critically important for us to speak up to ensure that essential work continues and that those engaged in these good faith efforts are not maligned by an administration that has tried to make examples of its enemies in many other fields. 

President Trump has targeted the former Director of the government’s Cybersecurity and Infrastructure Security Agency (CISA), Chris Krebs, with an executive order cancelling the security clearances of employees at SentinelOne, where Krebs is now an executive, and launching a probe of his government work. President Trump had previously fired Krebs in 2020 when, in his capacity as CISA Director, Krebs released a statement calling that year’s election, which Trump lost, "the most secure in American history.” 

The executive order directed a review to “identify any instances where Krebs’ or CISA’s conduct appears to be contrary to the administration’s commitment to free speech and ending federal censorship, including whether Krebs’ conduct was contrary to suitability standards for federal employees or involved the unauthorized dissemination of classified information.” Krebs was, in fact, fired for his public stance. 

We’ve seen this playbook before: In March, Trump targeted law firm Perkins Coie for its past work on voting rights lawsuits and its representation of the President’s prior political opponents in a shocking, vindictive, and unconstitutional executive order. After that order, many in the legal profession, including EFF, pushed back, issuing public statements and filing friend of the court briefs in support of Perkins Coie, and other law firms challenging executive orders against them. This public support was especially important in light of the fact that a few large firms capitulated to Trump rather than fight the orders against them.

It is critical that the cybersecurity community now join together to denounce this chilling attack on free speech and rally behind Krebs and SentinelOne, rather than cowering in fear of being next.

The White House must not be given free rein to turn cybersecurity professionals into political scapegoats. EFF regularly defends the infosec community, protecting researchers through education, legal defense, amicus briefs, and community involvement, all with the goal of promoting innovation and safeguarding their rights. We call on its ranks to join us in defending Chris Krebs and SentinelOne. An independent infosec community is fundamental to protecting our democracy, and to the profession itself.

Certbot 4.0: Long Live Short-Lived Certs!

EFF - Thu, 04/10/2025 - 6:50pm

When Let’s Encrypt, a free certificate authority, started issuing 90-day TLS certificates for websites, it was considered a bold move that helped push the ecosystem toward shorter certificate lifetimes. Before then, certificate authorities normally issued certificates with lifetimes of a year or more. With version 4.0, Certbot now supports Let’s Encrypt’s new six-day certificates through ACME profiles, with dynamic renewal at:

  • 1/3rd of lifetime left
  • 1/2 of lifetime left, if the lifetime is shorter than 10 days
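
To make that concrete: a 90-day certificate is renewed once 30 days of its lifetime remain, while a six-day certificate falls under the shorter-than-10-days rule and is renewed once 3 days remain.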

There are a few significant reasons why shorter lifetimes are better:

  • If a certificate's private key is compromised, that compromise can't last as long.
  • Shorter certificate lifespans encourage automation, which in turn facilitates more robust security for web servers.
  • Certificate revocation has historically been flaky. Lifetimes of 10 days or less remove the need to invoke the revocation process and to deal with continued usage of a compromised key.

There is debate over how short these lifetimes should be, but with ACME profiles you can keep the default or “classic” Let’s Encrypt experience (90 days) or start actively using other profile types through Certbot with the --preferred-profile and --required-profile flags. For six-day certificates, you can choose the “shortlived” profile, as in the sketch below.
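
For example, obtaining a six-day certificate could look something like this (a minimal sketch, not official documentation: the certonly/webroot invocation and the example.com domain are placeholders for whatever your deployment actually uses):

    # Prefer the six-day "shortlived" profile if the CA offers it
    certbot certonly --webroot -w /var/www/html -d example.com --preferred-profile shortlived

    # Or make the profile mandatory, failing the request if it is unavailable
    certbot certonly --webroot -w /var/www/html -d example.com --required-profile shortlived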

These new options are just the beginning of the modern features the ecosystem can support, and we are glad that dynamic renewal is here to help build a more agile web with better security and flexible options for everyone. Thank you to the community and the Certbot team for making this happen!

Love ♥️ Certbot as much as us? Donate today to support this work.

Congress Takes Another Step Toward Enabling Broad Internet Censorship

EFF - Thu, 04/10/2025 - 10:54am

The House Energy and Commerce Committee on Tuesday advanced the TAKE IT DOWN Act (S. 146), a bill that seeks to speed up the removal of certain kinds of troubling online content. While the bill is meant to address a serious problem—the distribution of non-consensual intimate imagery (NCII)—the notice-and-takedown system it creates is an open invitation for powerful people to pressure websites into removing content they dislike. 

As we’ve written before, while protecting victims of these heinous privacy invasions is a legitimate goal, good intentions alone are not enough to make good policy. 

take action

TELL CONGRESS: "Take It Down" Has No Real Safeguards

This bill mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without meaningfully addressing the problem it claims to solve. The “takedown” provision applies to a much broader category of content—potentially any images involving intimate or sexual content at all—than the narrower NCII definitions found elsewhere in the bill. The bill contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. 

The legislation’s 48-hour takedown deadline means that online service providers, particularly smaller ones, will have to comply quickly to avoid legal risks. That time crunch will make it impossible for services to verify the content is in fact NCII. Instead, services will rely on automated filters—infamously blunt tools that frequently flag legal content, from fair-use commentary to news reporting.

Communications providers that offer users end-to-end encrypted messaging, meanwhile, may be served with notices they simply cannot comply with, given the fact that these providers cannot view the contents of messages on their platforms. Platforms may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces. 

Several committee members offered amendments to fix these problematic provisions during committee consideration, but committee leadership rejected all attempts to amend the bill. 

The TAKE IT DOWN Act is now expected to receive a floor vote in the coming weeks before heading to President Trump’s desk for his signature. Both the President himself and First Lady Melania Trump have been vocal supporters of this bill, and they have been urging Congress to quickly pass it. Trump has shown just how the bill can be abused, saying earlier this year that he would personally use the takedown provisions to censor speech critical of the president.

take action

TELL CONGRESS: "Take It Down" Has No Real Safeguards

Fast tracking a censorship bill is always troubling. TAKE IT DOWN is the wrong approach to helping people whose intimate images are shared without their consent. We can help victims of online harassment without embracing a new regime of online censorship.

Congress should strengthen and enforce existing legal protections for victims, rather than opting for a broad takedown regime that is ripe for abuse. 

Tell your Member of Congress to oppose censorship and to oppose S. 146.

Our Privacy Act Lawsuit Against DOGE and OPM: Why a Judge Let It Move Forward

EFF - Tue, 04/08/2025 - 4:28pm

Last week, a federal judge rejected the government’s motion to dismiss our Privacy Act lawsuit against the U.S. Office of Personnel Management (OPM) and Elon Musk’s “Department of Government Efficiency” (DOGE). OPM is disclosing to DOGE agents the highly sensitive personal information of tens of millions of federal employees, retirees, and job applicants. This disclosure violates the federal Privacy Act, a watershed law that tightly limits how the federal government can use our personal information.

We represent two unions of federal employees: the AFGE and the AALJ. Our co-counsel are Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm LLC.

We’ve already explained why the new ruling is a big deal, but let’s take a deeper dive into the Court’s reasoning.

Plaintiffs have standing

A plaintiff must show they have “standing” to bring their claim. Article III of the U.S. Constitution empowers courts to decide “cases” and “controversies.” Courts have long held this requires the plaintiff to show an “injury in fact” that is, among other things, “concrete.” In recent years, two Supreme Court decisions – Spokeo v. Robins (2016) and TransUnion v. Ramirez (2021) – addressed when an “intangible” injury, such as invasion of data privacy, is sufficiently concrete. They ruled that such injury must have “a close relationship to a harm traditionally recognized as providing a basis for a lawsuit in American courts.”

In our case, the Court held that our clients passed this test: “The complaint alleges concrete harms analogous to intrusion upon seclusion.” That is one of the common law privacy torts, long recognized in U.S. law. According to the Restatement of Torts, it occurs when a person “intrudes” on the “seclusion of another” in a manner “highly offensive to a reasonable person.”

The Court reasoned that the records at issue here “contain information about the deeply private affairs of the plaintiffs,” including “social security numbers, health history, financial disclosures, and information about family members.” The court also emphasized plaintiffs’ allegation that these records were “disclosed to DOGE agents in a rushed and insecure manner,” including “administrative access, enabling them to alter OPM records and obscure their own access to those records.”

The Court rejected defendants’ argument that our clients supposedly pled “only that DOGE agents were granted access to OPM’s data system,” and not also that “the DOGE agents in fact used that access to examine OPM records.” As a factual matter, plaintiffs in fact pled that “DOGE agents actually exploited their access to review, possess, and use OPM records.”

As a legal matter, such use is not required: “Exposure of the plaintiff’s personally identifiable information to unauthorized third parties, without further use or disclosure, is analogous to harm cognizable under the common law right to privacy.” So ruling, the Court observed: “at least four federal courts have found that the plaintiffs before them had made a sufficient showing of concrete injury, as analogous to common law privacy torts, when agencies granted DOGE agents access to repositories of plaintiffs’ personal information.”

To have standing, a plaintiff must also show that their “injury in fact” is “actual or imminent.” The Court held that our clients passed this test, too. It ruled that plaintiffs adequately alleged an actual injury: “ongoing unauthorized access by the DOGE agents to the plaintiffs’ data.” It also ruled that plaintiffs adequately alleged a separate, imminent injury: OPM’s disclosure to DOGE “has made the OPM data more vulnerable to hacking, identity theft, and other activities that are substantially harmful to the plaintiffs.” The Court emphasized the allegations of “sweeping and uncontrolled access to DOGE agents who were not properly vetted or trained,” as well as the notorious 2015 OPM data breach.

Finally, the Court held that our clients sufficiently alleged the remaining two elements of standing: that defendants caused plaintiffs’ injuries, and that an injunction would redress them.

Plaintiffs may proceed on their Privacy Act claims

The Court held: “The plaintiffs have plausibly alleged violations of two provisions of the Privacy Act: 5 U.S.C. § 552a(b), which prohibits certain disclosures of records, and 5 U.S.C. § 552a(e)(10), which imposes a duty to establish appropriate safeguards and ensure security and confidentiality of records.” The Court cited two other judges who had recently “found a likelihood that plaintiffs will succeed” in their wrongful disclosure claims.

Reprising its failed standing arguments, the government argued that to plead a violation of the Privacy Act’s no-disclosure rule, our clients must allege “not just transmission to another person but also review of the records by that individual.” Again, the Court rejected this argument for two independent reasons. Factually, “the complaint amply pleads that DOGE agents viewed, possessed, and used the OPM records.” Legally, “the defendants misconstrue the term ‘disclose.’” The Court looked to OPM’s own regulations, which define the term to include “providing personal review of a record,” and an earlier appellate court opinion interpreting the term to include “virtually all instances [of] an agency’s unauthorized transmission of a protected record.”

Next, the government asserted an exception from the Privacy Act’s no-disclosure rule, for disclosure “to those officers and employees of the agency which maintains the record who have a need for the record in the performance of their duties.” The Court observed that our clients disputed this exception on two independent grounds: “both because [the disclosures] were made to DOGE agents who were not officers or employees of OPM and because, even if the DOGE agents were employees of OPM, they did not have a need for those records in the performance of any lawful duty.” On both grounds, the plaintiffs’ allegations sufficed.

Plaintiffs may seek to enjoin Privacy Act violations

The Court ruled that our clients may seek injunctive and declaratory relief against the alleged Privacy Act violations, by means of the Administrative Procedure Act (APA), though not the Privacy Act itself. This is a win: What ultimately matters is the availability of relief, not the particular path to that relief.

As discussed above, plaintiffs have two claims that the government violated the Privacy Act: unlawful disclosures and unlawful cybersecurity failures. Plaintiffs also have an APA claim of agency action “not in accordance with law,” which refers back to these two Privacy Act violations.

To be subject to APA judicial review, the challenged agency action must be “final.” The Court found finality: “The complaint plausibly alleges that actions by OPM were not representative of its ordinary day-to-day operations but were, in sharp contrast to its normal procedures, illegal, rushed, and dangerous.”

Another requirement for APA judicial review is the absence of an “other adequate remedy.” The Court interpreted the Privacy Act to not allow the injunction our clients seek, but then ruled: “As a result, the plaintiffs have no adequate recourse under the Privacy Act and may pursue their request for injunctive relief under the APA.” The Court further wrote:

The defendants’ Kafkaesque argument to the contrary would deprive the plaintiffs of any recourse under the law. They contend that the plaintiffs have no right to any injunctive relief – neither under the Privacy Act nor under the APA. … This argument promptly falls apart under examination.

Plaintiffs may proceed on two more claims

The Court allowed our clients to move forward on their two other claims.

They may proceed on their claim that the government violated the APA by acting in an “arbitrary and capricious” manner. The Court reasoned: “The complaint alleges that OPM rushed the onboarding process, omitted crucial security practices, and thereby placed the security of OPM records at grave risk.”

Finally, our clients may proceed on their claim that DOGE acted “ultra vires,” meaning outside of its legal power, when it accessed OPM records. The Court reasoned: “The complaint adequately pleads that DOGE Defendants plainly and openly crossed a congressionally drawn line in the sand.”

Next steps

Congress passed the Privacy Act following the Watergate and COINTELPRO scandals to restore trust in government and prevent a future President from creating another “enemies list.” Congress found that the federal government’s increasing use of databases full of personal records “greatly magnified the harm to individual privacy,” and so it tightly regulated how agencies may use these databases.

The ongoing DOGE data grab may be the worst violation of the Privacy Act since its enactment in 1974. So it is great news that a judge has denied the government’s motion to dismiss our lawsuit. Now we will move forward to prove our case.

Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management

EFF, Civil Society Groups, Academics Call on UK Home Secretary to Address Flawed Data Bill

EFF - Tue, 04/08/2025 - 8:58am

Last week, EFF joined 30 civil society groups and academics in warning UK Home Secretary Yvette Cooper and Department for Science, Innovation & Technology Secretary Peter Kyle about the law enforcement risks contained within the draft Data Use and Access Bill (DUA Bill).

Clause 80 of the DUA Bill weakens the rules governing solely automated decisions in the law enforcement context and dilutes crucial data protection safeguards. 

Under sections 49 and 50 of the Data Protection Act 2018, solely automated decisions are prohibited from being made in the law enforcement context unless the decision is required or authorised by law. Clause 80 reverses this in all scenarios unless the data processing involves special category data. 

In short, this would enable law enforcement to use automated decisions about people regarding their socioeconomic status, regional or postcode data, inferred emotions, or even regional accents. This increases the already broad possibilities for bias, discrimination, and lack of transparency at the hands of law enforcement.

The government’s own Impact Assessment for the DUA Bill acknowledges that “those with protected characteristics such as race, gender, and age are more likely to face discrimination from ADM due to historical biases in datasets.” Yet politicians in the UK have decided to push forward with this discriminatory and dangerous agenda regardless. 

Further, given the already minimal transparency around automated decision making, individuals affected in the law enforcement context would have no or highly limited routes to redress.

The DUA Bill puts marginalised groups at risk of opaque, unfair and harmful automated decisions. Yvette Cooper and Peter Kyle must address the lack of safeguards governing law enforcement use of automated decision-making tools before time runs out.

The full letter can be found here.

Judge Rejects Government’s Attempt to Dismiss EFF Lawsuit Against OPM, DOGE, and Musk

EFF - Thu, 04/03/2025 - 1:15pm
Court Confirms That, If Proven, DOGE’s Ongoing Access to Personnel Records Is Illegal

NEW YORK—A lawsuit seeking to stop the U.S. Office of Personnel Management (OPM) from disclosing tens of millions of Americans’ private, sensitive information to Elon Musk’s “Department of Government Efficiency” (DOGE) can continue, a federal judge ruled Thursday.

Judge Denise L. Cote of the U.S. District Court for the Southern District of New York partially rejected the defendants’ motion to dismiss the lawsuit, which was filed Feb. 11 on behalf of two labor unions and individual current and former government workers across the country. This decision is a victory: The court agreed that the claims that OPM illegally disclosed highly personal records of millions of people to DOGE agents can move forward with the goal of stopping that ongoing disclosure and requiring that any shared information be returned. 

Cote ruled current and former federal employees "may pursue their request for injunctive relief under the APA [Administrative Procedure Act]. ...  The defendants’ Kafkaesque argument to the contrary would deprive the plaintiffs of any recourse under the law." 

"The complaint plausibly alleges that actions by OPM were not representative of its ordinary day-to-day operations but were, in sharp contrast to its normal procedures, illegal, rushed, and dangerous,” the judge wrote.  

The Court added: “The complaint adequately pleads that the DOGE Defendants 'plainly and openly crossed a congressionally drawn line in the sand.'" 

OPM maintains databases of highly sensitive personal information about tens of millions of federal employees, retirees, and job applicants. The lawsuit by EFF, Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm argues that OPM and OPM Acting Director Charles Ezell illegally disclosed personnel records to DOGE agents in violation of the federal Privacy Act of 1974, a watershed anti-surveillance statute that prevents the federal government from abusing our personal information. 

The lawsuit’s union plaintiffs are the American Federation of Government Employees, AFL-CIO, and the Association of Administrative Law Judges, International Federation of Professional and Technical Engineers Judicial Council 1, AFL-CIO.

“Today’s legal victory sends a crystal-clear message: Americans’ private data stored with the government isn't the personal playground of unelected billionaires,” said AFGE National President Everett Kelley. “Elon Musk and his DOGE cronies have no business rifling through sensitive data stored at OPM, period. AFGE and our allies fought back – and won – because we will not compromise when it comes to protecting the privacy and security of our members and the American people they proudly serve.” 

As the federal government is the nation’s largest employer, the records held by OPM represent one of the largest collections of sensitive personal data in the country. In addition to personally identifiable information such as names, social security numbers, and demographic data, these records include work information like salaries and union activities; personal health records and information regarding life insurance and health benefits; financial information like death benefit designations and savings programs;  nondisclosure agreements; and information concerning family members and other third parties referenced in background checks and health records.  

OPM holds these records for tens of millions of Americans, including current and former federal workers and those who have applied for federal jobs. OPM has a history of privacy violations—an OPM breach in 2015 exposed the personal information of 22.1 million people—and its recent actions make its systems less secure.  

With few exceptions, the Privacy Act limits the disclosure of federally maintained sensitive records on individuals without the consent of the individuals whose data is being shared. It protects all Americans from harms caused by government stockpiling of our personal data. This law was enacted in 1974, the last time Congress acted to limit the data collection and surveillance powers of an out-of-control President. The judge ruled that the request for an injunction based on the Privacy Act claims can go forward under the Administrative Procedure Act, but not directly under the Privacy Act.  

For the order denying the motion to dismiss: https://www.eff.org/document/afge-v-opm-opinion-and-order-motion-dismiss 

For the complaint: https://www.eff.org/document/afge-v-opm-complaint 

For more about the case: https://www.eff.org/cases/american-federation-government-employees-v-us-office-personnel-management 

Contacts 

Electronic Frontier Foundation: press@eff.org 

Lex Lumina LLP: Managing Partner Rhett Millsaps, rhett@lex-lumina.com 

EFF Joins Amicus Brief Supporting Perkins Coie Law Firm Against Unconstitutional Executive Order

EFF - Thu, 04/03/2025 - 12:28pm

EFF has joined the American Civil Liberties Union and other legal advocacy organizations across the ideological spectrum in filing an amicus brief asking a federal judge to strike down President Donald Trump’s executive order targeting law firm Perkins Coie for its past work on voting rights lawsuits and its representation of the President’s prior political opponents. 

As a legal organization that has fought in court to defend the rights of technology users for almost 35 years, including through numerous legal challenges to federal government overreach, EFF unequivocally supports Perkins Coie’s challenge to this shocking, vindictive, and unconstitutional executive order. In punishing the law firm for its zealous advocacy on behalf of its clients, the March 6 order offends the First Amendment, the rule of law, and the legal profession broadly in numerous ways. We commend Perkins Coie and the other targeted law firms that have chosen to fight back, along with their legal representatives.  

“If allowed to stand, these pressure tactics will have broad and lasting impacts on Americans' ability to retain legal counsel in important matters, to arrange their business and personal affairs as they like, and to speak their minds,” our brief says. 

Lawsuits against the federal government are a vital component of the system of checks and balances that undergirds American democracy. They reflect confidence both in the judiciary to decide such matters fairly and justly, and in the executive to abide by the court’s determination. They are a backstop against autocracy and have been a sustaining feature of American jurisprudence since Marbury v. Madison, 5 U.S. 137 (1803).   

The executive order, if enforced, would upend that system and set an appalling precedent: Law firms that represent clients adverse to a given administration can and will be punished for doing their jobs.   

This is a fundamental abuse of executive power.   

The constitutional problems are legion, but here are a few:   

  • The First Amendment bars the government from “distorting the legal system by altering the traditional role of attorneys” by controlling what legal arguments lawyers can make. See Legal Services Corp. v. Velazquez, 531 U.S. 533, 544 (2001). “An informed independent judiciary presumes an informed, independent bar.” Id. at 545.  
  • The executive order is also unconstitutional retaliation for Perkins Coie’s engaging in constitutionally protected speech during the course of representing its clients. See Lozman v. City of Riviera Beach, 585 U.S. 87, 90 (2018). 
  • The executive order violates fundamental precepts of separation of powers and the Fifth and Sixth Amendment rights of litigants to select the counsel of their choice. See United States v. Gonzalez-Lopez, 548 U.S. 140, 147–48 (2006).  

An independent legal profession is a fundamental component of democracy and the rule of law. As a nonprofit legal organization that frequently sues the federal government, we well understand the value of this bedrock principle and how it – and First Amendment rights more broadly – are threatened by President Trump’s executive orders targeting Perkins Coie and other law firms. It is especially important that the whole legal profession speak out against the executive orders in light of the capitulation by a few large law firms. 

The order must be swiftly nullified by the U.S. District Court for the District of Columbia, and must be uniformly vilified by the entire legal profession. 

The ACLU’s press release with quotes from fellow amici can be found here.

Calyx Institute: A Case Study in Grassroots Innovation

EFF - Thu, 04/03/2025 - 9:36am

Technologists play a huge role in building alternative tools and resources when our rights to privacy and security are undermined by governments and major corporations. This direct resistance ensures that even in the face of powerful adversaries, communities can find some safety and autonomy through community-built tools.

One of the most renowned names in this work is the Calyx Institute, a New York-based 501(c)(3) nonprofit founded by Nicholas Merrill after his successful and influential constitutional challenge to the National Security Letter (NSL) statute in the USA Patriot Act. Today Calyx’s mission is to defend digital privacy, advance connectivity, and strive for a future where everyone has access to the resources and tools they need to remain securely connected. Their work is made possible thanks to the generous donations of their over 12,000 grassroots members.

More recently, Calyx joined EFF’s network of grassroots organizations across the US, the Electronic Frontier Alliance (EFA). Members of the alliance are not-for-profit local organizations dedicated to EFA’s five guiding principles: privacy, free expression, access to knowledge, creativity, and security. Calyx has since been an exceptional ally, lifting up and collaborating with fellow members.

If you’re inspired by Calyx to start making a difference in your community, you can get started with our organizer toolkits. Once you’re ready, we hope you consider applying to join the alliance.

JOIN EFA

Defend Digital Rights Locally

We corresponded with Calyx over email to discuss the group's ambitious work, and what the future holds for Calyx. Here are excerpts from our conversation:

Thanks for chatting with us, to get started could you tell us a bit about Calyx’s current work?

Calyx focuses on three areas: (1) developing a privacy-respecting software ecosystem, (2) bridging the digital divide with affordable internet access, and (3) sustaining our community through grants, research, and educational initiatives.

We build and maintain a digital ecosystem of free and open-source software (FOSS) centering on CalyxOS, an Android operating system that encrypts communications, combats invasive metadata collection, and protects users from geolocation tracking. The Calyx Internet Membership Program offers mobile hotspots so people have a way to stay connected despite limited resources or a lack of viable alternatives. Finally, Calyx actively engages with diverse stakeholder groups to build a shared understanding of privacy, expand digital-security literacy, and provide grants that directly support aligned organizations. By partnering with our peers, funders, and service providers, we hope to drive collective action toward a privacy- and rights-respecting future of technology.

Calyx projects work with a wide range of technologies. What are some barriers Calyx runs into in this work?

Our biggest challenge is one shared by many tech communities, particularly FOSS advocates: it is difficult to balance privacy and security with usability in tool development. On the one hand, the current data-mining business model of the tech sector makes it extremely hard to provide FOSS alternatives to proprietary tech while keeping the tool intuitive and easy to use. On the other, there is a general lack of momentum for funding and growing an alternative digital ecosystem.

As a result, many digital rights enthusiasts are left with scarce resources and a narrow space within which to work on technical solutions. We need more people to work together and collectively advocate for a privacy-respecting tech ecosystem that cares about all communities and does not marginalize anyone.

Take CalyxOS, for example. Before it became a tangible project, our founder Nick spent years thinking about an alternative mobile operating system that put privacy first. Back in 2012, Nick spoke to Moxie Marlinspike, the creator of the Signal messaging app, about his idea. Moxie shared several valid concerns that almost led Nick to stop working on it. Fortunately, these warnings, which came from Moxie’s experience and success with Signal, made Nick even more determined, and he recruited an expert global team to help realize his idea.

What do you see as the role of technologists in defending civil liberties with local communities?

Technologists are enablers—they build tools and technical infrastructures, fundamental parts of the digital ecosystem within which people exercise their rights and enjoy their lives. A healthy digital ecosystem consists of technologies that liberate people. It is an arena where people willingly and actively connect and share their expertise, confident in the shared protocols that protect everyone’s rights and dignity. That is why Calyx builds and advocates for people-centered, privacy-focused FOSS tools.

How has Calyx supported folks in NYC? What have you learned from it?

It’s a real privilege to be part of the NYC tech community, which has such a wealth of technologists, policy experts, human rights watchdogs, and grassroots activists. In recent years, we joined efforts led by multiple networks and organizations to mobilize against unjustifiable mass surveillance and other digital threats faced by millions of people of color, immigrants, and other underrepresented groups.

We’re particularly proud of the support we provided to another EFA member, Surveillance Technology Oversight Project, on the Ban the Scan campaign to ban facial recognition in NYC, and CryptoHarlem to sustain their work bringing digital privacy and cybersecurity education to communities in Harlem and beyond. Most recently, we funded Sunset Spark—a small nonprofit offering free education in science and technology in the heart of Brooklyn—to develop a multipurpose curriculum focused on privacy, internet infrastructure, and the roles of the public and private sectors in our digital world.

These experiences deeply inspired us to shape a funding philosophy that centers the needs of organizations and groups with limited resources, helps local communities break barriers and build capacity, and grows reciprocal relationships between each member of the community.

You mentioned a grantmaking program, which is a really unique project for an EFA member. Could you tell us a bit about your theory of change for the program?

Since 2020, the Calyx Institute has been funding the development of digital privacy and security tools, research on mass surveillance systems, and training efforts to equip people with the knowledge and tools they need to protect their right to privacy and connectivity. In 2022, Calyx launched the Fusion Center Research Fund to aid investigations into law enforcement harvesting of personal data through intelligence-sharing centers. This effort, with nearly $200,000 disbursed to grantees, helped reveal the deleterious impact of surveillance technology on privacy and freedom of expression.

These efforts have led to the Sepal Fund, Calyx’s pilot program to offer small groups unrestricted and holistic grants. This program will provide five organizations, collectives, or projects a yearly grant of up to $50,000 for a total of three years. In addition, we will provide our grantees opportunities for professional development, as well as other resources. Through this program, we hope to sustain and elevate research, tool development, and education that will support digital privacy and defend internet freedom.


Could you tell us a bit about how people can get involved?

All our projects are, at their core, community projects, and we welcome insights and involvement from anyone to whom our work is relevant. CalyxOS offers a variety of ways to connect, including a CalyxOS Matrix room and GitLab repository where users and programmers interact in real time to troubleshoot and discuss improvements. Part of making CalyxOS accessible is ensuring that it’s as widely available as possible, so anyone who would like to be part of that translation and localization effort should visit our Weblate site.

What does the future look like for Calyx?

We are hoping that the future holds big things for us, like CalyxOS builds on more affordable and globally available mobile devices so that people in different locations with varied resources can equally enjoy the right to privacy. We are also looking forward to updating our visual communication—we have been “substance over style” for so long that it will be exciting to see how a refreshed look will help us reach new audiences.

Finally, what’s your “moonshot”? What’s the ideal future Calyx wants to build?

The Calyx dream is accessible digital privacy, security, and connectivity for all, regardless of budget or tech background, centering communities that are most in need.

We want a future where everyone has access to the resources and tools they need to remain securely connected. To get there, we’ll need to work on building a lot of capacity, both technological and informational. Great tools can only fulfill their purpose if people know why and how to use them. Creating those tools and spreading the word about them requires collaboration, and we are proud to be working toward that goal alongside all the organizations that make up the EFA.

Our thanks to the Calyx Institute for their continued efforts to build private and secure tools for targeted groups, in New York City and across the globe. You can find and support other Electronic Frontier Alliance affiliated groups near you by visiting eff.org/fight.

Site-Blocking Legislation Is Back. It’s Still a Terrible Idea.

EFF - Wed, 04/02/2025 - 11:53am

More than a decade ago, Congress tried to pass SOPA and PIPA—two sweeping bills that would have allowed the government and copyright holders to quickly shut down entire websites based on allegations of piracy. The backlash was immediate and massive. Internet users, free speech advocates, and tech companies flooded lawmakers with protests, culminating in an “Internet Blackout” on January 18, 2012. Turns out, Americans don’t like government-run internet blacklists. The bills were ultimately shelved. 

Thirteen years later, as institutional memory fades and appetite for opposition wanes, members of Congress in both parties are ready to try this again. 

take action

Act Now To Defend the Open Web  

The Foreign Anti-Digital Piracy Act (FADPA), along with at least one other bill still in draft form, would revive this reckless strategy. These new proposals would let rights holders get federal court orders forcing ISPs and DNS providers to block entire websites based on accusations of infringing copyright. Lawmakers claim they’re targeting “pirate” sites—but what they’re really doing is building an internet kill switch.

These bills are an unequivocal and serious threat to a free and open internet. EFF and our supporters are going to fight back against them. 

Site-Blocking Doesn’t Work—And Never Will 

Today, many websites are hosted on cloud infrastructure or use shared IP addresses. Blocking one target can mean blocking thousands of unrelated sites. That kind of digital collateral damage has already happened in Austria, Russia, and the US.

Site-blocking is both dangerously blunt and trivially easy to evade. Determined evaders can create the same content on a new domain within hours. Users who want to see blocked content can fire up a VPN or change a single DNS setting to get back online. 

These workarounds aren’t just popular—they’re essential tools in countries that suppress dissent. It’s shocking that Congress is on the verge of forcing Americans to depend on the same workarounds that internet users in authoritarian regimes use just to reach mislabeled content, pushing people toward riskier, less trustworthy online services. 

Site-Blocking Silences Speech Without a Defense

The First Amendment should not take a back seat because giant media companies want the ability to shut down websites faster. But these bills wrongly treat broad takedowns as a routine legal process. Most cases would be decided in ex parte proceedings, with no one there to defend the site being blocked. This is more than a shortcut: it skips due process entirely. 

Users affected by a block often have no idea what happened. A blocked site may just look broken, like a glitch or an outage. Law-abiding publishers and users lose access, and diagnosing the problem is difficult. Site-blocking techniques are the bluntest of instruments, and they almost always punish innocent bystanders. 

The copyright industries pushing these bills know that site-blocking is not a narrowly tailored fix for a piracy epidemic. The entertainment industry is booming right now, blowing past its pre-COVID projections. Site-blocking legislation is an attempt to build a new American censorship system by letting private actors get dangerous infrastructure-level control over internet access. 

EFF and the Public Will Push Back

FADPA is already on the table. More bills are coming. The question is whether lawmakers remember what happened the last time they tried to mess with the foundations of the open web. 

If they don’t, they’re going to find out the hard way. Again. 

take action

Tell Congress: No To Internet Blacklists  

Site-blocking laws are dangerous, unnecessary, and ineffective. Lawmakers need to hear—loud and clear—that Americans don’t support government-mandated internet censorship. Not for copyright enforcement. Not for anything.
