Please Drone Responsibly: C-UAS Legislation Needs Civil Liberties Safeguards
Today, the Senate Judiciary Committee is holding a hearing titled “Defending Against Drones: Setting Safeguards for Counter Unmanned Aircraft Systems Authorities.” While the government has a legitimate interest in monitoring and mitigating drone threats, it is critical that those powers are narrowly tailored. Robust checks and oversight mechanisms must exist to prevent misuse and to allow ordinary, law-abiding individuals to exercise their rights.
Unfortunately, as we and many other civil society advocates have highlighted, past proposals have not addressed those needs. Congress should produce well-balanced rules that address all these priorities, not grant de facto authority to law enforcement to take down drone flights whenever they want. Ultimately, Congress must decide whether drones will be a technology that mainly serves government agencies and big companies, or whether it might also empower individuals.
To make meaningful progress in stabilizing counter-unmanned aircraft system (“C-UAS”) authorities and addressing emerging issues, Congress should adopt a more comprehensive approach that considers the full range of risks and implements proper safeguards. Future C-UAS legislation should include the following priorities, which are essential to protecting civil liberties and ensuring accountability:
- Provide strong and explicit safeguards for First Amendment-protected activities
- Ensure transparency and require detailed reporting
- Provide due process and recourse for improper counter-drone activities
- Require C-UAS mitigation to use the least-invasive methods available
- Maintain reasonable retention limits on data collection
- Maintain sunset provisions for C-UAS powers as drone uses continue to evolve
Congress can—and should—address public safety concerns without compromising privacy and civil liberties. C-UAS authorities should only be granted with the clear limits outlined above to help ensure that counter-drone authorities are wielded responsibly.
The American Civil Liberties Union (ACLU), Center for Democracy & Technology (CDT), Electronic Frontier Foundation (EFF), and Electronic Privacy Information Center (EPIC) shared these concerns with the Committee in a joint Statement For The Record.
Security Theater REALized and Flying without REAL ID
After multiple delays of the REAL ID Act of 2005 and its updated counterpart, the REAL ID Modernization Act, the May 7th deadline for REAL ID enforcement in the United States has finally arrived. Does this move our security forward in the skies? The last 20 years suggest we got along fine without it. REAL ID does impose burdens on everyday people, such as potential additional costs and rigid documentation requirements, even for those who already have a state-issued ID. While TSA states this is not a national ID or a federal database, but a set of minimum standards required for federal use, we remain watchful of how these mechanisms have pivoted toward potential privacy issues with the expansion of digital IDs.
But you don’t need a REAL ID just to fly domestically. There are alternatives.
The most common alternatives are passports or passport cards. You can use either instead of a REAL ID, which might save you an immediate trip to the DMV. And the additional money for a passport at least provides you the extra benefit of international travel.
Passports and passport cards are not the only alternatives to REAL ID. Additional documentation is also accepted (this list is subject to change by the TSA):
- REAL ID-compliant driver's licenses or other state photo identity cards issued by a Department of Motor Vehicles or equivalent (this excludes temporary driver's licenses)
- State-issued Enhanced Driver's License (EDL) or Enhanced ID (EID)
- U.S. passport
- U.S. passport card
- DHS trusted traveler cards (Global Entry, NEXUS, SENTRI, FAST)
- U.S. Department of Defense ID, including IDs issued to dependents
- Permanent resident card
- Border crossing card
- An acceptable photo ID issued by a federally recognized Tribal Nation/Indian Tribe, including Enhanced Tribal Cards (ETCs)
- HSPD-12 PIV card
- Foreign government-issued passport
- Canadian provincial driver's license or Indian and Northern Affairs Canada card
- Transportation Worker Identification Credential (TWIC)
- U.S. Citizenship and Immigration Services Employment Authorization Card (I-766)
- U.S. Merchant Mariner Credential
- Veteran Health Identification Card (VHIC)
Foreign government-issued passports are on this list. However, using a foreign government-issued passport may increase your chances of closer scrutiny at the security gate. REAL ID and other federally accepted documents are supposed to be about verifying your identity, not about your citizenship status. Realistically, though, secondary screening and interactions with law enforcement are not out of the realm of possibility for non-citizens. The power dynamics of the border have now been brought to flying domestically thanks to REAL ID, and the question of who can and can’t fly freely has become more sensitive than ever.
Mobile Driver’s Licenses (mDLs)
Many states have rolled out the option of a Mobile Driver's License, which acts as a form of your state-issued ID on your phone and is supposed to come with an exception for REAL ID compliance. This is something we asked for, since mDLs appear to satisfy the TSA’s concerns about forgery and cloning. But the catch is that states had to apply for this waiver:
“The final rule, effective November 25, 2024, allows states to apply to TSA for a temporary waiver of certain REAL ID requirements written in the REAL ID regulations.”
TSA stated it would publish the list of states with this waiver, but we do not see it on the website where it was supposed to appear. This bureaucratic hurdle appears to have rendered the exception useless, which is disappointing considering that the TSA pushed for mDLs to be used in its own context first.
Google ID Pass
Another exception appears to bypass state-issued waivers entirely: Google Wallet’s “ID Pass.” If your state has partnered with Google to issue mDLs, or if you have a passport, then an ID Pass is acceptable to TSA. This is a large leap in the reach of the mDL ecosystem, which now extends past state scrutiny to a direct partnership with a private company to produce TSA-acceptable forms of ID. There’s much to be said about our worries with digital IDs and their rapid expansion outside of the airport context. This is another gateway that highlights how ID is being shaped and accepted in the digital sense.
With both ID Pass and mDLs, the presentation flow allows you to tap your phone without unlocking it, which is a bonus. But it is not clear whether TSA has the technology to read these IDs at all airports nationwide, and travelers are still encouraged to bring a physical ID for additional verification.
A lot of the privilege dynamics of flying show up in the type of ID you can obtain, whether your shoes stay on, how long you wait in line, and so on. This is mostly tied to how much you can spend on traveling and how much preliminary information you establish with TSA ahead of time. The end result is that less wealthy people are subjected to the most security mechanisms at the security gate. For now, you can technically still fly without a REAL ID, but that means being subject to additional screening to verify who you are.
REAL ID enforcement has some leg room for those who do not want, or cannot get, a REAL ID. But we are keeping watch on the progression of digital ID, which continues to be presented as the solution to worries about fraud and forgery. Governments and private corporations alike are pushing major efforts for rapid digital ID deployments and more frequent presentation of one’s ID attributes. Your government ID is one of the narrowest, most static verifications of who you are as a person. Making sure that information is not used to create a centralized system of information was as important yesterday with REAL ID as it is today with digital IDs.
Standing Up for LGBTQ+ Digital Safety this International Day Against Homophobia
Lawmakers and regulators around the world have been prolific in passing legislation that restricts freedom of expression and privacy for LGBTQ+ individuals and fuels offline intolerance. Online platforms are also complicit in this pervasive ecosystem, censoring pro-LGBTQ+ speech and forcing LGBTQ+ individuals to self-censor or turn to VPNs to avoid being profiled, harassed, doxxed, or criminally prosecuted.
The fight for the safety and rights of LGBTQ+ people is not just a fight for visibility online (and offline)—it’s a fight for survival. This International Day Against Homophobia, Biphobia, and Transphobia, we’re sharing four essential tips for LGBTQ+ people to stay safe online.
Using Secure Messaging Services For Every Communication
All of us, at least occasionally, need to send a message that’s safe from prying eyes. This is especially true for people who face consequences should their gender or sexual identity be revealed without their consent.
To protect your communications from being seen by others, install an encrypted messenger app such as Signal (for iOS or Android). Turn on disappearing messages, and consider shortening the amount of time messages are kept in the app if you are actually attending an event. If you have a burner device with you, be sure to save the numbers for emergency contacts.
Don’t wait until something sensitive arises: make these apps your default for all communications. As a side benefit, the messages and images sent to family and friends in group chats will be safe from being viewed by automated and human scans on services like Telegram and Facebook Messenger.
Consider The Content You Post On Social Media
Our decision to send messages, take pictures, and interact with online content has a real offline impact. And whilst we cannot control every circumstance, we can think about how our social media behaviour impacts those closest to us and those in our proximity, especially if these people might need extra protection around their identities.
Talk with your friends about the potentially sensitive data you reveal about each other online. Even if you don’t have a social media account, or if you untag yourself from posts, friends can still unintentionally identify you, report your location, and make their connections to you public. This works in the offline world too, such as sharing precautions with organizers and fellow protesters when going to a demonstration, and discussing ahead of time how you can safely document and post the event online without exposing those in attendance to harm.
If you are organizing online or conversing on potentially sensitive issues, choose platforms that limit the amount of information collected and tracking undertaken. We know this is not always possible, since people may not be able to access different applications. In that scenario, think about how you can protect your community on the platform you currently use. For example, if you currently use Facebook for organizing, work with others to keep your groups as private and secure as possible.
Create Incident Response Plans
Developing a plan for if or when something bad happens is a good practice for anyone, but especially for LGBTQ+ people who face increased risk online. Since many threats are social in nature, such as doxxing or networked harassment, it’s important to strategize with your allies about what to do if such things happen. Doing so before an incident occurs is much easier than doing so in the middle of a crisis.
Only you and your allies can decide what belongs in such a plan, but some strategies might be:
- Isolating the impacted areas, such as shutting down social media accounts and turning off affected devices
- Notifying others who may be affected
- Switching communications to a predetermined more secure alternative
- Noting behaviors of suspected threats and documenting these
- Outsourcing tasks to someone further from the affected circle who is already aware of this potential responsibility.
Given the increase in targeted harassment and vandalism towards LGBTQ+ people, it’s important to consider that counterprotesters may show up at various events. Since the boundaries between events like pride parades and protests can be blurred, precautions are necessary. Our general guide for attending a protest covers the basics for protecting your smartphone and laptop, as well as providing guidance on how to communicate and share information responsibly. We also have a handy printable version available here.
This includes:
- Removing biometric device unlock like fingerprint or FaceID to prevent police officers from physically forcing you to unlock your device with your fingerprint or face. You can password-protect your phone instead.
- Logging out of accounts and uninstalling apps or disabling app notifications to keep app activity from being used against you in precarious legal contexts, such as using queer dating apps in places where homosexuality is illegal.
- Turning off location services on your devices to keep your location history from being used to identify your device’s comings and goings. For further protection, you can disable GPS, Bluetooth, Wi-Fi, and phone signals when planning to attend a protest.
Consider your digital safety like you would any aspect of bodily autonomy and self determination—only you get to decide what aspects of yourself you share with others, how you present to the world, and what things you keep private. With a bit of care, you can maintain privacy, safety, and pride in doing so.
And in the meantime, we’re fighting to ensure that the internet can be a safe (and fun!) place for all LGBTQ+ people. Now more than ever, it’s essential for allies, advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination and censorship.
House Moves Forward With Dangerous Proposal Targeting Nonprofits
This week, the U.S. House Ways and Means Committee moved forward with a proposal that would allow the Secretary of the Treasury to strip any U.S. nonprofit of its tax-exempt status by unilaterally determining the organization is a “Terrorist Supporting Organization.” This proposal, which places nearly unlimited discretion in the hands of the executive branch to target organizations it disagrees with, poses an existential threat to nonprofits across the U.S.
This proposal, added to the House’s budget reconciliation bill, is an exact copy of a House-passed bill that EFF and hundreds of nonprofits across the country strongly opposed last fall. Thankfully, the Senate rejected that bill, and we urge the House to do the same when the budget reconciliation bill comes up for a vote on the House floor.
The goal of this proposal is not to stop the spread of or support for terrorism; the U.S. already has myriad other laws that do that, including existing tax code section 501(p), which allows the government to revoke the tax status of designated “Terrorist Organizations.” Instead, this proposal is designed to inhibit free speech by discouraging nonprofits from working with and advocating on behalf of disadvantaged individuals and groups, like Venezuelans or Palestinians, who may be associated, even completely incidentally, with any group the U.S. deems a terrorist organization. And depending on what future groups this administration decides to label as terrorist organizations, it could also threaten those advocating for racial justice, LGBTQ rights, immigrant communities, climate action, human rights, and other issues opposed by this administration.
On top of its threats to free speech, the language lacks due process protections for targeted nonprofit organizations. In addition to placing sole authority in the hands of the Treasury Secretary, the bill does not require the Treasury Secretary to disclose the reasons for or evidence supporting a “Terrorist Supporting Organization” designation. This, combined with only providing an after-the-fact administrative or judicial appeals process, would place a nearly insurmountable burden on any nonprofit to prove a negative—that they are not a terrorist supporting organization—instead of placing the burden where it should be, on the government.
As laid out in a letter led by the ACLU and signed by over 350 diverse nonprofits, this bill would provide the executive branch with:
“the authority to target its political opponents and use the fear of crippling legal fees, the stigma of the designation, and donors fleeing controversy to stifle dissent and chill speech and advocacy. And while the broadest applications of this authority may not ultimately hold up in court, the potential reputational and financial cost of fending off an investigation and litigating a wrongful designation could functionally mean the end of a targeted nonprofit before it ever has its day in court.”
Current tax law makes it a crime for the President and other high-level officials to order IRS investigations over policy disagreements. This proposal creates a loophole to this rule that could chill nonprofits for years to come.
There is no question that nonprofits and educational institutions – along with many other groups and individuals – are under threat from this administration. If passed, future administrations, regardless of party affiliation, could weaponize the powers in this bill against nonprofits of all kinds. We urge the House to vote down this proposal.
The U.S. Copyright Office’s Draft Report on AI Training Errs on Fair Use
Within the next decade, generative AI could join computers and electricity as one of the most transformational technologies in history, with all of the promise and peril that implies. Governments’ responses to GenAI—including new legal precedents—need to thoughtfully address real-world harms without destroying the public benefits GenAI can offer. Unfortunately, the U.S. Copyright Office’s rushed draft report on AI training misses the mark.
The Report Bungles Fair Use
Released amidst a set of controversial job terminations, the Copyright Office’s report covers a wide range of issues with varying degrees of nuance. But on the core legal question—whether using copyrighted works to train GenAI is a fair use—it stumbles badly. The report misapplies long-settled fair use principles and ultimately puts a thumb on the scale in favor of copyright owners at the expense of creativity and innovation.
To work effectively, today’s GenAI systems need to be trained on very large collections of human-created works—probably millions of them. At this scale, locating copyright holders and getting their permission is daunting for even the biggest and wealthiest AI companies, and impossible for smaller competitors. If training makes fair use of copyrighted works, however, then no permission is needed.
Right now, courts are considering dozens of lawsuits that raise the question of fair use for GenAI training. Federal District Judge Vince Chhabria is poised to rule on this question, after hearing oral arguments in Kadrey v. Meta Platforms. The Third Circuit Court of Appeals is expected to consider a similar fair use issue in Thomson Reuters v. Ross Intelligence. Courts are well-equipped to resolve this pivotal issue by applying existing law to specific uses and AI technologies.
Courts Should Reject the Copyright Office’s Fair Use Analysis
The report’s fair use discussion contains some fundamental errors that place a thumb on the scale in favor of rightsholders. Though the report is non-binding, it could influence courts, including in cases like Kadrey, where plaintiffs have already filed a copy of the report and urged the court to defer to its analysis.
Courts, however, need to accept the Copyright Office’s draft conclusions only if they are persuasive. They are not.
The Office’s fair use analysis is not one the courts should follow. It repeatedly conflates the use of works for training models—a necessary step in the process of building a GenAI model—with the use of the model to create substantially similar works. It also misapplies basic fair use principles and embraces a novel theory of market harm that has never been endorsed by any court.
The first problem is the Copyright Office’s transformative use analysis. Highly transformative uses—those that serve a different purpose than that of the original work—are very likely to be fair. Courts routinely hold that using copyrighted works to build new software and technology—including search engines, video games, and mobile apps—is a highly transformative use because it serves a new and distinct purpose. Here, the original works were created for various purposes and using them to train large language models is surely very different.
The report attempts to sidestep that conclusion by repeatedly ignoring the actual use in question—training—and focusing instead on how the model may be ultimately used. If the model is ultimately used primarily to create a class of works that are similar to the original works on which it was trained, the Office argues, then the intermediate copying can’t be considered transformative. This fundamentally misunderstands transformative use, which should turn on whether a model itself is a new creation with its own distinct purpose, not whether any of its potential uses might affect demand for a work on which it was trained—a dubious standard that runs contrary to decades of precedent.
The Copyright Office’s transformative use analysis also suggests that the fair use analysis should consider whether works were obtained in “bad faith,” and whether developers respected the right “to control” the use of copyrighted works. But the Supreme Court is skeptical that bad faith has any role to play in the fair use analysis and has made clear that fair use is not a privilege reserved for the well-behaved. And rightsholders don’t have the right to control fair uses—that’s kind of the point.
Finally, the Office adopts a novel and badly misguided theory of “market harm.” Traditionally, the fair use analysis requires courts to consider the effects of the use on the market for the work in question. The Copyright Office suggests instead that courts should consider overall effects of the use of the models to produce generally similar works. By this logic, if a model was trained on a Bridgerton novel—among millions of other works—and was later used by a third party to produce romance novels, that might harm series author Julia Quinn’s bottom line.
This market dilution theory has four fundamental problems. First, like the transformative use analysis, it conflates training with outputs. Second, it’s not supported by any relevant precedent. Third, it’s based entirely on speculation that Bridgerton fans will buy random “romance novels” instead of works produced by a bestselling author they know and love. This relies on breathtaking assumptions that lack evidence, including that all works in the same genre are good substitutes for each other—regardless of their quality, originality, or acclaim. Lastly, even if competition from other, unique works might reduce sales, it isn’t the type of market harm that weighs against fair use.
Nor is lost revenue from licenses for fair uses a type of market harm that the law should recognize. Prioritizing private licensing market “solutions” over user rights would dramatically expand the market power of major media companies and chill the creativity and innovation that copyright is intended to promote. Indeed, the fair use doctrine exists in part to create breathing room for technological innovation, from the phonograph record to the videocassette recorder to the internet itself. Without fair use, crushing copyright liability could stunt the development of AI technology.
We’re still digesting this report, but our initial review suggests that, on balance, the Copyright Office’s approach to fair use for GenAI training isn’t a dispassionate report on how existing copyright law applies to this new and revolutionary technology. It’s a policy judgment about the value of GenAI technology for future creativity, by an office that has no business making new, free-floating policy decisions.
The courts should not follow the Copyright Office’s speculations about GenAI. They should follow precedent.
In Memoriam: John L. Young, Cryptome Co-Founder
John L. Young, who died March 28 at age 89 in New York City, was among the first people to see the need for an online library of official secrets, a place where the public could find out things that governments and corporations didn’t want them to know. He made real the idea – revolutionary in its time – that the internet could make more information available to more people than ever before.
John and architect Deborah Natsios, his wife, in 1996 founded Cryptome, an online library which collects and publishes data about freedom of expression, privacy, cryptography, dual-use technologies, national security, intelligence, and government secrecy. Its slogan: “The greatest threat to democracy is official secrecy which favors a few over the many.” And its invitation: “We welcome documents for publication that are prohibited by governments worldwide.”
Cryptome soon became known for publishing an encyclopedic array of government, court, and corporate documents. Cryptome assembled an indispensable, almost daily chronicle of the ‘crypto wars’ of the 1990s – when the first generation of internet lawyers and activists recognized the need to free up encryption from government control and undertook litigation, public activism and legislative steps to do so. Cryptome became required reading for anyone looking for information about that early fight, as well as many others.
John and Cryptome were also among the early organizers and sponsors of WikiLeaks, though like many others, he later broke with that organization over what he saw as its monetization. Cryptome later published WikiLeaks’ alleged internal emails. Transparency was the core of everything John stood for.
John was a West Texan by birth and an architect by training and trade. Even before he launched the website, his lifelong pursuit of not-for-profit, public-good ideals led him to seek access to documents about shadowy public development entities that seemed to ignore public safety, health, and welfare. As the digital age dawned, this expertise in and passion for exposing secrets evolved into Cryptome with John its chief information architect, designing and building a real-time archive of seminal debates shaping cyberspace’s evolving information infrastructures.
The FBI and Secret Service tried to chill his activities. Big Tech companies like Microsoft tried to bully him into pulling documents off the internet. But through it all, John remained a steadfast if iconoclastic librarian without fear or favor.
John served in the United States Army Corps of Engineers in Germany (1953–1956) and earned degrees in philosophy and architecture from Rice University (1957–1963) and his graduate degree in architecture from Columbia University in 1969. A self-identified radical, he became an activist and helped create the community service group Urban Deadline, where his fellow student-activists initially suspected him of being a police spy. Urban Deadline went on to receive citations from the Citizens Union of the City of New York and the New York City Council.
John was one of the early, under-recognized heroes of the digital age. He not only saw the promise of digital technology to help democratize access to information, he brought that idea into being and nurtured it for many years. We will miss him and his unswerving commitment to the public’s right to know.
The Kids Online Safety Act Will Make the Internet Worse for Everyone
The Kids Online Safety Act (KOSA) is back in the Senate. Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a “duty of care,” and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people.
KOSA Will Silence Kids and Adults
KOSA Still Forces Platforms to Police Legal Speech
At the center of the bill is a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” The bill claims to bar lawsuits over “the viewpoint of users,” but that’s a smokescreen. Its core function is to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to one of these harms.
This bill won’t bother big tech. Large companies will be able to manage this regulation, which is why Apple and X have agreed to support it. In fact, X helped negotiate the text of the last version of this bill we saw. Meanwhile, those companies’ smaller competitors will be left scrambling to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself.
To avoid liability, platforms will over-censor. It’s not merely hypothetical. It’s what happens when speech becomes a legal risk. The list of harms in KOSA’s “duty of care” provision is so broad and vague that no platform will know what to do regarding any given piece of content. Forums won’t be able to host posts with messages like “love your body,” “please don’t do drugs,” or “here’s how I got through depression” without fearing that an attorney general or FTC lawyer might later decide the content was harmful. Support groups and anti-harm communities, which can’t do their work without talking about difficult subjects like eating disorders, mental health, and drug abuse, will get caught in the dragnet.
When the safest legal option is to delete a forum, platforms will delete the forum.
There’s Still No Science Behind KOSA’s Core Claims
KOSA relies heavily on vague, subjective harms like “compulsive usage.” The bill defines it as repetitive online behavior that disrupts life activities like eating, sleeping, or socializing. But here’s the problem: there is no accepted clinical definition of “compulsive usage” of online services.
There’s no scientific consensus that online platforms cause mental health disorders, nor agreement on how to measure so-called “addictive” behavior online. The term sounds like settled medical science, but it’s legislative sleight-of-hand: an undefined concept given legal teeth, with major consequences for speech and access to information.
Carveouts Don’t Fix the First Amendment Problem
The bill says it can’t be enforced based on a user’s “viewpoint.” But the text of the bill itself preferences certain viewpoints over others. Plus, liability in KOSA attaches to the platform, not the user. The only way for platforms to reduce risk in the world of KOSA is to monitor, filter, and restrict what users say.
If the FTC can sue a platform because minors saw a medical forum discussing anorexia, or posts about LGBTQ identity, or posts discussing how to help a friend who’s depressed, then that’s censorship. The bill’s stock language that “viewpoints are protected” won’t matter. The legal incentives guarantee that platforms will silence even remotely controversial speech to stay safe.
Lawmakers who support KOSA today are choosing to trust the current administration, and future administrations, to define what youth—and to some degree, all of us—should be allowed to read online.
KOSA will not make kids safer. It will make the internet more dangerous for anyone who relies on it to learn, connect, or speak freely. Lawmakers should reject it, and fast.
EFF to California Lawmakers: There’s a Better Way to Help Young People Online
We’ve covered a lot of federal and state proposals that badly miss the mark when attempting to grapple with protecting young people’s safety online. These include bills that threaten to cut young people off from vital information, infringe on their First Amendment rights to speak for themselves, subject them (and adults) to invasive and insecure age verification technology, and expose them to danger by sharing personal information with people they may not want to see it.
Several such bills are moving through the California legislature this year, continuing a troubling years-long trend of lawmakers pushing similarly problematic proposals. This week, EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online.
We’re far from the only ones who have issues with this approach. Many of the laws California has passed attempting to address young people’s online safety have been subsequently challenged in court and stopped from going into effect.
Our letter outlines the legal, technical, and policy problems with proposed “solutions” including age verification mandates, age gating, mandatory parental controls, and proposals that will encourage platforms to take down speech that’s even remotely controversial.
We also note that the current approach completely ignores what we’ve heard from thousands of young people: the online platforms and communities they frequent can be among the safest spaces for them in the physical or digital world. These responses show the relationship between social media and young people’s mental health is far more nuanced than many lawmakers are willing to believe.
While our letter is addressed to California’s Assembly and Senate, they are not the only state lawmakers taking this approach. All lawmakers should listen to the people they’re trying to protect and find ways to help young people without hurting the spaces that are so important to them.
There are better paths that don’t hurt young people’s First Amendment rights and still help protect them against many of the harms that lawmakers have raised. In fact, elements of such approaches, such as data minimization, are already included in some of these otherwise problematic bills. A well-crafted privacy law that empowers everyone—children and adults—to control how their data is collected and used would be a crucial step in curbing many of these problems.
We recognize that many young people face real harms online, that families are grappling with how to deal with them, and that tech companies are not offering much help.
However, many of the California legislature’s proposals—this year, and for several years—miss the root of the problem. We call on lawmakers to work with us to enact better solutions.
Keeping People Safe Online – Fundamental Rights Protective Alternatives to Age Checks
This is the final part of a three-part series about age verification in the European Union. In part one, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks.
When thinking about the safety of young people online, it is helpful to remember that we can build on and learn from the decades of experience we already have thinking through risks that can stem from content online. Before mandating a “fix,” like age checks or age assurance obligations, we should take the time to reflect on what it is exactly we are trying to address, and whether the proposed solution is able to solve the problem.
The approach of analyzing, defining, and mitigating risks is helpful in this regard because it allows us to take a holistic look at possible risks: how likely a given risk is to materialize, how severe it would be, and how it may affect different groups of people very differently.
In the context of child safety online, mandatory age checks are often presented as a solution to a number of risks potentially faced by minors online. The most common concerns to which policymakers refer in the context of age checks can be broken down into three categories of risks:
- Content risks: This refers to the negative implications from the exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm.
- Conduct risks: Conduct risks involve behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information or problematic overuse of a service.
- Contact risks: This includes potential harms stemming from contact with people that might pose a risk to minors, including grooming or being forced to exchange sexually explicit material.
Taking a closer look at these risk categories, we can see that mandatory age checks are an ineffective and disproportionate tool to mitigate many risks at the top of policymakers’ minds.
Mitigating risks stemming from contact between minors and adults usually means ensuring that adults are barred from spaces designated for children. Age checks, especially age verification depending on ID documents like the European Commission’s mini-ID wallet, are not a helpful tool in this regard as children routinely do not have access to the kind of documentation allowing them to prove their age. Adults with bad intentions, on the other hand, are much more likely to be able to circumvent any measures put in place to keep them out.
Conduct risks have little to do with how old a specific user is, and much more to do with social dynamics and the affordances and constraints of online services. Differently put: Whether a platform knows a user’s age will not change how minor users themselves decide to behave and interact on the platform. Age verification won’t prevent users from choosing to engage in harmful or risky behavior, like freely posting personal information or spending too much time online.
Finally, mitigating risks related to content deemed inappropriate is often thought of as shutting minors out from accessing certain information. Age check mandates seek to limit access to services and content without much granularity. They don’t allow for a nuanced weighing of the ways in which accessing the internet and social media can be a net positive for young people, and the ways in which it can lead to harm. This is complicated by the fact that although arguments in favour of age checks claim that the science on the relationship between the internet and young people is clear, the evidence on the effects of social media on minors is unsettled, and researchers have refuted claims that social media use is responsible for wellbeing crises among teenagers. This doesn’t mean that we shouldn’t consider the risks that may be associated with being young and online.
But it’s clear that banning all access to certain information for an entire age cohort interferes with all users’ fundamental rights, and is therefore not a proportionate risk mitigation strategy. Under a mandatory age check regime, adults are also required to upload identifying documents just to access websites, interfering with their speech, privacy and security online. At the same time, age checks are not even effective at accomplishing what they’re intended to achieve. Assuming that all age check mandates can and will be circumvented, they seem to do little in the way of protecting children but rather undermine their fundamental rights to privacy, freedom of expression and access to information crucial for their development.
At EFF, we have been firm in our advocacy against age verification mandates and often get asked what we think policymakers should do instead to protect users online. Our response is a nuanced one, recognizing that there is no easy technological fix for complex, societal challenges: Take a holistic approach to risk mitigation, strengthen user choice, and adopt a privacy-first approach to fighting online harms.
Taking a Holistic Approach to Risk Mitigation
In the European Union, the past years have seen the adoption of a number of landmark laws to regulate online services. With new rules such as the Digital Services Act or the AI Act, lawmakers are increasingly pivoting to risk-based approaches to regulate online services, attempting to square the circle by addressing known cases of harm while also providing a framework for dealing with possible future risks. It remains to be seen how risk mitigation will work out in practice and whether enforcement will genuinely uphold fundamental rights without enabling overreach.
Under the Digital Services Act, this framework also encompasses rights-protective moderation of content relevant to the risks faced by young people using online services. Platforms may also come up with their own policies on how to moderate legal content that may be considered harmful, such as hate speech or violent content. Robust enforcement of their own community guidelines is one of the most important tools at platforms’ disposal, but it is unfortunately often lacking, including for categories of content harmful to children and teenagers, like pro-anorexia content.
To counterbalance potential negative implications on users’ rights to free expression, the DSA puts boundaries on platforms’ content moderation: Platforms must act objectively and proportionately and must take users’ fundamental rights into account when restricting access to content. Additionally, users have the right to appeal content moderation decisions and can ask platforms to review content moderation decisions they disagree with. Users can also seek resolution through out-of-court dispute settlement bodies, at no cost, and can ask nonprofits to represent them in the platform’s internal dispute resolution process, in out-of-court dispute settlements and in court. Platforms must also publish detailed transparency reports, and give researchers and non-profits access to data to study the impacts of online platforms on society.
Beyond these specific obligations on platforms regarding content moderation, the protection of user rights, and improving transparency, the DSA obliges online platforms to take appropriate and proportionate measures to protect the privacy, security and safety of minors. Upcoming guidelines will hopefully provide more clarity on what this means in practice, but it is clear that there are a host of measures platforms can adopt before resorting to approaches as disproportionate as age verification.
The DSA also foresees obligations on the largest platforms and search engines – so-called Very Large Online Platforms (VLOPs) and Very Large Search Engines (VLOSEs) that have more than 45 million monthly users in the EU – to analyze and mitigate so-called systemic risks posed by their services. This includes analyzing and mitigating risks to the protection of minors and the rights of the child, including freedom of expression and access to information. While we have some critiques of the DSA’s systemic risk governance approach, it is helpful for thinking through the actual risks for young people that may be associated with different categories of content, platforms and their functionalities.
However, it is crucial that such risk assessments are not treated as mere regulatory compliance exercises, but put fundamental rights – and the impact of platforms and their features on those rights – front and center, especially in relation to the rights of children. Platforms would be well-advised to use risk assessments responsibly for their regular product and policy assessments when mitigating risks stemming from content, design choices, or features, like recommender systems, ways of engaging with content and users, and online ads. Especially when it comes to possible negative and positive effects of these features on children and teenagers, such assessments should be frequent and granular, expanding the evidence base available to both platforms and regulators. Additionally, platforms should allow external researchers to challenge and validate their assumptions and should provide extensive access to research data, as mandated by the DSA.
The regulatory framework to deal with potentially harmful content and protect minors in the EU is a new and complex one, and enforcement is still in its early days. We believe that its robust, rights-respecting enforcement should be prioritized before eyeing new rules and legal mandates.
Strengthening Users’ Choice
Many online platforms also deploy their own tools to help families navigate their services, including parental control settings and apps, specific offers tailored to the needs of children and teens, or features like reminders to take a break. While these tools are certainly far from perfect, and should not be seen as a sufficient measure to address all concerns, they do offer families an opportunity to set boundaries that work for them.
Academic and civil society research underlines that better and more granular user controls can also be an effective tool to minimize content and contact risks: Allowing users to integrate third-party content moderation systems or recommendation algorithms would enable families to tailor their children’s online experiences to their needs.
The DSA takes a first helpful step in this direction by mandating that online platforms give users transparency about the main parameters used to recommend content to users, and to allow users to easily choose between different recommendation systems when multiple options are available. The DSA also obliges VLOPs that use recommender systems to offer at least one option that is not based on profiling users, thereby giving users of large platforms the choice to protect themselves from the often privacy-invasive personalization of their feeds. However, forgoing all personalization will likely not be attractive to most users, and platforms should give users the choice to use third-party recommender systems that better mirror their privacy preferences.
Giving users more control over which accounts can interact with them, and in which ways, can also help protect children and teenagers against unwanted interactions. Strengthening users’ choice also includes prohibiting companies from implementing user interfaces that have the intent or substantial effect of impairing autonomy and choice. This so-called “deceptive design” can take many forms, from tricking people into giving consent to the collection of their personal data, to encouraging the use of certain features. The DSA takes steps to ban dark patterns, but European consumer protection law must make sure that this prohibition is strictly enforced and that no loopholes remain.
A Privacy First Approach to Addressing Online Harms
While rights-respecting content moderation and tools to strengthen parents’ and children’s self-determination online are part of the answer, we have long advocated for a privacy-focused approach to fighting online harms.
We follow this approach for two reasons: On the one hand, privacy risks are complex, and young people cannot be expected to predict risks that may materialize in the future. On the other hand, many of the ways in which children and teenagers can be harmed online are directly linked to the accumulation and exploitation of their personal data.
Online services collect enormous amounts of personal data and personalize or target their services – the ads they display and the content they recommend – based on that data. While the systems that target ads and curate online content are distinct, both are based on the surveillance and profiling of users. In addition to allowing users to choose a recommender system, settings for all users should by default turn off recommender systems based on behavioral data. To protect all users’ privacy and data protection rights, platforms should have to ask for users’ informed, specific, voluntary, opt-in consent before collecting their data to personalize recommender systems. Privacy settings should be easily accessible and allow users to enable additional protections.
Data collection in the context of online ads is even more opaque. Due to the large number of ad tech actors and data brokers involved, it is practically impossible for users to give informed consent for the processing of their personal data. This data is used by ad tech companies and data brokers to profile users to draw inferences about what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, seeing, or engaging with. This information is then used by ad tech companies to target advertisements, including for children. Beyond undermining children’s privacy and autonomy, the online behavioral ad system teaches users from a young age that data collection, tracking, and profiling are evils that come with using the web, thereby normalizing being tracked, profiled, and surveilled.
This is why we have long advocated for a ban of online behavioral advertising. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do. The DSA already bans targeting minors with behavioral ads, but this protection should be extended to everyone. Banning behavioral advertising will be the most effective path to disincentivize the collection and processing of personal data and end the surveillance of all users, including children, online.
Similarly, pay-for-privacy schemes should be banned, and we welcome the recent decision by the European Commission to fine Meta for breaching the Digital Markets Act by offering its users a binary choice between paying for privacy or having their personal data used for ads targeting. Especially in the face of recent political pressure from the Trump administration to not enforce European tech laws, we applaud the European Commission for taking a clear stance and confirming that the protection of privacy online should never be a luxury or privilege. And especially vulnerable users like children should not be confronted with the choice between paying extra (something that many children will not be able to do) or being surveilled.
Stopping States From Passing AI Laws for the Next Decade is a Terrible Idea
This week, the U.S. House Energy and Commerce Committee moved forward with a proposal in its budget reconciliation bill to impose a ten-year preemption of state AI regulation—essentially saying only Congress, not state legislatures, can place safeguards on AI for the next decade.
We strongly oppose this. We’ve talked before about why federal preemption of stronger state privacy laws hurts everyone. Many of the same arguments apply here. For one, this would override existing state laws enacted to mitigate emerging harms from AI use. It would also keep states, which have been more responsive on AI regulatory issues, from reacting to emerging problems.
Finally, it risks freezing any regulation on the issue for the next decade—a considerable problem given the pace at which companies are developing the technology. Congress does not react quickly and, particularly when addressing harms from emerging technologies, has been far slower to act than states. Or, as a number of state lawmakers who are leading on tech policy issues from across the country said in a recent joint open letter, “If Washington wants to pass a comprehensive privacy or AI law with teeth, more power to them, but we all know this is unlikely.”
Even if Congress does nothing on AI for the next ten years, this would still prevent states from stepping into the breach. Given how different the AI industry looks now from how it looked just three years ago, it’s hard to even conceptualize how different it may look in ten years. State lawmakers must be able to react to emerging issues.
Many state AI proposals struggle to find the right balance between innovation and speech, on the one hand, and consumer protection and equal opportunity, on the other. EFF supports some bills to regulate AI and opposes others. But stopping states from acting at all puts a heavy thumb on the scale in favor of companies.
Stopping states will stop progress. As the big technology companies have done (and continue to do) with privacy legislation, AI companies are currently going all out to slow or roll back legal protections in states.
For example, Colorado passed a broad bill on AI protections last year. While far from perfect, the bill set down basic requirements to give people visibility into how companies use AI to make consequential decisions about them. This year, several AI companies lobbied to delay and weaken the bill. Meanwhile, POLITICO recently reported that this push in Washington, D.C. is in direct response to proposed California rules.
We oppose the AI preemption language in the reconciliation bill and urge Congress not to move forward with this damaging proposal.
Montana Becomes First State to Close the Law Enforcement Data Broker Loophole
Montana has done something that many states and the United States Congress have debated but failed to do: it has just become the first to close the dreaded, invasive, unconstitutional, but easily fixed “data broker loophole.” This is a very good step in the right direction, because right now, across the country, law enforcement routinely purchases information on individuals that it would otherwise need a warrant to obtain.
What does that mean? In every state other than Montana, if police want to know where you have been, rather than presenting evidence and serving a warrant signed by a judge on a company like Verizon or Google to get your geolocation data for a particular period of time, they only need to buy that same data from data brokers. In other words, all the location data the apps on your phone collect—sometimes recording your exact location every few minutes—is just sitting for sale on the open market. And police routinely take that as an opportunity to skirt your Fourth Amendment rights.
Now, with SB 282, Montana has become the first state to close the data broker loophole. This means the government may not use money to get access to information about electronic communications (presumably metadata), the contents of electronic communications, the contents of communications sent by a tracking device, digital information on electronic funds transfers, pseudonymous information, or “sensitive data,” which Montana defines as information about a person’s private life, personal associations, religious affiliation, health status, citizenship status, biometric data, and precise geolocation. This does not mean such information is now fully off limits to police. There are other ways for law enforcement in Montana to gain access to sensitive information: they can get a warrant signed by a judge, they can get the consent of the owner to search a digital device, or they can get an “investigative subpoena,” which unfortunately requires far less justification than an actual warrant.
Despite the state’s insistence on honoring lower-threshold subpoena usage, SB 282 is not the first time Montana has been ahead of the curve when it comes to passing privacy-protecting legislation. For the better part of a decade, the Big Sky State has seriously limited the use of face recognition, passed consumer privacy protections, added an amendment to their constitution recognizing digital data as something protected from unwarranted searches and seizures, and passed a landmark law protecting against the disclosure or collection of genetic information and DNA.
SB 282 is similar in approach to the Fourth Amendment Is Not for Sale Act, a federal bill EFF has endorsed that has been championed by Senator Ron Wyden. The House version, H.R.4639, passed the House in April 2024 but has not been taken up by the Senate.
Absent the United States Congress being able to pass important privacy protections into law, states, cities, and towns have taken it upon themselves to pass legislation their residents sorely need in order to protect their civil liberties. Montana, with a population of just over one million people, is showing other states how it’s done. EFF applauds Montana for being the first state to close the data broker loophole and show the country that the Fourth Amendment is not for sale.
How Signal, WhatsApp, Apple, and Google Handle Encrypted Chat Backups
Encrypted chat apps like Signal and WhatsApp are among the best ways to keep your digital conversations as private as possible. But if you’re not careful with how those conversations are backed up, you can accidentally undermine your privacy.
When a conversation is properly encrypted end-to-end, it means that the contents of those messages are only viewable by the sender and the recipient. The organization that runs the messaging platform—such as Meta or Signal—does not have access to the contents of the messages. But it does have access to some metadata, like the who, where, and when of a message. Companies have different retention policies around whether they hold onto that information after the message is sent.
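To make this concrete, here is a minimal, illustrative sketch of the end-to-end model in Python using the PyNaCl library. This is not the protocol Signal or WhatsApp actually use (real messengers add forward secrecy and much more); it only shows the trust model described above: a relay server handling the message sees ciphertext and metadata, while only the recipient’s private key can recover the contents.

```python
# Illustrative sketch only: real messengers use richer protocols, but the
# core idea is the same: private keys never leave the users' devices.
from nacl.public import PrivateKey, Box

# Each person generates a key pair on their own device.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place at 7")

# This is all a relay server ever handles: ciphertext, plus metadata about
# who is talking to whom and when, never the message contents.
print(ciphertext.hex()[:48], "...")

# Only the recipient, holding their own private key, can decrypt it.
receiving_box = Box(recipient_key, sender_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the usual place at 7'
```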
What happens after the messages are sent and received is entirely up to the sender and receiver. If you’re having a conversation with someone, you may choose to screenshot that conversation and save that screenshot to your computer’s desktop or phone’s camera roll. You might choose to back up your chat history, either to your personal computer or maybe even to cloud storage (services like Google Drive or iCloud, or to servers run by the application developer).
Those backups do not necessarily have the same encryption protections as the chats themselves, and they may make those conversations—which were sent with strong, privacy-protecting end-to-end encryption—readable by whoever runs the cloud storage platform you’re backing up to, which also means that provider could hand them over at the request of law enforcement.
With that in mind, let’s take a look at how several of the most popular chat apps handle backups, and what options you may have to strengthen the security of those backups.
How Signal Handles Backups
The official Signal app doesn’t offer any way to back up your messages to a cloud server (some alternate versions of the app may provide this, but we recommend you avoid those, as no alternative offers the same level of security as the official app). Even if you use a device backup, like Apple’s iCloud backup, the contents of Signal messages are not included.
Instead, Signal supports a manual backup and restore option. Basically, messages are not backed up to any cloud storage, and Signal cannot access them, so the only way to transfer messages from one device to another is manually through a process that Signal details here. If you lose your phone or it breaks, you will likely not be able to transfer your messages.
How WhatsApp Handles Backups
WhatsApp can optionally back up the contents of chats to either a Google Account on Android or iCloud on iPhone, and you have a choice to back up with or without end-to-end encryption. Here are directions for enabling end-to-end encryption in those backups. When you do so, you’ll need to create a password or save a 64-digit key.
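For readers curious what an “end-to-end encrypted backup” means in practice, the sketch below illustrates the general principle in Python using the cryptography library. It assumes nothing about WhatsApp’s actual implementation, which is more involved; the point is simply that the backup is sealed with a key derived from a secret only you hold (your password or 64-digit key), so the cloud provider stores ciphertext it cannot read.

```python
# Rough, hypothetical sketch of password-protected backup encryption.
# Not WhatsApp's real scheme; it only illustrates the principle.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_backup(chat_history: bytes, password: str) -> dict:
    # Derive an encryption key from the user's password; only someone who
    # knows the password (or the 64-digit key) can re-derive it.
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key = kdf.derive(password.encode())

    # Encrypt the exported chats. The cloud provider stores only the salt,
    # nonce, and ciphertext, none of which reveal the conversations.
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, chat_history, None)
    return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

def decrypt_backup(blob: dict, password: str) -> bytes:
    # Re-derive the same key from the password and the stored salt.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=blob["salt"], iterations=600_000)
    key = kdf.derive(password.encode())
    return AESGCM(key).decrypt(blob["nonce"], blob["ciphertext"], None)

backup = encrypt_backup(b"[exported chat history]", "correct horse battery staple")
assert decrypt_backup(backup, "correct horse battery staple") == b"[exported chat history]"
```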
How Apple’s iMessages Handles Backups
Communication between people with Apple devices using Apple’s iMessage (blue bubbles in the Messages app) is end-to-end encrypted, but the backups of those conversations are not end-to-end encrypted by default. This is a loophole we’ve routinely demanded Apple close.
The good news is that with the release of the Advanced Data Protection feature, you can optionally turn on end-to-end encryption for almost everything stored in iCloud, including those backups (unless you’re in the U.K., where Apple is currently arguing with the government over demands to access data in the cloud, and has pulled the feature for U.K. users).
How Google Messages Handles Backups
Similar to Apple iMessages, Google Messages conversations are end-to-end encrypted only with other Google Messages users (you’ll know it’s enabled when there’s a small lock icon next to the send button in a chat).
You can optionally back up Google Messages to a Google Account, and as long as you have a passcode or lock screen password, the backup of the text of those conversations is end-to-end encrypted. A feature to turn on end-to-end encrypted backups directly in the Google Messages app, similar to how WhatsApp handles it, was spotted in beta last year but hasn’t been officially announced or released.
Everyone in the Group Chat Needs to Get Encrypted
Note that even if you take the extra step to turn on end-to-end encryption, everyone else you converse with would have to do the same to protect their own backups. If you have particularly sensitive conversations on apps like WhatsApp or Apple Messages, where those encrypted backups are an option but not the default, you may want to ask those participants to either not back up their chats at all, or turn on end-to-end encrypted backups.
Ask Yourself: Do I Need Backups Of These Conversations?
Of course, there’s a reason people want to back up their conversations. Maybe you want to keep a record of the first time you messaged your partner, or want to be able to look back on chats with friends and family. There should not be a privacy trade-off for those who want to save those conversations, but unfortunately you do need to weigh, as part of your security plan, whether saving your chats is worth the risk of them being exposed.
It’s also worth considering that we don’t typically need every conversation we have stored forever. Many chat apps, including WhatsApp and Signal, offer some form of “disappearing messages,” which deletes messages after a certain amount of time. This gets a little tricky with backups in WhatsApp: if you create a backup before a message disappears, it’ll be included in the backup, but deleted when you restore later. Those messages will remain in the backup until you back up again, which may be the next day, or may not be for many days if you don’t connect to Wi-Fi.
You can change these disappearing messaging settings on a per-conversation basis. That means you can choose to set the meme-friendly group chat with your friends to delete after a week, but retain the messages with your kids forever. Google Messages and Apple Messages don’t offer any such feature—but they should, because it’s a simple way to protect our conversations that gives more control over to the people using the app.
End-to-end encrypted chat apps are a wonderful tool for communicating safely and privately, but backups are always going to be a contentious part of how they work. Signal’s approach of not offering cloud storage for backups at all is useful for those who need that level of privacy, but is not going to work for everyone’s needs. Better defaults and end-to-end encrypted backups as the only option when cloud storage is offered would be a step forward, and a much easier solution than going through and asking every one of your contacts how or if they back up their chats.