Electronic Frontier Foundation

Federal Court Agrees: Prosecutors Can’t Keep Forensic Evidence Secret from Defendants

EFF - Fri, 02/26/2021 - 5:44pm

When the government tries to convict you of a crime, you have a right to challenge its evidence. This is a fundamental principle of due process, yet prosecutors and technology vendors have routinely argued against disclosing how forensic technology works.

For the first time, a federal court has ruled on the issue, and the decision marks a victory for civil liberties.

EFF teamed up with the ACLU of Pennsylvania to file an amicus brief arguing in favor of defendants’ rights to challenge complex DNA analysis software that implicates them in crimes. The prosecution and the technology vendor Cybergenetics opposed disclosure of the software’s source code on the grounds that the company has a commercial interest in secrecy.

The court correctly determined that this secrecy interest could not outweigh a defendant’s rights and ordered the code disclosed to the defense team. The disclosure will be subject to a “protective order” that bars further disclosure, but in a similar previous case, a court eventually allowed public scrutiny of the source code of a different DNA analysis program after the defense team found serious flaws.

This is the second decision this year ordering the disclosure of the secret TrueAllele software. This added scrutiny will help ensure that the software does not contribute to unjust incarceration.

From Creativity to Exclusivity: The German Government's Bad Deal for Article 17

EFF - Fri, 02/26/2021 - 1:25pm

The implementation process of Article 17 (formerly Article 13) of the controversial Copyright Directive into national laws is in full swing, and it does not look good for users' rights and freedoms. Several EU states have failed to present balanced copyright implementation proposals, ignoring the concerns of EFF, other civil society organizations, and experts that only strong user safeguards can help prevent Article 17 from turning tech companies and online service operators into copyright police.

A glimpse of hope was presented by the German government in a recent discussion paper. While the draft proposal fails to prevent the use of upload filters to monitor all user uploads and assess them against the information provided by rightsholders, it showed creativity by giving users the option of pre-flagging uploads as "authorized" (keeping them online by default) and by setting out exceptions for everyday uses. Remedies against abusive removal requests by self-proclaimed rightsholders were another positive feature of the discussion draft.

Inflexible Rules in Favor of Press Publishers

However, the recently adopted copyright implementation proposal by the German Federal Cabinet has abandoned the focus on user rights in favor of inflexible rules that only benefit press publishers. Instead of opting for broad and fair statutory authorization for non-commercial minor uses, the German government suggests trivial carve-outs for "uses presumably authorized by law," which are not supposed to be blocked automatically by online platforms. However, the criteria for such uses are narrow and out of touch with reality. For example, the limit for minor use of text is 160 characters.

By comparison, the maximum length of a tweet is 280 characters, which is barely enough substance for a proper quote. As those uses are only presumably authorized, they can still be disputed by rightsholders and blocked at a later stage if they infringe copyright. However, this did not prevent the German government from putting a price tag on such communication as service providers will have to pay the author an "appropriate remuneration." There are other problematic elements in the proposal, such as the plan to limit the use of parodies to uses that are "justified by the specific purpose"—so better be careful about being too playful.

The German Parliament Can Improve the Bill

It's now up to the German Parliament to decide whether to be more interested in the concerns of press publishers or in the erosion of user rights and freedoms. EFF will continue to reach out to Members of Parliament to help them make the right decision.

The SAFE Tech Act Wouldn't Make the Internet Safer for Users

EFF - Thu, 02/25/2021 - 7:17pm

Section 230, a key law protecting free speech online since its passage in 1996, has been the subject of numerous legislative assaults over the past few years. The attacks have come from all sides. One of the latest, the SAFE Tech Act, seeks to address real problems Internet users experience, but its implementation would harm everyone on the Internet. 

The SAFE Tech Act is a shotgun approach to Section 230 reform put forth by Sens. Mark Warner, Mazie Hirono and Amy Klobuchar earlier this month. It would amend Section 230 through the ever-popular method of removing platform immunity from liability arising from various types of user speech. This would lead to more censorship as social media companies seek to minimize their own legal risk. The bill compounds the problems it causes by making it more difficult to use the remaining immunity against claims arising from other kinds of user content. 

The act would not protect users’ rights in a way that is substantially better than current law. And it would, in some cases, harm marginalized users, small companies, and the Internet ecosystem as a whole. Our three biggest concerns with the SAFE Tech Act are: 1) its failure to capture the reality of paid content online, 2) the danger that an affirmative defense requirement creates and 3) the lack of guardrails around injunctive relief that would open the door for a host of new suits that simply remove certain speech.

Section 230 Benefits Everyone

Before considering what this bill would change, it’s useful to take a look at the benefits that Section 230 provides for all Internet users. The Internet today allows people everywhere to connect and share ideas—whether that’s for free on social media platforms and educational or cultural platforms like Wikipedia and the Internet Archive, or on paid hosting services like Squarespace or Patreon. Section 230’s legal protections benefit Internet users in two ways. 

Section 230 Protects Intermediaries That Host Speech: Section 230 enables services to host the content of other speakers—from writing, to videos, to pictures, to code that others write or upload—without those services generally having to screen or review that content before it is published. Without this partial immunity, all of the intermediaries who help the speech of millions and billions of users reach their audiences would face unworkable content moderation requirements that inevitably lead to large-scale censorship. The immunity has some important exceptions, including for violations of federal criminal law and intellectual property claims. But the legal immunity’s protections extend to services far beyond social media platforms. Thus everyone who sends an email, makes a Kickstarter, posts on Medium, shares code on GitHub, protects their site from DDoS attacks with Cloudflare, makes friends on Meetup, or posts on Reddit benefits from Section 230’s immunity for all intermediaries. 

Section 230 Protects Users Who Create Content: Section 230 directly protects Internet users who themselves act as online intermediaries from being held liable for the content created by others. So when people publish a blog and allow reader comments, for example, Section 230 protects them. This enables Internet users to create their own platforms for others’ speech, such as when an Internet user created the Shitty Media Men list that allowed others to share their own experiences involving harassment and sexual assault. 

The SAFE Tech Act Fails to Capture the Reality of Paid Content Online

In what appears to be an attempt to limit deceptive advertising, the SAFE Tech Act would amend Section 230 to remove the service’s immunity for user-generated content when that content is paid speech. According to the senators, the goal of this change is to stop Section 230 from applying to ads, “ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers with ads enabling frauds and scams.” 

But the language in the bill is much broader than just ads. The bill says Section 230’s platform immunity for user-generated content does not apply if “the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech.” This definition likely sweeps in much, much more of the Internet than advertising, and it is unclear how much paid or sponsored content it would cover. This change would undoubtedly force a massive, and dangerous, overhaul to Internet services at every level. 

Although much of the legislative conversation around Section 230 reform focuses on the dominant social media services that are generally free to users, most of the intermediaries people rely on involve some form of payment or monetization: from more obvious content that sits behind a paywall on sites like Patreon, to websites that pay for hosting from providers like GoDaddy, to the comment section of a newspaper only available to subscribers. If all companies that host speech online and whose businesses depend on user payments lose Section 230 protections, the relationship between users and many intermediaries will change significantly, in several unintended ways:

Harm to Data Privacy: Services that previously accepted payments from users may decide to change to a different business model based on collecting and selling users’ personal information. So in seeking to regulate advertising, the SAFE TECH Act may perversely expand the private surveillance business model to other parts of the Internet, just so those services can continue to maintain Section 230’s protections. 

Increased Censorship: Those businesses that continue to accept payments will have to make new decisions about what speech they can risk hosting and how they vet users and screen their content. They would be forced to monitor and filter all content that appears whenever money has changed hands—a dangerous and unworkable solution that would make much important speech disappear, and would turn everyone from web hosts to online newspapers into censors. The only other alternative—not hosting user speech—would also not be a step forward. 

As we’ve said many times, censorship has been shown to amplify existing imbalances in society. History shows us that when faced with the prospect of having to defend lawsuits, online services (like offline intermediaries before them) will opt to remove and reject user speech rather than try to defend it, even when it is strongly defensible. These decisions, as history has shown us, are applied disproportionately against the speech of marginalized speakers. Immunity, like that provided by Section 230, alleviates that prospect of having to defend such lawsuits. 

Unintended Burdens on a Complex Ecosystem: While minimizing dangerous or deceptive advertising may be a worthy goal, and even if the SAFE Tech Act were narrowed to target ads in particular, it would not only burden sites like Facebook that function as massive online advertising ecosystems; it would also burden the numerous companies that comprise the complex online advertising ecosystem. There are numerous intermediaries between the user seeing an ad on a website and the ad going up. It is unclear which companies would lose Section 230 immunity under the SAFE TECH Act; arguably it would be all of them. The bill doesn’t reflect or account for the complex ways that publishers, advertisers, and scores of middlemen actually exchange money in today’s online ad ecosystem, which often happens in a split second through Real-Time Bidding protocols. It also doesn’t account for more nuanced advertising regimes. For example, how would an Instagram influencer—someone who is paid by a company to share information about a product—be affected by this loss of immunity? No money has changed hands with Instagram, and therefore one can imagine influencers and other more covert forms of advertising becoming the norm to protect advertisers and platforms from liability. 

For a change in Section 230 to work as intended and not spiral into a mass of unintended consequences, legislators need to have a greater understanding of the Internet ecosystem of paid and unpaid content, and the language needs to be more specifically and narrowly tailored.

The Danger That an Affirmative Defense Requirement Creates 

The SAFE Tech Act also would alter the legal procedure around when Section 230’s immunity for user-generated content would apply in a way that would have massive practical consequences for users’ speech. Many people upset about user-generated content online bring cases against platforms, hosts, and other online intermediaries. Congressman Devin Nunes’ repeated lawsuits against Twitter for its users’ speech are a prime example of this phenomenon. 

Under current law, Section 230 operates as a procedural fast-lane for online services—and users who publish another user’s content—to get rid of frivolous lawsuits. Platforms and users subjected to these lawsuits can move to dismiss the cases before having to even respond to the legal complaint or going through the often expensive fact-gathering portion of a case, known as discovery. Right now, if it’s clear from the face of a legal complaint that the underlying allegations are based on a third party’s content, the statute’s immunity requires that the case against the platform or user who hosted the complained-of content be dismissed. Of course, this has not stopped plaintiffs from bringing (often unmeritorious) lawsuits in the first place. But in those cases, Section 230 minimizes the work the court must go through to grant a motion to dismiss the case, and minimizes costs for the defendant. This protects not only platforms but users; it is the desire to avoid litigation costs that leads intermediaries to default to censoring user speech.

The SAFE Tech Act would subject both provider and user defendants to much more protracted and expensive litigation before a case could be dismissed. By downgrading Section 230’s immunity to an “affirmative defense … that an interactive computer service provider has a burden of proving by a preponderance of the evidence,” defendants could no longer use Section 230 to dismiss cases at the beginning of a suit and would be required to prove—with evidence—that Section 230 applies. Right now, Section 230 saves companies and users significant legal costs when they are subjected to frivolous lawsuits. With this change, even if the defendant ultimately prevails against a plaintiff’s claims, they will have to defend themselves in court for longer, driving up their costs.

The increased legal costs of even meritless lawsuits will have serious consequences for users’ speech. An online service that cannot quickly get out of frivolous litigation based on user-generated content is likely going to take steps to prevent such content from becoming a target of litigation in the first place, including screening user’s speech or prohibiting certain types of speech entirely. And in the event that someone upset by a user’s speech sends a legal threat to an intermediary, the service is likely to be much more willing to remove the speech—even when it knows the speech cannot be subject to legal liability—just to avoid the new, larger expense and time to defend against the lawsuit.

As a result, the SAFE Tech Act would open the door for a host of new suits that by design are not filed to vindicate a legal wrong but simply to remove certain speech from the Internet—also called SLAPP lawsuits. These would remove a much greater volume of speech that does not, in fact, violate the law. Large services may find ways to absorb these new costs. But for small intermediaries and growing platforms that may be competing with those large companies, a single costly lawsuit, even if the defendant small company eventually prevails, may be the difference between success and failure. This is not to mention the many small businesses who use social media to market their company or service and to respond to (and moderate) comments on their pages or sites, and who would likely be in danger of losing immunity from liability under this change. 

No Guardrails Around Injunctive Relief Would Open the Door to Dangerous Takedowns

The SAFE Tech Act also modifies Section 230’s immunity in another significant way, by permitting aggrieved individuals to seek non-monetary relief from platforms hosting content that has harmed them. Under the bill, Section 230 would not apply when a plaintiff seeks injunctive relief to require an online service to remove or restrict user-generated content that is “likely to cause irreparable harm.” 

This extremely broad change may be designed to address a legitimate concern about Section 230. Some people who are harmed online simply want the speech taken down instead of seeking monetary compensation. But while giving certain Internet users an effective remedy that they currently lack under Section 230, the SAFE Tech Act’s injunctive relief carveout fails to account for how the provision will be misused to suppress lawful speech.

The SAFE Tech Act’s language appears to permit enforcement of all types of injunctive relief at any stage in a case. Litigants often seek emergency and temporary injunctive relief at an extremely early stage of the case, and judges frequently grant it without giving the speaker or platform an opportunity to respond. Courts already issue these kinds of takedown orders against online platforms, and they are prior restraints in violation of the First Amendment. If Section 230 does not bar these types of preliminary takedown orders, plaintiffs are likely to misuse the legal system to force down legal content without a final adjudication about the actual legality of the user-generated content.

Also, the injunctive relief carveout could be abused in another type of case, known as a default judgment, to remove speech without any judicial determination that the content is illegal. A default judgment occurs when the defendant does not fight the case, allowing the plaintiff to win without any examination of the underlying merits. In many cases, defendants avoid litigation simply because they don’t have the time or money for it. 

Because of their one-sided nature, default judgments are subject to great fraud and abuse. Others have documented the growing phenomenon of fraudulent default judgments, typically involving defamation claims, in which a meritless lawsuit is crafted for the specific purpose of getting a default judgment and avoiding any consideration of the merits. If the SAFE Tech Act were to become law, fraudulent lawsuits like these would be incentivized and become more common, because Section 230 would no longer provide a barrier against their use to legally compel intermediaries to remove lawful speech.

A recent Section 230 case called Hassell v. Bird illustrates how a broad injunctive relief carveout that applied to default judgments would incentivize censorship of protected user speech. In Hassell, a lawyer sued a user of Yelp (Bird) who gave her law office a bad review, claiming defamation. The court never ruled on whether the speech was defamatory, but because the reviewer did not defend the lawsuit, the trial judge entered a default judgment against the reviewer, ordering the removal of the post. Section 230 was what prevented a court from ordering Yelp itself to remove the post. 

Despite the potential for litigants to abuse the SAFE Tech Act’s injunctive relief carveout, the bill contains no guardrails for online intermediaries hosting legitimate speech targeted for removal. As it stands, the injunctive relief exception to Section 230 poses a real danger to legitimate speech. 

In Conclusion, For Safer Tech, Look Beyond Section 230

This only scratches the surface of the SAFE Tech Act. But the bill’s shotgun approach to amending Section 230, and the broadness of its language, make it impossible to support as it stands. 

If legislators take issue with deceptive advertisers, they should use existing laws to protect users from them. Instead of making sweeping changes to Section 230, they should update antitrust law to stop the flood of mergers and acquisitions that have made competition in Big Tech an illusion, creating many of the problems we see in the first place. If they want to make Big Tech more responsive to the concerns of consumers, they should pass a strong consumer data privacy law with a robust private right of action.

If they disagree with the way that large companies like Facebook benefit from Section 230, they should carefully consider that changes to Section 230 will mostly burden smaller platforms and entrench the large companies that can absorb or adapt to the new legal landscape (large companies continue to support amendments to Section 230, even as they simultaneously push back against substantive changes that would actually protect users but harm their bottom line). Addressing Big Tech’s surveillance-based business models can’t, and shouldn’t, be done through amendments to Section 230—but that doesn’t mean it shouldn’t be done at all. 

It’s absolutely a problem that just a few tech companies wield such immense control over what speakers and messages are allowed online. And it’s a problem that those same companies fail to enforce their own policies consistently or offer users meaningful opportunities to appeal bad moderation decisions. But this bill would not create a fairer system.

Virginia's Weak Privacy Bill Is Just What Big Tech Wants

EFF - Thu, 02/25/2021 - 6:31pm

Virginia’s legislature has passed a bill meant to protect consumer privacy—but the bill, called the Virginia Consumer Data Protection Act, really protects the interests of business far more than the interests of everyday consumers.

Take Action

Virginia: Speak Up for Real Privacy

The bill, which both Microsoft and Amazon supported, is now headed to the desk of Governor Ralph Northam. This week, EFF joined with the Virginia Citizens Consumer Council, Consumer Federation of America, Privacy Rights Clearinghouse, and U.S. PIRG to ask for a veto on this bill, or for the governor to add a reenactment clause—a move that would send the bill back to the legislature to try again.

If you’re in Virginia and care about true privacy protections, let the governor know that the VCDA doesn’t give consumers the protections they need. In fact, it stacks the deck against them: it offers an “opt-out” framework that doesn’t protect privacy by default; it allows companies to force consumers who exercise their privacy rights to pay higher prices or accept a lower quality of service; and it offers no meaningful enforcement—making it very unlikely that consumers will be able to hold companies to account if any of the few rights this bill grants them are violated.

As passed by the legislature, the bill is set to go into effect in 2023 and will establish a working group to make improvements between now and then. That offers some chance for improvements—but it likely won’t be enough to get real consumer protections. As we noted in a joint press release, “These groups appreciate that Governor Northam’s office has engaged with the concerns of consumer groups and committed to a robust stakeholder process to improve this bill. Yet the fundamental problems with the CDPA are too big to be fixed after the fact.”

Consumer privacy rights must be the foundation of any real privacy bill. The CDPA was written without meaningful input from consumer advocates; in fact, as Protocol reported, it was handed to the bill’s sponsor by an Amazon lobbyist. Some have suggested the Virginia bill could be a model for other states or for federal legislation. That’s bad for Virginia and bad for all of us.

Virginians, it’s time to take a stand. Tell Governor Northam that this bill is not good enough, and urge him to veto it or send it back for another try.  

Interoperability Gains Support at House Hearing on Big Tech Competition

EFF - Thu, 02/25/2021 - 5:26pm

With a new year and a new Congress, the House of Representatives’ subcommittee covering antitrust has turned its attention to “reviving competition.” On Thursday, the first in a series of hearings was held, focusing on how to help small businesses challenge Big Tech. One very good idea kept coming up, backed by both parties. And it is one EFF also considers essential: interoperability.

This was the first hearing since the House Judiciary Committee issued its antitrust report from its investigation into the business practices of Big Tech companies. This week’s hearing was exclusively focused on how to re-enable small businesses to disrupt the dominance of Big Tech. A critical dynamic, which EFF calls the life cycle of competition, has vanished from the Internet: small new entrants no longer seek (nor could, even if they tried) to displace well-established giants, but rather seek to be acquired by them.

Strong Bipartisan Support for Interoperability

Across the committee, Members of Congress appeared to agree that some means of requiring Big Tech to grant access to competitors through interoperability will be an essential piece of the competition puzzle. The need is straightforward: the larger these networks become, the more their value rises, making it harder for a new business to enter into direct competition. One expert witness, Public Knowledge’s Competition Policy Director Charlotte Slaiman, noted that these “network effects” mean that a company with double the network size of a competitor isn’t merely twice as attractive to users—it is exponentially more attractive.
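The arithmetic behind this point can be made concrete with Metcalfe's law, a common (and admittedly rough) rule of thumb that values a network in proportion to its number of possible user-to-user connections. A minimal sketch, assuming that model rather than anything specific from the testimony:

```python
# Back-of-the-envelope network-effects math, assuming Metcalfe's law
# (value grows with the number of possible user pairs). A rough model
# for intuition only, not a claim about how any real company is valued.
def network_value(users: int) -> int:
    return users * (users - 1) // 2  # number of possible connections

small, big = 1_000_000, 2_000_000
print(network_value(big) / network_value(small))  # ~4.0
```

Double the users and the value roughly quadruples—which is why a challenger with half the network is far more than half as far behind.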

But even in cases where you have large competitors with sizeable networks, Big Tech companies are using their dominance in other markets as a means to push out existing competitors. One of the most powerful testimonies in favor of interoperability provided to Congress was by the CEO of Mapbox, Eric Gunderson, who detailed how Google is leveraging its dominance in search to exert dominance in Google Maps. Specifically, through a colorful trademark “brand confusion” contract term, Google requires developers who wish to use Google Search to integrate their products only with Google Maps. Mr. Gunderson made clear that this tying of products that really do not need to be tied together at all is not only foreclosing on market opportunities for Mapbox, but also forcing Mapbox’s existing clients to abandon anything that doesn’t use Google Maps outright.

The solution to this type of corporate incumbent anticompetitive behavior is not revolutionary and has deep roots in tech history. As Ranking Member Ken Buck (R-CO) stated, “interoperability is a time-honored practice in the tech industry that allows competing technologies to speak to one another so that consumers can make a choice without being locked into any one technology.” We at EFF have long agreed that interoperability will be essential to reopening the Internet market to vibrant competition and recently published a white paper laying out in detail how we can get to a more competitive future. Seeing growing consensus from Congress is encouraging, but doing it right will require careful calibration in policy.

EFF Joins Dozens of Organizations Urging More Government Transparency

EFF - Thu, 02/25/2021 - 1:21pm

EFF has joined 42 other organizations, including the ACLU, the Knight Institute, and the National Security Archive, in calling for the new Biden administration to fulfill its promise to “bring transparency and truth back to government.” 

Specifically, these organizations are asking the administration and the federal government at large to update policy and implementation regarding the collection, retention, and dissemination of public records as dictated in the Freedom of Information Act (FOIA), the Federal Records Act (FRA), and the Presidential Records Act (PRA).

Our call for increased transparency with the administration comes in the wake of many years of extreme secrecy and increasingly unreliable enforcement of record retention and freedom of information laws. 

The letter requests that the Biden administration take the following actions:

  • Emphasize to All Federal Employees the Obligation to Give Full Effect to Federal Transparency Laws.
  • Direct Agencies to Adopt New FOIA Guidelines That Prioritize Transparency and the Public Interest.
  • Direct DOJ to Fully Leverage its Central Role in Agencies’ FOIA Implementation. 
  • Issue New FOIA Guidance by the Office of Management and Budget (OMB) and Update the National FOIA Portal.
  • Assess, Preserve, and Disclose the Key Records of the Previous Administration. 
  • Champion Funding Increases for the Public Records Laws.
  • Endorse Legislative Improvements for the Public Records Laws.
  • Embrace Major Reforms of Classification and Declassification. 
  • Issue an Executive Order Reforming the Prepublication Review System. 

You can read the full letter here: 

Coded Resistance: Freedom Fighting and Communication

EFF - Wed, 02/24/2021 - 7:55pm

It’s nearing the end of Black History Month, and that history is inherently tied to strife, resistance, and organizing related to government surveillance and oppression. Even though programs like COINTELPRO are more well-known now, the other side of these stories is the way the Black community has fought back through intricate networks and communication aimed at avoiding surveillance.

The Borderland Network

The Trans-Atlantic Slave Trade was a dark, cruel time in the history of much of the Americas. The horrors of slavery still cast their shadow through systemic racism today. One of the biggest obstacles enslaved Africans faced when trying to organize and fight was the fact that they were closely watched, along with being separated, abused, tortured, and brought onto a foreign land to work until their death for free. They often spoke different languages from each other and had different cultures and beliefs. Organizing under these conditions seemed impossible. Yet even under these conditions, including overbearing surveillance, they developed a way to fight back. Much of this is attributed to the brilliance of these Africans, who used everything they had to develop communications with each other under chattel slavery. The continued fight today reflects much of the history that was established from dealing with censorship and authoritarian surveillance.

“The white folks down south don’t seem to sleep much, nights. They are watching for runaways, and to see if any other slaves come among theirs, or theirs go off among others.” - Former Runaway, Slavery’s Exiles - Sylviane A. Diouf

As Sylviane Diouf chronicled in the book Slavery’s Exiles, slavery was not only catastrophic for many Africans; it was also, thankfully, never a peaceful time for white owners and overseers. Those captured from Africa and brought to the Americas seldom gave their captors a night of rest. Through rebellion, resistance, and individual sabotage of everyday life during this horrible period, freedom remained an objective. And with that objective came a deep history of secret communications and cunning intelligence.

Runaways often returned to plantations at night for years, unnoticed and undetected, mostly to stay connected to family or relay information. One married couple, as Diouf tells it, had a simple yet effective signaling system in which the wife placed a garment in a particular spot that was visible from her husband’s covert. Ben and his wife (whose name is unknown) had other systems in place if it was too dark to see: for example, shining a bright light through the cracks in their cabin for an instant, and then repeating it at intervals of two or three minutes, three or four times.

These close-proximity runaways were deemed “Borderland Maroons.” They’d create tight networks of communication from plantation to plantation. Information, like the amount of reward for capture and punishment, traveled quickly through the grapevine of the Borderland Maroons. Based on this intelligence, many would make plans around either traveling away completely or staying around longer to gather others. Former Georgia Delegates from the Continental Congress recounted:

“The negroes have a wonderful art of communicating intelligence among themselves; it will run several hundred miles in a week or fortnight.”

These networks often kept runaways out of captivity for years, and thus preserved their ability to maintain a network among the enslaved. Coachmen, draymen, boatmen, and others who were allowed to move around off plantations were the backbone of this chain of intelligence. The shadow network of the Borderlands was the entry point of organizing for potential runaways, so even if someone was captured, they could tap into this network again later. No one would be getting rest or sleep. As Diouf recounts, keeping a high level of surveillance took a lot of resources from the slaveholders, and that fact was well-exploited by the enslaved.

Moses

Perhaps the most famous artisan of secret communications during this period is the venerable Harriet Tubman. Her character and will are undisputed, and her impeccable timing and remarkable intuition strengthened the Underground Railroad.

Dr. Bryan Walls notes much of her written and verbal communication was through plain language that acted as a metaphor:

  • “tracks” (routes fixed by abolitionist sympathizers)
  • “stations” or “depots” (hiding places)
  • “conductors” (guides on the Underground Railroad)
  • “agents” (sympathizers who helped the slaves connect to the Railroad)
  • “station masters” (those who hid slaves in their homes)
  • “passengers,” “cargo,” “fleece,” or “freight” (escaped slaves)
  • “tickets” (indicated that slaves were traveling on the Railroad)
  • “stockholders” (financial supporters who donated to the Railroad)
  • “the drinking gourd” (the Big Dipper constellation—a star in this constellation pointed to the North Star, located on the end of the Little Dipper’s handle)

The most famous example of verbal communication on plantations was the usage of song. The tradition of verbal history and storytelling remained strong among the enslaved, and acted as a way to “hide in plain sight”. Tubman said she changed the tempo of the songs to indicate whether it was safe to come out or not.

Harriet Tubman famously claimed that she “never lost a passenger.” This rang true not only as she freed others, but also when she acted as a spy aiding the Union during the Civil War. As the first and only woman to organize and lead a military operation during the Civil War, she solidified her reputation as an expert in espionage. Her information was so detailed and accurate that it often saved Black troops in the Union from harm.

Many of these tactics won’t be found written down; they were passed verbally. It was illegal or prohibited for Black people to read and write, so writing more traditional ciphertext as communications would have been a lethal risk.

Language as Resistance

Even though language was a barrier in the beginning and written communication was out of the question, over time English was forced onto enslaved Africans, and many found a way to each other by creating an entirely new language of their own—Creole. There are many different kinds of Creole across the African Diaspora, which served not only as a way to communicate and develop a “home” language-wise, but also as a way to communicate information to each other under the eyes of overseers.

"Anglican clergy were still reporting that Africans spoke little or no English but stood around in groups talking among themselves in “strange languages". ([Lorena] Walsh 1997:96–97)  -  Notes on the Origins and Evolution of African American Language

Coded Resistance in the African Diaspora

Of course, resistance against slavery didn’t just occur in the U.S., but also in Central and South America. Under domineering surveillance, many tactics had to be devised quickly and planned under the eye of white supremacy. Quilombos, or what can be viewed as the “Maroons” of Brazil, developed a way to fight against the Portuguese rule of that time:

“Prohibited from celebrating their cultural customs and strictly forbidden from practicing any martial arts, capoeira is thought to have emerged as a way to bypass these two imposing laws.” - Disguised in Dance: The Secret History of Capoeira

The rebellions in Jamaica, Haiti, and Mexico had extensive planning. They were not, as they are sometimes portrayed, merely the product of spontaneous and rightful rage against their oppressors. Some rebellions, such as Tacky’s War in Jamaica, were documented to be in the works for over a year before the first strike.

Modern Communication, Subversion, and Circumvention

Radio

As technology progressed, the oppressed adapted. During the height of the Civil Rights Movement, radio became an integral part of informing supporters of the movement. While churches may have been centers of gathering outside of worship, the radio was present even in these churches to give signals and other vital info. As Brian Ward notes in Radio and the Struggle for Civil Rights in the South, this info was conveyed in covert ways as well, such as reporting traffic jams to indicate police roadblocks.

Radio made information accessible to those who could not afford newspapers or who were denied access to literacy education due to Jim Crow. Black DJs relayed information about protests, misinformation, and police checkpoints. Keeping the community informed and as safe as possible became these DJs’ mission outside of music and propelled them into civic engagement, from protest to walking new Black voters through the voting procedure and system. Radio became a central place to enter a different world past Jim Crow.

WATS Phone Lines

Wide Area Telephone Service (WATS) also became a vital tool for the Civil Rights Movement to disperse information during important moments that often meant life or death. To circumvent the monopolistic Bell System (“Ma Bell”), which only employed white operators and colluded with law enforcement, vital civil rights organizations used WATS phone lines. These were dedicated, paid lines, such as 800 numbers, that patched directly through to organizations like the Student Nonviolent Coordinating Committee (SNCC), Congress of Racial Equality (CORE), Council of Federated Organizations (COFO), and the Southern Christian Leadership Conference (SCLC). These organizations’ bases had code names to refer to when relaying information to another base, either via WATS or radio.

CORE Radio Rules, Dick Tinsley. CORE

SNCC WATS Line Instructions & Policies, James Forman. SNCC. June 24-26, 1964

Looking at Today: Reverse Surveillance

While Black and other marginalized communities still struggle to communicate despite surveillance, we do have digital tools to help. With encryption widely available, we can now use protected communications with each other for sensitive information. Of course, not everyone today is free to roam or use these services equally. Encryption itself is also under constant risk of being undermined in different areas of the world. Technology can feel nefarious, and “Big Tech” seems to have a constant eye on millions.

In addition, just as with the DJs of the past, current activists like those in Black Lives Matter have used this hypervisibility under Big Tech to get police brutality highlighted in the mainstream conversation and in real life. The world has seen police brutality up close because of on-site video and live recordings from phones and police scanners. Databases like EFF’s Atlas of Surveillance increasingly map police technology in your city. And all of us, whether activists or not, can use tools to scan for the probing of communications during protests.

Atlas of Surveillance Map of Police Technology https://atlasofsurveillance.org/atlas, 2021-2-24

The Black community has been fighting what essentially is the technological militarization of the police force since the 1990s. While the struggle continues, we have seen recent wins: police use of facial recognition technology is now being limited or banned in many areas of the U.S. With support from groups around the country, we can help close this especially dangerous window of surveillance. 

Being able to communicate with each other and organize is embedded in the roots of resistance around the world, but it has a long and important history in the Black community in the United States. Whether online or off, we are keeping a public eye on those who are sworn to serve and protect us, with the hope one day we can freely move without the chains of surveillance and white supremacy. Until then, we’ll continue to see, and to celebrate, the spirit of resistance as well as the creativity of efforts to build and keep a strong line of communication despite surveillance and repression.

Happy Black History Month.

Student Surveillance Vendor Proctorio Files SLAPP Lawsuit to Silence A Critic

EFF - Tue, 02/23/2021 - 4:31pm

During the pandemic, a dangerous business has prospered: invading students’ privacy with proctoring software and apps. In the last year, we’ve seen universities compel students to download apps that collect their face images, driver’s license data, and network information. Students who want to move forward with their education are sometimes forced to accept being recorded in their own homes and having the footage reviewed for “suspicious” behavior.

Given these invasions, it’s no surprise that students and educators are fighting back against these apps. Last fall, Ian Linkletter, a remote learning specialist at the University of British Columbia, became part of a chorus of critics concerned with this industry.

Now, he’s been sued for speaking out. The outrageous lawsuit—which relies on a bizarre legal theory that linking to publicly viewable videos is copyright infringement—will become an important test of a 2019 British Columbia law passed to defend free speech, the Protection of Public Participation Act, or PPPA.

Sued for Linking

This isn’t the first time U.S.-based Proctorio has taken a particularly aggressive tack in responding to public criticism. In July, Proctorio CEO Mike Olsen even publicly posted on Reddit the chat logs of a student who had complained about the software’s support, a move he later apologized for.

Shortly after that, Linkletter dove deep into analyzing the software that many students at his university were being forced to adopt. He became concerned about what Proctorio was—and wasn’t—telling students and faculty about how its software works.

In Linkletter’s view, customers and users were not getting the whole story. The software performed all kinds of invasive tracking, like watching for “abnormal” eye movements, head movements, and other behaviors branded suspicious by the company. The invasive tracking and filming were of great concern to Linkletter, who was worried about students being penalized academically on the basis of Proctorio’s analysis.

“I can list a half dozen conditions that would cause your eyes to move differently than other people,” Linkletter said in an interview with EFF. “It’s a really toxic technology if you don’t know how it works.”

In order to make his point clear, Linkletter published some of his criticism on Twitter, where he linked to Proctorio’s own published YouTube videos describing how their software works. In those videos, Proctorio describes its own tracking functions. The videos described functions with titles like “Behaviour Flags,” “Abnormal Head Movement,” and “Record Room.”

Instead of replying to Linkletter’s critique, Proctorio sued him. Even though Linkletter didn’t copy any Proctorio materials, the company says Linkletter violated Canada’s Copyright Act just by linking to its videos. The company also said those materials were confidential, and alleged that Linkletter’s tweets violated the confidentiality agreement between UBC and Proctorio, since Linkletter is a university employee. 

Test of New Law

Proctorio’s legal attack on Ian Linkletter is meritless. It’s a classic SLAPP, an acronym that stands for Strategic Lawsuit Against Public Participation. Fortunately, British Columbia’s PPPA is an “anti-SLAPP” law—a type of law that’s being widely adopted throughout U.S. states and also exists in two Canadian provinces. In Canada, anti-SLAPP laws typically allow a defendant to bring an early challenge to the lawsuit against them on the basis that their speech is on a topic of “public interest.” If the court accepts that characterization, the court shall dismiss the action—unless the plaintiff can prove that their case has substantial merit, the defendant has no valid defense, and that the public interest in allowing the suit to continue outweighs the public’s interest in protecting the expression. That’s a very high bar for plaintiffs and changes the dynamics of a typical lawsuit dramatically.

Without anti-SLAPP laws, well-funded companies like Proctorio are often able to litigate their critics into silence—even in situations where the critics would have prevailed on the legal merits.

“Cases like this are exactly why anti-SLAPP laws were invented,” said Ren Bucholz, a litigator in Toronto. 

Linkletter should prevail here. It isn’t copyright infringement to link to a published video on the open web, and the fact that Proctorio made the video “unlisted” doesn’t change that. Even if Linkletter had copied parts or all of the videos—which he did not—he would have broad fair dealing rights (similar to U.S. "fair use" rights) to criticize the software that has put many UBC students under surveillance in their own homes.

Linkletter had to create a GoFundMe page to pay for much of his legal defense. But Proctorio’s bad behavior has inspired a broad community of people to fight for better student privacy rights, and hundreds of people donated to Linkletter’s defense fund, which raised more than $50,000. And the PPPA gives him a greater chance of getting his fees back. 

We hope the PPPA is proven effective in this, one of its first serious tests, and that lawmakers in both the U.S. and Canada adopt laws that prevent such abuses of the litigation system. Meanwhile, Proctorio should cease its efforts to muzzle critics from Vancouver to Ohio.

Legal documents

How Do Copyright Rules Affect Internet Creators? And What Can They Do About It?

EFF - Fri, 02/19/2021 - 2:10pm

If you make and share things online, professionally or for fun, you’ve been affected by copyright law. You may use a service that depends on the Digital Millennium Copyright Act (DMCA) in order to survive. You may have gotten a DMCA notice if you used part of a movie, TV show, or song in your work. You have almost certainly run up against the weird and draconian world of copyright filters like YouTube’s Content ID. EFF wants to help.

The end of last year was a flurry of copyright news, from the mess with Twitch to the “#StopDMCA” campaign that took off as new copyright proposals became law. The new year has proven that this issue is not going away, as a story emerged about cops using music in what looked like an attempt to trigger copyright filters to take videos of them offline. And throughout the pandemic, people stuck at home have tried to move their creativity online, only to find filters standing in their way. Enough is enough.

Next Friday, February 26th, at 10 AM Pacific, EFF will be hosting a town hall for Internet creators. There have been a lot of actual and proposed changes to copyright law that you should know about and be able to ask questions about.

We will go over the copyright laws that got snuck into the omnibus spending package at the end of last year and what they mean for you. We will also use what we learned in writing our whitepaper on Content ID to help creators understand how it works and what to do with it. Finally, we will talk about the latest copyright proposal, the Digital Copyright Act, and how dangerous it is for online creativity. Most importantly, we will give you a way to stay informed and fight back.

Half of the 90-minute town hall will be devoted to answering your questions and hearing your concerns. Please join us for a conversation about the state of copyright in 2021 and what you need to know about it.

RSVP

Cops Using Music to Try to Stop Being Filmed Is Just the Tip of the Iceberg

EFF - Fri, 02/19/2021 - 1:42pm

Someone tries to livestream their encounter with the police, only to find that the police start playing music. In the case of a February 5 meeting between an activist and the Beverly Hills Police Department, the song of choice was Sublime’s “Santeria.” The police may not got no crystal ball, but they do seem to have an unusually strong knowledge of copyright filters.

The timing of music being played when a cop saw he was being filmed was not lost on people. It seemed likely that the goal was to trigger Instagram’s over-zealous copyright filter, which would shut down the stream based on the background music and not the actual content. It’s not an unfamiliar tactic, and it’s unfortunately one based on the reality of how copyright filters work.

Copyright filters are generally more sensitive to audio content than audiovisual content. That sensitivity causes real problems for people performing, discussing, or reviewing music online. It’s a problem of mechanics: it is easier for filters to find a match on a piece of audio material alone than on a full audiovisual clip. And then there is the likelihood that a filter is merely checking to see whether a few seconds of a video file seem to contain a few seconds of an audio file.
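To illustrate why a few seconds of background music can sink a whole stream, here is a deliberately naive sketch of audio-only matching. Real systems like Content ID are proprietary and far more sophisticated; the landmark scheme, function names, and thresholds below are our own illustrative assumptions:

```python
# A toy audio matcher: reduce audio to coarse spectral "landmarks" and
# flag a stream whose landmarks stay inside a reference track's set for
# a sustained run. Not any vendor's actual algorithm.
import numpy as np

SR = 8000        # sample rate (Hz)
FRAME = 2048     # ~0.25 s analysis frames

def dominant_bins(samples):
    """One coarse landmark per frame: the loudest FFT bin."""
    bins = []
    for i in range(len(samples) // FRAME):
        frame = samples[i * FRAME : (i + 1) * FRAME]
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(FRAME)))
        bins.append(int(spectrum.argmax()))
    return bins

def flags_stream(stream, reference, min_run=8):
    """Flag if the stream's landmarks match the reference's landmark
    set for min_run consecutive frames (~2 s of audio)."""
    ref_set = set(dominant_bins(reference))
    run = 0
    for b in dominant_bins(stream):
        run = run + 1 if b in ref_set else 0
        if run >= min_run:
            return True
    return False

def tone(freq, seconds):
    t = np.arange(int(SR * seconds)) / SR
    return np.sin(2 * np.pi * freq * t)

# Toy data: a "song" that walks through eight tones, and a stream of
# background noise with four seconds of that song in the middle.
song = np.concatenate(
    [tone(b * SR / FRAME, 1.0) for b in (64, 80, 96, 72, 64, 112, 80, 104)])
noise = np.random.default_rng(0).normal(0.0, 0.3, SR * 4)
stream = np.concatenate([noise, song[SR * 2 : SR * 6], noise])

print(flags_stream(stream, song))                          # True
print(flags_stream(np.concatenate([noise, noise]), song))  # False
```

Note that the comparison never looks at anything but the audio landmarks: whatever else the stream contains, a couple of seconds of recognizable music is enough to trip the flag.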

It’s part of why playing music is a better way of getting a video stream you don’t want seen shut down. (The other part is that playing music is easier than walking around with a screen playing a Disney film in its entirety. Much fun as that would be.)

The other side of the coin is how difficult filters make it for musicians to perform music that no one owns. For example, classical musicians filming themselves playing public domain music—compositions that they have every right to play, as they are not copyrighted—trigger many matches. This is because major rightsholders or tech companies have put many examples of copyrighted performances of these songs into the system. It does not seem to matter whether the video shows a different performer playing the song—the match is made on audio alone. This drives lawful use of material offline.

Another problem is that people may have licensed the right to use a piece of music or are using a piece of free music that another work also used. And if that other work is in the filter’s database, it’ll make a match between the two. This results in someone who has all the rights to a piece of music being blocked or losing income. It’s a big enough problem that, in the process of writing our whitepaper on YouTube’s copyright filter, Content ID, we were told that people who had experienced this problem had asked for it to be included specifically.

Filters are so sensitive to music that it is very difficult to make a living discussing music online. The difficulty of getting music clips past Content ID explains the dearth of music commentators on YouTube. It is common knowledge among YouTube creators, with one saying “this is why you don’t make content about music.”

Criticism, commentary, and education of music are all areas that are legally protected by fair use. Using parts of a thing you are discussing to show what you mean is part of effective communication. And while the law does not make fair use of music more difficult to prove than any other kind of work, filters do.

YouTube’s filter does something even more insidious than simply taking down videos, though. When it detects a match, it allows the label claiming ownership to take part or all of the money that the original creator would have made. So a video criticizing a piece of music ends up enriching the party being critiqued. As one music critic explained:

Every single one of my videos will get flagged for something and I choose not to do anything about it, because all they’re taking is the ad money. And I am okay with that, I’d rather make my videos the way they are and lose the ad money rather than try to edit around the Content ID because I have no idea how to edit around the Content ID. Even if I did know, they’d change it tomorrow. So I just made a decision not to worry about it.

This setup is also how a ten-hour white noise video ended up with five copyright claims against it. This taking-from-the-poor-and-giving-to-the-rich is a blatantly absurd result, but it’s the status quo on much of YouTube.

A particularly tech-savvy group, like the police, could easily figure out which songs result in videos being removed rather than just having the money taken. Internet creators talk on social media about the issues they run into and from whom. Some rightsholders are infamously controlling and litigious.

Copyright should not be a fast-track to getting speech removed that you do not like. The law is meant to encourage creativity by giving artists a limited period of exclusive rights to their creations. It is not a way to make money off of criticism or a loophole to be exploited by authorities.

Racial and Immigrant Justice Groups Sue Government for Records of COVID-19 Data Surveillance

EFF - Fri, 02/19/2021 - 12:57pm
Just Futures Law, MediaJustice, Mijente, Immigrant Defense Project and Electronic Frontier Foundation say public must know details of COVID-19 related data collection and sharing

San Francisco - The Electronic Frontier Foundation (EFF) is representing four racial and immigrant justice groups—Just Futures Law, MediaJustice, Mijente Support Committee, and the Immigrant Defense Project—suing the U.S. Departments of Homeland Security and Health and Human Services under the Freedom of Information Act (FOIA) for withholding critical records about the collection and sharing of data during the COVID-19 pandemic.

The four groups all filed FOIA requests for information about COVID-related surveillance and data analysis last year. In particular, the groups are worried about HHS Protect, a vast secretive data platform designed by controversial data software company Palantir. Palantir has a long history of building surveillance systems for the Department of Homeland Security that facilitate criminal prosecutions, family separation, and raids that lead to detention and deportation. In July of last year, the government required all hospitals to report COVID-19 infection data to HHS Protect, instead of the system operated by the Centers for Disease Control.

However, the public has little to no information about COVID-19 data collection and tracking, including on the more than 200 data sources included in HHS Protect. The plaintiffs in this case asked both the Department of Homeland Security and the Department of Health and Human Services for any records describing the data sources, as well as limits on the use of data collected and the duration of retention, but have yet to receive anything responsive to their requests. Without this information, the public cannot evaluate either the efficacy of these invasive technologies now or the risks they might pose in the future.

“Secrecy from the government is not helping us fight this pandemic. We’ve already seen how privacy fears have deterred some from getting important medical care for COVID,” said Steven Renderos, Executive Director of MediaJustice. “Yet the government is still withholding this information. If we can’t say with confidence what the government is doing, we have an uphill battle to protect public health. Immediate answers are essential.”

“We know that the government is collecting huge amounts of health data on us for the purported purpose of public health and combating COVID,” said Julie Mao, Deputy Director of Just Futures Law. “For example, we’ve seen a lot of location data gathered from mobile phones or contact tracing apps, but scientists have questioned the effectiveness of such mass surveillance at mitigating disease spread. The public has the right to know what sensitive information these agencies are collecting and to evaluate its utility.”

The lawsuit demands the government immediately process the groups’ FOIA request, and make the records available to them.

"It's unacceptable that we have no idea how the HHS Protect platform is collecting data or how long it's holding it," said Jacinta Gonzalez, Senior Campaign Organizer with Mijente. "It's imperative that the public understands how personal data is being funnelled into large databases like this and how long that data is being stored. But it's especially critical here, because HHS has a history of sharing personal data with ICE for deportation purposes, to say nothing of the fact that the company that designed this platform, Palantir, is a well-known ICE contractor. The government's secrecy here is very alarming."

“The potential privacy and human rights impact of this data surveillance is deeply concerning,” said Mizue Aizeki, Interim Executive Director of the Immigrant Defense Project. “We cannot allow tech corporations and the government to take advantage of the pandemic to expand surveillance and policing powers. The Department of Health and Human Services is set to spend half a billion dollars on surveillance and data technologies in the coming months and years, so the time for answers is now.”

For the full complaint in Just Futures v DHS:
https://www.eff.org/document/mediajustice-v-dhs-covid-19-foia-complaint

Just Futures Law (JFL) is a women-of-color led transformative immigration law project rooted in movement lawyering. @justfutureslaw.

MediaJustice is dedicated to building a grassroots movement for a more just and participatory media—fighting for racial, economic, and gender justice in a digital age. MediaJustice boldly advances communication rights, access, and power for communities harmed by persistent dehumanization, discrimination and disadvantage. Home of the #MediaJusticeNetwork, we envision a future where everyone is connected, represented, and free.

Mijente Support Committee​ is a Latinx/Chicanx political, digital, and grassroots organizing hub. Launched in 2015, Mijente seeks to strengthen and increase the participation of Latino people in the broader movements for racial, economic, climate, and gender justice. @conmijente

The Immigrant Defense Project (IDP) works to secure fairness and justice for immigrants in the racialized U.S. criminal and immigration systems. IDP fights to end the current era of unprecedented mass criminalization, detention and deportation through a multi-pronged strategy including advocacy, litigation, legal support, community partnerships, and strategic communications. @ImmDefense.    

Contact:
Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, press@eff.org
Julie Mao, Deputy Director, Just Futures Law, julie@justfutureslaw.org

EFF to First Circuit: Schools Should Not Be Policing Students’ Weekend Snapchat Posts

EFF - Wed, 02/17/2021 - 5:36pm

This blog post was co-written by EFF intern Haley Amster.

EFF filed an amicus brief in the U.S. Court of Appeals for the First Circuit urging the court to hold that under the First Amendment, public schools may not punish students for their off-campus speech, including posting to social media while off campus.

The Supreme Court has long held that students have the same constitutional rights to speak in their communities as do adults, and this principle should not change in the social media age. In its landmark 1969 student speech decision, Tinker v. Des Moines Independent Community School District, the Supreme Court held that a school could not punish students for wearing black armbands at school to protest the Vietnam War. In a resounding victory for the free speech rights of students, the Court made clear that school administrators are generally forbidden from policing student speech except in a narrow set of exceptional circumstances: when (1) a student’s expression actually causes a substantial disruption on school premises; (2) school officials reasonably forecast a substantial disruption; or (3) the speech invades the rights of other students.

However, because Tinker dealt with students’ antiwar speech at school, the Court did not explicitly address the question of whether schools have any authority to regulate student speech that occurs outside of school. At the time, it may have seemed obvious that students can publish op-eds or attend protests outside of school, and that the school has no authority to punish students for that speech even if it’s highly controversial and even if other students talk about it in school the next day. As we argued in our amicus brief, the Supreme Court’s three student speech cases following Tinker all involved discipline related to speech that may reasonably be characterized as on-campus.

In the social media age, the line between off- and on-campus has been blurred. Students frequently engage in speech on the Internet outside of school, and that speech is then brought into school by students on their smartphones and other mobile devices. Schools are increasingly punishing students for off-campus Internet speech brought onto campus.

In our amicus brief, EFF urged the First Circuit to make clear that schools have no authority under Tinker to police students’ off-campus speech, including when that speech occurs on social media. The case, Doe v. Hopkinton, involves two public high school students, “John Doe” and “Ben Bloggs,” who were suspended for making comments in a private Snapchat group that their school considered to be bullying. Doe and Bloggs filed suit asserting the school suspension violated their First Amendment rights.

The school made no attempt to show the lower court that Doe and Bloggs sent the messages at issue while on campus, and the federal judge erroneously concluded that “it does not matter whether any particular message was sent from an on- or off-campus location.”

As we explained in our amicus brief, that conclusion was wrong. Tinker made clear that students’ speech is entitled to First Amendment protection, and authorized schools to punish student speech only in narrow circumstances to ensure the safety and functioning of the school. The Supreme Court has never authorized or suggested that public schools have any authority to reach into students’ private lives and punish them for their speech while off school grounds or after school hours.

This is exactly what another federal appeals court considering this question concluded last summer. In B.L. v. Mahanoy Area School District, a high school student who had failed to advance from junior varsity to the varsity cheerleading squad posted a Snapchat selfie over the weekend with text that said, among other things, “fuck cheer.” One of her Snapchat connections took a screen shot of the post and shared it with the cheerleading coaches, who suspended the student from participation in the junior varsity cheer squad.

The Third Circuit in Mahanoy made clear that the narrow set of circumstances established in Tinker where a school may regulate disruptive student speech applies only to speech uttered at school. As such, it held that schools have no authority to punish students for their off-campus speech—even when that speech “involves the school, mentions teachers or administrators, is shared with or accessible to students, or reaches the school environment.”

This conclusion is especially critical given that students use social media to engage in a wide variety of self-expression, political speech, and activism. As we highlighted in our amicus brief, this includes expressing dissatisfaction with their schools’ COVID-19 safety protocols, calling out instances of racism at schools, and organizing protests against school gun violence. It is essential that courts draw a bright line prohibiting schools from policing off-campus speech so that students can exercise their constitutional rights outside of school without fear that they might be punished for it come Monday morning.

Mahanoy is currently on appeal to the Supreme Court, which will consider the case this spring. We hope that the First Circuit and the Supreme Court will take this opportunity to reaffirm the free speech rights of public-school students and draw clear limits on schools’ ability to police students’ private lives.

Speak Up for Real Privacy in Virginia

EFF - Tue, 02/16/2021 - 5:30pm

Last week, we raised the alarm about an empty privacy bill moving fast through the Virginia legislature. The bill, SB 1392, is supported by Microsoft and Amazon, and would set a dangerous standard for state privacy bills.

Take Action

Virginia: Speak Up for Real Privacy

The bill has passed through the House Committee on Technology, Communications, and Innovation and is headed to a floor vote in the House this week.

Thanks to your messages and the work of privacy and consumer advocates on the ground in Virginia, lawmakers have started to hear the message that privacy laws should protect people, not businesses. While they have made some small changes to the bill, such as a mandate to set up a working group to suggest ways to strengthen the bill, these changes are not nearly enough to protect the people of Virginia. It is much better to pass a strong bill than to pass a weak one with the hope of improving it, and we urge the legislature to hit pause on SB 1392 until it can be amended to offer real protections.

Now that people demanding privacy have the ear of the legislature, it’s time to speak up. Write to your delegates and demand real privacy in Virginia.

EFF to Patent Office: No New Design Patents

EFF - Tue, 02/16/2021 - 4:49pm

Design is incredibly important to how people use and choose products, but design patents are not. They provide exclusive rights only to ornamental product features, which by definition are not useful or artistic; for those that are, utility patent and copyright protection exist instead. As we’ve said before, we don’t need design patents because they restrict far more creativity, innovation, and economic activity than they promote. Unfortunately, the Patent Office is preparing to grant even more.

To do that, the Patent Office is proposing regulations that would open the floodgates to unprecedented and unnecessary types of design patents on computer-generated imagery (CGI). Although the standards for CGI design patents are way too low already, the Patent Office wants to make them even lower by allowing patented designs on non-physical products, like websites, software applications, and holographic projections.

The U.S. has never allowed patents on designs untethered to physical products, and it should not start now. Design patent owners have the power to stop anyone else in this country from making, using, or selling what their patent covers. If companies can get patents on designs for non-physical products, like website banners, they will have the right to sue anyone whose website uses the same or similar features, demanding payment or forcing them to stop. Given the exorbitant cost of litigation, companies with the resources to amass design patents will have massive power over what the web looks like for the rest of us.

We should be especially cautious of expanding corporate power over computer graphics during a global pandemic when face-to-face communication is a public health risk. The last thing we need are more design patents restricting people’s ability to compete, create, and freely express themselves online. That is why EFF submitted comments urging the Patent Office not to take this unprecedented and perilous approach.

Extending design patent protection to digital images means unnecessarily extending protection to content that already gets ample protection under copyright and trademark law. Letting design patents intrude further into the realm of graphic design creates uniquely dangerous risks. When copyright applies, so do protections for fair uses under the First Amendment. But there are no such protections for the use of patented designs. That makes the extension of design patent protection a threat not only to technological innovation and competition, but also to creativity and free expression.

Despite these dangers, the Patent Office is proposing rules that will ensure we see more design patents and more patent litigation. The Office wants to change how it applies the part of the Patent Act which makes an “ornamental design for an article of manufacture” eligible for protection by effectively discarding the “article of manufacture” requirement altogether. For example, the Patent Office admiringly cited Singapore’s decision to eliminate a requirement that “a design must be applied to a physical article in order to be protected,” thus allowing patents on graphical user interface (GUI) designs applied to a “non-physical product.” But in the U.S., patents on designs for non-physical products have never been allowed.

Nor should they: granting new and unprecedented design rights would wreak havoc on the U.S. economy when it is already struggling to recover from the economic depression caused by the unrelenting COVID-19 pandemic. Now more than ever, people depend on computer technology and connectivity to work, learn, communicate with each other, and get essential products and services—from groceries to health care. We should not impose any additional restrictions on people’s ability to create, use, and communicate digital content.

To that end, Singapore may not be the best example to draw from — after all, its law also includes content-based prohibitions on designs that do not align with public order or morals. If other countries are to serve as models, it would be better to look to those that better reflect values of free expression and individual choice in their design regulations. One such model is Germany, where the law governing registered designs explicitly says a “computer program is not considered to be a product.”

As we’ve written before, former Director of the Patent Office Andrei Iancu worked overtime during his tenure to tilt the scales in favor of patent owners and against technologists, start-ups, and end-users. Although his departure from the office is a positive sign, it will take a lot of time and work to rebuild from the damage he inflicted. If this proposal is adopted, however, the damage will be more pervasive and difficult to fix.

We call on the Patent Office to reconsider—and abandon—this effort to expand design patent protection. Instead of lowering patentability standards, we should be empowering examiners to reject deficient design patent applications under existing law. Granting more and worse design patents will only encourage extortionate patent litigation and deter the innovation and economic activity the patent system is supposed to promote.

Turkey’s Free Speech Clampdown Hits Twitter, Clubhouse -- But Most of All, The Turkish People

EFF - Tue, 02/16/2021 - 10:04am

EFF has been tracking the Turkish government’s crackdown on tech platforms and its continuing efforts to force them to comply with draconian rules on content control and access to users’ data. The Turkish government has now managed to coerce Facebook, YouTube, and TikTok into appointing a legal representative to comply with the legislation via threats to their bottom line: prohibiting Turkish taxpayers from placing ads and making payments to them if they fail to appoint a legal representative. According to local news, Google is the latest company to have appointed a representative, through a shell company in Turkey.

Out of the major foreign social media platforms used in Turkey, only Twitter has not appointed a local representative and subjected itself to Turkish jurisdiction over its content and user policies. Coincidentally, Twitter has been drawn into a series of moderation decisions that put the company in direct conflict with Turkish politicians. On February 2nd, Twitter decided that three tweets by Turkish Interior Minister Süleyman Soylu violated its rules on hateful conduct and abusive behavior. Access to the tweets was restricted, rather than the tweets being removed, as Twitter considered them still in the public interest. Similarly, Twitter removed a tweet by Devlet Bahçeli, leader of the MHP, the AKP’s coalition partner, in which he called student protestors “terrorists” and “poisonous snakes” whose “heads needed to be crushed,” as the tweet violated Twitter’s violent threats policy.

Yaman Akdeniz, a founder of the Turkish Freedom of Expression Association, told EFF:

“This is the first time Twitter deployed its policy on Turkish politicians while the company is yet to decide whether to have a legal representative in Turkey as required by Internet Social Media Law since October 2020.”

As in many other countries, politicians in Turkey are now angry at Twitter both for failing to sufficiently censor criticism of Turkish policies, and for sanctioning senior domestic political figures for their violations of the platform’s terms of service. 

By declining to appoint a local representative in an attempt to avoid both forms of political pressure, Twitter is already paying a price. The Turkish regulator BTK has already imposed the first set of sanctions by forbidding Turkish taxpayers from paying for ads on Twitter. In principle, BTK can go further later this spring: starting in April 2021, it will be permitted to apply for further sanctions against Twitter, including ordering ISPs to throttle the speed of Turkish users’ connections to Twitter, at first by 50% and subsequently by up to 90%. Throttling can make sites practically inaccessible, fortifying Turkey’s censorship machine and silencing speech--a disproportionate measure that profoundly limits users’ ability to access online content within Turkey.

The Turkish Constitutional Court overturned a complete ban on Wikipedia in 2019, and bans on Twitter and YouTube back in 2014. Even though the recent legislation “only” foresees throttling sites’ access speeds by 50% or 90%, this sanction aims to make sites unusable in practice and should be viewed by the Court the same way as an outright ban. Research on website usability has found that huge numbers of users lose patience with sites that are even slightly slower than they expect; delays of just “1 second” are enough to interrupt a person’s conscious thought process, and making users wait five or ten times as long would be catastrophic.
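To make that arithmetic concrete, here is a minimal sketch in Python. The page size and connection speed are our own illustrative assumptions, not figures from the law or the usability research; the point is only how directly bandwidth throttling translates into waiting time:

    # Illustrative figures only: an assumed 2 MB page (16 megabits)
    # over an assumed 10 Mbps connection.
    PAGE_SIZE_MBIT = 16
    BASE_BANDWIDTH_MBPS = 10

    for throttle_pct in (0, 50, 90):
        effective_mbps = BASE_BANDWIDTH_MBPS * (1 - throttle_pct / 100)
        load_seconds = PAGE_SIZE_MBIT / effective_mbps
        print(f"{throttle_pct}% throttle -> {load_seconds:.1f} s to load the page")

    # Prints: 0% -> 1.6 s, 50% -> 3.2 s (twice as slow), 90% -> 16.0 s (ten times as slow)

On these assumptions, a page that loads in under two seconds takes sixteen under a 90% throttle, far beyond the one-second delay the research identifies as enough to break a reader's attention.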

But if the Turkish authorities think that throttling major platforms that refuse to comply with their orders will silence dissent, they may have another problem. The new Internet Social Media Law covers any social network provider that exceeds a “daily access” of one million. While the law is unclear as to what that figure means in practice, it wasn’t intended to cover smaller alternatives -- like Clubhouse, the invitation-only, iOS-only audio-chat app. Inevitably, with Twitter throttled and other services suspected of being required to comply with Turkish government demands, that’s exactly where political conversations have shifted.

During the recent crackdown, Clubhouse has hosted Turkish groups every night until after midnight, where students, academics, journalists, and sometimes politicians join the conversations. For now, Turkish speech enforcement is falling back to other forms of intimidation. At least four students were recently taken into custody. Although the government said the arrests related to the students’ use of other social media platforms, the students believe that their Clubhouse activity was the only thing that distinguished them from thousands of others.

Clubhouse, like many other fledgling, general-purpose social media networks, has not accounted for its use as a platform by endangered voices. It has a loosely-enforced real-names policy -- one of the reasons the students could be identified and targeted by law enforcement. And as the Stanford Internet Observatory discovered, its design potentially allowed government actors or other network spies to collect private data on its users en masse.

Ultimately, while it’s the major tech companies who face legal sanctions and service interruptions under Turkey’s Social Media Law, it’s ordinary Turkish citizens who are really paying the price: whether through slower Internet services, navigating cowed social platforms, or physical arrest simply for speaking out online on platforms that cannot yet adequately protect them from their own government.

Indonesia’s Proposed Online Intermediary Regulation May be the Most Repressive Yet

EFF - Tue, 02/16/2021 - 10:00am

Indonesia is the latest government to propose a legal framework to coerce social media platforms, apps, and other online service providers to accept local jurisdiction over their content and users’ data policies and practices. And in many ways, its proposal is the most invasive of human rights.

This rush of national regulations started with Germany’s 2017 “NetzDG” law, which compels internet platforms to remove or block content without a court order and imposes draconian fines on companies that don’t proactively submit to the country’s own content-removal rules. Since NetzDG entered into force, Venezuela, Australia, Russia, India, Kenya, the Philippines, and Malaysia have enacted, or are discussing, similar laws of their own.

NetzDG, and several of its copycats, require social media platforms with more than two million users to appoint a local representative to receive content takedown requests from public authorities and government requests for access to user data. NetzDG also requires platforms to remove or disable content that appears to be “manifestly illegal” within 24 hours of notice that the content exists on their platform. Failure to comply with these demands subjects companies to draconian fines (and even raises the specter of their services being blocked). This creates a chilling effect on free expression: platforms will naturally err on the side of removing gray-area content rather than risk the punishment.

Indonesia’s NetzDG variant—dubbed MR5—is the latest example. It entered into force in November 2020, and, like some others, goes significantly further than its German inspiration. In fact, the Indonesian government is exploring new lows in harsh, intrusive, and non-transparent Internet regulation. The MR5 regulation, issued by the Indonesian Ministry of Communication and Information Technology (Kominfo), seeks to tighten the government’s grip over digital content and users’ data. 

MR5 Comes Amid Difficult Times In Indonesia

The MR5 regulation also comes at a time of increased conflict, violence, and human rights abuses in Indonesia: at the end of 2020, the UN High Commissioner for Human Rights raised concern about the escalating violence in Papua and West Papua and shed light on reports about “intimidation, harassment, surveillance, and criminalization of human rights defenders for the exercise of their fundamental freedoms.” According to APC, the Indonesian government has used hate speech laws, envisioned to protect minority and vulnerable groups, to silence dissent and people critical of the government. 

MR5 will further exacerbate the already-challenging situation for freedom of expression in Indonesia, now and in the years ahead, according to Ika Ningtyas, Head of the Freedom of Expression Division at the Southeast Asia Freedom of Expression Network (SAFEnet). She told EFF:

The Ministry's authority, in this case, Kominfo, is increasing capacity so it can judge and decide whether the content is appropriate or not. We're very concerned that MR5 will be misused to silence groups criticizing the government. Independent branches of government have been excluded, making it unlikely that this regulation will include transparent and fair mechanisms. MR5 can be followed by other countries, especially in Southeast Asia. Regional and global solidarity is needed to reject it.

Business enterprises have a responsibility to respect human rights law. The UN Special Rapporteur on Free Expression has already reminded States that they “must not require or otherwise pressure the private sector to take steps that unnecessarily or disproportionately interfere with freedom of expression, whether through laws, policies, or extralegal means.” The Special Rapporteur also pointed out that any measures to remove online content must be based on validly enacted law, subject to external and independent oversight, and demonstrate a necessary and proportionate means of achieving one or more aims under Article 19 (3) of the ICCPR.

We join SAFEnet in urging the Indonesian government to bring its legislation into full compliance with international freedom of expression standards. 

Below are some of MR5’s most harmful provisions.

Forced ID Registration To Operate in Indonesia

MR5 obliges every “Private Electronic System Operator” (or “Private ESO”) to register and obtain an ID certificate issued by the Ministry before people in Indonesia start accessing its services or content. A “Private ESO” is any individual, business entity, or community that operates “electronic systems” for users within Indonesia, even if the operator is incorporated abroad. Private ESOs subject to this obligation include digital marketplaces, financial services, social media and content sharing platforms, cloud service providers, search engines, instant messaging, email, video, animation, music, film and games, and any application that collects, processes, or analyzes users’ data for electronic transactions within Indonesia.

Registration must take place by mid-May 2021. Under MR5, Kominfo will sanction non-registrants by blocking their services. Those Private ESOs who decide to register must provide information granting access to their “system” and data to ensure the effectiveness of the “monitoring and law enforcement process.” If a registered Private ESO disobeys the MR5 requirements, for example by failing to provide “direct access” to its systems (Article 7 (c)), it can be punished in various ways, ranging from a first warning to temporary blocking, full blocking, and a final revocation of its registration. Temporary or full blocking of a site is a general ban of a whole site, an inherently disproportionate measure, and therefore an impermissible limitation under Article 19 (3) of the UN’s International Covenant on Civil and Political Rights (ICCPR). When it comes to general blocking, the Council of Europe has recommended that public authorities should not, through general blocking measures, deny the public access to information on the Internet, regardless of frontiers. The United Nations and three other special mandates on freedom of expression explain that “[m]andatory blocking of entire websites, IP addresses, ports, network protocols or types of uses (such as social networking) is an extreme measure – analogous to banning a newspaper or broadcaster – which can only be justified in accordance with international standards, for example where necessary to protect children against sexual abuse.”

A general ban of a Private ESO platform would also not be compatible with Article 15 (3) of the UN’s International Covenant on Economic, Social and Cultural Rights (ICESCR), which states that individuals have a right to “take part in cultural life” and to “enjoy the benefits of scientific progress and its applications.” The UN has identified “interrelated main components of the right to participate or take part in cultural life: (a) participation in, (b) access to, and (c) contribution to cultural life.” It explained that access to cultural life also includes a “right to learn about forms of expression and dissemination through any technical medium of information or communication.”

Moreover, while a State party can impose restrictions on freedom of expression, these may not put in jeopardy the right itself, which a general ban does. The UN Human Rights Committee has said that the “relation between right and restriction and between norm and exception must not be reversed.” And Article 5, paragraph 1 of the ICCPR, states that “nothing in the present Covenant may be interpreted as implying for any State … any right to engage in any activity or perform any act aimed at the destruction of any of the rights and freedoms recognized in the Covenant.”

Forced Appointment of a Local Contact Person

Tech companies have come under increasing criticism for flouting or ignoring local laws, or for treating non-U.S. countries with little understanding of the local context. In that sense, a local point of contact can be a positive step. But forcing the appointment of a local contact is another matter: with a local representative, platforms will find it much harder to resist arbitrary orders, and they become vulnerable to domestic legal action, including the potential arrest and criminal prosecution of that representative, as has happened in the past. MR5 compels everyone whose digital content is used or accessed within Indonesia to appoint a local point of contact based in Indonesia who would be responsible for responding to content removal and personal data access orders.

Regulations Requiring Take Down of Content and Documents Deemed “Prohibited by the Government”

Article 13 of the MR5 forces Private ESOs (except cloud providers) to take down prohibited information and/or documents. Article 9(3) defines prohibited information and content as anything that violates any provision of Indonesia’s laws and regulations, or creates “community anxiety” or “disturbance in public order.” Article 9 (4) grants the Ministry, a non-independent authority, unfettered discretion to define this vague notion of “community anxiety” and “public disorder.” It also forces these Private ESOs to take down anything that would “inform ways or provide access” to these prohibited documents.

This language is extremely concerning. Compelling Private ESOs to ensure that they are not “informing ways” of accessing or “providing access” to prohibited documents and information would, in our interpretation, mean that if a user of a Private ESO platform or site publishes a tutorial on how to circumvent the blocking of prohibited content (for example, by explaining how to use a VPN to bypass access blocking), the tutorial itself could be considered prohibited information. Even the use of a VPN itself could be considered prohibited. (The Communications Minister has told Internet users in Indonesia to stop using Virtual Private Networks, which he claims allow users to hide from authorities and put users’ data at risk.)

While maintaining public order may in some circumstances be a legitimate aim, this provision could easily be used to justify improper limitations on freedom of expression. Any restrictions in the name of public order must be prescribed by law, be necessary and proportionate, and be the least restrictive means of realizing that legitimate aim. Moreover, as the Human Rights Committee stated, States’ restrictions on the exercise of freedom of expression may not put in “jeopardy the right itself.” To comply with the “prescribed by law” requirement, restrictions must not only be formulated with sufficient precision to enable an individual to regulate their conduct, but they must also be made accessible to the public. And they must not confer unfettered discretion on those charged with their execution to restrict freedom of expression.

Article 9(3) includes within “prohibited content and information” any speech that violates Indonesian laws and regulations. GR71, a regulation one level higher than MR5, and Law No. 11 of 2008 on Electronic Information and Transactions both use similarly vague language without offering any further definition or elucidation. For example, Law No. 11 of 2008 defines “Prohibited Acts” as any person knowingly and without authority distributing, transmitting, and/or causing to be accessible any material thought to violate decency; promote gambling; insult or defame; extort; spread false news resulting in consumer losses in electronic transactions; incite hatred based on ethnicity, religion, race, or group; or contain threats of violence. We see a similar systemic problem with the definitions of “community anxiety” and “public order,” which fail to comply with the requirements of Article 19 (3) of the ICCPR.

Additionally, Indonesia’s criminal code considers blasphemy a crime—even though outlawing “blasphemy” is incompatible with international human rights law. The United Nations Human Rights Committee has clarified that laws prohibiting displays of lack of respect for a religion or other belief system, including blasphemy laws, are incompatible with the ICCPR. When it comes to defamation law, the UNHRC states that any such law must be crafted with care to ensure that it does not stifle freedom of expression: it should allow for the defense of truth and should not be applied to expressions that are not subject to verification. Likewise, the UNHRC has stated that “laws that penalize the expression of opinions about historical facts are incompatible with the obligations that the ICCPR imposes on States parties” to respect the right to freedom of opinion and expression. Criminal defamation laws have been widely criticized by UN Special Rapporteurs on Free Expression for hindering free expression. Yet under this new regulation, any speech that violates Indonesian law is deemed prohibited.

Forcing Private Companies To Proactively Monitor 

MR5 also obliges Private ESOs (except cloud providers) to ensure that their services, websites, or platforms do not contain, and do not facilitate the dissemination of, prohibited information or documents. In practice, this amounts to a general monitoring obligation and will push providers to adopt content filters. Article 9 (6) imposes disproportionate sanctions, including general blocking of the systems of those who fail to keep prohibited content and information out of their systems.

These provisions are not only a serious threat to Indonesians’ free expression rights; they are also a major compliance challenge for Private ESOs. If the Ministry gets to determine what information is “prohibited,” a Private ESO would be hard-pressed to proactively ensure its system does not contain that information or facilitate its dissemination, even before any specific takedown order arrives.

According to SAFEnet’s Ika Ningtyas, leaving these determinations to the Ministry will allow it to censor criticism of public policies, as well as discussion of LGBT rights and activities or the ongoing Papua conflict.

Who Decides What Is Prohibited? 

MR5 empowers an official with the Orwellian title “Minister for Access Blocking” to coordinate the blocking of prohibited information. Blocking requests may originate with Indonesian law enforcement agencies, courts, the Ministry of Information, or concerned members of the public. (The courts can issue “instructions” to the Access Blocking Minister, while other government entities send requests that the Minister can evaluate. Individuals’ requests related to pornography or gambling can be sent directly to the Access Blocking Minister, while those related to other matters are addressed first to the Ministry of Information.) The Minister then emails platform operators with orders to block particular things, which they are expected to obey within 24 hours, or only 4 hours for “urgent” requests. “Urgent” requests include terrorism, child pornography, and content causing “unsettling situations for the public and disturbing public order.” If a Private ESO (with the exception of a cloud provider) does not comply, it may receive warnings, fines, and eventually have its services blocked in Indonesia—even if the prohibited information was legal under international human rights law.

It requires time to understand the local context and complexity of the cases, and to assess such government orders. Careful assessments are particularly needed when it comes to material that relates to minority groups and movements, regardless of the context in which the complaint is raised—copyright, defamation, blasphemy, or any of the other categories MR5 describes as harmful or as disturbing public order. Laws must provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not.

Even the use of copyright law as a cudgel by the state to censor dissent is not hypothetical. According to Google’s Transparency Report on government requests:

We received a request through our copyright complaints submission process from an Indonesian Consul General who requested that we remove six YouTube videos. Outcome: We did not remove the videos, which appeared to be critical of the Consulate.

Forcing User-Generated Content Platforms to Become Government Enforcers

MR5 Articles 11, 16(11), and 16(12) enlist user-generated content platforms (like YouTube, Twitter, TikTok, or any local sites that distribute user-generated content) as content enforcers by threatening them with legal liability for their users’ expression unless they agree to help monitor communications in various ways specified by the Indonesian government. Under Article 11, a User Generated Content Private ESO must ensure that prohibited information and documents are not transmitted or distributed digitally through its services, must disclose subscriber information revealing who uploaded such information for the purpose of supervision by administrative agencies (such as the Trade Agency) and law enforcement, and must perform access blocking (takedowns) on prohibited content.

User-Generated Content Private ESOs who fail to remove prohibited information and/or documents are subject to an administrative sanction based on the provisions of the law and regulations concerning Non-Tax State Revenue (Article 16 (11)).

The Minister can force ISPs to block access to the Social Media Private ESO and/or can impose a fine that accumulates with every missed compliance deadline, up to a maximum of 3 times: the deadlines recur every 4 hours for emergency cases such as terrorism (4x3 = 12 hours in total) and every 24 hours for other “normal” cases (24x3 = 72 hours in total). The result: if changes aren’t made within 12 or 72 hours, the Private ESO, on top of owing 3 times the fine, could find itself blocked (Article 16 (11)-(12)).
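As a minimal sketch of our reading of that schedule (the base fine of 100 is an invented placeholder; the regulation sets actual amounts elsewhere), the accumulation works out like this:

    # Our reading of MR5 Article 16(11)-(12): the fine accumulates once per
    # missed compliance window, up to three times, after which blocking can follow.
    # base_fine is an invented placeholder, not a figure from the regulation.
    def fine_schedule(base_fine, urgent):
        window_hours = 4 if urgent else 24   # 4h for "urgent" cases, 24h otherwise
        return [(n * window_hours, n * base_fine) for n in range(1, 4)]

    for deadline_hours, cumulative_fine in fine_schedule(100, urgent=True):
        print(f"hour {deadline_hours}: cumulative fine = {cumulative_fine}")
    # hour 4: 100, hour 8: 200, hour 12: 300 -- then the service can be blocked

Swapping in urgent=False gives the 24, 48, and 72 hour deadlines for ordinary cases.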

Bring MR5 and Indonesian Law into Full Compliance with International Human Rights Law


We join SAFEnet in urging the Indonesian government to bring its legislation, in particular its Criminal Code and the MR5 regulation, into full compliance with international freedom of expression standards. Companies should not remove content where doing so would be inconsistent with the permissible limitations test. General blocking measures, in our opinion, are always inconsistent with Article 19 of the ICCPR. Companies should legally challenge such general blocking orders, and push back strategically against any pressure from the Indonesian government.

New EFF Report Shows Cops Used Ring Cameras to Monitor Black Lives Matter Protests

EFF - Tue, 02/16/2021 - 6:42am
LAPD Wanted Unknown Amount of Video for Unknown Reasons – Raising First Amendment Concerns

San Francisco - The Electronic Frontier Foundation (EFF) has obtained emails that show that the Los Angeles Police Department (LAPD) sent at least one request—and likely many more—for Amazon Ring camera video of last summer’s Black-led protests against police violence. In a report released today, EFF shows that the LAPD asked for video related to “the recent protests,” and refused to disclose to EFF what crime it was investigating or how many hours of footage it ultimately requested.

“The emails we received raise many questions about what the LAPD wanted to do with this video,” said EFF Policy Analyst Matthew Guariglia. “Police could have gathered hours of footage of people engaged in First-Amendment-protected activity, with a vague hope that they could find evidence of something illegal. LAPD should tell the public how many hours of surveillance footage it gathered around these protests, and why.”

EFF filed its public records request with LAPD after widespread complaints about police tactics during the protests in May and June of 2020. After receiving the emails in response to our request, we asked for clarification from the LAPD about what it was looking for and how much video it wanted. The agency said simply that it was attempting to “identify those involved in criminal behavior.”

“Outdoor surveillance cameras like Ring have the potential to provide the police with video footage covering every inch of an entire neighborhood. This poses an incredible risk to First Amendment rights,” said Guariglia. “People are less likely to exercise their right to political speech, protest, and assembly if they know that police can get video of these actions with just an email to people with Ring cameras.”

Los Angeles isn’t the only city where the police department tried to get video of last summer’s protests for racial justice. The San Francisco Police Department (SFPD) used a network of over 400 cameras operated by a business district to spy on protests in early June 2020, under the guise of public safety. Last fall, EFF and ACLU of Northern California filed a lawsuit against the City and County of San Francisco on behalf of three protesters, asking the court to require the city to follow its Surveillance Technology Ordinance and prohibit the SFPD from acquiring, borrowing, or using non-city networks of surveillance cameras absent prior approval from the city’s Board of Supervisors.

For the full report “LAPD Requested Ring Footage of Black Lives Matter Protests”:
https://www.eff.org/deeplinks/2021/02/lapd-requested-ring-footage-black-lives-matter-protests

Contact: Matthew Guariglia, Policy Analyst, matthew@eff.org

LAPD Requested Ring Footage of Black Lives Matter Protests

EFF - Tue, 02/16/2021 - 5:57am

Along with other civil liberties organizations and activists, EFF has long warned that Amazon Ring and other networked home surveillance devices could be used to monitor political activity and protests. Now we have documented proof that our fears were well-founded.

According to emails obtained by EFF, the LAPD sent requests to Amazon Ring users specifically targeting footage of Black-led protests against police violence that occurred in cities across the country last summer. While it is clear that police departments and federal law enforcement across the country used many different technologies to spy on protests, including aerial surveillance and semi-private camera networks, this is the first documented evidence that a police department specifically requested footage from networked home surveillance devices related to last summer’s political activity.

A map of Ring-police partnerships in the United States.

In May 2019, LAPD became the 240th public safety agency to sign a formal partnership with Ring and its associated app, Neighbors. That number has now skyrocketed to more than 2,000 government agencies. The partnerships allow police to use a law-enforcement portal to canvass local residents for footage.

Requests from police to Ring users typically contain the name of the investigating detective and an explanation of what incident they are investigating. Police requesting footage also specify a time period, usually a range spanning several hours, because it’s often hard to identify exactly what time certain crimes occurred, such as an overnight car break-in. 

A June 16, 2020 email showing an LAPD request for footage to an Amazon Ring user.

In its response to EFF’s public records requests, the LAPD produced several messages it sent to Ring users, but redacted details such as the circumstances being investigated and the dates and times of footage requested. However, one email request on behalf of the LAPD “Safe L.A. Task Force” specifically asked for footage related to “the recent protests.” Troublingly, the LAPD also redacted the dates and times sought for the requested footage. This practice is concerning, because if police request hours of footage on either side of a specific incident, they may receive hours of people engaging in First Amendment-protected activities, with a vague hope that a camera may have captured illegal activity at some point. Redacting the hours of footage it requested covers up the amount of protest footage the police department sought to acquire.

EFF asked the LAPD for clarification of the specific context under which the department sent requests concerning the protests. The LAPD would not cite a specific crime they were investigating, like a theft from a specific storefront or an act of vandalism. Instead, the LAPD told EFF, “SAFE LA Task Force used several methods in an attempt to identify those involved in criminal behavior.”

Their full response reads:

The SAFE LA Task Force used several methods in an attempt to identify those involved in criminal behavior. One of the methods was surveillance footage. It is not uncommon for investigators to ask businesses or residents if they will voluntarily share their footage with them. Often, surveillance footage is the most valuable piece in an investigators case.

Police have used similar tactics before. EFF investigated the San Francisco Police Department’s use of a Business Improvement District’s network of over 400 cameras to spy on protests in early June 2020, under the guise of public safety and situational awareness. We learned that police gained over a week of live access to the camera network, as well as a 12-hour “data dump” of footage from all cameras in the network. In October 2020, EFF and ACLU of Northern California filed a lawsuit against the City and County of San Francisco on behalf of three protesters. We seek a court order requiring the city to comply with the city’s Surveillance Technology Ordinance by prohibiting the SFPD from acquiring, borrowing, or using non-city networks of surveillance cameras absent prior approval from the city’s Board of Supervisors.

The LAPD announced the creation of the Safe L.A. Task Force on June 2, 2020, in order to receive tips and investigate protests against police violence that started just four days earlier. The LAPD misleadingly labeled these protests as an “Unusual Occurrence (UO).” The FBI announced they would join the task force “in order to investigate significant crimes that occurred at or near locations where legitimate protests and demonstrations took place in Los Angeles beginning on May 29, 2020.” The Los Angeles Police Department, Beverly Hills Police Department, Santa Monica Police Department, Torrance Police Department, Los Angeles City Fire Department, Los Angeles City Attorney’s Office, Los Angeles County District Attorney’s Office, and United States Attorney’s Office for Los Angeles also joined the task force. 

Protests began in Los Angeles County following the Minneapolis police killing of George Floyd on May 25, 2020. LAPD sent a number of requests for Ring footage from users starting at the end of May, but because of the extensive redactions of circumstances, dates, and times, we’re unable to verify if all of those requests are related to the protests. However, some of the detectives associated with the Safe L.A. Task Force are the same people that began requesting Ring footage at the end of May and early June. 

On June 1, 2020, the morning after one of Los Angeles’ largest protests, police received footage from a Ring user.

The LAPD’s response shows that on June 1, 2020, the morning after one of the largest protests of last summer in Los Angeles, Det. Gerry Chamberlain sent Ring users a request for footage. Within two hours, Chamberlain received footage from at least one user. The nature of the request was redacted; however, the next day, his unit was formally assigned to the protest task force.

The LAPD’s handling of last summer’s protests is under investigation after widespread complaints about unchecked suppression and the use of disproportionate tactics. At least 10 LAPD officers have been taken off the street pending internal investigations of their use of force during the protests.

Technologies like Ring have the potential to provide the police with video footage covering nearly every inch of an entire neighborhood. This poses an incredible risk to First Amendment rights. People are less likely to exercise their right to political speech, protest, and assembly if they know that police can acquire and retain footage of them. This creates risks of retribution or reprisal, especially at protests against police violence. Ring cameras, ubiquitous in many neighborhoods, create the possibility that if enough people share footage with police, authorities are able to follow protestors’ movements, block by block. Indeed, Gizmodo found that on a walk of less than a mile between a school and its gymnasium in Washington D.C., students had to walk by no less than 13 Ring cameras, whose owners regularly posted footage to social media. Activists may need to walk past many more such cameras during a protest. 

We Need New Legal Limits on Police Access

This incident once again shows that modern surveillance technologies are wildly underregulated in the United States. A number of U.S. Senators and other elected officials have sent inquiries to Amazon, and commented publicly, on how few legal restrictions govern this rapidly growing surveillance empire. The United States is ripe for a legislative overhaul to protect bystanders, as well as consumers, from both corporations and government. A great place to start would be stronger limits on government access to data collected by private companies.

One of EFF’s chief concerns is the ease with which Ring-police partnerships allow police to make bulk requests to Ring users for their footage, although a new feature does allow users to opt out of such requests. Ring has introduced end-to-end encryption, preventing police from getting footage directly from Amazon, but this doesn’t limit their ability to send blanket requests to users. Such “consent searches” pose the greatest problems in high-coercion settings, like police “asking” to search your phone during a traffic stop, but they are also highly problematic in less-coercive settings, like bulk email requests for Ring footage from many residents.

Thus, an important way to prevent police from using privately-owned home security devices as political surveillance machines would be to impose strict regulations governing “Internet of Things” consent search requests. 

EFF has previously argued that in less-coercive settings, consent searches should be limited by four rules. First, police must have reasonable suspicion that crime is afoot before sending a request to a specific user. Such requests must be specific, targeting a particular time and place where there is reasonable suspicion that crime has happened, rather than general requests that, for example, blanket an entire neighborhood for an entire day in order to investigate one broken window. Second, police must collect and publish statistics about their consent searches of electronic devices, to deter and detect racial profiling. Third, police and reviewing courts must narrowly construe the scope of a person’s consent to search their device. Fourth, before an officer attempts to acquire footage from a person’s Ring camera, the officer must notify the person of their legal right to refuse. 

Ring has made some positive steps concerning its users’ privacy—but the privacy of everyone else in the neighborhood is still in jeopardy. The growing ubiquity of Ring means that if the footage exists, police will continue to access more and more of it. The LAPD’s use of Ring cameras to gather footage of protesters should be a big red flag for politicians.

Related Cases: Williams v. San Francisco

Virginians Deserve Better Than This Empty Privacy Law

EFF - Fri, 02/12/2021 - 8:03pm

A very weak consumer data privacy bill is sailing through the Virginia legislature with backing from Microsoft and Amazon, which have both testified in support of it. The bill, SB 1392, and its companion, HB 2307, are based on a Washington privacy bill, backed by tech giants, that has threatened for two years to lower the bar for state privacy legislation. If you’re a Virginia resident who cares about privacy, please submit a comment to the House Committee on Technology, Communications, and Innovation before it meets on Monday, Feb. 15.

EFF has long advocated for strong privacy legislation. Consumer privacy has been a growing priority for legislatures across the country since California passed the California Consumer Privacy Act in 2018, a sweeping piece of privacy legislation that was the first of its kind in the country. Since then, several states have considered broad data privacy laws, and California amended its own law in 2020.

But not all privacy laws are the same. While California’s law is itself not perfect, a bill in the style of the Washington Privacy Act is a step in the wrong direction—particularly the version of the bill under consideration in Virginia. Bills that follow this model allow companies to appear to be doing a lot to protect privacy but are full of carveouts that fail to address some of the industry’s worst data privacy abuses.

Virginia’s bill copies much of what we’ve spoken out against in Washington state—and is, in some ways, even worse. For one, Virginia’s bill has almost no teeth. While the Attorney General’s office could bring a lawsuit, the bill offers no way for people to sue companies for violating their privacy, an enforcement tool known as a private right of action. Broad private rights of action are a vital tool for ensuring that people can act in their own interest to protect their privacy. Even California’s law and the Washington bill that Virginia’s measure is based on—which themselves could both benefit from stronger enforcement—contain at least a narrow private right of action, offering a limited way for people to hold businesses to account without having to wait for the attorney general to act.

The Virginia bill stacks the deck against consumers even more with its “right to cure” provision: if the Attorney General sues a business for violating people’s privacy, the business gets a chance to fix what it did wrong, which makes the lawsuit go away. Considering how much time and work goes into bringing a lawsuit, giving the other side such a cheap and easy out illustrates how a right to cure allows a company to look like it cares about privacy without actually having to care.

Virginia’s privacy bill also explicitly allows companies to engage in “pay for privacy” schemes, which punish consumers for exercising their privacy rights. In Virginia’s case, the bill says that consumers who opt out of having their data used for targeted advertising, sold, or used for profiling can be charged a different “price, rate, level, quality or selection of goods and services.” That means punishing people for protecting their privacy—a structure that ends up harming those who cannot afford to pay for their privacy. Privacy should have no price tag.

A strong privacy bill would protect people’s privacy by default by letting them opt in to data sale and use, rather than forcing them to go to each company to ask it to stop using their information. It would require companies to commit to strict standards for what information they collect in the first place. And it would have real teeth to make sure that companies don’t get away with violating privacy rights.

EFF has joined with other national privacy groups, as well as with consumer advocates in Virginia, to ask the legislature to consider amendments that prioritize their constituents’ rights over empty promises from businesses.

Virginia’s lawmakers have made it clear that they want to hear from their own constituents on this issue. Tell your lawmakers to hit the brakes on this bill and to work toward a better law for the people they serve.

Victory! EFF Scores Another Win for the Public’s Right of Access against Patent Owner Fighting for Secrecy

EFF - Mon, 02/08/2021 - 4:09pm

Patents generate profits for private companies, but their power comes from the government, and in this country, the government’s power comes from the people. That means the rights patents confer, regardless of who exercises them, are fundamentally public in nature.

Patent owners have no right to keep their patent rights secret. The whole point of the patent system is to encourage people to disclose information about their inventions to the public by giving certain exclusive rights to those who do. But that doesn’t stop private companies from trying to keep information about their patents secret—even when their disputes go to court, where the public has a right to know what happens.

A recent decision by a federal court in EFF’s long-running transparency push affirmed the public’s right to access important information about a patent dispute. For more than two years, we have been working to vindicate the public’s right of access to important sealed court documents in Uniloc v. Apple. The sealed documents supported Apple’s argument that the case should be dismissed because Uniloc had lost ownership of the patents before it sued Apple, and thus lacked the right to bring the suit. But as filed, the documents were so heavily redacted that it was impossible to understand them. So EFF intervened to oppose the sealing requests on the public’s behalf—and we won. When Uniloc asked for reconsideration, the court refused—and we won again. When Uniloc appealed, the Federal Circuit overwhelmingly upheld the district court’s decision—and for the third time, we won.

EFF hoped that this string of victories would mark the end of our intervention and that the parties would, at last, promptly file properly redacted documents as required. But they did not.

In October 2020, after more than three months had passed since the Federal Circuit’s ruling, we discovered Apple had filed a new motion to dismiss against Uniloc. Again, the motion and exhibits were so heavily redacted that it was impossible to know what Apple’s argument for dismissal was. So EFF moved to intervene, challenging Uniloc’s failure to comply with the Federal Circuit’s ruling as well as its new failure to submit proper sealing requests. The district court agreed, and for the fourth time, we won.

That EFF had to intervene underscores the problem of excessive sealing in patent cases between private companies. No matter how much they disagree on other issues, otherwise-warring parties often share an interest in keeping information about the litigation secret. When that happens, both sides are motivated to make excessive requests to seal court records—but not to oppose them. And if there’s no opposition, there’s no guarantee a judge will weigh the request against the public’s right of access. To make sure that weighing happens, EFF often intervenes in patent cases to vindicate the public’s access rights.

In its December 2020 decision, the district court did not mince words, excoriating both parties for their casual attitude toward the public’s right of access. The court emphasized the perils of “collusive oversealing,” which happens in cases such as this where “both parties seek to seal more information than they have any right to and so do not police each other’s indiscretion.” Although Apple did not request secrecy, it had ample opportunity to challenge Uniloc’s sealing requests, but “opted instead to grab its December 4 victory on the standing issue and head for the hills.” Seeing Apple and Uniloc’s mutual interest in secrecy, the court realized that “[w]ithout EFF, the public’s right of access will have no advocate,” and granted our motion for intervention with thanks.

The court then denied all of Uniloc’s sealing requests—including the requests to seal the names of, and amounts paid by, Uniloc’s licensees. In doing so, the court emphasized the public’s right to information about U.S. patents in addition to the right to access court records. As it explained: “a patent is a public grant of rights. . . . The public has every right to account for all its tenants, all its sub-tenants, and (more broadly) anyone holding even a slice of the public grant.” It also emphasized the public’s “interest in inspecting the valuation of the patent rights . . . particularly given secrecy so often plays to the patentee’s advantage in forcing bloated royalties.” We commend the court for recognizing the gravity of the public’s right to—and need for—information about the ownership, licensing, and valuation of U.S. patents.

We hoped this victory would convince Uniloc to admit defeat and change its sealing practices, but it has decided to appeal its loss to the Federal Circuit again. EFF’s fight for access to Uniloc’s licensing secrets will continue. In the meantime, we hope this decision will encourage judges and litigants to enforce the public’s right of access, especially when the adversarial process collapses.

Related Cases: Uniloc v. Apple
