Electronic Frontier Foundation

Victory! Federal Circuit Enables Public to Hear Arguments In Important Patent Case

EFF - Wed, 04/01/2020 - 5:40pm

Just like us, federal judges are continuing to grapple with the challenges of COVID-19 and its impact on their ability to do their jobs. Less than two weeks ago, the U.S. Court of Appeals for the Federal Circuit in Washington, D.C. announced that April’s oral arguments in our case would take place telephonically or not at all. Since that time, the court has cancelled arguments for a substantial number of cases on its calendar, but EFF’s argument on behalf of the public’s right to access court documents in patent cases is among those the Court has scheduled for telephonic argument.

Before the court ruled out in-person argument, EFF had filed a motion asking the court to make video of the oral argument public so that people unable to travel to the Washington, D.C., courtroom could watch it too. The motion for video access was, of course, denied when in-person arguments were cancelled. But today, the Federal Circuit embraced EFF’s push for live access to oral argument, announcing that it will provide “media and public access to the live audio of each panel scheduled for argument during the April 2020 session.”

This is the first time the Federal Circuit has provided the public and press with access to oral argument audio in real time. It will ensure that during the outbreak, the public and press do not altogether lose the ability to access court proceedings as they happen. We commend the Court for taking this crucial step to enhance public access. And we are deeply grateful to the court staff working to make sure that arguments can proceed and be heard by members of the press and public alike.

Whatever challenges lie ahead, courts must ensure that their proceedings remain as accessible to the public as possible. We hope this precedent cements the public’s right to remotely access oral arguments in real time, and paves the way for greater public access in the future.

Related Cases: Uniloc v. Apple

The EARN IT Act Violates the Constitution

EFF - Tue, 03/31/2020 - 7:17pm

Since senators introduced the EARN IT Act (S. 3398) in early March, EFF has called attention to the many ways in which the bill would be a disaster for Internet users’ free speech and security.

We’ve explained how the EARN IT Act could be used to drastically undermine encryption. Although the bill doesn’t use the word “encryption” in its text, it gives government officials like Attorney General William Barr the power to compel online service providers to break encryption or be exposed to potentially crushing legal liability.

The bill also violates the Constitution’s protections for free speech and privacy. As Congress considers the EARN IT Act—which would require online platforms to comply with to-be-determined “best practices” in order to preserve certain protections from criminal and civil liability for user-generated content under Section 230 (47 U.S.C. § 230)—it’s important to highlight the bill’s First and Fourth Amendment problems.

First Amendment

As we explained in a letter to Congress, the EARN IT Act violates the First Amendment in several ways.

1. The bill’s broad categories of “best practices” for online service providers amount to an impermissible regulation of editorial activity protected by the First Amendment.

The bill’s stated purpose is “to prevent, reduce, and respond to the online sexual exploitation of children.” However, it doesn’t directly target child sexual abuse material (CSAM, also referred to as child pornography) or child sex trafficking ads. (CSAM is universally condemned, and there is a broad framework of existing laws that seek to eradicate it, as we explain in the Fourth Amendment section below).

Instead, the bill would allow the government to go much further and regulate how online service providers operate their platforms and manage user-generated content—the very definition of editorial activity in the Internet age. Just as Congress cannot pass a law demanding news media cover specific stories or present the news a certain way, it similarly cannot direct how and whether online platforms host user-generated content.

2. The EARN IT Act’s selective removal of Section 230 immunity creates an unconstitutional condition.

Congress created Section 230 and, therefore, has wide authority to modify or repeal the law without violating the First Amendment (though as a policy matter, we don’t support that). However, the Supreme Court has said that the government may not condition the granting of a governmental privilege on individuals or entities doing things that amount to a violation of their First Amendment rights.

Thus, Congress may not selectively grant Section 230 immunity only to online platforms that comply with “best practices” that interfere with their First Amendment right to make editorial choices regarding their hosting of user-generated content.

3. The EARN IT Act fails strict scrutiny.

The bill seeks to hold online service providers responsible for a particular type of content and the choices they make regarding user-generated content, and so it must satisfy the strictest form of judicial scrutiny.

Although the content the EARN IT Act seeks to regulate is abhorrent and the government’s interest in stopping the creation and distribution of that content is compelling, the First Amendment still requires that the law be narrowly tailored to address those weighty concerns. Yet, given the bill’s broad scope, it will inevitably force online platforms to censor the constitutionally protected speech of their users.

Fourth Amendment

The EARN IT Act violates the Fourth Amendment by turning online platforms into government actors that search users’ accounts without a warrant based on probable cause.

The bill states, “Nothing in this Act or the amendments made by this Act shall be construed to require a provider of an interactive computer service to search, screen, or scan for instances of online child sexual exploitation.” Nevertheless, given the bill’s stated goal to, among other things, “prevent” online child sexual exploitation, it’s likely that the “best practices” will effectively coerce online platforms into proactively scanning users’ accounts for content such as CSAM or child sex trafficking ads.

Contrast this with what happens today: if an online service provider obtains actual knowledge of an apparent or imminent violation of anti-child pornography laws, it’s required to make a report to the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline. NCMEC then forwards actionable reports to the appropriate law enforcement agencies.

Under this current statutory scheme, an influential decision by the U.S. Court of Appeals for the Tenth Circuit, written by then-Judge Neil Gorsuch, held that NCMEC is not simply an agent of the government; it is a government entity established by act of Congress with unique powers and duties that are granted only to the government.

On the other hand, courts have largely rejected arguments that online service providers are agents of the government in this context. That’s because courts have accepted the government’s argument that companies voluntarily scan their own networks for private purposes, namely to ensure that their services stay safe for all users. Thus, courts typically rule that these scans are “private searches” that are not subject to the Fourth Amendment’s warrant requirement. Under this doctrine, NCMEC and law enforcement agencies also do not need a warrant to view users’ account content already searched by the companies.

However, the EARN IT Act’s “best practices” may effectively coerce online platforms into proactively scanning users’ accounts in order to keep the companies’ legal immunity under Section 230. Not only would this result in invasive scans that risk violating all users’ privacy and security, but companies would also arguably become government agents subject to the Fourth Amendment. In analogous cases, courts have found private parties to be government agents when the “government knew of and acquiesced in the intrusive conduct” and “the party performing the search intended to assist law enforcement efforts or to further his own ends.”

Thus, to the extent that online service providers scan users’ accounts to comply with the EARN IT Act, and do so without a probable cause warrant, defendants would have a much stronger argument that these scans violate the Fourth Amendment. Given Congress’ goal of protecting children from online sexual exploitation, it should not risk the suppression of evidence by effectively coercing companies to scan their networks.

Next Steps

Presently, the EARN IT Act has been introduced in the Senate and assigned to the Senate Judiciary Committee, which held a hearing on March 11. The next step is for the committee to consider amendments during a markup proceeding (though given the current state of affairs it’s unclear when that will be). We urge you to contact your members of Congress and ask them to reject the bill.

Take Action

PROTECT OUR SPEECH AND SECURITY ONLINE

EFF to Supreme Court: Losing Your Phone Shouldn’t Mean You Lose Your Fourth Amendment Rights

EFF - Mon, 03/30/2020 - 4:36pm

You probably know the feeling: you reach for your phone only to realize it’s not where you thought it was. Total panic quickly sets in. If you’re like us, you don’t stop in the moment to think about why losing a phone is so scary. But the answer is clear: beyond being an expensive gadget, your phone has all your private stuff on it.

Now imagine that the police find your phone. Should they be able to look through all that private stuff without a warrant? What if they believe you intentionally “abandoned” it? Last week, EFF filed an amicus brief in Small v. United States asking the Supreme Court to take on these questions.

In Small, police pursued a robbery suspect in a high-speed car chase near Baltimore, ending with a dramatic crash through the gates of the NSA’s campus in Fort Meade, Maryland. The suspect left his car, and officers searched the area. They quickly found some apparently discarded clothing, but many hours later they also found a cell phone on the ground, over a hundred feet from the clothing and the car. Despite the intervening time and the distance from the other items, the police believed that the phone also belonged to their suspect. So they looked through it and called one of the stored contacts, who eventually led them to the defendant, Mr. Small.

The Fourth Circuit Court of Appeals upheld this warrantless search of Small’s phone under the Fourth Amendment’s “abandonment doctrine.” This rule says that police don’t need a warrant to search and seize property that is abandoned, as determined by an objective assessment of facts known to the police at the time. Mr. Small filed a petition for certiorari, asking the Supreme Court to review the Fourth Circuit’s decision.

EFF’s brief in support of Small’s petition argues police shouldn’t be able to search a phone they find separated from its owner without a warrant. That’s because phones have an immense storage capacity, allowing people to carry around a comprehensive record of their lives. And if you’ve ever experienced that panicky feeling when you can’t find your phone, you know that, despite their intimate contents, phones are all too easy to lose. Even where someone truly chooses to abandon a phone, such as when they turn in an old phone to upgrade to a new one, they probably don’t intend to abandon any and all data that phone can store or access from the Internet—think of cloud storage, social media accounts, and the many other files accessible from your phone, but not actually located there. As a result, we argue phones are unlike any other object that individuals might carry with them and subsequently lose or even voluntarily abandon. Even when it’s arguable that the owner “abandoned” their cell phone, rather than simply misplacing it, police should be required to get a warrant to search it.

If this reasoning all sounds familiar, it’s because the Supreme Court relied on it in a landmark case involving the warrantless search of phones all the way back in 2014, in Riley v. California. Riley involved the warrantless searches of phones found on suspects during lawful arrests. Even though police can search items in a suspect’s pockets during an arrest to avoid destruction of evidence and identify any danger to the officers, the Court recognized in its opinion that phones are different: “Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans ‘the privacies of life.’”  In a unanimous decision by Chief Justice Roberts, the Court wrote, “Our answer to the question of what police must do before searching a cell phone seized incident to an arrest is accordingly simple — get a warrant.”

Even though the warrant rule in Riley seemed clear and broadly applicable, the lower court in Small ruled it was limited to searches of phones found on suspects during an arrest. That’s not only a misreading of everything the Supreme Court said in Riley about why phones are different from other personal property, but also a bad rule that creates terrible incentives for law enforcement. It encourages warrantless searches of unattended phones, which are especially likely to lead to trawling through irrelevant and sensitive personal information.

Losing a phone is scary enough; we shouldn’t have to worry that it also means the government has free rein to look through it. We hope the Supreme Court agrees, and grants review in Small. A decision on the petition is expected by June.

Related Cases: Riley v. California and United States v. Wurie

Speaking Freely: Sandra Ordoñez

EFF - Mon, 03/30/2020 - 4:09pm

Sandra (Sandy) Ordoñez is dedicated to protecting women being harassed online. Sandra is an experienced community engagement specialist, a proud NYC Latina resident of Sunset Park, Brooklyn, and a recipient of Fundación Carolina’s Hispanic Leadership Award. She is also a long-time diversity and inclusion advocate, with extensive experience incubating and creating FLOSS and Internet Freedom community tools.

These commitments and principles drive Sandra’s work as the co-founder and Director of the Internet Freedom Festival (IFF) at Article19. Even before launching the Internet Freedom Festival, Sandra was helping to grow and diversify the global Internet Freedom community. As the Open Technology Fund’s (OTF) inaugural Director of Community and Outreach, Sandra led the creation of OTF’s Community Lab. Before her time at OTF, Sandra was Head of Communications and Outreach at OpenITP, where she supported the community behind FLOSS anti-surveillance and anti-censorship tools. She also served as the first Communications Director for the Wikimedia Foundation.

As a researcher, Sandra has conducted over 400 expert interviews on the future of journalism, and conducted some of the first research on how Search Engine Optimization (SEO) reinforces stereotypes. She also provides consultation on privacy-respecting community marketing, community building, organizational communication, event management, program design, and digital strategy, all while serving on the boards of the Open Technology Fund, Trollbusters, and Equality Labs.

In recent months Facebook, and others, have proposed the creation of oversight boards to set content moderation policies internationally. In the US, the fight to protect free expression has taken on a new urgency with Senators Graham and Blumenthal introducing the EARN IT Act, a bill that, if enacted, would erode critical free speech protections and create a government commission with the power to codify “best practices,” imposing criminal and civil liability on platforms that fail to meet them. With these committees in mind, I was eager to speak with Sandy about how these proposals would impact communities that are often the most directly affected, and the last consulted.

Nathan "nash" Sheard: What does free speech mean to you?

Oh, that's a good one. Free speech, to me, means the ability to share your thoughts, your analysis of things, your experience as a human being. Your experience can be anything from what you lived, to the challenges that you're facing, to your goals and hopes for the future.

The reason I'm wording it that way is because it really bothers me how free speech is being used recently. Hate speech, for me, is not free speech. I think that's why I'm phrasing it that way because I really think that the idea of free speech is to not censor people. To be able to express ideas and experiences and things like that. But it does not constitute an opportunity to basically hate against others or bring people down.

My partner is a philosophy professor, so I think of it in relation to critical thinking. I think of creating spaces that allow people to be honest and truthful, but towards improving society, not making it worse.

nash: What are your thoughts on responsible ways to navigate concerns around censorship and speech that is hateful or incites harm?

If I had that answer, I think I would be a billionaire right now. I think there's a very clear distinction, when folks are trying to debate, or share information, or when they're attacking another group of people. From my perspective, when speech incites violence or hatred against another group of people, that's not free speech.

Nowadays, because of the context, because of the situation that we're living in, ensuring that speech doesn't lead to violence is really important. I think that a lot of times, cultivating healthy communities, whether it's local advocacy, parents, professional or society in general, it requires not just having these debates about what exactly is free speech, but really about investing more resources and energy in creating tools that allow us to create safe spaces for folks. Once you have a safe space where people feel protected, and there's rules that each community is able to create for themselves, you know what's acceptable and not acceptable.

You can have free speech without hurting others’ rights, so I don't think it's a question of semantics. I think it's a question of shifting our culture to create safer spaces.

nash: What do you think about the control that corporations have over deciding what those parameters look like right now?

I think they're doing a horrible job. In fact, it gets me really angry because a lot of the corporations that are dominating these conversations have more resources than others. So, I think for them, really, they need to have a wakeup call. Communities have to start shifting resources into creating safe, healthy spaces. These corporations have to do that as well. It's kind of like diversity and inclusion, right? Corporations may have diversity and inclusion initiatives but that doesn’t mean they really cause change or really care. Same for other areas. It feels as though the safety of people and community health is always the last thing they consider.

So, I think that if they're going to be leaders. If they're creating these tools, or spaces, that they want so many people to use, they have a social responsibility to make sure that what they're offering is not going to harm society. That it's going to protect society. So, I think it's really about them readjusting where and how they're spending resources. Obviously, it's a complex question like it's a complex problem, but it's not impossible. In fact, it's very, very, possible. But, it requires intent and resources.

nash: Are there steps that we as folks engaged with technology the way we are—and in the technology space with the intent to empower communities and users—should be taking to help reclaim that power for the users and for communities, rather than leaving those decisions to be made within the silos of corporate power?

I mean, more and more people, rightfully so, are pushing for more community-owned infrastructures. Ideas on what that will look like are really diverse, and it's really going to depend on the community. Some folks will advocate for mesh networks, others will advocate for alternatives to Facebook and Twitter.

I really think it's up to the community to decide what that looks like. We need to start brainstorming along with these communities, and finding ways to shift how we've done tech. In the past, a lot of folks kind of had an idea, and then they started working on the idea, and then whoever used that tool, or not, that was the end of it. I think now we really have to—especially if we care about movements and the users right to privacy and security—we really need to start working more hand in hand with not just users, but specific communities, to basically help them and empower them with options of what they can use. And, also, empower them to make decisions for themselves. That's a really hard shift. I feel like in some ways we're going towards that direction, but it's going to take a kind of reprogramming of how we do things. There are so many things baked into that—power structures and how we relate to the world, and how others relate to us. I do think that investing more in brainstorming, in conjunction with communities, of what the possibilities are, is a really good first step.

nash: Some in the platform moderation community are looking to committees that would decide what is acceptable. We should obviously be exploring many different kinds of ideas. Still, I get concerned with the idea that a committee of folks who might exist and move around the world in one context will be making decisions that will affect folks in completely different contexts that they might not be able to relate with. Do you have thoughts on that strategy?

It's a really broad question, and it's hard because I think there are different situations that require different measures. But what I would say is 'localize, localize, localize'. If you have moderators that are looking over content, you have to make sure you have a variety of moderators from different cultural backgrounds, different languages, different socioeconomic backgrounds, so they really understand what's happening.

This problem requires a lot more investment, and understanding that the world is very diverse. You can't just solve it with one thing, so that means investing in the actual communities that you're serving. Using India as an example—India is a country that has multiple languages, multiple groups. That means that your moderation group, or your employees that you hire to do moderation, are probably going to have to be plentiful and just as diverse. The bigger tech companies may realize how much money actually is required to solve the problem, and may not feel ready to do that. But the problem is that while they're questioning what to do, or coming up with what they think is a simple solution to a very complex problem, it's going to impact more and more humans.

nash: I get concerned that we won't be able to move in that direction and not create a scenario where only Facebook or Twitter have the funds to execute these schemes effectively. That we'll set up guidelines that say you must do x or y, and then in doing so we inadvertently lock in Facebook and Twitter as the only platforms that can ever exist, because they're the only ones with the funds to comply effectively.

Like I said, it's a very complex problem. You know, when the Internet first started, it was almost like tons and tons of small communities everywhere that weren't necessarily connected to other folks around the world. Then we moved to this phase where we are now. On these large global platforms like Facebook and Twitter and Instagram where we're connected globally. I can find anybody in the world through Facebook. I think that we're going to start seeing a shift to more local groups again and mesh networks. Or, whatever that may be for your community.

I think a lot of these companies are going to see a decrease in users. Lots of people in my life that don't work in tech are ready. They're just overwhelmed with the use of these platforms because it really has impacted their real human interactions. They're looking for ways to really connect to the communities that they're part of. That doesn't mean that you won't be able to connect to folks you know across the globe, but communities can mean many different things. Communities can mean the people that live in your neighborhood, or it could be colleagues that are interested in the same topic that you're interested in.

The issue is that it's a more complex problem than just Facebook or Twitter, and honestly it really just requires rethinking how we are bringing people together. How are we creating safe spaces? How are we defining community? How do you bring them together? How do you make sure they're okay? It's a rethinking. The Internet's not that old. Right? And so it's not surprising that in like 2020, I think we're in 2020, that we're starting to reconfigure how we actually want that to impact our society.

nash: I really appreciate your thoughtfulness here. Do you have any final words you would like to offer?

This is really an important time for everybody to get involved in their communities. I just see how tired people are. And we really need to build more capacity. So, whatever people can do. If they're interested in supporting an open Internet where people are secure and protected they really really really need to start supporting the folks that are doing work, because people are really really tired and we need to build capacity, not just in existing communities but we have to build capacity where capacity doesn't exist. Going back to what you were saying before about platform accountability. Creating a group is not going to solve it. We need to invest in people and invest in people that can help us shift culture. That's it.

nash: Thank you so much.

Vallejo Must Suspend Cell-Site Simulator Purchase

EFF - Mon, 03/30/2020 - 2:38pm

As Bay Area residents sheltered at home due to the COVID-19 pandemic, the Vallejo City Council assembled via teleconference last week to vote on the purchase of one of the most controversial pieces of surveillance equipment—a cell-site simulator. What’s worse is that the city council approved the purchase in violation of state law regulating the acquisition of such technology. 

Any decision to acquire this technology must happen in broad daylight, not at a time when civic engagement faces the greatest barriers in modern history due to a global pandemic. EFF has submitted a letter to the Vallejo mayor and city council asking the city to suspend the purchase and hold a fresh hearing once the COVID-19 emergency has passed and state and local officials lift the shelter-at-home restrictions. 

A cell-site simulator (also referred to as an IMSI catcher or “Stingray”) pretends to act as a cell tower in order to surveil and locate cellular devices that connect to it. After borrowing such a device from another agency, the Vallejo Police Department argued it needed its own, and proposed spending $766,000 on cell-site simulator devices from KeyW Corporation, along with a vehicle in which police would install the equipment.

As EFF told the council, the privacy and civil liberties concerns around cell-site simulators “have triggered Congressional investigations, high-profile legal challenges, a Federal Communications Commission complaint, and an immense amount of critical media coverage.” To combat secrecy around cell-site simulators, the California legislature passed a law in 2015 that prohibits local government agencies from acquiring cell-site simulators without the local governing body approving a privacy and usage policy that “is consistent with respect for an individual’s privacy and civil liberties.” This policy needs to be available to the public, published online, and voted on during an open hearing.

As Oakland Privacy—a local ally in the Electronic Frontier Alliance—pointed out in its own letter, no such policy was presented or approved at the hearing. EFF further notes that the city council did, however, approve a non-disclosure agreement with the cell-site simulator seller, KeyW Corporation, that could hinder the public’s right to access information.

The Vallejo City Council must follow the law and put the cell-site simulator on the shelf. 

Read EFF’s letter to the Vallejo City Council here. 

Keeping Each Other Safe When Virtually Organizing Mutual Aid

EFF - Mon, 03/30/2020 - 1:01pm

Communities across the country are stepping up to self-organize mutual aid groups, uniting virtually to offer and coordinate support to those who are in need. In solidarity with the need for physical distancing, many people are organizing online using Google spreadsheets, Google forms, public posts on Twitter and Facebook, and private messages on social media platforms. 

There is great beauty and power in this support, but it also puts security concerns in the spotlight: overlooked privacy settings and overbroad collection of personal data can lead to the unintended disclosure of private information that can be used to harm the very people seeking help. Though these mediums may seem equally effective at connecting people in need with people who have resources, their privacy and security implications vary widely.

At EFF, we’ve been approached by U.S.-based mutual aid organizers to provide guidance on digital security and privacy considerations for organizers and volunteers, to better protect the communities they work to support. Our hope with this blog post is to provide considerations for those organizing mutual aid efforts, collecting and storing information, and connecting people in need with people who want to help. However, we’ve also included some short lists of questions at the bottom of this post for anyone interested in contributing to, benefiting from, or aggregating information about mutual aid efforts. If you're interested in learning more, keep reading. Our recommendations are below, followed by a detailed walkthrough of digital security considerations for mutual aid organizers.

Here are some security considerations for organizers to keep in mind, each of which we go into in depth in its own section of this post: defining your intended (and unintended) audience, collecting as little data as possible, being mindful of permissions and transparent about access, using encryption, and thinking about which companies, people, and systems you trust with your community’s data.

These are all issues that organizers should think through when designing these efforts, and that participants should feel empowered to ask organizers about. The information shared in these efforts can be sensitive, and a prime target for potential phishing attempts. It’s important that everyone involved in these efforts understands what the risks are and how to minimize them through thoughtful data collection.

Why Data Security Matters When Organizing Mutual Aid

To make these considerations a little more relatable, throughout this post we’ll imagine the journey a mutual aid organizer, Layla, might take. Layla recognizes it’s urgent to set up an effort to connect people who need financial support to helpers with resources. She decides to set up a website with a corresponding easily viewable document for people to share and promote their needs, and to provide a way to connect further.

But, in doing so, Layla has determined that she wants to protect her community’s sensitive data from people with bad intentions. Personal data can be misused in a variety of ways, and there are, unfortunately, a lot of people who want to take advantage of others’ vulnerability during these uncertain and stressful times. These are just a few:

Phishing: In learning very specific information about people’s circumstances—such as their emails, Venmo or banking information, their real names, their addresses, the circumstances of their asking for aid, their health information, and their stories—bad actors can scam the very people seeking help. In particular, malicious people take advantage of finding as much information as they can about their targets to make a more realistic-sounding scam.

Layla will need to think through how to limit how visible this information is, and ensure she is only collecting sensitive data if it’s absolutely necessary.

Doxxing vulnerable groups and facilitating targeted harassment: Private information about someone’s livelihood, workplace, and home address can be published with the intention of harassing them. This harassment can be digital, financial, and physical. Digital harassment usually takes the form of abusive comments and behavior online. Financial harassment might mean using this information for fraudulent billing. In other cases, attackers have spammed Venmo requests until the user accidentally accepted. Physical harassment can range from stalking to the practice of prank calling the police so they swarm the victim’s address (“swatting”). Even under normal circumstances, these activities can endanger someone's livelihood or safety. They can be even more detrimental for people who are already marginalized or are particularly affected by current events. 

As Layla is supporting a community at risk of their private information being used for targeted harassment, she needs to think through how to protect this information from getting in the hands of bad actors.

Government collection: Many governments collect information about citizens at scale. At its most harmful, this collection and sharing of data between government agencies can put already targeted communities even more at risk, especially when someone might already be surveilled (because of their immigration status, sexuality, gender, health, financial insecurity, faith, ethnicity, or political affiliation).

In Layla’s case, she especially worries about Immigration and Customs Enforcement further targeting people in her community, and does not want to collect information that could be misused to facilitate raids.

Selling of data: Companies big and small are constantly scraping the web for information about individuals that they can aggregate and sell, such as is done for third-party tracking.  

Layla’s community members are worried that sharing their information might mean that they wind up on more telemarketing lists, or that multiple companies that they may not know or recognize begin to track them.

These are all hard problems with terrible consequences: an organizer might determine that they are willing to take substantial precautions to prevent these bad outcomes, using the principles we outline below.

With all of these potential threats in mind, Layla knows she wants to protect the submitted data, and as someone from a targeted community, she recognizes that the data she is collecting is very sensitive. Using a security plan (or “threat modeling”) framework, we can brainstorm through the following questions with Layla:

  1.     What do you want to protect?
  2.     Who do you want to protect it from?
  3.     How bad are the consequences if you fail?
  4.     How likely are these threats?
  5.     How much trouble are you willing to go through to try to prevent potential consequences?

The following are considerations that can help Layla and those like her answer these questions.

Define Your Intended (And Unintended) Audience

In thinking through questions around building your community's security plan, it can be helpful to define the goals of your effort and the scope of your initiative.

Who are you trying to reach? Is this effort for a neighborhood community (a group of 20 neighbors who know each other), a local community (people within a township or county, up to hundreds), or larger? Each of these sizes calls for a different security plan.

What can you clearly communicate to people participating in your effort? Plan to establish expectations at the outset—not just for people asking for and giving help, but for external parties that may wish to amplify your effort. Currently, there’s a large trend of aggregators cross-linking to other mutual aid efforts, and there’s a chance that an effort you intended to be more closed off may get more visibility than you expected. Be clear about structuring your asks for this community, and think about how you can make the process transparent to someone just joining the effort: when they submit, how many days should they expect to wait for a response? What happens if a request is fulfilled or unfulfilled? What happens to this document and the data within it?

How much data do you need to organize that aid? Different audiences may require different levels of data collection. Which brings us to our next point.

Collect As Little Data As Possible 

Connecting people for mutual aid requires you to share some information about the participants. But it’s important to be mindful of the sensitivity of certain types of data—especially regarding a person’s medical history, location, and identity. Collecting as little data as possible to accomplish your goals helps lower the risk that bad actors will acquire enough information to do harm to those who provided that data. 

Certain types of identifying information may be less risky for a community to share than others. Layla, for example, may know that some people in her community worry about exposing their phone numbers publicly, and so opts to only include an email address field in her form. A first name and email address allow her to identify her participants, so she also decides she doesn’t need to store their last names. She might also encourage her community to use email addresses that do not include their first and last names.  Now, if the data were to fall into a bad actor’s hands, they would have a harder time uniquely identifying each participant. 

Thinking about how long you need to keep information is also important. Deleting information that you no longer need is a great safety measure. Some organizers use documents such as spreadsheets to organize one-time efforts where they don't need to keep the data forever. 
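To make this concrete, here is a minimal, illustrative sketch in Python of the kind of allow-list filtering and retention check an organizer like Layla might run on form submissions before storing them. The field names and the 30-day cutoff are hypothetical assumptions for this example, not part of any particular form tool:

    from datetime import datetime, timedelta
    from typing import Optional

    # Hypothetical field names: collect only what the effort actually needs.
    ALLOWED_FIELDS = {"first_name", "email", "request"}
    RETENTION_DAYS = 30  # example cutoff for a one-time effort

    def minimize(submission: dict) -> dict:
        """Drop any field that was not deliberately chosen for collection."""
        return {key: value for key, value in submission.items() if key in ALLOWED_FIELDS}

    def should_delete(received_at: datetime, now: Optional[datetime] = None) -> bool:
        """Flag records that have outlived their purpose so they can be deleted."""
        now = now or datetime.utcnow()
        return now - received_at > timedelta(days=RETENTION_DAYS)

    # A form backend may hand the organizer more than was asked for.
    raw = {
        "first_name": "Layla",
        "last_name": "Example",   # not needed to coordinate aid
        "phone": "555-0100",      # not needed if email is enough
        "email": "layla@example.org",
        "request": "groceries",
    }
    print(minimize(raw))
    # {'first_name': 'Layla', 'email': 'layla@example.org', 'request': 'groceries'}

The same allow-list idea applies whether submissions arrive through a web form, a shared spreadsheet, or email: anything outside the list is never stored, and every stored record carries a date so it can be deleted once the effort wraps up.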

Since your community may have different needs and concerns, here are some questions you might ask to ensure you’re only collecting what’s strictly necessary:

  • What types of information do you need to accomplish your goal?
  • Are there redundancies in the data you’re asking for? If so, can you remove some of those fields?
  • Which types of data are the most sensitive to your community? Can you ask for a different, less sensitive alternative piece of information, and still achieve your goal?
  • At what point can you delete this spreadsheet and the submitted data?

Be Mindful of Permissions, And Transparent About Access

Within a service like Venmo, Facebook posts, or Google Sheets, users can limit visibility by adjusting settings.

For example, people using Venmo might be surprised that all their transactions are public by default. Users can adjust the settings for their transactions to Private, to be visible to the sender and receiver only; however, Venmo always makes the record of who you are interacting with publicly visible. Google products, like Docs and Sheets, can be made private to be only visible to invited email addresses within a trusted community. Facebook posts can be made more private by limiting visibility to certain friends or communities. 

However, permissions and access considerations go beyond individual tools, and organizers need to think them through from the beginning. For example, instead of using a large Google Sheets document that’s publicly accessible, visible, and editable by anyone, Layla might consider using a Google Form to have her community submit requests for aid and offers to volunteer. Layla might be comfortable with the minor trade-off that a Google Form requires a few trusted people to vet requests, and she might choose to communicate that process clearly with her community members. 

Or perhaps Layla decides to act as matchmaker only—connecting those offering services and those requesting help—by introducing them over email, and encouraging them to use an end-to-end encrypted tool to communicate further details.

Encrypt All The Things 

There are many types of encryption, and it’s helpful to get familiar with those that are relevant to your mutual aid effort. EFF spends a lot of time writing about the vast benefits of encryption. You can read a more thorough summary on types of encryption at our beginner-friendly educational resource, Surveillance Self-Defense.

When selecting a method to facilitate communication, it’s helpful to think through who can see what data, and how that data is stored and protected. When accessing a service through the Internet, your traffic (and all its submitted content—“data”—and information about the content—“metadata”) is passed through multiple devices controlled by multiple entities before arriving at the intended destination device. It can be very distressing to learn that information that was intended for one person was in fact visible to many people. 

The diagram shows unencrypted data in transit—which is often the default setting for Internet service providers. On the left, a smartphone sends a green, unencrypted message to another smartphone on the far right. Along the way, a cellphone tower passes the message along to company servers and then to another cellphone tower, which can each see the unencrypted “Hello” message. All computers and networks passing the unencrypted message are able to see the message. At the end, the other smartphone receives the unencrypted “Hello” message.

One thing to think about is how the data is moving in transit: how are people sharing the information, how are they communicating their needs and services, how are they contacting each other? And how can you make it as safe as possible?

In general, end-to-end encryption is the best option available for keeping communications data between just the sender and the recipient, as it encrypts between the users’ “end” devices. Examples of end-to-end encrypted messaging tools include Signal, WhatsApp, and Keybase. However, before joining an end-to-end encrypted service, the community needs to hear about the mutual aid effort in the first place, and they might first learn about it through a website.

Which brings us to our next point: be wary of services and websites that aren’t encrypted. For example, if a service uses just HTTP (and not HTTPS) to collect information submitted from a form, the sensitive data submitted is not encrypted in transit.

If you’re someone who is running a website, like Layla, you can get a free HTTPS certificate through Let’s Encrypt. Check out this list of web hosts that provide HTTPS certificates to see how to get a free Let’s Encrypt certificate and provide basic security for your users.  

For those hoping for assistance from a mutual aid effort, be wary of services that don’t use encryption. Know that mutual aid efforts that encourage you to send very personal information over HTTP offer no protection: anyone from your Internet Service Provider to someone passively monitoring your network or the website provider’s network can access the data that is submitted. Instead of HTTP, look for services using HTTPS, which means the data is protected by transport-layer encryption.
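As a quick, illustrative check, a participant (or an organizer vetting links before sharing them) can confirm that a form's address uses HTTPS before sending anything sensitive. This small Python sketch uses made-up URLs; it only verifies that transport-layer encryption will be used, not that the site operator is trustworthy:

    from urllib.parse import urlparse

    def uses_https(form_url: str) -> bool:
        """Return True only when the submission address uses HTTPS."""
        return urlparse(form_url).scheme.lower() == "https"

    # Hypothetical addresses for illustration.
    print(uses_https("https://aid.example.org/request-form"))  # True: data is encrypted in transit
    print(uses_https("http://aid.example.org/request-form"))   # False: don't submit personal details here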

Thinking About Trust And The Sensitivity of Your Community’s Data

The good news is that most services on the web use HTTPS to protect that data in transit. However, this doesn’t necessarily mean that the service deserves your trust. Is it someone who you know personally, running their own website for mutual aid efforts? Do you trust them to protect the data being submitted? Or is it a large company, like Google, Facebook, or Twitter? Does the company provide different settings for documents and posts, such as “public,” “private,” or restricted to a small group?

In particular, ask yourself the following questions:

  • How sensitive is the data that you’re collecting on this platform?
  • Do you trust in the security capacity of the service provider?
  • Do you trust they'd handle your community’s data responsibly?
  • What do you do if you don't trust them?

For some people’s security plans, knowing that a large company like Google or Facebook can see all their communications within the platform is an acceptable risk—for others, this may be completely inappropriate for their community and would violate trust. Those people may instead choose to go with a more privacy-protecting product or to use an end-to-end encrypted service. For more detail on how to consider a service, check out these questions for assessing a vendor’s data security.

Regardless, organizers will want to think about how to facilitate communication outside of a company’s service and view. That is, moving from transport-layer encryption like HTTPS, where the company or website provider can see communications happening on the platform, to an end-to-end encrypted service, where communications are visible only to the intended sender and intended recipient.

The top diagram demonstrates transport-layer encryption, where a company’s devices in the middle can decrypt messages exchanged between users; the bottom diagram demonstrates end-to-end encryption, where the decrypted message is visible only to the end devices and not to the devices providing the service.

Layla might encourage her community to use a tool like Signal or WhatsApp to communicate more details of their story, as she has determined that she doesn’t need to collect nor know this private information. 

Other Things To Consider

 As Layla’s organizing effort gains traction, she may consider cross-linking to other mutual aid organizing efforts to amplify their work. However, each organizing effort has different security plans, and may have different levels of comfort with publicity, or with being cross-linked as a national network of mutual aid efforts. For folks creating these aggregating documents, it’s a good practice to ask each of these organizers individually if they’re okay with their effort being amplified.

Additionally, aggregators may want to consider the differences between the types of information a mutual aid organizer publishes. It may range from the very sensitive (information about community members and their requests and offers for help) to the less sensitive, such as government financial assistance programs, hospital calls to donate Personal Protective Equipment, restaurants offering takeout, and store hours for people with disabilities and the elderly.

For those aggregating and compiling mutual aid efforts, think through:

  • Why are you aggregating? What is your goal?
  • What different kinds of data or information are you amplifying? Do they need different privacy considerations?
  • What information do you actually need for your data aggregation to be useful to people?
  • Before linking to smaller data sources, can you communicate with the spreadsheet organizers? It’s helpful to get consent from the mutual aid organizers you are referencing, as they may not have intended for their work to be viewed beyond their communities.

 It’s incredible what mutual aid organizers have been able to accomplish in such a short span of time, especially in a time of such upheaval. Sites aggregating hundreds of local community resources have cropped up, connecting and supporting people in ways that may prove to be life-saving during this crisis. It’s more important than ever to ensure that mutual aid efforts are protective of the people they’re serving. Working security planning processes into your organizing is one way to make sure you’ve got the bases covered for you and your community.

* * *

Participating in Mutual Aid? Keep the Following in Mind

Collecting and sharing information

For those organizing mutual aid, collecting information from individuals, and creating solutions to connect people:

  • Define your audience
    • Who are you trying to reach? What expectations can your audience have about what’s needed from them, how they’ll receive updates, and the visibility of their data? Who shouldn’t have access to this information?
  • Collect as little data as possible
    • What minimum data do you need to accomplish your goal? Which types of data are the most sensitive to your community? Can you ask for alternative types of data instead?
  • Be mindful of permissions, and restrict access where possible
    • Do you need public access to your data? If not, can you restrict permissions to a smaller subset of your community?
  • Use encryption in transit and at rest
    • For the service or platform you’re using, who can see what data? Is your data protected when it’s sent or stored?
  • Think about which companies, people, and systems you’re trusting with this sensitive data
    • Can you suggest more secure channels for following up with more detailed information?
    • Can you connect participants through end-to-end encrypted platforms? End-to-end encryption helps keep communications between the intended sender and the intended recipient. Some examples are Signal, WhatsApp, and Keybase.

For those aggregating and compiling mutual aid efforts, think through:

  • Why are you aggregating? What is your goal?
  • What different kinds of data or information are you amplifying? Do they need different privacy considerations?
  • What information do you actually need for your data aggregation to be useful to people?
  • Before linking to smaller data sources, can you communicate with the spreadsheet organizers? It’s helpful to get consent from the mutual aid organizers you are referencing, as they may not have intended for their work to be viewed beyond their communities.

Using and contributing to mutual aid services

For those using and contributing to these mutual aid services, check for clear communication from the organizer about:

  • What expectations are for participation in this mutual aid effort
  • Which information is necessary or not necessary to participate
  • Whether the platform (website form, spreadsheet, or other method) is using encryption, and ensure that it is at least using HTTPS
  • How publicly visible the data is, and how much organizers can see versus the general public
  • Where the data will be stored, and for how long
  • Whether there are end-to-end encrypted communication tools for connecting with participants further around sensitive details, and how to separate those details from a more widely-viewed platform

Additional considerations for people participating in mutual aid efforts are:

  • Know your risks: can you communicate these concerns with the organizers and talk through the steps they are taking to mitigate them?
  • Be wary of potential phishing attempts relating to the information provided.
  • Consider what you can omit: Do you need to give out your real name, or other identifying information such as your phone number or home address?  If your email includes your real name, can you use a different email that’s less connected to your identity?

We’d like to thank Sherry Wong, Rocket Lee, Mona Wang and Martin Shelton for their guidance. 

Government Needs Critics—Now More Than Ever

EFF - Mon, 03/30/2020 - 12:11pm

In late December, only a few hundred people knew of COVID-19. Now it’s March—just 90 days later—and much of the world has had to learn about, adapt to, and respond to the deadly disease. Though the highly contagious virus seems impossible to ignore today, it’s in part thanks to whistleblowers and critics around the world sharing warnings and information that some governments responded to the pandemic when they did.

But even now, different governments are handling the crisis in a spectrum of ways: within the U.S., individual states have taken extraordinarily diverse approaches to controlling its spread, some nearly dismissing it, others implementing strict quarantine measures. And rather than highlighting the need for increased transparency, some governments are using this as an opportunity to curb freedom of the press, limiting what can be reported, or putting out competing stories meant to shift the narrative away from the dangers of the disease or criticisms of their official response. 

It’s rarely been more important for individuals to be able to speak out and share information with one another online than in this moment. In a crisis—especially under authoritarian regimes, and in the absence of a trustworthy press—free expression is critical for people to engage with one another. Under governments that dismiss or distort scientific data, it may even be life saving. 

Governments Misuse Crisis to Crack Down and Expand Laws

But as individuals comment on how officials are handling the situation—either to praise, critique, or ask questions—and people share potentially critical experiences and information with one another, some countries are using the opportunity to crack down dangerously on free speech. 

Dr. Li Wenliang, the Chinese physician who warned colleagues of the deadly and contagious new virus in late December via a private WeChat message, has gained notoriety as a whistleblower. He was quickly accused by government officials of illegally posting rumors online when screenshots from his private messages were shared on public forums, which became the first place many heard of the virus. Li signed a police statement agreeing to cease spreading misinformation, and a few weeks later he passed away due to complications from having contracted coronavirus himself. 

Li’s warnings likely saved lives: his colleagues shared his message, which helped force officials into action. He has since been called a hero, and authorities have admitted to mishandling his case. But Li was not alone: seven other Chinese medical professionals blew the whistle on the coronavirus early on.

Since then, hundreds more have been arrested by the “Internet police” in China for commenting about the situation online. Others have been arrested around the world for posting comments about the virus or for protesting government reactions to it. The governments of Cambodia, Malaysia, Palestine, Thailand, and Indonesia have all arrested individuals for spreading “misinformation.” Many of those arrested were activists.

Singaporean officials are using the outbreak to justify legislation which gives new powers to limit “fake news” far beyond the scope of potential dangers to public health. In Morocco, individuals have been arrested for critiquing restrictions on public gatherings, and government officials have used this crisis to push forward new cyber crime laws limiting online speech. Egyptian police have arrested protestors for demanding the release of prisoners jailed dangerously close together in overcrowded cells, including the family of free software developer and activist Alaa Abd El Fattah. And Egyptian officials have removed at least one journalist from the country for reporting on a study critical of the “official” number of cases.

Healthy Societies Require More Than One Voice

Though some regimes are taking this moment as an opportunity to censor and even jail individuals for their opposition, it's heartening that there are also a number of stories of people coming together in innovative ways to aid one another, often in lieu of official government assistance. But in the rush to take in all the information available about the virus, often shared by individuals, not governments or the press, we can't lose sight of how countries may be building frameworks that cement in place what does or does not qualify as “information” or “misinformation.” And while there’s an important flurry of legislative activity to protect people affected by this crisis, it’s important to remember that laws or regulations instituted now could be used to censor and overcorrect accurate, useful speech—sometimes the speech of those working together to help one another survive.

It’s clearer when you look to Dr. Li. In this crisis, the stories of individuals coming together to aid one another often intersect with those being arrested or charged for protest or misinformation. Time has saved lives, helping us slow the spread of the disease through quarantines, testing, and simple public health notices about hand washing and keeping six feet apart from others. And throughout the crisis, it’s frequently been those most at risk of retaliation—whistleblowers and government critics—who have given us that much-needed time by sounding the alarm.

Human rights workers, free expression advocates, bloggers, software developers, and activists are all in danger when government uses leeway obtained during a crisis to curtail free expression far beyond what’s required. Governments must not take advantage of the COVID-19 pandemic to justify new limitations on speech. And they must not use this crisis as a tool to set in place new restrictions or regulations on whistleblowers, activists, or others who are sharing information. 

“I think a healthy society should not have just one voice,” Dr. Li told journalists shortly before his death, which sparked calls across the country for an end to restrictions on freedom of speech. In this moment, when the Internet has helped millions come together through quarantines and other difficult measures, laws restricting freedom of expression must not be expanded. Even dissenting voices are critical when the literal health of millions is at stake.

EFF Joins Locast Defense Team to Fight for TV Viewers’ Right to Use Free, Legal Streaming Service

EFF - Mon, 03/30/2020 - 12:00pm
Giant Broadcasters Abuse Copyright Laws to Go After Nonprofit

San Francisco—The Electronic Frontier Foundation (EFF) today joined the legal team defending Sports Fans Coalition NY, Inc. (SFCNY), the nonprofit organization that runs Locast, a free, local TV streaming service facing bogus copyright infringement claims by broadcast giants ABC, CBS, NBC, and Fox.

Locast enables TV viewers to receive local over-the-air programming—which broadcasters must by law make available for free—using set-top boxes, smartphones, or other devices of their choice. Locast is available in 17 metro areas and has more than one million users, including people who can’t get local channels through an antenna or can't afford a pay-TV subscription.

The four broadcast giants filed suit against Locast last year, a year and a half after Locast launched, claiming it violates their copyrights in programming. But Locast is protected by an exemption to copyright law, put in place by Congress, that allows nonprofits to retransmit broadcast TV so communities can access independent, local stations offering news, foreign-language programming, and local sports. There’s no infringement when a nonprofit retransmits copyrighted broadcasts noncommercially, using donations to cover its costs.

“Broadcast TV is a vital source of local news and cultural programming for millions of people, which matters now more than ever because of COVID-19,” said EFF Senior Staff Attorney Mitch Stoltz. “But some broadcasters want to use copyright law to control when, where, and how people can receive their local TV broadcasts, and force people to buy expensive pay-TV services just to get their local news and sports.”

EFF joins the case as co-counsel alongside law firm Orrick, Herrington & Sutcliffe. EFF has a long history fighting copyright abuse and defending innovation that benefits the public. Broadcast giants, which already reap billions from charging users for programming, are attempting to use their copyrights to maintain market power and force consumers to pay for programming that’s supposed to be free.

“EFF has worked for many years to defend people’s right to access and use content with the devices and technologies of their choice,” said EFF Legal Director Corynne McSherry. “Defending Locast’s ability to stream local TV broadcasts using the Copyright Act’s nonprofit provision is part of that goal.”

“I am grateful beyond words to EFF for representing our non-profit and the consumers who rely on Locast,” said SFCNY Chairman and Locast founder David Goodfriend.  “Especially during the COVID-19 crisis, when Americans need emergency news and information from their local broadcasters, and when so many of our fellow Americans are suffering economically, Locast provides a critical public service.”


Contact: Mitch Stoltz, Senior Staff Attorney, mitch@eff.org; Corynne McSherry, Legal Director, corynne@eff.org

EFF Asks California AG to Close Loopholes, Respect "Do Not Track" With Regulations

EFF - Fri, 03/27/2020 - 7:30pm

Today, EFF once again joined a coalition of privacy advocates filing comments with the California Attorney General (AG) on the latest proposed regulations for the California Consumer Privacy Act (CCPA). The CCPA was passed in June 2018 and took effect on January 1, 2020. Later this year, the AG will finalize regulations that dictate how exactly the law will be enforced.

While the first set of proposed regulations was (as we wrote at the time) a “good step forward” that could have gone further, the first revision to those regulations—published earlier this year—was largely a step backwards for privacy. Two weeks ago, the AG released a second set of revisions to the draft regulations, available here [.pdf]. With the enforcement deadline approaching, the public is running out of chances to weigh in on the rulemaking process. Some of the worst features of the regulations have been cut, but this round of modifications still falls short of a user-friendly implementation of the CCPA. In fact, some new provisions added this round threaten to undermine the intent of the law.

For example, the CCPA sets aside a special set of companies, called “service providers,” which are exempt from certain parts of the law. Consumers can’t opt out of having their data sold to service providers in some interactions. In exchange, CCPA is meant to tightly restrict the ways service providers can use data they collect. However, the new draft regulations would greatly expand the ways service providers may use personal data, even allowing them to build profiles of individual people. The new regulations would also allow data brokers that collect information directly from consumers to avoid notifying them of the collection.

Other issues remain from earlier drafts. The latest draft still makes it hard for consumers to exercise their right to opt out of the sale of their personal information. Businesses may not need to treat clear signals like Do Not Track (DNT) as requests to opt out of sale.
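
To make concrete what honoring such a signal could look like, here is a minimal sketch; the record_opt_out_of_sale helper is hypothetical, standing in for whatever mechanism a business actually uses to stop sharing a user's data with third parties:

```python
# Minimal sketch of honoring the browser's Do Not Track signal as a CCPA opt-out.
# record_opt_out_of_sale() is a hypothetical helper, not part of any real API.

def record_opt_out_of_sale(user_id: str) -> None:
    print(f"User {user_id} has opted out of the sale of their personal information")

def apply_privacy_signals(headers: dict, user_id: str) -> None:
    # Browsers with Do Not Track enabled send the header "DNT: 1".
    if headers.get("DNT", headers.get("dnt")) == "1":
        # A user-friendly reading of the CCPA would treat this global signal
        # as a request to opt out of the sale of personal information.
        record_opt_out_of_sale(user_id)

# Example with a simulated request:
apply_privacy_signals({"DNT": "1", "User-Agent": "ExampleBrowser/1.0"}, user_id="abc123")
```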

Finally, some industry advocates have asked the AG to extend the enforcement deadline—by 6 months or more—amid the global health crisis. But the CCPA went into effect on January 1st, 2020, more than 18 months after its passage, and companies should already be complying with the law. Now, more than ever, consumers need the legal protections offered by CCPA. The AG should not extend the enforcement deadline on behalf of companies who would violate user privacy and the law.

Our coalition letter goes into more detail about these and other issues we have identified with the latest draft regulations. We urge the Attorney General to close business-friendly loopholes and make the CCPA an effective, enforceable tool for user privacy.

Read the coalition's full comments below.

Members of Congress Once Again Urge ICANN to Save Dot Org

EFF - Fri, 03/27/2020 - 6:44pm

As the proposed sale of the .ORG domain registry to private equity firm Ethos Capital plays out, we see more and more why this sale was rushed through: the longer we have to look at it, the more questions we all have, and the fewer answers we get. For the second time, some of the people questioning the wisdom of this sale are members of the U.S. Congress.

On March 18, Senators Elizabeth Warren, Ron Wyden, Richard Blumenthal, Edward Markey, and Representative Anna Eshoo sent a new letter [.pdf]  to the Internet Corporation for Assigned Names and Numbers (ICANN), urging, for the second time, that ICANN reject the “private equity takeover of the .ORG registry.”

The members of Congress pointed out that their previous questions have still not received satisfactory answers from ICANN, Public Interest Registry (PIR, the currently nonprofit entity that runs .ORG and that would be converted to a for-profit if this sale goes forward), and Ethos Capital. What we do know is that, while PIR claimed that ICANN’s review of the deal is limited to whether the sale will keep .ORG “secure, reliable, and stable,” ICANN itself said, “This is wrong.” ICANN can, in fact, consider the impact of the sale on the “public interest and the interest of the .ORG community.”

Take Action: Stand up for .ORG

Of course, the sale is against the interest of the .ORG community. More than 25,000 people and 858 organizations have signed a letter demanding a stop to the sale. The impact on the public interest is proved by, among other things, the weakness of the “stewardship council” that Ethos claims will prevent them from harming the nonprofit community. Among other problems, PIR has reserved for itself the ability to ignore the council, making its existence basically moot.

The deal loses even on PIR’s preferred home court of the security, reliability, and stability of the .ORG domain registry. Ethos Capital and PIR claim that the benefit of converting PIR from a nonprofit to a for-profit is that it will allow them to take “risks” and develop new “products and services.” The members of Congress point out that in a webinar held last month, neither PIR’s CEO nor Ethos’s CEO could give “a single, clear example of a useful new product or service that would be offered in exchange for the private equity-funded takeover of the .ORG domain, or an explanation of how .ORG being operated by a company that is ‘in the business of taking risks’ would be in the public interest.”

Based on EFF’s and Americans for Financial Reform Education Fund’s analysis of what is publicly known about this deal, the outcome can only be a PIR burdened with debt: debt that will likely be paid off by reducing investments in technical upkeep, which could hurt the reliability, security, and stability of the domain; by charging nonprofits higher fees under a new rule that allows PIR to raise prices by up to 10% every year, letting PIR and Ethos Capital roughly double the registration fee within seven years; and by offering unspecified “new products and services” that could harm the interests of nonprofits in .ORG.
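
A little arithmetic shows how a 10% annual cap compounds toward that doubling. This is a back-of-the-envelope sketch; the starting fee below is an illustrative assumption, not PIR’s actual price schedule:

```python
# Back-of-the-envelope sketch: how a 10%-per-year price cap compounds.
# The $9.93 starting fee is an illustrative assumption, not PIR's actual schedule.
base_fee = 9.93            # assumed .ORG wholesale registration fee, in dollars
max_annual_increase = 0.10 # maximum increase allowed each year under the new rule

fee = base_fee
for year in range(1, 8):
    fee *= 1 + max_annual_increase
    print(f"Year {year}: ${fee:.2f}")

# 1.10 ** 7 is roughly 1.95, so raising prices by the maximum every year
# nearly doubles the registration fee within seven years.
```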

Ethos Capital and PIR have tried to use Public Interest Commitments (PICs) to make the square peg of this deal fit into the round hole of what the .ORG community wants and needs. One PIC concerns registration fees, but it doesn’t address any other burden PIR could place on .ORG registrants, many of whom rely on their .ORG addresses, have spent years making them safe and reliable destinations for people seeking information and help from a nonprofit, and are therefore incredibly reluctant to give them up. As the letter from the members of Congress states, “we remain concerned that, if the sale is approved, Ethos can and will impose unlimited additional fees on registrants or registrars, which would not be addressed by the PIC’s price limit on registration fees.”

It’s incredibly important that people looking for help from nonprofits are able to go to the established, stable website to ask for it. In emergencies, people looking for help or reliable information are under extreme stress and need to get to the exact organization they are seeking, without interruption. And if someone is looking to donate to nonprofits providing vital services, it’s equally important they give their money or other gift to the right place.

The important work of the .ORG community should not be interrupted by anything, and certainly not a sale which will wring money out of that community while risking the reliability and stability it needs.

Now More Than Ever, Prisoners Should Have Some Access to Social Media

EFF - Fri, 03/27/2020 - 4:00pm

COVID-19 has trapped many of us in our homes, isolating us from family and friends and limiting our movements. But there are few people who feel the isolating impacts of COVID-19 more acutely than those who are actually incarcerated in jails and prisons across the country. As Jerry Metcalf, an inmate in Michigan, wrote for the Marshall Project’s “Life on the Inside” series:

For those of you reading this who feel trapped or are going stir-crazy due to your coronavirus-induced confinement, the best advice I can give you—as someone used to suffering in long-term confinement—is to take a pause, inhale a few deep breaths, then look around at all the things you have to be grateful for.

Metcalf’s is an important perspective to have, but, unfortunately, it is increasingly difficult to hear from inmates like him. That's because prison systems are making it harder for the public to hear from incarcerated people through excessive restrictions on the ways prisoners can express themselves over the Internet.

As the pandemic unfolds, state agencies should take a flexible approach to enforcement of restrictions on inmates’ ability to connect with the outside world.

It’s especially important to hear from Metcalf, and others like him, in this moment, given the heightened risk COVID-19 poses to inmates. The virus has already demonstrated an ability to move swiftly through closed spaces, like cruise ships and nursing homes—and it’s already made its way into several prison systems, the consequences of which we’ll sadly see unfold over the next several weeks. As Metcalf described it, COVID-19 has turned his prison into a “death trap.” Given the potential humanitarian crisis many prisoners now face, it’s critically important to receive unvarnished reports from them about life inside prison walls.

For those outside of prison, social media has been an important tool during the pandemic—helping us connect with family and friends, to share updates and news, and to stay informed.

But, overwhelmingly, the incarcerated cannot connect to the outside world in this way.   

At EFF, we’ve long been concerned with government attempts to unduly limit prisoners’ speech—especially by limiting access to technology that would allow the incarcerated to lift their voices beyond the prison walls. These restrictions come in a variety of forms, but one type we’ve paid particular attention to in the past is limitations on access to social media.

Many states prohibit inmates from accessing or posting information to social media in any manner. Some states, like Alabama and Iowa [.pdf], go so far as to limit the ability of third parties outside of prison—like a friend or relative—to post information to social media on an inmate’s behalf. Some of these policies can even extend beyond what we typically think of as social media, prohibiting access to email or even any online publication of prisoners’ speech (including, as a potential example, stories like Metcalf’s published by the Marshall Project). Violations can carry extreme and disproportionate consequences. For example, some inmates in South Carolina received years in solitary confinement for posting on Facebook while in prison.

Even in calmer times, draconian limitations on social media access are dangerous and raise serious First Amendment concerns. Prisoners, and those who support them, use social media to raise awareness about prison conditions; to garner support for court cases or clemency proceedings; and to otherwise advocate for important social and political issues.

As we’ve said before, invoking the immortal words of Martin Luther King, Jr., whose writings from jail changed the course of civil rights in America:

Inmates may lose many liberties when they enter the correction system, but the ability to participate in debate online should not be one of them. Censorship of prisoners is also censorship of society at large because it deprives the public of the freedom to read the long letters, consider the long thoughts, and hear the long prayers of people who have lost their freedom. 

The need to hear these voices is particularly important now, as prisons close to outside visitors and further isolate inmates in an attempt to stave off COVID-19. Jerry Metcalf’s perspective—from inside a Michigan prison in the midst of a global pandemic—is equally important whether it’s published by the Marshall Project or shared by a relative in a Facebook post. What’s important is that the world is able to hear his story, and the stories of others like him, right now.

As the pandemic unfolds, state agencies should take a flexible approach to enforcement of restrictions on inmates’ ability to connect with the outside world, including curbing the enforcement of overly restrictive social media policies. We’ll be carefully watching to make sure any restrictions that are applied are done so consistent with the First Amendment rights of inmates and those who support them.   

EFF, ACLU & CDT Argue Five Months of Warrantless Covert 24/7 Video Surveillance Violates Fourth Amendment

EFF - Fri, 03/27/2020 - 1:38pm

Should the fact that your neighbors can see the outside of your house mean the police can use a camera to record everything that happens there for more than five months? We don’t think so either. That’s why we joined ACLU, ACLU of Massachusetts, and the Center for Democracy & Technology in filing an amicus brief last week in the Massachusetts Supreme Judicial Court arguing the Fourth Amendment and Massachusetts’s state equivalent protect us from warrantless video surveillance of our homes.

In Commonwealth v. Mora, Massachusetts State Police secretly installed several cameras high up on utility poles in front of Nelson Mora and Randy Suarez’s homes. These “pole cameras” allowed officers to watch video feeds of the two homes (and by extension everyone going in and out of the homes) in real time, remotely control angle and zoom functions, and zoom in close enough to read license plates. Officers recorded the footage over a period of several months, which allowed them to go back, search through, and review footage at their convenience. They never got a warrant to install the cameras, and the extended surveillance was not subject to any court oversight.  

Mora and Suarez moved to suppress the video surveillance, arguing the use of the cameras violated the Fourth Amendment and article 14 of Massachusetts’s Declaration of Rights, which prohibit unreasonable searches. 

In our amicus brief, we asked the court to recognize, as the Supreme Court did in Carpenter v. United States, that just as collecting cell phone location data over time reveals sensitive information about people, using stationary video surveillance to record all activity in front of a person’s home for months reveals far more private, sensitive, and intimate information than a member of the public would glean merely by walking past the house from time to time. Using this invasive surveillance, the police could learn or infer private relationships, medical information, and political or religious beliefs. And, as with the collection of location data, technological advances make video surveillance cheap and easy for law enforcement to implement, removing the practical privacy protections that existed when the police had to rely on physical surveillance, such as covertly positioning actual officers in front of a house (and paying those officers their full salaries).

Our brief also informed the court about recent advances in camera technology and digital storage and search. Cameras can now home in on small details with startling accuracy. For example, one company has released a camera small enough to fit on a drone that can identify a face from 1,000 feet and read serial numbers on appliances from 100 feet. Casinos are using cameras that can read text messages off phones. And Logan Airport has a camera that can see any object a centimeter and a half wide from a distance of more than one and a half football fields. Digital storage and search capabilities also now make it possible for police departments to hold on to surveillance footage for a long time and to search through footage easily using keyword searches for categories like gender, age, and “appearance similarity.” Even though the cameras that focused on Mora and Suarez’s homes did not have all of these capabilities, the U.S. Supreme Court has instructed that courts should take into consideration technology that is currently in use or in development in conducting their Fourth Amendment analysis.

Finally, we noted that secret video surveillance like this disproportionately impacts minority and poorer communities. The prosecutors in this case argued that Mora and Suarez did nothing to hide their homes from public view, so they couldn’t expect privacy from government surveillance that would in essence “see” the same thing that a worker on the top of a utility pole could see. However, utility poles commonly rise 20-40 feet in the air. Only the very wealthy can live in communities where properties are either set back so far from these poles as to be hidden from view or where the utilities are buried underground. Under the government’s argument, those without the financial resources to live in such neighborhoods and homes would see their expectations of privacy forcibly diminished, facing more surveillance the less they earn.

The Massachusetts Supreme Judicial Court planned to hear this case on April 7, 2020, but that date has been postponed given the current COVID-19 crisis. We will update this post when the court issues its opinion.

Related Cases: United States v. Vargas

EFF Joins Coalition Urging Judicial Transparency During the COVID-19 Emergency

EFF - Wed, 03/25/2020 - 6:22pm

EFF and a number of other organizations that advocate for government transparency have signed onto a letter written by the First Amendment Coalition asking the California state judiciary to ensure public access to court proceedings and records.

Many clerk’s offices are restricting entry and many operations of the state court system have moved online in direct response to actions taken by Gov. Gavin Newsom, including the Statewide Order of March 23, 2020, which in effect restricted physical access to and the activities of California’s courts. In the letter, addressed to Chief Justice Tani Cantil-Sakauye, coalition groups urge that while extraordinary measures are needed in the time of a public health emergency:

“we need to recognize that important civil liberties and constitutional rights should not be unduly restricted. While courts are closing buildings, halting proceedings and holding some hearings telephonically, we are concerned members of the press and public will face insurmountable barriers to access judicial records and proceedings.”

Especially in times of crisis, as governments make big decisions that could impact the safety and liberty of millions, it is more important than ever that government remain transparent and accessible in its decision making. With so much to be decided, secrecy breeds distrust, panic, and conspiracy theories at a time when people need their government most.

To that end, the letter requests:

  1. Telephonic hearings must be conducted on conference lines that allow free public access, with dial-in information made public ahead of each hearing.
  2. Criminal proceedings must be conducted in a way that the public and press can still safely observe.
  3. Court records must remain publicly available, and fees for online access waived, until normal operations resume.

These requests echo those EFF has made in other venues to preserve government transparency during the COVID-19 crisis.

EFF recently signed onto a letter urging local and state governments not to give in to panic and secrecy by cutting people off from their right to know what the government is doing and what decisions it is making. “At all times,” the letter said, “but most especially during times of national crisis, trust and credibility are the government’s most precious assets. As people are asked to make increasing sacrifices in their daily lives for the greater good of public health, the legitimacy of government decision-making requires a renewed commitment to transparency.” The letter also rejected the Federal Bureau of Investigation’s decision to entirely suspend its acceptance of Freedom of Information Act requests.

EFF has also pushed for digital access to the arguments and processes of the U.S. Supreme Court as a way to make sure the American people are not shut off from the nation’s highest court. Although the Court has suspended oral arguments, once it begins hearing them again, it must allow the public access by broadcasting or releasing same-day video recordings of its proceedings. The Supreme Court recognized the need for this transparency more than 40 years ago, writing that “People in an open society do not demand infallibility from their institutions, but it is difficult for them to accept what they are prohibited from observing.”

Whether it concerns actions dedicated to stop the spread of COVID-19, or just the general everyday operations of government, people have the right to know what their government is up to. In the era of social-distancing, this might require getting creative, but if we’re all moving online to contend with the public health crisis, government transparency can too.

The Feds Can Stop Patent Trolls from Endangering COVID-19 Testing and Treatment

EFF - Wed, 03/25/2020 - 5:50pm

It’s unthinkable that bad actors could take advantage of patent law and keep the public from getting access to COVID-19 tests and treatment, but they can and will—it already happened this month. Fortunately, an often-overlooked section of U.S. patent law allows the government to do something about it.

Patent troll Labrador Diagnostics LLC recently used a portfolio of old patents to sue a company that makes and distributes COVID-19 tests. The story gets weirder: those patents were originally issued to Theranos, the notoriously fraudulent blood-testing company that closed up shop in 2018. It’s a particularly outrageous example of an all-too-common story: a company fails, yet its patents live on as fodder for legal bullying against practicing companies in the same field.

The Labrador Diagnostics case is a clear example of a time when the incentives don’t line up right: in this case, exclusivity is standing in the way of innovation.

We’re relieved that Labrador has now agreed to grant royalty-free licenses for COVID-19 testing, but this case shows how high the stakes become once a U.S. patent issues and grants its owner the right to stop others from engaging in productive—and here, potentially life-saving—activities. It also shows that these stakes are high because of the power that patents convey—not because patent owners necessarily provide any benefit to the public.

At its core, the patent system exists to enhance the public’s access to innovation, not to compensate individual rightsholders. The patent grant isn’t a paycheck; it represents a trade between an inventor and society. Inventors agree to disclose certain information to the public about how an invention works; in return, they get the right to stop others from making, using, or selling the patented invention without permission for 20 years. In principle, this period of exclusivity allows inventors to recover the costs of research and development and make a profit.

The most vocal defenders of a rigid patent system believe that innovation simply would not happen without that period of exclusivity—in other words, that innovation is simply impossible without government-backed restrictions on access. But the Labrador Diagnostics case is a clear example of a time when the incentives don’t line up right: in this case, exclusivity is standing in the way of innovation. Nonprofit researchers have developed low-cost tests for COVID-19—truly life-saving innovation—that companies like Labrador could block by asserting their patents and thus invoking their right to exclude.

Fortunately, the U.S. government can do something about it. 28 U.S.C. § 1498 allows the government to use or authorize others to use any invention “described in and covered by a patent of the United States.” If such authorization is granted, patent owners can sue the United States, but only for reasonable compensation. That means they cannot seek injunctions against private entities working for the United States government. Nor can they engage in protracted litigation in patent-friendly jurisdictions like the Eastern District of Texas; they must sue the government in a bench trial in the Court of Federal Claims in Washington, D.C.

To stop Labrador and any bad actors that might follow, the government could invoke Section 1498, and thereby make itself—rather than private entities—the defendant in a patent infringement lawsuit. That would save the public from the risk of an injunction that would cut off the public’s access to desperately-needed tests. Long before the current crisis, scholars in the pharmaceutical field have advocated for the government to use Section 1498 to enhance access and reduce drug prices.

As Labrador’s patents on methods of testing show, the effects of patents on access to medical care go beyond the accessibility and affordability of pharmaceuticals. That is especially true as software-based tools and services become more and more integral to our health care system. We already have evidence showing that software patents generally serve to transfer resources from more to less innovative companies. And we have seen patent trolls go after innovative health care companies, as a patent assertion company called “My Health” did when it sued numerous remote healthcare monitoring services despite the fact that it wasn’t offering any services to the public. Patent abuse could stop software-based healthcare solutions from getting to people who need them.

During the current crisis, we hope the United States government will use its statutory authority under 28 U.S.C. § 1498 to issue compulsory licenses on patents that stand in the way of access to existing technologies and the space to develop new technologies to benefit public health. Whether those technologies are rooted in biochemistry or computer science, the government has the power to mitigate the damage done by opportunists using the patent system to stop practicing companies from bringing needed services to the public. Owners of valid patents would still be entitled to reasonable compensation for the use of their inventions, but they wouldn’t be entitled to stop private companies and nonprofits from doing important work that benefits us all.

Verily's COVID-19 Screening Website Leaves Privacy Questions Unanswered

EFF - Wed, 03/25/2020 - 12:12pm

One week after Alphabet’s Verily launched its COVID-19 screening website, several unanswered questions remain about how exactly the project will collect, use, and retain people’s medical information.

Verily, a healthcare data subsidiary of Google's parent company Alphabet, has until now operated its Project Baseline as a way to connect potential participants with clinical research. Now, after a confused roll-out, Verily’s Baseline COVID-19 Pilot Program screening and testing website allows users to fill out a multi-question survey about their symptoms and, if they are eligible, directs them to testing locations in a few counties in California.

After a letter from Congress and multiple blog posts, press statements, and not one but two FAQs from Verily, users still do not have enough information about how using this service will affect their medical privacy. So, we have a few questions of our own.

Why does using the site require a Google account?

While the United States is in dire need of more testing, individuals’ access to this critical health service should not hinge on whether or not they have created an account and shared information with the world’s biggest advertising company.

But you can’t use the Verily screening website without a Google account: users must either log into their existing Google account, or create a new one, before filling out the screening survey. Verily representatives have claimed this is necessary to authenticate users and contact them during the screening and testing process. However, Verily has not explained why a Google account is uniquely suited to identifying patients, or why the project cannot use other less invasive forms of identification.

What will Verily do with your information?

Verily assures users that the medical information they input as part of the screening service will not be linked with their Google account data without “separate or explicit” consent. However, the screening website’s FAQ page says that information may be shared with “certain service providers engaged to perform services on behalf of Verily,” which includes—you guessed it—Google.

Verily also assures users that their information will not be used for advertising. What Verily will use that information for, however, is broad and unclear. Its privacy policy lists “commercial product research and development” as a potential use, and the Project Baseline FAQ lists similarly vague uses, including to “provide insights about your health,” “conduct and publish research on health and disease,” and “build new tools, technologies, products, and partnerships related to health and disease.” Without explicit written documents memorializing these data use protocols, users have little reassurance that Verily’s uses of their health data will be tailored, appropriate, or privacy-protective.

Who is Verily sharing data with?

Verily states that it will not share any information with insurance or medical providers, which is a good start. However, Verily outlines other potential recipients of users’ information:

The information you choose to provide during the screening process or testing process may also be shared with the healthcare professionals at the specimen collection sites, the clinical laboratory that processes specimens, the California Department of Public Health, and potentially other federal, state, and local health authorities, as requested or mandated for public health purposes.

While Verily has been clearer about the healthcare professionals and labs it partners with, it does not detail what “other federal, state, and local health authorities” include. What is Verily’s relationship with the U.S. government? Would ICE, for example, have access to user data under any circumstances? The only thing that's clear here is that Verily is lumping federal, state, and local public health agencies into one undifferentiated mass, and that is unacceptable.

Verily also fails to provide more information about its relationship with the California Department of Public Health. Is there a written Memorandum of Understanding that lays out how data will flow between Verily and state health authorities?

Instead of FAQs and a privacy policy filled with vague predictions of how information “may” be shared, the public needs detailed documentation of how each of these relationships could play out.

Does using this service opt you in to Verily’s Project Baseline?

In addition to Project Baseline, where the COVID-19 screening site is hosted, Verily has its Baseline Platform, Baseline Registry, and Baseline Community.

After completing the screening survey on the website, users are asked if they would like to participate in Verily’s Baseline Community, which spokespeople have told the press will “enable you to participate in creating new knowledge that is critically important to the health of all of us in the face of the COVID-19 pandemic.” Statements go on to say that participation in Baseline Community is “completely voluntary,” and imply that users’ information is shared with California public health authorities regardless.

It’s unclear how these various Verily services intersect with the screening website, and how those relationships may or may not change in the future. Concerns about such internal relationships are especially critical given Google’s healthcare ambitions and previous scrutiny in this area.



Speaking Freely: An Interview with Cristian León

EFF - Wed, 03/25/2020 - 12:00pm

Cristian León, based in Buenos Aires, works for Asuntos del Sur, a “think/do tank” that works to strengthen democracy and participation. Originally from Bolivia, Cristian works on open government and democracy across several countries in Latin America, including conducting digital security trainings. He is also one of the founders and current advisors to the Internet Bolivia Foundation.

Over Zoom a couple of months ago, we discussed the current threats to free expression in Latin America, the connection between digital security and expression, and the increasing culture of surveillance he sees in the region.

Jillian C. York: What does free expression mean to you?

For me, it’s the ability for someone to express their mind, their thoughts as they are, without pressure. It’s the ability to say anything you want.

York: I love hearing the different answers to this! And what brought you to your work?

I’ve had many cases where I felt like I didn’t have free expression. Much of my professional life has been related to equality, and defense of human rights, so I’ve seen many cases where I suffer myself, or where other people’s [ability to express themselves] was cut off.

Some examples I can mention: Feminist cases, I have many feminist colleagues, who can’t use the green scarf [editor’s note: a symbol of the abortion rights movement in Argentina] in some government buildings, because some people from the government don’t like it because they don’t want to hear demands related to abortion. So when they go to a meeting with someone from the government, they have to stop wearing the scarf.

Another case, from Bolivia—you know, recently we had a situation where our last president, Evo Morales ... the army told him to resign, and another party took control over the government. The transitional government wasn’t legal. I wrote about it on Twitter, and many people that I know, relatives or friends, actually wrote me private messages to shut up because, even though they thought it was bad, they didn’t want me to express anything about it because they know I have relationships to international organizations. Some people even threatened to harm my parents. For that reason, I couldn’t express myself freely on Twitter. That was [a couple months] ago.

York: Do you feel safe now?

Yes, I’m safe, my parents are safe, but I cannot express anything about that topic on Twitter. I can talk about many issues, but not about whether that was a coup or not.

York: Wow, I’m glad you’re safe, but that’s intense. Are you observing or working on other issues in Latin America?

Yes. I’ve been working in Nicaragua, in Colombia, Bolivia, and Argentina.

York: I think it would be really interesting for readers to know what you see, in the next decade, as some of the threats to free expression in Latin America.

What I see is that fear is growing, and because of that many people are afraid to express themselves. For example, I know several cases from Colombia where people believe their phones were tapped. They’re afraid to say things on calls because they thought the government might hear them. They asked me how to know if your phone is being [spied on.]

The same issue happened in Nicaragua, but the difference is that the government there doesn’t have as much capacity, technology, to do that. But in Colombia, we believe that they do.

Because of the movies, because of the Snowden story, people believe—and I think in some ways they’re right—that they’re being monitored all day long and because of that, they can’t spread ideas. It’s surveillance culture. Somehow, this is positive because people are more aware of data, and how technology can be used in meaningful ways. But it’s really bad for our democracies because free expression is under threat.

York: Absolutely.

In Bolivia as well, even though we know that the Bolivian government doesn’t have this kind of technology, they don’t have this capacity … what we saw—and we have documentation of this—is that people in the streets have [had their phones taken by policemen] who make them open their phones to read their WhatsApp or other messages. There’s a belief that people might be conspiring against the new government. Many journalists, many activists, are being more careful with their phones because of that.

York: As a person who does digital security training, how would you describe the connection between security and expression?

I think that if you don’t have the conditions to know that the channel you’re using is secure, you might not be expressing yourself freely. For people to express themselves, to say whatever they want, they need to have secure channels: Secure phones, encrypted apps. That’s the connection.

York: Yes, I completely agree. Okay, let me take this in a different direction: Do you have a free expression hero?

Hmmm.

York: It could be someone from history, someone you know, whatever you like!

For me, I don’t have an individual hero, but I feel like the hero here, at least in Argentina, are the Madres de Mayo—it’s a movement that … there was a dictator in the 1970s, and the Madres de Mayo went out to express themselves and defend their rights, because they were looking for their sons that were lost during this period. They went against the dictatorship. Even now, they are a very powerful and respected movement. And, you know these feminist groups that tried to make abortion legal last year...they are modeled on the Madres de Mayo. To me, they are champions of free expression and democracy in general, because in spite of all the bad conditions and hostility, they went out to defend their rights and expose themselves—because you know that the government here was extremely violent, and most of them could’ve been assassinated, but they went out anyway. You can say that because of this, the democracy here in Argentina is now very strong.

York: That’s a great answer...I’ve really enjoyed everyone’s answer to this question so far.

What we’re experiencing now in Latin America, in most countries, is moving backwards. Usually we say that the direction of human rights is going forward, improving to have more rights. But what I feel now in Latin America, especially after having to give advice to many activists and hear [their stories], there is a sensation now that human rights in general are less respected, and people are afraid of what is coming...especially in countries like Brazil, Venezuela, Nicaragua, Bolivia. I think this is really bad, and it has an impact on how people feel. People no longer feel that they can express what they want anymore. For example, if you go into a meeting and you don’t know the people you’re meeting, you stay quiet because you don’t want to expose yourself. Others could take the information out of the meeting and do something to harm you.

York: Thank you again Cris.




The Right to Anonymity is Vital to Free Expression: Now and Always

EFF - Wed, 03/25/2020 - 9:15am

“There are myriad reasons why individuals may wish to use a name other than the one they were born with. They may be concerned about threats to their lives or livelihoods, or they may risk political or economic retribution. They may wish to prevent discrimination or they may use a name that’s easier to pronounce or spell in a given culture.”

These words, from a blog post we published nine years ago during my first year at EFF, remain as true as ever. Whether we’re talking about whistleblowers, victims of domestic violence, queer and trans youth who aren’t out to their local communities, or human rights workers, secure anonymity is critical for these individuals, even life-saving.

And yet, our right to anonymity online remains at risk. Just last month, British television presenter Caroline Flack’s death by suicide prompted calls for more regulation of social media, with some pundits suggesting platforms require ID. In India, a similar proposal is expected to be released by the country’s IT Ministry, although reports indicate that verification would be optional.

Proponents of such proposals believe that when people use their “real” name, they behave more civilly toward one another. Facebook has long maintained that their policy requiring “authentic identity” keeps users safe. But the evidence just isn’t there. One report, from the Coral Project, breaks down the fallacy of why people believe anonymity makes people less civil, while another—from commenting platform Disqus—suggests that people are at their kindest when using a pseudonym.

But most importantly, there are myriad reasons why anonymity and pseudonymity remain vital tools for free expression and safety. Take, for instance, our recent case involving Darkspilver, a member of the Jehovah’s Witness community who posted comments—including a copy of an advertisement from the organization’s Watchtower magazine—to Reddit. The Watchtower Bible and Tract Society pursued a copyright claim against Darkspilver over the advertisement. A magistrate judge ruled that the organization should be able to pursue its claim, and ordered the disclosure of Darkspilver’s identity.

Darkspilver had serious concerns about being “disfellowshipped” from their community, having seen others cut off from their families and communities. EFF was able to successfully appeal in District Court, however, and Darkspilver’s anonymity remains protected.

Today, as we’re seeing many of our digital rights impacted by governments’ handling of COVID-19, the right to anonymity remains vital. We’ve already seen important medical information being shared with the press by anonymous health experts in Wuhan. We’ve also already heard stories of vital information being suppressed, and arrests of those who speak out against their governments.

In times of turmoil, authorities might scapegoat anonymous speakers, blaming them for societal challenges. But anonymous speech is often how the public finds out the depth and severity of those challenges, be it an abuse of political power or the severity of a global pandemic. Without anonymous speech, some lies powerful people tell would go unchecked.

Social Distancing, The Digital Divide, and Fixing This Going Forward

EFF - Mon, 03/23/2020 - 8:02pm

Social distancing, work from home, shelter in place—these are all strategies employed in response to the COVID-19 epidemic. Americans who have jobs allowing them to engage in social distancing are very dependent on their Internet connection. That dependence is only going to grow as time goes on. As parents depend on the Internet for homeschooling, as businesses depend on employees being able to work from home, and as everyone depends on the Internet for public safety information, we need to recognize that our current Internet ecosystem is failing many Americans. And any infrastructure recovery effort that comes out of this situation should address the digital divide at its source: policy decisions that have left us at the mercy of a few, giant companies whose business concerns don’t include all Americans.

For however long this emergency lasts, an untold number of us will be forced to deal with the failure of our telecom policies to produce universally available, affordable, and competitive high-speed broadband options. Families who must simultaneously handle school closures and remote education for their children while also working through video conferencing and cloud computing will find themselves in one of two different Americas for broadband access: households that reap the benefits of competition, with ever-increasing speeds and falling prices, and households forced to rely on obsolete infrastructure from a bygone era or, worse yet, with no broadband options at all. That those two Americas are still split by what we call the "digital divide" in 2020 is a clear sign of failure in our current approach to broadband. It is imperative that we take it upon ourselves to forcefully bring an end to this inequality of access as part of any infrastructure recovery effort.

We Are Seeing the Digital Divide at Work, and Its Lines Are Drawn Where Fiber Access Exists

It could not be more clear: where there are upgraded networks—meaning networks that can deliver gigabit connections—those homes are able to handle the increase in Internet usage that social distancing requires. Where those networks do not exist—where Americans do not have choices for high-capacity services—social distancing is much harder on people, if not outright impossible.

Upgraded networks generally exist where new, local, independent ISPs, both private and public, have built fiber infrastructure. This new competition forced the old ISPs—often the usual suspects of AT&T, Verizon, and so on—to improve their own networks to keep pace. Not only did competition improve the quality of Internet service, it also improved the price.

But there are many Americans who don’t have meaningful access to a choice of high-speed broadband providers. Some have no choice at all. Communities that rely on decades-old Internet infrastructure lack access to an Internet connection that can handle the demands of social distancing. The fault for this lies with the ISPs, which spent record profits and tax cuts on everything but upgrading their services. It also lies with our federal and state governments, which failed to promote fiber through laws pushing universality or through funding that would let someone besides the large incumbents build it.

Those relying on older networks are those who can least afford it: low-income and/or rural Americans. The most expensive part of starting an ISP is the initial construction cost. The legacy ISPs serving low-income and/or rural populations with older infrastructure have long since paid off that cost, but they still charge through the nose because their customers don’t have alternative choices. And the number one reason people do not subscribe to broadband at all is excessive price. Because no one is offering better service at a better price, there is no reason for these companies to upgrade their networks, leaving many Americans without the high-speed, reliable, competitively priced Internet service that we absolutely need, especially now.

The difference between competitive markets in the United States and noncompetitive ones is stark. Aside from higher prices and inferior infrastructure, even the COVID-19-oriented relief packages are dramatically different. For example, AT&T is waiving overage fees (a fraction of the excessive bill most people pay) and Comcast is offering 25 Mbps/3 Mbps service for free for two months to low-income users, but a fiber competitor called Sonic in San Francisco (a city with a fairly decent amount of competition) is offering free gigabit service for three months to families and seniors regardless of their income status.

High-Speed Affordable Broadband Is Essential for Everyone—and That Makes It a Sound Investment

What is tragic about the digital divide is that there are no good reasons for it to exist, let alone continue. It is profitable to serve all Americans, no matter what major incumbents like AT&T and Verizon may say. If the major ISPs universally converted their older networks over to fiber to the home, they would be net profitable in the long run. Contrary to assertions that smartphones and wireless plans alone are sufficient, nothing can truly substitute for a high-capacity connection in the home. As we are seeing right now, the more we do online, the less compelling our phones and outside-the-home options become as replacements.

Our own analysis of the world’s fastest ISP demonstrates how the financials work for fiber networks. That ISP is located in the United States, built and run by the local government of Chattanooga, Tennessee. Once a portion of their network had subscribers, their revenue from $70 a month for gigabit service outpaced their costs for the entire network. In other words, after they reached a certain number of customers, their profits grew faster than their costs. That profit allowed them to stretch the network further and further. In fact, because of the unique nature of fiber wires, they were able to upgrade to a 10 gigabit network with only a tiny additional investment. Unfortunately—and predictably—the old ISPs stepped in and got states to ban local government broadband, crushing further expansion by this successful competitor. Extending fiber networks is perfectly doable, blocked only by the refusal of the big ISPs to do it themselves and their successful campaign to erect legal barriers to stymie alternatives.

But even that hasn’t worked entirely. Because we need the Internet. And in a reversal of the classic movie quote, we’re already there, so we will build it. In the state of Utah, where residents had been left behind by incumbent ISPs, and where the state law banning community broadband remains, a handful of cities collectively started building universal open access fiber as a workaround. To butcher another movie quote, we will not be ignored.

Rather than sell broadband service themselves, these cities built fiber infrastructure and allowed small private broadband companies to sell services over the network. Demand for these services from neglected communities is so high that more than enough money is being made. In fact, they’ve made enough to pay for the entire construction effort. This is allowing the network (called Utopia Fiber) to rapidly expand and complete universal fiber deployments on schedule, all while giving people nearly a dozen broadband options at competitive prices.

In response to COVID-19, they are currently experiencing a record number of new subscriptions from people in Utah who need more capacity to stay home for long periods of time. Everywhere in the country we continue to see pockets of success, from the 7,000-member People’s Rural Telephone Cooperative in Kentucky to roughly 100 other small rural cooperatives deploying fiber to the home.

All of this shows not only that building fiber networks could have been done everywhere, for everyone, years ago, but also that it would have been profitable. So why have our big ISPs failed us?

The answer lies in investor expectations and the companies’ unwillingness to trade faster short-term profits for long-term investments. Fiber networks are big investments that generally need 10 years or more to fully pay down the construction costs. It’s similar to buying a car: it comes with a big down payment, but eventually you have paid it off and just have maintenance costs. The difference is that, unlike your car, which depreciates after you buy it and costs more to maintain over time, a fiber network grows in value and usefulness, because advancements in technology allow it to get faster without any new down payments for construction. It is also expected to remain useful for around 70 years after it is built. It’s a future-proof investment—the old ISPs just lack an interest in the future.
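
To make those economics concrete, here is a simplified break-even sketch. Every figure except the $70 monthly gigabit price mentioned above is an assumption for illustration, not data from Chattanooga or any particular ISP:

```python
# Simplified break-even sketch for a municipal fiber-to-the-home build.
# All figures below are illustrative assumptions, except the $70/month
# gigabit price cited in the post.

homes_passed = 100_000          # homes the fiber build reaches (assumption)
cost_per_home_passed = 2_000    # construction cost per home, in dollars (assumption)
take_rate = 0.40                # share of homes that subscribe (assumption)
monthly_price = 70              # gigabit price, dollars per month
monthly_opex_per_sub = 30       # operating cost per subscriber (assumption)

construction_cost = homes_passed * cost_per_home_passed
subscribers = int(homes_passed * take_rate)
monthly_margin = subscribers * (monthly_price - monthly_opex_per_sub)
years_to_payback = construction_cost / (monthly_margin * 12)

print(f"Construction cost: ${construction_cost:,}")
print(f"Monthly margin:    ${monthly_margin:,}")
print(f"Payback period:    {years_to_payback:.1f} years")

# With these assumptions the network pays down its construction cost in roughly
# ten years; after that, subscriber revenue keeps coming in against only
# maintenance costs, which is why fiber is a long-horizon but sound investment.
```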

Since the old ISPs have proven unwilling to invest in what we need, no relief package or infrastructure package should defer to them on what to do. We should conclude that, after billions in tax breaks and federal deregulation by the FCC, they are content with leaving people using decades-old infrastructure forever. After all, it is not as if companies like AT&T are afraid of spending money when it comes to buying other companies, as their merger debt is an eye-popping $171 billion (which is less than it would cost to give every single American a fiber connection).

Ending the Digital Divide Depends on Federal and State Infrastructure Plans That Deliver High-Speed Internet to Everyone

The unnecessary hardships many Americans face to maintain their daily lives are the inevitable result of relentlessly low expectations pushed by the big, old ISPs. They’ve set the bar so low in hopes that the public and the government would just accept a fraction of what Americans deserve from the broadband carrier industry. This has resulted in too many policymakers engaging in rhetoric about the importance of broadband, rather than putting forth policies that would give every American affordable 21st century-ready Internet access as a matter of law. It is time for policymakers to back up their rhetoric with action.

EFF supports universal deployment of fiber optics and open access policies that would promote competition and affordability not as a pipe dream, but because we’ve seen the proof. Other countries are further along, giving us proof of concept.

So here’s what we know: we need to be willing to invest, both with dollars and with our laws, in the goal of connecting everyone by a specific date. We also need to refocus our laws on remedying the lack of competition in the broadband access market. Our own engineering analysis shows that a broadband access network that is all fiber will be more than ready for advances in applications and services for decades to come, including massive increases in usage needs. Countries like South Korea that long ago completed their universal fiber builds did so because the government’s telecom policy drove that result.

As we noted in comments to the federal government and in our home state of California, the absence of a government policy effort to push for guaranteed universality of fiber will perpetuate the digital divide and, worse yet, transform it into a "speed chasm" between those with slow connections and those with fast ones. That means allowing the current state of affairs in the United States to continue is a choice. Let the hard lessons we are learning in real time today be the reason we finally commit to getting everyone connected in the aftermath.

The absence of universal access to high-speed, affordable Internet has made social distancing, working from home, remote education for children, and connecting with loved ones unnecessarily difficult. As Congress, state governments, and local governments work to provide relief to Americans and the economy, any Internet infrastructure spending must reflect this lesson.

User Privacy Champion Ashkan Soltani Joins EFF Advisory Board

EFF - Mon, 03/23/2020 - 5:28pm

EFF is proud to announce that independent researcher and technologist Ashkan Soltani has joined our advisory board, where he will share his expertise in privacy and security. Ashkan is a long-time EFF friend and collaborator whose research has informed our efforts to protect users from NSA backdoors, shine a light on third-party tracking, and hold the government accountable for unconstitutional mass surveillance.

Ashkan is a career advocate for user rights in the digital world, and his commitment to protecting consumer privacy will be vital to the work we do at EFF. Ashkan is one of the architects of the California Consumer Privacy Act, the nation’s strongest digital privacy law protecting private information and providing users more control over their data. His work looking under the hood of tracking technology and practices used by companies to collect user data—years before the Cambridge Analytica scandal—has been critical to the public’s understanding of how personal data is being mined and monetized.

Ashkan’s research was the basis for the Wall Street Journal’s award-winning series “What They Know,” a ground-breaking report on tracking technologies and how they work. He also co-authored a Washington Post series on NSA spying programs that was awarded a 2014 Pulitzer Prize. Ashkan was one of the first staff technologists at the Federal Trade Commission’s Division of Privacy and Identity Protection, where he helped lead investigations of Google, Twitter, and Facebook for misleading user privacy practices. Later, he was appointed Chief Technologist at the FTC, advising on technology policy and helping create a new Office of Technology Research and Investigation.

In 2016 Ashkan was recruited by the White House to serve as Senior Advisor to the U.S. Chief Technology Officer, consulting on consumer privacy and the ethics of big data. The engagement ended after the White House denied Ashkan a security clearance, which many in the tech community speculated was a result of his work on the NSA spying series at the Washington Post.

In 2018 Ashkan became an expert witness in EFF’s landmark Jewel v. NSA lawsuit challenging the constitutionality of NSA mass surveillance. In an affidavit, he testified that the communications of EFF’s plaintiffs were likely subjected to collection as part of NSA’s surveillance network.

We’re thrilled to have Ashkan on our advisory board.

The California Public Records Act Is an Essential Right, Even During a State of Emergency

EFF - Mon, 03/23/2020 - 5:21pm

As Californians shelter at home up and down the state, the journalists and citizen watchdogs who file California Public Records Act (CPRA) requests know that trade-offs must be made. We know that local agencies may be understaffed at this time and may be slow to respond to our letters. They may need to restrict our ability to inspect records in person at City Hall, and public records lawsuits may stall as courts restrict hearing dates.

But where we draw the line is when government agencies announce they will suspend the public records request process altogether, a move telegraphed by several agencies in a recent Los Angeles Times story.

The right to access information is enshrined in the California Constitution, and this right is never more important than during an international crisis. That’s why EFF has joined the First Amendment Coalition and other public records advocacy groups in signing a statement supporting government transparency, even amid the most challenging circumstances. 

“While we acknowledge the extraordinary stresses that government agencies face right now, we urge all government agencies to comply with the California Public Records Act and the California Constitution and take all reasonable measures to continue to provide information to the public and the press during these exceptionally difficult times,” the groups write. 

The letter notes that COVID-19 is hardly California’s first major crisis. The legislature has never authorized the suspension of CPRA, nor do Gov. Gavin Newsom’s emergency orders waive agencies’ responsibilities under CPRA. 

The California Supreme Court has found that “openness in government is essential to the functioning of a democracy.” While COVID-19 will certainly interrupt some of our normal expectations, it is essential that our democracy continue to function through these hard times. That means ensuring that the public can understand and hold officials accountable for the decisions they make in the halls of power while we’re all stuck at home.
