Electronic Frontier Foundation

EFF Report Shows FBI Is Failing to Address First Amendment Harms Caused By National Security Letters

EFF - Fri, 12/13/2019 - 12:14pm

EFF has long fought to end the FBI’s ability to impose gag orders via National Security Letters (NSLs). These gag orders violate the First Amendment and result in indefinite prohibitions on recipients’ ability to speak publicly about controversial government surveillance powers. Records and data released by the FBI earlier this year confirm that, despite congressional reforms in 2015, the vast majority of NSL recipients remain gagged. What’s more, the FBI has not taken meaningful steps to dissolve those gag orders.

Today, EFF is publishing “The Failed Fix to NSL Gag Orders,” a new report based on an in-depth analysis of records EFF obtained after winning a Freedom of Information Act lawsuit earlier this year. Those records identified more than 700 NSL recipients that the FBI had freed from lengthy gag orders, the subject of a front-page New York Times story in September.

As the Times reported, those records showed that in addition to Internet companies, the leading credit reporting agencies are frequent recipients of NSLs. But these credit agencies have been entirely silent about NSLs, even after the FBI explicitly permitted them to speak. Today, Senators Elizabeth Warren, Rand Paul, and Ron Wyden sent a letter to Experian, Equifax, and TransUnion, expressing alarm about the companies’ silence and seeking more information about how this frequently used national security investigatory authority affects Americans’ credit histories and other sensitive records.

EFF’s analysis of the records obtained in our FOIA suit concludes that, absent further judicial or legislative intervention, the FBI will continue to violate the First Amendment rights of NSL recipients. As we write in the report, “when left to its own discretion, the FBI overwhelmingly favors maintaining gag orders of unlimited duration.” Our findings suggest that even though Congress directed the FBI to reduce the number of these gag orders, the Bureau’s internal procedures “do not meaningfully reduce the large numbers of de facto permanent NSL gag orders. They also fall short of adequately safeguarding recipients’ First Amendment rights. And as the records and data EFF obtained in its FOIA suit show, the FBI is unlikely to make progress in ending those gags without further direction by Congress or the courts.”

Accordingly, the report includes recommendations for how to fix this urgent problem. We’re pleased that Sens. Warren, Paul, and Wyden are looking into the matter, and we hope Congress takes up the larger issue of NSL reform soon.

Related Cases: In re: National Security Letter 2011 (11-2173); In re National Security Letter 2013 (13-80089); In re National Security Letter 2013 (13-1165)

Victory: San Diego to Suspend Face Recognition Program, Limits ICE Access To Criminal Justice Data

EFF - Wed, 12/11/2019 - 7:27pm

We just stopped one of the largest, longest running, and most controversial face recognition programs operated by local law enforcement in the United States. 

A face recognition system used by more than 30 agencies in San Diego County, California will be suspended on Jan. 1, 2020, according to a new agenda published by the San Diego Association of Governments (SANDAG), which manages the program. 

In October, EFF sent a letter to SANDAG demanding it suspend the program to comply with a new law that takes effect at the beginning of the year. Authored by Assemblymember Phil Ting, A.B. 1215 creates a three-year moratorium on law enforcement use of face recognition connected with cameras carried by police officers. These cameras include body-worn cameras and handheld devices.

Launched in 2012, San Diego’s program—the Tactical Identification System (TACIDS)—provided 1,309 specialized face-recognition tablets and phones to dozens of local, state, and federal agencies. Between 2016 and 2018, officers conducted more than 65,500 scans with the devices.

“To ensure compliance with AB 1215, operation of the TACIDS program will be suspended beginning January 1, 2020,” writes Pam Scanlon, head of SANDAG’s Automated Regional Justice Information System (ARJIS) in the agenda memo. “ARJIS will notify all law enforcement partners that TACIDS access will be suspended, which will include removal of the TACIDS Booking Photo interface and all user access to TACIDS systems.”

The agenda also indicates SANDAG will not renew the contract with the face recognition vendor, FaceFirst, when it expires in March 2020. 

In the same agenda, SANDAG also reveals that in October it disabled all ICE Enforcement and Removal Operations accounts across the agency’s law enforcement databases and computer systems to comply with guidance from the California Department of Justice (CADOJ) on S.B. 54, the California Values Act. This law is designed to limit how local law enforcement may collaborate with immigration enforcement activities. EFF and immigrant rights groups successfully lobbied CADOJ to restrict ICE access to law enforcement databases, and EFF specifically called on SANDAG to address this issue after data revealed ICE was using the face recognition devices. 

The end of San Diego’s program marks a major victory in the nationwide battle against face surveillance. But it doesn’t stop here. Join our campaign to end face surveillance on the local level across the country.

TAKE ACTION

END FACE SURVEILLANCE IN YOUR COMMUNITY

For more information on San Diego’s face recognition program, read our October 2019 report and letter.

The Senate Judiciary Committee Wants Everyone to Know It’s Concerned about Encryption

EFF - Tue, 12/10/2019 - 6:19pm

This morning the Senate Judiciary Committee held a hearing on encryption and “lawful access.” That’s the fanciful idea that encryption providers can somehow allow law enforcement access to users’ encrypted data while otherwise preventing the “bad guys” from accessing this very same data.

But the hearing was not inspired by some new engineering breakthrough that might make it possible for Apple or Facebook to build a secure law enforcement backdoor into their encrypted devices and messaging applications. Instead, it followed speeches, open letters, and other public pressure by law enforcement officials in the U.S. and elsewhere to prevent Facebook from encrypting its messaging applications, and more generally to portray encryption as a tool used in serious crimes, including child exploitation. Facebook has signaled it won’t bow to that pressure. And more than 100 organizations including EFF have called on these law enforcement officials to reverse course and avoid gutting one of the most powerful privacy and security tools available to users in an increasingly insecure world. 

Many of the committee members seemed to arrive at the hearing convinced that they could legislate secure backdoors. Among others, Senators Graham and Feinstein told representatives from Apple and Facebook that they had a responsibility to find a solution to enable government access to encrypted data. Senator Graham commented, “My advice to you is to get on with it, because this time next year, if we haven't found a way that you can live with, we will impose our will on you.”

But when it came to questioning witnesses, the senators had trouble establishing the need for or the feasibility of blanket law enforcement access to encrypted data. As all of the witnesses pointed out, even a basic discussion of encryption requires differentiating between encrypting data on a smartphone, also called “encryption at rest,” and end-to-end encryption of private chats, for example. 
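
That distinction is easy to make concrete. Below is a minimal sketch in Python (using the third-party cryptography library; the names and message flow are a simplification of ours, not any vendor’s actual design) contrasting a locally held at-rest key with an end-to-end key exchange that a relaying server never sees:

```python
# Illustrative sketch only: contrasts "encryption at rest" with
# end-to-end encryption. Requires `pip install cryptography`.
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# --- Encryption at rest: one device, one locally held key. ---
device_key = Fernet.generate_key()  # in practice, derived from the passcode in hardware
encrypted_disk = Fernet(device_key).encrypt(b"photos, messages, location history")
# Only a party holding device_key can decrypt. If the vendor never holds
# that key, the vendor has nothing to hand over in response to a demand.

# --- End-to-end encryption: two users; the relay sees only ciphertext. ---
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
shared = alice_priv.exchange(bob_priv.public_key())  # Bob can derive the same bytes
chat_key = HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"chat").derive(shared)
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(chat_key).encrypt(nonce, b"hi Bob", None)
# A messaging server relays (nonce, ciphertext) without ever seeing
# chat_key, so "lawful access" would mean redesigning this key flow itself.
```

In the second flow, any mandated third-party access means inserting another key into the exchange, which is precisely the design change the witnesses warned about.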

As a result, the committee’s questioning actually revealed several points that undercut the apocalyptic vision painted by law enforcement officials in recent months. Here are some of our takeaways: 

There’s No Such Thing As an Unhackable Phone

The first witness was Manhattan District Attorney Cyrus Vance, Jr., who has called for Apple and Google to roll back encryption in their mobile operating systems. Yet by his own statistics, the DA’s office is able to access the contents of a majority of devices it encounters in its investigations each year. Even for those phones that are locked and encrypted, Vance reported that half could be accessed using in-house forensic tools or services from outside vendors. Although he stressed both the high cost and the uncertainty of these tools, the fact remains that device encryption is far from an insurmountable barrier to law enforcement. 

As we saw when the FBI dramatically lowered its own estimate of “unhackable” phones in 2017, the level of security of these devices is not static. Even as Apple and Google patch vulnerabilities that might allow access, vendors like Cellebrite and Grayshift discover new means of bypassing security features in mobile operating systems. Of course, no investigative technique will be completely effective, which is why law enforcement has always worked every angle it can. The cost of forensic tools may be a concern, but they are clearly part of a variety of tools law enforcement use to successfully pursue investigations in a world with widespread encryption.
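
Some rough arithmetic shows why those tools are expensive and uncertain rather than magic. The sketch below is our own illustration: the per-guess cost is an assumption, in line with the roughly 80 milliseconds of hardware-bound key derivation Apple has described per passcode attempt. It shows how search time scales with passcode length even when retry limits are bypassed:

```python
# Back-of-envelope brute-force estimate; the per-guess cost is an assumption.
PER_GUESS_SECONDS = 0.080  # assumed hardware-bound key-derivation time

for digits in (4, 6, 10):
    worst_case = (10 ** digits) * PER_GUESS_SECONDS  # exhaustive search
    print(f"{digits}-digit PIN: up to {worst_case / 3600:,.1f} hours "
          f"({worst_case / 86400:,.1f} days)")
# 4 digits -> ~0.2 hours; 6 digits -> ~22 hours; 10 digits -> ~9,259 days.
```

A four-digit PIN falls in minutes, while a long passcode is out of practical reach, which is why forensic vendors hunt for software vulnerabilities that skip the guessing entirely.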

Lawful Access to Encrypted Phones Would Take Us Back to the Bad Old Days 

Meanwhile, even as Vance focused on the cost of forensic tools to access encrypted phones, he repeatedly ignored why companies like Apple began fully encrypting their devices in the first place. In a colloquy with Senator Mike Lee, Apple’s manager of user privacy Erik Neuenschwander explained that the company’s introduction of full disk encryption in iOS in 2014 was a response to threats from hackers and criminals who could otherwise access a wealth of sensitive, unencrypted data on users’ phones. On this point, Neuenschwander explained that Vance was simply misinformed: Apple has never held a key capable of decrypting encrypted data on users’ phones.

Neuenschwander explained that he could think of only two approaches to accomplishing Vance’s call for lawful access, both of which would dramatically increase the risks to consumers. Either Apple could simply roll back encryption on its devices, leaving users exposed to increasingly sophisticated threats from bad actors, or it could attempt to engineer a system where it did hold a master key to every iPhone in the world. Regarding the second approach, Neuenschwander said “as a technologist, I am extremely fearful of the security properties of such a system.” His fear is well-founded; years of research by technologists and cryptographers confirm that key escrow and related systems are highly insecure at the scale and complexity of Apple’s mobile ecosystem.

End-to-End Encryption Is Here to Stay

Finally, despite the heated rhetoric directed by Attorney General Barr and others at end-to-end encryption in messaging applications, the committee found little consensus. Both Vance and Professor Matt Tait suggested that they did not believe that Congress should mandate backdoors in end-to-end encrypted messaging platforms. Meanwhile, Senators Coons, Cornyn, and others expressed concerns that doing so would simply push bad actors to applications hosted outside of the United States, and also aid authoritarian states who want to spy on Facebook users within their own borders. Facebook’s director for messaging privacy Jay Sullivan discussed ways that the company will root out abuse on its platforms while removing its own ability to read users’ messages. As we’ve written before, an encrypted Facebook Messenger is a good thing, but the proof will be in the pudding.

Ultimately, while the Senate Judiciary Committee hearing offered worrying posturing on the necessity of backdoors, we’re hopeful that Congress will recognize what a dangerous idea legislation would be in this area.

Genetic Genealogy Company GEDmatch Acquired by Company With Ties to FBI & Law Enforcement—Why You Should Be Worried

EFF - Tue, 12/10/2019 - 4:39pm

This week, GEDmatch, a genetic genealogy company that gained notoriety for giving law enforcement access to its customers’ DNA data, quietly informed its users it is now operated by Verogen, Inc., a company expressly formed two years ago to market “next-generation [DNA] sequencing” technology to crime labs.  

What this means for GEDmatch’s 1.3 million users—and for the 60% of white Americans who share DNA with those users—remains to be seen. 

GEDmatch allows users to upload an electronic file containing their raw genotyped DNA data so that they can compare it to other users’ data to find biological family relationships. It estimates how close or distant those relationships may be (e.g., a direct connection, like a parent, or a distant connection, like a third cousin), and it enables users to determine where, along each chromosome, their DNA may be similar to another user. It also predicts characteristics like ethnicity. 
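
To illustrate the matching idea, here is a toy model (ours, not GEDmatch’s actual algorithm): relationship estimates generally work by totaling the DNA segments two users share, measured in centimorgans (cM), and comparing the total against approximate published averages for each relationship:

```python
# Toy relationship estimator; the averages are approximate published
# figures, and real tools use ranges and probabilities, not a nearest match.
APPROX_AVG_SHARED_CM = {
    "parent/child": 3485,
    "full sibling": 2613,
    "first cousin": 866,
    "second cousin": 229,
    "third cousin": 73,
}

def estimate_relationship(shared_segment_lengths_cm):
    """Guess the closest relationship from shared-segment lengths in cM."""
    total = sum(shared_segment_lengths_cm)
    closest = min(APPROX_AVG_SHARED_CM,
                  key=lambda rel: abs(APPROX_AVG_SHARED_CM[rel] - total))
    return total, closest

total, label = estimate_relationship([41.2, 18.7, 12.3])  # hypothetical segments
print(f"{total:.1f} cM shared -> closest to: {label}")    # ~third cousin
```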

An estimated 30 million people have used genetic genealogy databases like GEDmatch to identify biological relatives and build a family tree, and law enforcement officers have been capitalizing on all that freely available data in criminal investigations. Estimates are that genetic genealogy sites were used in around 200 cases just last year. For many of those cases, officers never sought a warrant or any legal process at all. 

Earlier this year, after public outcry, GEDmatch changed its previous position allowing for warrantless law enforcement searches, opted out all its users from those searches, and required all users to expressly opt in if they wanted to allow access to their genetic data. Only a small percentage did. But opting out has not prevented law enforcement from accessing consumers’ genetic data, as long as they can get a warrant, which one Orlando, Florida officer did last summer.  

Law enforcement has argued that people using genetic genealogy services have no expectation of privacy in their genetic data because users have willingly shared their data with the genetics company and with other users and have “consented” to a company’s terms of service. But the Supreme Court rejected a similar argument in Carpenter v. United States. 

In Carpenter, the Court ruled that even though our cell phone location data is shared with or stored by a phone company, we still have a reasonable expectation of privacy in it because of all the sensitive and private information it can reveal about our lives. Similarly, genetic data can reveal a whole host of extremely private and sensitive information about people, from their likelihood to inherit specific diseases to where their ancestors are from to whether they have a sister or brother they never knew about. Researchers have at one time or another theorized that DNA may predict race, intelligence, criminality, sexual orientation, and political ideology. Even when such research is later disproved, officials may rely on it to make judgments about and discriminate against people. Because genetic data is so sensitive, we have an expectation of privacy in it, even if other people can access it.

However, whether individual users of genetic genealogy databases have consented to law enforcement searches is somewhat beside the point. In all cases that we know of so far, law enforcement isn’t looking for the person who uploaded their DNA to a consumer site; they are looking for that person’s distant relatives—people who never could have consented to this kind of use of their genetic data because they don’t have any control over the DNA they happen to share with the site’s users.

These are also dragnet searches. They are not targeted at finding specific users or based on individualized suspicion—a fact the police admit because they don’t know who their suspect is. They are supported only by the hope that a crime scene sample might somehow be genetically linked to DNA submitted to a genetic genealogy database by a distant relative, which might give officers a lead in a case. That means these searches are nothing more than fishing expeditions through millions of innocent people’s DNA. Conducted under “general warrants,” they are no different from officers searching every house in a town with a population of 1.3 million on the off chance that one of those houses could contain evidence useful to finding the perpetrator of a crime. With or without a warrant, the Fourth Amendment prohibits searches like this in the physical world, and it should prohibit genetic dragnets like this one as well. There’s a real question whether a warrant that allows this kind of search could ever meet the particularity requirements of the Fourth Amendment.

We need to think long and hard as a society about whether law enforcement should be allowed to access genetic genealogy databases at all—even with a warrant. These searches impact millions of Americans. Although GEDmatch likely only encompasses about 0.5% of the U.S. adult population, research shows 60% of white Americans can already be identified from its 1.3 million users. This same research shows that once GEDmatch’s users encompass just 2% of the U.S. population, 90% of white Americans will be identifiable.
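
A back-of-envelope model (our own illustration; the published study is more careful, requiring shared segments long enough to actually detect) shows why a small database reaches so many people through their relatives:

```python
# Crude identifiability model: if a person has R relatives at third-cousin
# distance or closer, and a fraction p of the population is enrolled, the
# chance that at least one relative is in the database is 1 - (1 - p)**R.
R = 183  # assumed relative count; real counts vary widely by ancestry

for p in (0.005, 0.02):
    hit = 1 - (1 - p) ** R
    print(f"coverage {p:.1%}: P(at least one relative enrolled) ~ {hit:.0%}")
# 0.5% coverage -> ~60%; 2% coverage -> ~98%. The crude model overshoots
# the published 90% figure because it ignores segment-length thresholds.
```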

Although many authorities once argued these kinds of searches would only be used as a way to solve cold cases involving the most terrible and serious crimes, that is changing; this year, police used genetic genealogy to implicate a teenager for a sexual assault. Next year it could be used to identify political or environmental protestors. Unlike established criminal DNA databases like the FBI’s CODIS database, there are currently few rules governing how and when genetic genealogy searching may be used.

We should worry about these searches for another reason: they can implicate people for crimes they didn’t commit. Although police used genetic searching to finally identify the man they believe is the “Golden State Killer,” an earlier search in the same case identified a different person. In 2015, a similar search in a different case led police to suspect an innocent man. Even without genetic genealogy searches, DNA matches may lead officers to suspect—and jail—the wrong person, as happened in a California case in 2012. That can happen because we shed DNA constantly and because our DNA may be transferred from one location to another, possibly ending up at the scene of a crime, even if we were never there. 

All of this is made even more concerning by the recent acquisition of GEDmatch by a company whose main purpose is to help the police solve crimes. The ability to research family history and disease risk shouldn’t carry the threat that our data will be accessible to police or others and used in ways we never could have foreseen. Genetic genealogy searches by law enforcement invade our privacy in unique ways—they allow law enforcement to access information about us that we may not even know ourselves, that we have no ability to hide, and that could reveal more about us in the future than scientists know now. These searches should never be allowed—even with a warrant.

Related Cases: Maryland v. King; Carpenter v. United States

Speaking Freely: An Interview With Biella Coleman

EFF - Tue, 12/10/2019 - 11:31am

Around the globe, freedom of expression (or free speech) varies wildly in definition, scope, and level of access. The impact of the digital age on perceptions and censorship of speech has been felt across the political spectrum on a worldwide scale. In the debate over what counts as free expression and how it should work in practice, we often lose sight of how different forms of censorship—of hate speech, for example—can have a negative impact on different communities, and especially marginalized or vulnerable ones.

Speaking Freely brings forth interviews with human rights workers, free expression advocates, and activists from a variety of disciplines and affiliations. The common thread in these interviews is that curtailing free expression, via public or private censorship, can harm our ability to fully and authentically participate in an open society.

Gabriella “Biella” Coleman is an anthropologist whose work focuses on a range of subjects, from the anthropology of medicine to the practice of whistleblowing. To EFF readers, she is probably best known for her work on hacker communities. In 2014, she published the book Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous (Verso). She currently holds the Wolfe Chair in Scientific & Technological Literacy at McGill University in Montréal. 

I first met Biella at Berlin’s re:publica conference in 2011, and got to know her when we both contributed chapters to Beyond Wikileaks: Implications for the Future of Communications, Journalism, and Society. She’s a long-time friend to many EFFers, and contributed to our 2018 collaboration with McSweeney’s, The End of Trust.

We recently got together to discuss a subject that Biella—who has a background in medical anthropology—has been thinking about for a long time: How medical misinformation spreads, and how attempts to curb that spread can potentially cause harm to patient communities, particularly those that lack trust in the medical system for valid reasons. Mis- or disinformation has been a hot topic since the 2016 U.S. election, and its impact on vaccination rates is an important issue, but rarely is the other side of the coin—that is, the harm that censorship can cause to patients seeking answers to little-understood medical concerns—discussed in policy circles.

It's a subject we spend a lot of time thinking about at EFF. We know that misinformation can lead to harm, but we're also wary of attempts to censor it—particularly when censorship is proposed as the only solution to what is nearly always a much deeper societal issue. Some of us have found great value in online patient communities, and understand that freedom of expression is integral to such spaces. We think that Biella's insights into the topic will be valuable to anyone who struggles with this question.

York: So what are we talking about today?

I’ve been thinking a lot about free speech issues in the context of medical misinformation. And then I’ve also been thinking about how certain quarters of the left are very dissatisfied with free speech. This is an ongoing problem, but it’s become very punctuated. I think it’s shortsighted, even though many of the critiques are valid...I think it’s very dangerous to let go of free speech commitments. I actually think it’s incumbent on people [like us] to help explain why it’s important, and why free speech also can’t solve a lot of problems. It’s something I often think about in terms of putting [free speech] in its place, but not getting rid of it.

York: Let’s start with medical misinformation then. It’s something I’m very concerned about as well. There are some interesting problems here, and no easy solutions—I’d love to get your take on that.

Indeed, there is a lot of medical misinformation out there. But as a scholar who often looks at the history of science and medicine, I know that oftentimes the state of medicine is such that we can be quite sure about one cluster of issues while another cluster is in a dramatic state of uncertainty, and the two always coexist. It makes everything from doctoring to policing information very difficult.

The famous example is of course vaccinations. There’s a lot of fear and misinformation, and it is important to get the scientific consensus out there. But sometimes scientific consensus is upended by the scientific field itself. Patients have fought very hard to gain a voice and to help that process along in very important ways, whether it was psychiatric patients fighting against electric shock therapy or HIV activists demanding a different method for clinical trials. They were pushing against the grain of medical consensus at the time. So I always fear a set of commitments and solutions that rest on the idea of full certainty in one moment.

This is one of the reasons why it’s such a conundrum—because today’s scientific consensus may be tomorrow’s consensus, but some of it may not be, and where and how you draw that line may be difficult. I think it’s important to recognize that as we move forward with solutions. For example, I think something like linking to the CDC website to provide information about vaccinations is a really good idea, while I’m actually kind of against blanket bans.

This is for two reasons. First, there’s a lot of value in patients getting together to push against the medical establishment, because it can be wrong, so you just have to modulate that in a way where you can point people to what the medical field believes is the correct information. But what happens with blanket bans is they impinge too much on patients’ ability to get together and discuss freely in ways that sometimes do go against the medical establishment. And then more importantly, blanket bans create extreme resentment and more mistrust of mainstream scientific establishments. And so when you’re trying to correct for misinformation, it works against your very goal.

York: I agree, that’s so important. What you said about the idea of censorship creating resentment is interesting...it’s not the Streisand effect exactly, but it’s definitely something of concern to me as well, that when we suppress certain speech, it gets pushed into dark spaces. Can you elaborate on that?

Sure. And let me be clear: I’m not saying there are no instances of speech on various platforms that should be banned. I also believe that different providers have the right to configure their communities in certain ways. What I’m really concerned about here is the medical realm.

I do think that if we start seeing a trend where medical information, even the sort that we deem extremely problematic, is across the board or mostly banned—first of all, it’s not going to prevent people from gathering, there are just too many channels through which people will find other places to congregate—it’ll fuel their conviction that they’re absolutely right to the extent that trying to change their mind becomes even harder than it is today. And, in some cases, it can push some of these groups and communities to places that are more difficult to track and see, but where they’re still associating and sharing information.

And again, I think the risk in banning medical information is that you will catch in the net certain forms of pushback and discussion that may strike us as misinformation today but in another ten years are not. So, where do you draw that line? Maybe sometimes you do draw that line, but very, very conservatively. You say, ‘you know what? There are fifteen things we find questionable, but we’ll only ban two of them, because there the scientific consensus is in extreme agreement, whereas for the other thirteen it is not.’

Many gains have been made by patient communities and lay experts getting together to push against dominant models of the time, and that’s everything from the minimization of side effects from drugs to questioning dominant therapies ... Medical history has shown that a lot of positive good comes when patients come together to be able to talk and share information that is not the consensus of the time.

York: You may recall a few years ago, when I posted on Facebook about trying to find a diagnosis, and connecting with online communities really helped. I can’t even tell you how many people I know who have been helped by online medical communities, especially those with chronic illnesses.

And that’s the thing...so many people with chronic illnesses especially, or obscure ones, or ones that are controversial like autoimmune diseases...these patient communities are vital for people to get to a diagnosis and be able to move forward with therapy. 

If you’re on these forums, there’s a total mix of misinformation and scientific consensus, and also a range of information where you can’t even really say what it is because the state of the science is also uncertain.

And so I do fear blanket censorship, extreme bans, even while I favor some interventions that can maybe flag certain types of medical misinformation. I don’t know how effective that will be; it’s an open question that can be researched, but I’d be in favor of those types of interventions over more blunt instruments.

York: Yeah, I can understand that position.

Yeah, with hate speech and Nazi stuff, it’s maybe a different story. I feel less well-positioned to talk about it.

York: We don’t have to, that’s up to you.

[Coleman laughs]. I’m trained as a medical anthropologist so I know the history of … historical change around scientific consensus and facts and how wildly it can swing at times, and the role of patients and the importance of having an independent sphere of autonomy to discuss these issues, you know? I’m less familiar with the hate speech stuff even though I kind of obsess about it a little bit.

York: None of us want to touch it, right? It’s not simple. I’ve been dealing with it quite a bit, but we don’t have to do that here. Instead, let’s move to something else that interests you. I’d love to hear your thoughts on something else, which we’re grappling with too: How do we talk to the left about free speech?

Whew. Yeah, it’s so good that we’re talking about this. My general commitment is that we lose a lot if we cede free speech commitments and discourse, which are not the same thing, to the right. And in order for us not to allow that relinquishment to happen, we do have to make a more convincing case as to why free speech still matters for progressive and leftist causes.

Obviously, what we’ve done is not working, so we have to rethink both our commitments and our packaging. 

York: Yes. This is so important.

Okay, so first: What do we lose if we relinquish both our visible commitments—if it’s not part of our platform anymore—and also, if we don’t fight for it? It’s two different things. One has to do, oddly enough, with recruitment.

I think that there’s sometimes this idea among progressives and leftists about the people who are drawn to certain channels of the right. What I’ve observed is that some progressives, especially young ones who are pro-immigration and have left politics on one level, are drawn to people like [members of the so-called “intellectual dark web”], and they’re extremely skeptical of progressives and leftists on free speech grounds. They see those types as being critical thinkers, not the left. And so, if we cede free speech as a commitment and discourse, we will lose people that could be joining our cause to the right. It’s like a counter-radicalization strategy.

There are people who are like ‘Aren’t we supposed to like free speech? Haven’t we fought for this? Isn’t that what the university, and journalism, are about?’

So if the left is saying that free speech has been overvalued and doesn’t help our cause, of course young people are going to be like ‘that’s weird,’ you know what I mean? It’s such an important cultural value that just to denigrate it off the bat, for people who don’t know the complexities of many issues, it becomes a deterrent. That’s one reason that if we cede it to the right, we will also be pushing groups of people who have progressive leanings to the least reactionary, but nevertheless reactionary parts of the right. I see it all the time.

York: Yes, yes. I mean, we saw that article about Emma Sulkowicz just this week.

Yes! I know, exactly! But then, on top of that, there are two things. One is that free speech, even if we have it and institute it, whether at the university or through journalistic channels or more widely, is not a panacea. The history of liberal thought has had a simplistic idea: ensure that people have access to free speech, get good information out there, and good ideas will prevail. There are a lot of naive assumptions built into the philosophical base of free speech. And I think that for those of us who want to continue to have a place and reclaim it, we have to put the naive assumptions aside and say, ‘okay, it’s not a panacea.’

But imagine what would happen if we had no free speech. Historically, leftists often get thrown under the bus. Their free speech rights, for example in universities, are often the first to go. There was an interesting Washington Post op-ed about this; it was about how the speakers being disinvited or de-platformed are often leftists, and how voices around issues like Palestine and BDS are often muted, and if we had robust free speech protections, it would be harder to do that.

Historically, whether it was McCarthyism on or off campuses in the United States, or BDS today, without protections, leftist and progressive causes will be silenced. That’s one reason to keep them at play, or fight for them. 

Another thing is that people on the right do embrace free speech rhetoric in extremely problematic ways, like ‘oh, forcing me to use your preferred pronoun impinges on my free speech rights,’ as in Jordan Peterson’s case. I think that a case can be made as to why it doesn’t impinge on someone’s speech rights, but [that’s not what I focus on]. When I teach about civil liberties and free speech, those just aren’t the issues I emphasize. I emphasize things like whistleblowing, or how difficult it was for newspapers to be able to publish stolen material that was in the public interest. Things that then led to significant social change, such as the Pentagon Papers. I try to show why free speech still really matters for the running of the free press so that it’s clear that if we lost those protections, we’d be in a much more precarious place.

We have to both recognize what free speech has gained us, what we will lose if we totally relinquish these protections, and also recognize that there are other structural dynamics that have nothing to do with free speech that shape who can speak. Even with good free speech protections, a lot of other elements need to be instituted for progressive change. 

Free speech is a helpful ingredient in the cocktail of progressive politics, but shouldn’t be the only ingredient in our cocktail.

York: Yeah, I agree. 

I think that always, as we fight for it, we have to put it in its place, and recognize its power and its limits at the same time. So that’s what I try to do when I support it and talk about it.

York: This is awesome, I love this. I’m in so much violent agreement with what you’re saying. I’ve been really leaning over the past few years toward highlighting the ways in which marginalized communities are affected by speech regulations. And I think it’s really interesting that there’s a recognition of that amongst the left, but a lot of the solutions proposed [to things like hate speech] don’t recognize how collateral damage might happen.

Exactly. That’s the interesting thing, both for free speech and anonymity: There is collateral damage! When you protect these things, there are going to be some problems that precipitate out of that, but then an extreme narrowing of free speech or anonymity will also produce collateral damage for our own politics as well. So, we need to make that case, we need to make that obvious, both by going back into history—looking at how those who are persecuted have been progressives and leftists—and show how that’s also the present when it comes to campus politics as well. The BDS example is one of the best ones. I’ve seen it in Berkeley and on my own campus, where BDS makes a strong showing, and then there’s an idea where we have to curb its expression. It’s put in the fold of hate speech, but it’s not—it’s attacking a political configuration, and people should have the free speech right to canvass this cause on campus.

Also, just to reiterate the very first point I made, there’s a great point made, I think, by Corey Robin. He made this case where we have to embrace discourses of freedom because American society is still obsessed with this question. So yes, we still have to have a progressive politics, a platform, but you can still thread that through commitments to freedom, because that’s just kind of the milieu in which this country was founded and configured. 

I think it’s the same with free speech; it’s a very familiar discourse that was aligned with progressive causes for a long time. It doesn’t only serve progressive causes, but it’s progressives that fought for the right to have more robust free speech protections. 

...So, if all of a sudden we relinquish that only to the right, we will fail to convince some younger people to join more progressive causes as well.

York: Absolutely. Okay, here’s a question I’ve been asking everybody: Do you have a free speech hero?

Ha, that’s a good question. I will say the whistleblower. The whistleblower to me is so important, whether it is the whistleblower in a corporation such as those who exposed Theranos in Silicon Valley, or whether it’s [Daniel] Ellsberg, or Snowden, or anonymous whistleblowers...it’s so risky to get that information out, and you risk so much in doing so.

We do need massive free speech protections, and beyond, to ensure both that whistleblowers are not punished and that those that publish the information, like newspapers, are also protected. There have been many gains garnered from whistleblowing and, in fact, we need more of it and more effective whistleblowing as well. And for me, the figure of the whistleblower is my free speech hero, as well as the journalist willing to publish information provided by the whistleblower. That’s incredibly important.

And they should be our hero, not the Jordan Peterson who just doesn’t want to use “she/he”. That’s just kind of justifying being an asshole. I think it’s important to engage in the debate about why that’s problematic, but you don’t see people like that embracing the whistleblower, do you? I find that interesting. So yes, progressives should embrace free speech, but a different facet of it. We need to explain why those protections gain us something.

Of course it isn’t always effective, but whether it’s the ending of the Tuskegee experiments—that was done by a whistleblower—or the Pentagon Papers, or the closing of Theranos, there have just been so many gains, even though whistleblowing doesn’t always result in the change we want to see. We’d be much worse off if it didn’t exist, or if it were harder to do than it already is.

York: I want to touch on something you said here. You know, the thing that really frustrates me about some of the free speech defenders on the right is that I don’t see them speaking up about government censorship of sexuality, obscenity, et cetera. How are they okay with that?

Well, it’s a very selective reading of free speech. 

But [there’s another thing I want to address]. When we look at [some of the criticisms of the left], they’re focused on things like the insistence of using certain words. But we can change that narrative. If I were to address it, I would note that we, as a society, change our terms all the time for the purposes of civil rights: We don’t use the N-word, we don’t use “colored,” “homosexual” is out of favor. And it’s because these people have been discriminated against, and fought hard for their civil rights. And so changing language is part of that architecture of civil rights and respect.

We change our language to conform to protocols of respect and dignity and civil rights. Free speech issues have to do with governments and corporations squelching the little Davids who are fighting the Goliaths. There are ways to approach the language issues where we can still fight for free speech protections.

York: Yeah, I hear this a lot from people, and it’s not a policy issue or a free speech issue. It’s something for us to have conversations about.

Yes, and to come up with what’s going to have the most respect for the autonomy and dignity of these groups.

We have to think about how we present it, and again, I do see this thing where [people on the left] say ‘free speech is useless, the right is using it to make these ridiculous arguments.’ It’s tough—there are many different sets of issues that get wrapped up into one ball.

York: Yes, absolutely. We’ve touched on so many important things here. Thank you, Biella!

Speaking Freely: An Interview With Rima Sghaier

EFF - Mon, 12/09/2019 - 1:38pm

Around the globe, freedom of expression (or free speech) varies wildly in definition, scope, and level of access. The impact of the digital age on perceptions and censorship of speech has been felt across the political spectrum on a worldwide scale. In the debate over what counts as free expression and how it should work in practice, we often lose sight of how different forms of censorship—of hate speech, for example—can have a negative impact on different communities, and especially marginalized or vulnerable ones.

Speaking Freely brings forth interviews with human rights workers, free expression advocates, and activists from a variety of disciplines and affiliations. The common thread in these interviews is that curtailing free expression, via public or private censorship, can harm our ability to fully and authentically participate in an open society.

Rima Sghaier is a human rights activist and researcher who works at the intersection of technology and human rights, particularly in the Middle East and North Africa. 

Rima grew up in Tunisia under the regime of Zine El Abidine Ben Ali, which lasted for twenty-four years. Although Tunisia was among the earliest countries in its region to connect to the internet (in 1991), its use by dissidents and subcultures led to the government increasingly restricting access to information and communications tools. By the end of 2010, Tunisians had had enough and overthrew the Ben Ali government in a popular revolution that kicked off what some have referred to as the "Arab Spring."

For Rima, the experience of censorship—and the fear that it invokes—affected her from an early age, and shaped her views about freedom of expression. For the past few years, she has lived in Italy and has worked with the Hermes Center for Transparency and Digital Human Rights, which has brought her into the global digital rights community and challenged her thinking about where societies should draw lines when it comes to free speech.

For many free expression advocates, this is the ultimate question. While some may invoke Evelyn Beatrice Hall (and through her Voltaire) in their defense of speech, claiming "I disapprove of what you say, but I will defend to the death your right to say it," others would not take such a strong stance of defense, but nevertheless are uncomfortable with the idea of any authority being imbued with the power to decide for the rest of society what is or is not appropriate speech.

In our flowing conversation, we also touch on platform censorship, speech regulations, the role that Wikileaks played in the Tunisian revolution, and who Rima sees as the true heroes of free expression.

York: So let’s get down to it! My first question is, what does free speech, or free expression, mean to you?

I use ‘freedom of expression’ more than I do ‘free speech,’ because that’s what’s used in Tunisia, in the sense of Article 19 of the Universal Declaration of Human Rights. And for me, personally—I don’t know if you’ve had this reaction from other interviewees—if you’re someone who really believes in freedom of expression, when you’re asked to define it, the question surprises you. It happened to me [in a job interview]. It’s something I defend and advocate for, but I’ve never actually had to explain what it means!

If I have to give a definition that isn’t the legal definition, for me it means freedom from any fear, when expressing and articulating your thoughts and opinions, but also when accessing and sharing information.

York: I like that definition a lot. Would you say that you identify as a free expression advocate, or defender?

Yes. It’s a part of who I am right now, and it’s really special for someone who was born and lived under a dictatorship for eighteen years. It’s still personal, because I lived and was raised for so many years with things I could not say, and so my own personal freedom of expression is about being able to say things that I couldn’t.

York: Wow, I love that too. Is there anything else you want to say about that experience?

To get personal, I can say that I had a family member who was in the political opposition to the regime. At one point, he was invited for police questioning. He worked under the cover of cultural reasons, but also gave advice on political issues and the political situation in Tunisia. I remember one thing that was often repeated when I was a child: when the topics of politics or the economy came up, your family would say ‘the walls have ears.’ You weren’t supposed to worry about politics; those things were taken care of by the Ben Ali government. You weren’t supposed to think about that.

If I asked why someone was absent, I was told that the person didn’t respect limits, that they were causing trouble. Speaking up was causing trouble. It was a weird thing, because it intersected with other oppression mechanisms—so it’s not only politics, but the patriarchy, what you can or can’t talk about as a woman, what’s ladylike or not, what’s educated or not, and many things, like full equality between women and men, gender and sexuality … we don’t even have words for those things, it’s a work in progress in Arabic right now.

Some people just take those rules, this system of ‘do’s’ and ‘don’ts’ and go with it, but for me and many other young people, it was so frustrating. I wanted to talk about things like why YouTube was censored. We used to have censorship levels that equalled Cuba’s. For me, I sometimes doubted if there were other countries in the world, because it was so closed, you didn’t know if you were alone. You didn’t know what was real and what wasn’t.

On evening TV, it was all about what the president or his wife did; everything was so beautiful and colorful, while around you, people were suffering. So in that moment, from December 2010 until January 2011 [when the regime was toppled], I was in high school, and for many of the people in my high school, the parts of the internet that people couldn’t really censor were the only space where you could hear a report by Amnesty International—and not the one in Tunis [which didn’t exist yet]. [The internet] was the only place where you could get access to that kind of information.

In those days, I felt like suddenly there was freedom of expression without fear. Everyone was expressing their opinion, that they wanted the regime out, that they wanted freedom and democracy and all that. Going through the process, and since that moment, we’re slowly building this democracy that’s built on this strong need for freedom of expression along with other freedoms, of course.

York: Thank you for sharing that. I really like what you said about the role of freedom of expression in building democracy. So let’s move past politics for a minute and talk about the role of government, and companies, when it comes to free expression. How do you think that speech online should, or shouldn’t, be regulated?

Hmm, I don’t know. Every time I see an attempt at regulating speech, it’s really failing. I feel really clueless, because when I look at freedom of expression, I look at it as a principle. I know that it should be the same principle for physical and digital spaces, but I understand that there is a lot of tension about how extreme speech, dangerous speech, or whatever terminology you prefer, is spreading a lot faster through the use of social media. If we talk about freedom, it should protect and consider individuals, but we don’t always know who’s behind what.

There’s always the challenge of who should have the responsibility. When you put the responsibility on a company or a forum owner to control every piece of content on their platform, this results in a lot of policing and a lot of control and isn’t working out. I don’t know what should be done, but I think we should just stop trying to regulate it, while anything that would be prohibited in a newspaper should also be prohibited on social media, [like bullying or harassment].

The other thing that comes to my mind is social media companies trying to exempt politicians from their rules. Politicians can say things that are extreme, call for violence, or spread messages of discrimination, and I think this is not okay.

I [would like to understand more about regulation], but I don’t feel like it’s working.

York: [laughs] I wouldn’t worry about not knowing enough, if we knew the answers to these questions, we wouldn’t be having centuries-long debates about them. You’re probably a lot more informed than you think you are. But more specifically, I’m curious what role you think companies like Facebook and YouTube should play in regulating expression.

I mean, I’ve been to your talks [laughs].

York: It’s totally fine to disagree with me though!

I know how content moderation work is being implemented, and I know how hard it must be to decide, in a few seconds, whether something should be left on or off a platform. I do think that if there’s any way for technology to be smarter—not the way it is right now, removing fruit thinking it’s nudity, or removing evidence of [war crimes] as violence—it would be good for it to be used to remove certain things.

But again, I always find myself a little clueless about how to do it in a way that is fair. I know social media companies are working on better automated detection of certain content, but I think that something that needs [a lot more work] is context. For example, when Syrian Archive or independent media is trying to document things.

I’ve seen, in protests against intermittent internet shutdowns, or when the Egyptian government tried to limit access to certain websites, that social media was the quickest way to get things out, and so it’s really important to take context into consideration.

York: You’ve talked quite a bit about your experience as a Tunisian, but I’d like to know more about how you discovered internet censorship. Tunisia before the revolution was one of the world’s strictest internet censors. What was your experience with that like?

I discovered censorship when people abroad were posting links on social media that for me were quatre cent quatre, 404 errors [ed. note: The Ben Ali regime used the 404 error page, rather than a transparent notification of blocking, to signal to users that a website was blocked]. I’m sure you’ve heard that there were songs, artists that made fun of censorship. We knew that we were basically closed off to the whole world. Any media that criticized or was objective about what was happening in Tunisia [was blocked]. I remember that as a very young person, I was using proxies, trying to access [blocked] content. My dad was like ‘you shouldn’t do that, they’ll arrest you!’

So, for sure, social media was an uncensored part of the internet. Even if a website was blocked, people found ways to share it. 

York: I remember in 2008, when Tunisia blocked Facebook for two weeks…

Yes, and you may also remember Wikileaks; the leaks about Tunisia had a huge impact on Tunisian news. Nawaat [ed. note: Nawaat was a 2011 Pioneer Award winner] had stories about how Ben Ali abused his power, how he had stolen public money.

York: Yes, I remember that. I was just telling somebody the other day about the story where the presidential plane was used by the president’s wife to go shopping.

Yeah, and the boats, the yachts, the fights with Italy. Nobody heard about these things in Tunisia, but they were all over the international press as scandals. Those leaks made those stories shift from being rumors—until then, there was no proof that citizens could access—to something verifiable. That shift happened when Wikileaks happened.

York: Yes, I remember that, it’s so true. What about you personally? Is there a defining moment in your life that led you to advocate for freedom of expression?

One of the most defining moments for freedom of expression in my life and upbringing was when, as a teenager, my [conservative] dad had a rule where if he said no, it meant “no.” I didn’t have the right to ask why, or engage in any kind of dialogue. This meant I couldn’t go on school trips, or travel, or go to a friend’s place. It was such a controlled environment. My brother, on the other hand, could do what he wanted.

The first moments where I started to say “no” and “why?” and demand explanations came from seeing that my brother could go out without asking for permission. For me, building my freedom of expression was through breaking rules that had been ingrained since childhood. Trying to challenge any form of power and oppression, and understanding what that oppression is.

York: Thank you for sharing that, Rima. Okay, here’s a question I ask everyone. Do you have a free speech hero?

A free speech hero...Yeah, I would say my first is Sami Ben Gharbia. That’s because I’m very close to [2011 EFF Pioneer Award winner] Nawaat; they were my first eye on what was happening in Tunisia before the revolution. When you’re really concerned, following and really engaged in what’s happening worldwide, you see so many people on the daily, journalists and human rights activists, speaking up because they think it’s important and they don’t accept the status quo.

Raif Badawi in Saudi Arabia, that’s another one. I would sometimes say Julian Assange, although not who he is as a person. Whistleblowers, who share information with the public, defending and advocating for freedom of expression. All the activists I speak with in Egypt, who are still working and trying to get the news out about arrests and torture.

Women and LGBT people in countries and environments where they do not have the right to exist, and who try their best, despite everything, to exist and build a beautiful life and build communities and solidarity in situations where it’s very risky. For the LGBT community, [the internet] also allowed anyone who couldn’t have any existence in the real world to create their personas. They could share intimate photos, share their sexual or gender orientation, express their religious or non-religious views, and share them in a way that they were able to control with whom they shared them.

And finally, Zouhair Yahyaoui. His legacy, more after the revolution than before it, was massive. He was the moderator of TUNeZINE [ed. note: TUNeZINE was an early online forum that broke many of Tunisia’s red lines. Yahyaoui, who created and then moderated the platform anonymously, was eventually arrested and died of a heart attack not long after his release from prison. He was 37 years old.]

I actually spent a lot of time after 2011 going through the TUNeZINE archive. I cannot describe what 18 years under the Ben Ali regime felt like, because everything was so censored and filtered that how I saw Tunisia and the world just felt unreal. Being able to have that space, a huge archive where there were monthly updates, human rights violations reporting, a lot of art and poetry...for me one of the most amazing things was reading through the work of those anonymous cyberdissidents, their poetry in Tunisian and French, where they couldn’t say and express their opinions directly but had these implicit and creative ways to criticize the regime, as well as painful ways that move your emotions.

The digital space is changing a lot and we’re no longer doing forums and anonymous contributors, so most of what was happening in those forums is now on social media. So I have to mention Zouhair Yahyaoui because he really used the internet to document violations, to give a space for all those in civil society and political actors working in secrecy to talk about their work, name the political prisoners and ask for help, connect people with the international community that was in solidarity with Tunisians. This legacy is a huge part of our history, but there’s still a lot that has to be done for it not to be forgotten. 

So really, there are so many. For me, there are so many names and they all matter. The definition for me is all of these people who are speaking up despite fear and danger.

York: I love that too, that’s so important. Was there anything else that you wanted to add that we haven’t covered?

“I’m so eager to keep learning about regulation, what’s okay to say and what isn’t. If I sat at a table with say, three members of my family and a professor and a colleague and a fellow activist, and four strangers, everyone would have a different definition of what’s okay. My dad would say cursing isn’t okay, while another person might say that if you’re not cursing a person you know, but rather a situation or a public figure, it’s not the same.

There have actually been posts taken down from Twitter for using an ‘inappropriate’ word. There are people who might say it’s okay to make jokes about Muslims or people of color. Others would say only if [you’re part of the in-group], whereas still others would say it’s never okay.

For myself, I think there should be more consciousness about how people, how other humans might be affected, that’s really important. It should be clear what discrimination or hate speech, or extremist speech is. But also, there are so many people on the other side who overreact, who take things too personally, and I don’t know where that line is. I wish it was more clear.

I just posted something publicly on Facebook saying that I was so happy that [Moroccan journalist Hajar Raissouni, who was arrested for allegedly having an illegal abortion] was pardoned by the king, and I wrote ‘Long live freedom of expression, Moroccan feminists, and international solidarity’ and I already received so many comments criticizing me, saying things like ‘Why are you praising international NGOs? Why are you [undermining] the king?’

I know this [isn’t really about free speech] but if you’re a person actively talking about politics, you get this kind of criticism … If you, for example, talk about freedom for Raif Badawi, there were many people saying ‘With your message for his freedom, you’re attacking Islam and encouraging blasphemy.’ There were so many Saudis mass-reporting us, and harassing us. So I think that if there’s any kind of regulation, it should be the kind that protects us from that sort of coordinated harassment. This is exactly what shouldn’t be happening. [Companies] shouldn’t be collaborating with a repressive regime.

York: Yes, I absolutely agree with that! Well, I think that’s all the time we have today. Thank you so much Rima, this has been great.

Strengthen California’s Consumer Data Privacy Regulations

EFF - Fri, 12/06/2019 - 3:20pm

EFF and a coalition of privacy advocates have filed comments with the California Attorney General seeking strong regulations to protect consumer data privacy. The draft regulations are a good step forward, but the final regulations should go further.

The California Consumer Privacy Act of 2018 (CCPA) created new ways for the state’s residents to protect themselves from corporations that invade their privacy by harvesting and monetizing their personal information. Specifically, CCPA gives each Californian the right to know exactly what pieces of personal information a company has collected about them; the right to delete that information; and the right to opt-out of the sale of that information. CCPA is a good start, but we want more privacy protection from the California Legislature.

CCPA also requires the California Attorney General to adopt regulations by July 2020 to further the law’s purposes. In March 2019, EFF submitted comments to the California Attorney General with suggestions for CCPA regulations. In October 2019, the California Attorney General published draft regulations and again invited public comment.

In the new comments, EFF and the coalition wrote:

The undersigned group of privacy and consumer-advocacy organizations thank the Office of the Attorney General for its work on the proposed California Consumer Privacy Act regulations. The draft regulations bring a measure of clarity and practical guidance to the CCPA’s provisions entitling consumers to access, delete, and opt-out of the sale of their personal information. The draft regulations overall represent a step forward for consumer privacy, but some specific draft regulations are bad for consumers and should be eliminated. Others require revision.

The coalition made dozens of suggestions. We note two here.

First, to implement CCPA’s right to opt-out of the sale of one’s personal information, the draft regulations at Section 315(c) would require online businesses to comply with user-enabled privacy controls, such as browser plugins, that signal a consumer’s choice to opt-out of such sales. EFF suggested such an approach in our March 2019 comments. The coalition comments now seek a clarification to this draft regulation: that “do not track” browser headers, which thousands of Californians have already adopted, are among the kinds of signals that online businesses must treat as an opt-out from data sale.
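To make the mechanism concrete, here is a minimal sketch of what honoring such a signal could look like on a business’s web backend. It is written in Python with Flask; the route and helper names are invented for illustration, and nothing here is prescribed by the draft regulations:

```python
# Minimal sketch (assumptions: Flask; invented route and helper names) of a
# backend honoring the "DNT: 1" browser header as a do-not-sell signal.
from flask import Flask, request

app = Flask(__name__)

def visitor_opted_out_of_sale(req) -> bool:
    # Browsers send "DNT: 1" when the user enables "do not track."
    return req.headers.get("DNT") == "1"

@app.route("/")
def index():
    if visitor_opted_out_of_sale(request):
        # Skip any code path that would transfer this visitor's personal
        # information to third parties.
        return "Welcome! Your do-not-sell preference is being honored."
    return "Welcome!"
```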

Second, the coalition urges the California Attorney General to issue clarifying regulations that bar misguided efforts announced by some members of the adtech industry to evade CCPA’s right to opt-out of sales. Adtech is one of the greatest threats to consumer data privacy, as explained in a new EFF report on third-party tracking. The broad dissemination of personal information throughout the adtech ecology is a form of “sale” plainly subject to CCPA’s right to opt-out. Regulations should now lay to rest the crabbed arguments to the contrary.

The comments were signed by EFF and the following privacy advocacy organizations: Access Humboldt, ACLU of California, CALPIRG, Center for Digital Democracy, Common Sense Media, Consumer Federation of America, Consumer Reports, Media Alliance, Oakland Privacy, and Privacy Rights Clearinghouse.

Read the comments here.

We Need To Save .ORG From Arbitrary Censorship By Halting the Private Equity Buy-Out

EFF - Thu, 12/05/2019 - 8:11pm

The .ORG top-level domain and all of the nonprofit organizations that depend on it are at risk if a private equity firm is allowed to buy control of it. EFF has joined with over 250 respected nonprofits to oppose the sale of Public Interest Registry, the (currently) nonprofit entity that operates the .ORG domain, to Ethos Capital. Internet pioneers including Esther Dyson and Tim Berners-Lee have spoken out against this secretive deal. And 12,000 Internet users and counting have added their voices to the opposition.

What’s the harm in this $1.135 billion deal? In short, it would give Ethos Capital the power to censor the speech of nonprofit organizations (NGOs) to advance commercial interests, and to extract ever-growing monopoly rents from those same nonprofits. Ethos Capital has a financial incentive to engage in censorship—and, of course, in price increases. And the contracts that .ORG operates under don’t create enough accountability or limits on Ethos’s conduct.

Take Action

 SIGN THE PETITION TO DEFEND DOT ORGS

Domain Registries Have Censorship Power

Registries like PIR manage the Internet’s top-level domains under policies set out by ICANN, the governing body for the Internet’s domain name system. Registries have the power to suspend domain names, or even transfer them to other Internet users, subject to their contracts with ICANN. When a domain name is suspended, all of the Internet resources that use that name are disrupted, including websites, email addresses, and apps. That power lets registries exert influence over speech on the Internet in much the same way that social networks, search engines, and other well-placed intermediaries can do. And that power can be sold or bartered to other powerful groups, including repressive governments and corporate interests, giving them new powers of censorship.

Using the Internet’s chokepoints for censorship already happens far too often. For example:

  • The registry operators Donuts and Radix, who manage several hundred top-level domains, have private agreements with the Motion Picture Association of America to suspend domains based on accusations of copyright infringement from major movie studios, with no court order or right of appeal.
  • The search engine Bing, along with firewall maintainers and other intermediaries, has suppressed access to websites offering truthful information about obtaining prescription medicines from online pharmacies. They acted at the request of groups with close ties to U.S. pharmaceutical manufacturers who seek to keep drug prices high. The same groups have sought cooperation from domain registries and their governing body, ICANN.
  • The governments of Turkey and the United Arab Emirates, among others, regularly submit a flood of takedown requests to intermediaries, presumably in the hope that those intermediaries won’t examine those requests closely enough to reject the unjustified and illegal requests buried within them.
  • Saudi Arabia has relied on intermediaries like Medium, Snapchat, and Netflix to censor journalism it deems critical of the country’s totalitarian government.
  • The Domain Name Association (DNA), a trade group for the domain name industry, has proposed a broad program of Internet speech regulation, to be enforced with domain suspensions, also with no accountability or due process guarantees for Internet users.

As the new operator of .ORG, Ethos Capital would have the ability to engage in these and other forms of censorship. It could enforce any limitations on nonprofits’ speech, including selective enforcement of particular national laws. For intermediaries with power over speech, such conduct can be lucrative, if it wins the favor of a powerful industry like the U.S. movie studios or of the government of an authoritarian country where the intermediary wishes to do business. Since many NGOs are engaged in speech that seeks to hold governments and industry to account, those powerful interests have every incentive to buy the cooperation of a well-placed intermediary, including an Ethos-owned PIR.

Not Enough Safeguards

The sale of PIR to Ethos Capital erodes the safeguards against this form of censorship.

First, the .ORG TLD has a unique meaning. A new NGO website or project may be able to use a different top-level domain, but none carries the same message. A domain name ending in .ORG is the key signifier of non-commercial, public-minded organizations on the Internet. Even the new top-level domains .NGO and .ONG (also run by PIR), which would appear to be substitutes for .ORG, have seen little use.

Established NGOs are in even more of a bind. The .ORG top-level domain is 34 years old, and many of the world’s most important NGOs have used .ORG names for decades. For established NGOs, changing domain names is scarcely an option. Changing from .ORG to a .INFO or .US domain, for example, means disrupting email communications, losing search engine placement, and incurring massive expenses to change an organization’s basic online identity. Established NGOs are effectively a captive audience for the policies and prices set by PIR.

Second, the top-level domain for nonprofits should itself be run by a nonprofit. Today, PIR is a subsidiary of the Internet Society (ISOC), which also promotes Internet access worldwide and oversees the Internet’s basic technical standards. ISOC is a longstanding part of the community of Internet governance organizations. When ISOC created PIR in 2002, it touted its nonprofit status and position in the community as the reasons it should run .ORG. And those community ties help explain why, when PIR proposed building its own copyright enforcement system in 2016, outcry from the community caused it to back down. If PIR is operated for private profit, it will inevitably be less attentive to the Internet governance community.

Third, ICANN, the organization that sets policy for the domain name system, has been busy removing the legal guardrails that could protect nonprofit users of .ORG. Earlier this year, ICANN removed caps on registration fees for .ORG names, allowing PIR to raise prices at will on its captive customer base of nonprofits. And ICANN also gave PIR explicit permission to create new “protections for the rights of third parties”—often used as a justification and legal cover for censorship—without community input or accountability.

Without these safeguards, the sale of PIR to Ethos raises unacceptable risks of censorship and financial exploitation for nonprofits the world over. Yet Ethos and ISOC insist on completing the sale as quickly as possible, without addressing the community’s concerns. Their only response to the massive public outcry against the deal has been vague, unenforceable promises of good behavior.

The sale needs to be halted, and a process begun to guarantee the rights of nonprofit Internet users. You can help by signing the petition:

TAKE ACTION

 SIGN THE PETITION TO DEFEND DOT ORGS

Mint: Late-Stage Adversarial Interoperability Demonstrates What We Had (And What We Lost)

EFF - Thu, 12/05/2019 - 2:18pm

In 2006, Aaron Patzer founded Mint. Patzer had grown up in the city of Evansville, Indiana—a place he described as "small, without much economic opportunity"—but had created a successful business building websites. He kept up the business through college and grad school and invested his profits in stocks and other assets, leading to a minor obsession with personal finance that saw him devoting hours every Saturday morning to manually tracking every penny he'd spent that week, transcribing his receipts into Microsoft Money and Quicken.

Patzer was frustrated with the amount of manual work it took to track his finances with these tools, which at the time weren't smart enough to automatically categorize "Chevron" under fuel or "Safeway" under groceries. So he conceived of an ingenious hack: he wrote a program that would automatically look up every business name he entered in the online version of the Yellow Pages—constraining the search using the area code in the business's phone number so it would only consider local merchants—and use the Yellow Pages' own categories to populate the "category" field in his financial tracking tools.
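Patzer's original program is long gone, so the snippet below is only a hypothetical Python reconstruction of the idea; the lookup table stands in for the online Yellow Pages service he queried:

```python
# Hypothetical reconstruction of Patzer's categorization hack. The dictionary
# stands in for the online Yellow Pages; the real program queried that service
# live, constrained by the merchant's area code.
import re

FAKE_YELLOW_PAGES = {
    ("chevron", "415"): "fuel",
    ("safeway", "415"): "groceries",
}

def area_code(phone: str) -> str:
    # Keep only digits, then take the area code of the 10-digit US number.
    digits = re.sub(r"\D", "", phone)
    return digits[-10:-7]

def categorize(merchant: str, phone: str) -> str:
    # Reuse the directory's own category to fill the tracking tool's
    # "category" field (e.g., "Chevron" -> "fuel").
    key = (merchant.lower(), area_code(phone))
    return FAKE_YELLOW_PAGES.get(key, "uncategorized")

print(categorize("Chevron", "(415) 555-0123"))  # -> fuel
```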

It occurred to Patzer that he could do even better, which is where Mint came in. Patzer's idea was to create a service that would take all your logins and passwords for all your bank, credit union, credit card, and brokerage accounts, and use these logins and passwords to automatically scrape your financial records, and categorize them to help you manage your personal finances. Mint would also analyze your spending in order to recommend credit cards whose benefits were best tailored to your usage, saving you money and earning the company commissions.

By international standards, the USA has a lot of banks: around 12,000 when Mint was getting started (in the US, each state gets to charter its own banks, leading to an incredible, diverse proliferation of financial institutions). That meant that for Mint to work, it would have to configure its scrapers to work with thousands of different websites, each of which was subject to change without notice.

If the banks had been willing to offer an API, Mint's job would have been simpler. But despite a standard format for financial data interchange called OFX (Open Financial Exchange), few financial institutions were offering any way for their customers to extract their own financial data. The banks believed that locking in their users' data could work to their benefit, as the value of having all your financial info in one place meant that once a bank locked in a customer for savings and checking, it could sell them credit cards and brokerage services. This was exactly the theory that powered Mint, with the difference that Mint wanted to bring your data together from any financial institution, so you could shop around for the best deals on cards, banking, and brokerage, and still merge and manage all your data.

At first, Mint contracted with Yodlee, a company that specialized in scraping websites of all kinds, combining multiple webmail accounts with data scraped from news sites and other services in a single unified inbox. When Mint outgrew Yodlee's services, it founded a rival called Untangly, sequestering a separate team in a separate facility that never communicated with Mint directly, in order to head off any claims that Untangly had misappropriated Yodlee's proprietary information and techniques—just as Phoenix Technologies had created a separate clean-room team to re-implement the IBM PC ROMs, creating an industry of "PC clones."

Untangly created a browser plugin that Mint's most dedicated users would use when they logged into their banks. The plugin would prompt them to identify elements of each page on a bank's website so that the scraper for that site could figure out how to parse the bank's pages and extract other users' data on their behalf.

To head off the banks' countermeasures, Untangly maintained a bank of cable-modems and servers running "headless" versions of Internet Explorer (a headless browser is one that runs only in computer memory, without drawing the actual browser window onscreen) and they throttled the rate at which the scripted interactions on these browsers ran, in order to make it harder for the banks to determine which of its users were Mint scrapers acting on behalf of its customers and which ones were the flesh-and-blood customers running their own browsers on their own behalf.
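Untangly's code was never published, so the sketch below is only illustrative. It uses today's Selenium with headless Chrome as an anachronistic stand-in for the headless Internet Explorer fleet described above; the URL and delay values are invented:

```python
# Illustrative only: modern Selenium + headless Chrome standing in for
# Untangly's headless Internet Explorer fleet. URL and delays are invented.
import random
import time

from selenium import webdriver

def humanlike_pause(low=2.0, high=8.0):
    # Randomized delays make scripted sessions harder to distinguish from
    # flesh-and-blood customers clicking through pages.
    time.sleep(random.uniform(low, high))

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # run with no visible browser window
driver = webdriver.Chrome(options=options)

driver.get("https://bank.example/login")  # placeholder bank URL
humanlike_pause()
# ... fill in credentials, navigate to statements, parse transactions ...
driver.quit()
```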

As the above implies, not every bank was happy that Mint was allowing its customers to liberate their data, not least because the banks' winner-take-all plan was for their walled gardens to serve as reasons for customers to use their banks for everything, in order to get the convenience of having all their financial data in one place.

Some banks sent Mint legal threats, demanding that they cease-and-desist from scraping customer data. When this happened, Mint would roll out its "nuclear option"—an error message displayed to every bank customer affected by these demands informing them that their bank was the reason they could no longer access their own financial data. These error messages would also include contact details for the relevant decision-makers and customer-service reps at the banks. Even the most belligerent bank's resolve weakened in the face of calls from furious customers who wanted to use Mint to manage their own data.

In 2009, Mint became a division of Intuit, which already had a competing product with a much larger team. With the merged teams, they were able to tackle the difficult task of writing custom scrapers for the thousands of small banks they'd been forced to sideline for want of resources.

Adversarial interoperability is the technical term for a tool or service that works with ("interoperates" with) an existing tool or service—without permission from the existing tool's maker (that's the "adversarial" part).

Mint's story is a powerful example of adversarial interoperability: rather than waiting for the banks to adopt standards for data-interchange—a potentially long wait, given the banks' commitment to forcing their customers into treating them as one-stop-shops for credit cards, savings, checking, and brokerage accounts—Mint simply created the tools to take its users' data out of the banks' vaults and put it in vaults of the users' choosing.

Adversarial interoperability was once commonplace. It's a powerful way for new upstarts to unseat the dominant companies in a market—rather than trying to convince customers to give up an existing service they rely on, an adversarial interoperator can make a tool that lets users continue to lean on the existing services, even as they chart a path to independence from those services.

But stories like Mint's are rare today, thanks to a sustained, successful campaign by the companies that owe their own existence to adversarial interoperability to shut it down, lest someone do unto them as they had done unto the others.

Thanks to decades of lobbying and lawsuits, we've seen a steady expansion of copyright rules, software patents (though these are thankfully in retreat today), enforceable terms-of-service and theories about "interference with contract" and "tortious interference."

These have grown to such an imposing degree that big companies don't necessarily need to send out legal threats or launch lawsuits anymore—the graveyard of new companies killed by these threats and suits is scary enough that neither investors nor founders have much appetite for risking it.

That Mint launched when it did, and did as well as it did, tells us that adversarial interoperability may be down, but it's not out. With the right legal assurances, there are plenty of entrepreneurs and investors who'd happily provide users with the high-tech ladders they need to scale the walled gardens that Big Tech has imprisoned them within.

The Mint story also addresses an important open question about adversarial interoperability: if we give technologists the right to make these tools, will they work? After all, today's tech giants have entire office-parks full of talented programmers. Can a new market entrant hope to best them in the battle of wits that plays out when they try to plug some new systems into Big Tech's existing ones?

The Mint experience points out that attackers always have an advantage over defenders. For the banks to keep Mint out, they'd have to have perfect scraper-detection systems. For Mint to scrape the banks' sites, they only need to find one flaw in the banks' countermeasures.

Mint also shows how an incumbent company's own size works against it when it comes to shutting out competitors. Recall that when a bank decided to send its lawyers after Mint, Mint was able to retaliate by recruiting the bank's own customers to blast it for that decision. The more users Mint had, the more complaints it would generate—and the bigger a bank was, the more customers it had to become Mint users, and defenders of Mint's right to scrape the bank's site.

It's a neat lesson about the difference between keeping out malicious hackers versus keeping out competitors. If a "bad guy" was attacking the bank's site, it could pull out all the stops to shut the activity down: lawsuits, new procedures for users to follow, even name-and-shame campaigns against the bad actor.

But when a business attacks a rival that is doing its own customers' bidding, its ability to do so has to be weighed against the ill will it will engender with those customers, and the negative publicity this kind of activity will generate. Consider that Big Tech platforms claim billions of users—that's a huge pool of potential customers for adversarial interoperators who promise to protect those users from Big Tech's poor choices and exploitative conduct!

This is also an example of how "adversarial interoperability" can peacefully co-exist with privacy protection: it's not hard to see how a court could distinguish between a company that gets your data from a company's walled garden at your request so that you can use it, and a company that gets your data without your consent and uses it to attack you.

Mint's pro-competitive pressure made banks better, and gave users more control. But of course, today Mint is a division of Intuit, a company mired in scandal over its anticompetitive conduct and regulatory capture, which have allowed it to subvert the Free File program that should give millions of Americans access to free tax-preparation services.

Imagine if an adversarial interoperator were to enter the market today with a tool that auto-piloted its users through the big tax-prep companies' sites to get them to Free File tools that would actually work for them (as opposed to tricking them into expensive upgrades, often by letting them get all the way to the end of the process before revealing that something about the user's tax situation makes them ineligible for that specific Free File product).

Such a tool would be instantly smothered with legal threats, from "tortious interference" to hacking charges under the Computer Fraud and Abuse Act. And yet, these companies owe their size and their profits to exactly this kind of conduct.

Creating legal protections for adversarial interoperators won't solve all our problems of market concentration, regulatory capture, and privacy violations—but giving users the right to control how they interact with the big services would certainly open a space where technologists, co-ops, entrepreneurs and investors could help erode the big companies' dominance, while giving the public a better experience and a better deal.

Certbot Leaves Beta with the Release of 1.0

EFF - Thu, 12/05/2019 - 11:47am

Earlier this week EFF released Certbot 1.0, the latest version of our free, open source tool that helps websites encrypt their traffic. The release of 1.0 is a significant milestone for the project and is the culmination of the work done over the past few years by EFF and hundreds of open source contributors from around the world.

Certbot was first released in 2015 to automate the process of configuring and maintaining HTTPS encryption for site administrators by obtaining and deploying certificates from Let's Encrypt. Since its initial launch, many features have been added, including beta support for Windows, automatic nginx configuration, and support for over a dozen DNS providers for domain validation.

Certbot is part of EFF's project to encrypt the web. Using HTTPS instead of unencrypted HTTP protects people from eavesdropping, content injection, and cookie stealing, which can be used to take over your online accounts. Since the release of Let's Encrypt and Certbot, the percentage of web traffic using HTTPS has increased from 40% to 80%. This is significant progress in building a web that is encrypted by default, but there is more work to be done.

The release of 1.0 officially marks the end of Certbot's beta phase, during which it has helped over 2 million users maintain HTTPS access to over 20 million websites. We’re very excited to see how many more users, and more websites, Certbot will assist in 2020 and beyond.

It’s thanks to our 30,000+ members that we’re able to maintain Certbot and push for a 100% encrypted web.

Support Certbot!

Contribute to EFF's Security-enhancing tech projects

The FCC Is Opening up Some Very Important Spectrum for Broadband

EFF - Wed, 12/04/2019 - 11:25am

Decisions about who gets to use the public airwaves, and how, impact our lives every day. From the creation of WiFi routers to the public auctions that gave us more than two options for our cell phone providers, decisions by the Federal Communications Commission (FCC) reshape our technological world. And the FCC is about to make another one.

In managing the public spectrum, aka “the airwaves,” the FCC has a responsibility to ensure that commercial uses benefit the American public. Traditionally, the FCC either assigns spectrum to certain entities with specific use conditions (for example, television, radio, and broadband are “licensed uses”) or simply designates a portion of spectrum as an open field with no specific use in mind, called “unlicensed spectrum,” which is what WiFi routers use.

The FCC is about to make two incredibly important spectrum decisions. The first we’ve written about previously, but, in short, the FCC intends to reallocate spectrum currently used for satellite television to broadband providers through a public auction. The second is reassigning spectrum located in the 5.9 GHz frequency band from being exclusively licensed to the auto industry to being an open, unlicensed use.

We support the 5.9 GHz decision because unlicensed spectrum allows innovators big and small to make use of a public asset without paying license fees or obtaining advance government permission. Users gain improved wireless services, more competition, and more services making the most of an asset that belongs to all of us.

Why Is 5.9 GHz Licensed to the Auto Industry?

In 1999, the FCC allocated a portion of the public airwaves to a new type of car safety technology using Dedicated Short Range Communications (DSRC). In theory, cars equipped with DSRC devices on the 5.9 GHz band would communicate with each other and coordinate to avoid collisions. Twenty years later, very few cars actually use DSRC. In fact, so few cars use it that a study found its current automotive value is about $6.2 million, while opening up the spectrum would be worth tens of billions of dollars. In other words, a public asset that could be used for next-generation WiFi is effectively lying fallow until the FCC changes part of the license held by the auto industry into an unlicensed use.

Even though it’s barely using it, what are the chances the auto industry will give up exclusive access to a multi-billion-dollar public asset it gets for free? This is why last-ditch arguments that the auto industry must maintain exclusive use over a huge amount of spectrum as a matter of public safety ring hollow. Nothing the FCC is doing here prevents cars from using this spectrum, and because it is high-frequency spectrum, a lot of data can travel over the airwaves in question.

It isn’t the FCC’s job to stand idly by while someone essentially squats on public property, letting it go to waste. Rather, the FCC’s job is to continually evaluate who is given special permission by the government and to decide whether they are producing the most benefit to the public.

Unlicensing 5.9 GHz Means Faster WiFi, Improved Broadband Competition, and Better Wireless Service

Spectrum is necessary for transmitting data, and more of it means more available bandwidth. WiFi routers today have a bandwidth speed limit because you can only move as much data as you have spectrum available. In addition, the frequency range affects how much data you can move. So earlier WiFi routers that used 2.4 GHz generally transmitted hundreds of megabits per second, while today’s routers also use 5.0 GHz to deliver gigabit speeds. More spectrum in the range of 5.9 GHz, with similar properties to current gigabit routers, will mean the next generation of WiFi routers will be able to transmit even greater amounts of data.
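That intuition follows from the Shannon-Hartley theorem, under which channel capacity grows linearly with bandwidth. Here is a back-of-the-envelope Python illustration; the 20 dB signal-to-noise ratio is an arbitrary assumption, not a measurement:

```python
# Back-of-the-envelope Shannon-Hartley illustration: capacity C = B*log2(1+SNR)
# grows linearly with bandwidth B. The 20 dB SNR is an arbitrary assumption.
import math

def capacity_mbps(bandwidth_hz: float, snr_linear: float) -> float:
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

snr = 10 ** (20 / 10)  # 20 dB -> linear SNR of 100
for mhz in (40, 80, 160):  # common WiFi channel widths
    print(f"{mhz} MHz channel: ~{capacity_mbps(mhz * 1e6, snr):.0f} Mbps ceiling")
```

Doubling the channel width doubles the theoretical ceiling, which is why freeing more contiguous spectrum near 5.9 GHz translates directly into faster WiFi.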

Adding more high-capacity spectrum into the unlicensed space also means that smaller competitive wireless ISPs that compete with incumbents will be given more capacity for free. Typically, small wireless ISPs (WISPs) rely on unlicensed spectrum because they do not have the multi-billion-dollar financing that AT&T and Sprint do to purchase exclusive licenses. Their lack of financing also limits their ability to immediately deploy fiber wires to the home, and unlicensed spectrum allows them to bypass those infrastructure costs until they have enough customers to fund fiber infrastructure. Improving the competitiveness of small wireless players is an essential part of eventually reaching a fiber-for-all future, because smaller competitive ISPs are more aggressive in deploying fiber to the home than the big incumbent telecommunications companies that have generally abandoned their buildouts.

Lastly, wireless broadband service in general will improve because unlicensed spectrum has dramatically reduced congestion on cell towers by offloading traffic to WiFi hotspots and routers. This offloading process is so pervasive that, these days, 59 percent of our smartphone traffic is over WiFi instead of over 4G. It’s estimated that 71 percent of 5G traffic will actually be over WiFi in its early years. Adding more capacity to offloading and higher speeds will mean less congestion on public and small business guest WiFi as well.

As it stands, the 5.9 GHz band of the public airwaves is barely serving the public at all. The FCC's decision to open it up carries clear benefits and is a good idea.

How a Patent on Sorting Photos Got Used to Sue a Free Software Group

EFF - Tue, 12/03/2019 - 6:45pm

Taking and sharing pictures with wireless devices has become a common practice. It’s hardly a recent development: the distinction between computers and cameras has shrunk, especially since 2007 when smartphone cameras became standard. Even though devices that can take and share photos wirelessly have become ubiquitous over a period spanning more than a decade, the Patent Office granted a patent on an “image-capturing device” in 2018.

A patent on something so commonplace might seem comical, but unfortunately, U.S. Patent No. 9,936,086 is already doing damage to software innovation. It’s creating litigation costs for real developers. The owner of this patent is Rothschild Patent Imaging LLC, or RPI, a company linked to a network of notorious patent trolls connected to inventor Leigh Rothschild. We've written about two of them before: Rothschild Connected Devices Innovations, and Rothschild Broadcast Distribution Systems. Now, RPI has used the ’086 patent to sue the GNOME Foundation, a non-profit that makes free software.

The patent claims a generic “image-capturing mobile device” with equally generic components: a “wireless receiver,” “wireless transmitter,” and “a processor operably connected to the wireless receiver and the wireless transmitter.” That processor is configured to: (1) receive multiple photographic images, (2) filter those images using criteria “based on a topic, theme or individual shown in the respective photographic image,” and (3) transmit the filtered photographic images to another wireless device. In other words: the patent claims a smartphone that can receive images that a user can filter by content before sending to others.
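To see how generic those claimed steps are, consider that their entire substance can be mimicked in a few lines of code. The toy sketch below is ours, not anything from the patent or any accused product; every name in it is invented:

```python
# Toy illustration of the claimed steps: receive images, filter them by
# topic/theme/individual, transmit the survivors. All names are invented;
# the point is how generic the claimed behavior is.
from dataclasses import dataclass

@dataclass
class Photo:
    filename: str
    topic: str
    people: list

def filter_photos(photos, topic=None, person=None):
    # "Subject identification" criteria: topic, theme, or individual shown.
    return [
        p for p in photos
        if (topic is None or p.topic == topic)
        and (person is None or person in p.people)
    ]

received = [Photo("beach.jpg", "vacation", ["Alice"]),
            Photo("desk.jpg", "work", ["Bob"])]
to_transmit = filter_photos(received, topic="vacation")  # then send onward
```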

According to Rothschild’s complaint, all it takes to infringe its patent is to provide a product that “offers a number of ways to wirelessly share photos online such as through social media.” How in the world could a patent on something so basic and established qualify as inventive in 2018?

At least part of the answer is that the Patent Office simply failed to apply the Supreme Court’s Alice decision. The Alice decision makes clear that using generic computers to automate established human tasks cannot qualify as an “invention” worthy of patent protection. Applying Alice, the Federal Circuit specifically rejected a patent on the “abstract idea of classifying and storing digital images in an organized manner” in TLI Communications.

Inexplicably, there’s no sign the Patent Office gave either decision any consideration before granting this application. Alice was decided in 2014; TLI in 2016. Rothschild filed the application that became the ’086 patent in June 2017. Before being granted, the application received only one non-final rejection from an examiner at the Patent Office. That examiner did not raise any concerns about the application’s eligibility for patent protection, let alone any concerns specifically stemming from Alice or TLI.

The examiner only compared the application to one earlier reference—a published patent application from 2005. Rothschild claimed that system was irrelevant because its filter was based on the image’s quality; in Rothschild’s “invention,” the filter was based on “subject identification” criteria, such as the topic, theme, or individual in the photo.

Rothschild didn’t describe how the patent performed the filtering step, or explain why filtering on these criteria would be a technical invention. Nor did the Patent Office ask. But under Alice, it should have. After all, humans have been organizing photos based on topic, theme, and individuals depicted for as long as humans have been organizing photos.

Because the Patent Office failed to apply Alice and granted the ’086 patent, the question of its eligibility may finally get the attention it needs in court. The GNOME Foundation has filed a motion to dismiss the case, pointing out the patent’s lack of eligibility. We hope the district court will apply Alice and TLI to this patent. But a non-profit that exists to create and spread free software never should have had to spend its limited time and resources on this patent litigation in the first place.

Sen. Cantwell Leads With New Consumer Data Privacy Bill

EFF - Tue, 12/03/2019 - 6:13pm

There is a lot to like about U.S. Sen. Cantwell’s new Consumer Online Privacy Rights Act (COPRA). It is an important step towards the comprehensive consumer data privacy legislation that we need to protect us from corporations that place their profits ahead of our privacy.

The bill, introduced on November 26, is co-sponsored by Sens. Schatz, Klobuchar, and Markey. It fleshes out the framework for comprehensive federal privacy legislation announced a week earlier by Sens. Cantwell, Feinstein, Brown, and Murray, who are, respectively, the ranking members of the Senate committees on Commerce, Judiciary, Banking, and Health, Education, Labor and Pensions.

This post will address COPRA’s various provisions in four groupings: EFF’s key priorities, the bill’s consumer rights, its business duties, and its scope of coverage.

EFF’s Key Priorities

COPRA satisfies two of EFF’s three key priorities for federal consumer data privacy legislation: private enforcement by consumers themselves; and no preemption of stronger state laws. COPRA makes a partial step towards EFF’s third priority: no “pay for privacy” schemes.

Private enforcement. All too often, enforcement agencies lack the resources or political will to enforce statutes that protect the public, so members of the public must be empowered to step in. Thus, we are pleased that COPRA has a strong private right of action to enforce the law. Specifically, in section 301(c), COPRA allows any individual who is subjected to a violation of the Act to bring a civil suit. They may seek damages (actual, liquidated, and punitive), equitable and declaratory relief, and reasonable attorney’s fees.

COPRA also bars enforcement of pre-dispute arbitration agreements, in section 301(d). EFF has long opposed these unfair limits on user enforcement of their legal rights in court.

Further, COPRA in section 301(a) provides for enforcement by a new Federal Trade Commission (FTC) bureau comparable in size to existing FTC bureaus. State Attorneys General and consumer protection officers may also enforce the law, per section 301(b). It is helpful to diffuse government enforcement in this manner.

No preemption of stronger state laws. COPRA expressly, in section 302(c), does not preempt state laws unless they are in direct conflict with COPRA, and a state law is not in direct conflict if it affords greater protection. This is most welcome. Federal legislation should be a floor and not a ceiling for data privacy protection. EFF has long opposed preemption by federal laws of stronger state privacy laws.

“Pay for privacy.” COPRA only partially addresses EFF’s third priority: that consumer data privacy laws should bar businesses from retaliating against consumers who exercise their privacy rights. Otherwise, businesses will make consumers pay for their privacy, by refusing to serve privacy-minded consumers at all, by charging them higher prices, or by providing them services of a lower quality. Such “pay for privacy” schemes discourage everyone from exercising their fundamental human right to data privacy, and will result in a society of income-based “privacy haves” and “privacy have nots.”

In this regard, COPRA is incomplete. On the bright side, in section 109 it bars covered entities from conditioning the provision of service on the individual’s waiver of their privacy rights. But COPRA allows covered entities to charge privacy-minded consumers a higher price or provide them a lower quality of service. We urge amendment of COPRA to bar such “pay for privacy” schemes.

Consumer Rights Under COPRA

COPRA would provide individuals with numerous data privacy rights that they may assert against covered entities.

Right to opt-out of data transfer. An individual may require a covered entity to stop transferring their data to other entities. This protection, in section 105(b), is an important one. COPRA requires the FTC to establish processes for covered entities to use to facilitate opt-out requests. In doing so, the FTC shall “minimize the number of opt-out designations of a similar type that a consumer must take.” We hope these processes include browser headers and similar privacy settings, such as the “do not track” system, that allow tech users to signal at once to all online entities that they have opted out.

Right to opt-in to sensitive data processing. An individual shall be free from any data processing or transfer of their “sensitive” data, unless they affirmatively consent to such processing, under section 105(c). There is an exception for certain “publicly available information.”

The bill has a long list of what is considered “sensitive” data: government-issued identifiers; information about physical and mental health; credentials for financial accounts; biometrics; precise geolocation; communications content and metadata; email, phone number, or account log-in credentials; information revealing race, religion, union membership, sexual orientation, sexual behavior, or online activity over time and across websites; calendars, address books, phone and text logs, photos, or videos on a device; nude pictures; any data processed in order to identify the above data; and any other data designated by the FTC.

Of course, a great deal of information that the bill does not deem “sensitive” is in fact extraordinarily sensitive. This includes, for example, immigration status, marital status, lists of familial and social contacts, employment history, sex, and political affiliation. So COPRA’s list of sensitive data is under-inclusive. In fact, any such list will be under-inclusive, as new technologies make it ever-easier to glean highly personal facts from apparently innocuous bits of data. Thus, all covered information should be free from processing and transfer, absent opt-in consent, and a few other tightly circumscribed exceptions.

Right to access. An individual may obtain from a covered entity, in a human-readable format, the covered data about them, and the names of third parties their data was disclosed to. Affirming this right, in section 102(a), is good. But requesters should also be able to learn the names of the third parties who provided their personal data to the responding entity. To map the flow of their personal data, consumers must be able to learn both where it came from and where it went.

Right to portability. An individual may export their data from a covered entity in a “structured, interoperable, and machine-readable format.” This right to data portability, in section 105(a), is an important aspect of user autonomy and the right-to-know. It also may promote competition, by making it easier for tech users to bring their data from one business to another.

Rights to delete and to correct. An individual may require a covered entity to delete or correct covered data about them, in sections 103 and 104.

Business Duties Under COPRA

COPRA would require businesses to shoulder numerous duties, even if a consumer does not exercise any of the aforementioned rights.

Duty to minimize data processing. COPRA, in section 106, would bar a covered entity from processing or transferring data “beyond what is reasonably necessary, proportionate, and limited” to certain kinds of purposes. This is “data minimization”: the principle that an entity should minimize its processing of consumer data. Minimization is an important tool in the data privacy toolbox. We are glad COPRA has a minimization rule. We are also glad COPRA would apply this rule to all the ways an entity processes data (and not just, for example, to data collection or sharing).

However, COPRA should improve its minimization yardstick. Data privacy legislation should bar companies from processing data except as reasonably necessary to give the consumer what they asked for, or for a few other narrow purposes. Along these lines, COPRA allows processing to carry out the “specific” purpose “for which the covered entity has obtained affirmative express consent,” or to “complete a transaction … specifically requested by an individual.” Less helpful is COPRA’s additional allowance of processing for the purpose “described in the privacy policy made available by the covered entity.” We suggest deletion of this allowance, because most consumers will not read the privacy policy.

Duty of loyalty. COPRA, in section 101, would bar companies from processing or transferring data in a manner that is “deceptive” or “harmful.” The latter term means likely to cause: a financial, physical, or reputational injury; an intrusion on seclusion; or “other substantial injury.” This is a good step. We hope legislators will also explore “information fiduciary” obligations where the duty of loyalty would require the business to place the consumer’s data privacy rights ahead of the business’ own profits.

Duty to assess algorithmic decision-making impact. An entity must conduct an annual impact assessment if it uses algorithmic decision-making to determine: eligibility for housing, education, employment, or credit; distribution of ads for the same; or access to public accommodations. This annual assessment—as described in section 108(b)—must address, among other things, whether the system produces discriminatory results. This is good news. EFF has long sought greater transparency about algorithmic decision-making.

Duty to build privacy protection systems. A covered entity must designate a privacy officer and a data security officer. These officers must implement a comprehensive data privacy program, annually assess data risks, and facilitate ongoing compliance, as required by COPRA’s section 202. Moreover, the CEO of a “large” covered entity must certify, based on review, the existence of adequate internal controls and reporting structures to ensure compliance. COPRA in section 2(15) defines a “large” entity as one that processes the data of 5 million people or the sensitive data of 100,000 people. These COPRA rules will help ensure that businesses build the privacy protection systems needed to safeguard consumers’ personal information.

Duty to publish a privacy policy. A covered entity must publish a privacy policy that states, among other things, the categories of data it collects, the purpose of collection, the identity of entities to which it transfers data, and the duration of retention. This language, in section 102(b), will advance transparency.

Duty to secure data. A covered entity must establish and implement reasonable data security practices, as described in section 107.

Scope of Coverage

Consumer data privacy laws must be scoped to particular data, to particular covered entities, and with particular exceptions.

Covered data. COPRA, in section 2(8)(A), protects “covered data,” defined as “information that identifies, or is linked or reasonably linkable to an individual or a consumer device, including derived data.” This term excludes de-identified data, and information lawfully obtained from government records.

We are pleased that “covered data” extends to “devices,” and that “derived” data includes “data about a household” in section 2(11). Some businesses track devices and households, without ascertaining the identity of individuals.

Unfortunately, COPRA defines “covered data” to exclude “employee data,” meaning personal data collected in the course of employment and processed solely for employment in sections 2(8)(B)(ii) and 2(12). For many people, the greatest threat to data privacy comes from their employers and not from other businesses. Some businesses use cutting-edge surveillance tools to closely scrutinize employees at computer workstations (including their keystrokes) and at assembly lines (including wristbands to monitor physical movements). Congress must protect the data privacy of workers as well as consumers.

Covered entities. COPRA, as outlined in section 2(9), applies to every entity or person subject to the FTC Act. That Act, in turn, excludes various economic sectors, such as common carriers, per 15 U.S.C. 45(a)(2). Hopefully, this COPRA limitation reflects the jurisdictional frontiers of the various congressional committees—and the ultimate federal consumer data privacy bill will apply across economic sectors.

COPRA excludes “small business” from the definition of “covered entity” under sections 2(9) & (23). EFF supports such exemptions, among other reasons because small start-ups often are engines of innovation. Two of COPRA’s three size thresholds would exclude small businesses: $25 million in gross annual revenue, or 50% of revenue from transferring personal data. But COPRA’s third size threshold would capture many small businesses: annual processing of the personal data of 100,000 people, households, or devices. Many small businesses have websites that process the IP addresses of just 300 visitors per day, which works out to more than 100,000 per year. We suggest deleting this third threshold, or raising it by an order of magnitude.

Exceptions. COPRA contains various exemptions, listed in sections 110(c) through 110(g).

Importantly, it includes a journalism exemption in section 110(e): “Nothing in this title shall apply to the publication of newsworthy information of legitimate public concern to the public by a covered entity, or to the processing or transfer of information by a covered entity for that purpose.” This exemption is properly framed by the activity of journalism, which all people and organizations have a First Amendment right to exercise, regardless of whether they are a professional journalist or a news organization.

COPRA, in section 110(d)(1)(D), exempts the processing and transfer of data as reasonably necessary “to protect against malicious, deceptive, fraudulent or illegal purposes.” Unfortunately, many businesses may interpret such language to allow them to process all manner of personal data, in order to identify patterns of user behavior that the businesses deem indicative of attempted fraud. We urge limitation of this exemption.

Conclusion

We thank Sen. Cantwell for introducing COPRA. It is a strong step forward in the national conversation over how government should protect us from businesses that harvest and monetize our personal information. While we will seek strengthening amendments, COPRA is an important data privacy framework for legislators and privacy advocates.

EFF Releases Certbot 1.0 to Help More Websites Encrypt Their Traffic

EFF - Tue, 12/03/2019 - 4:08pm
Two Million Users Already Actively Using Certbot to Keep Sites Secure

San Francisco - The Electronic Frontier Foundation (EFF) today released Certbot 1.0: a free, open source software tool to help websites encrypt their traffic and keep their sites secure.

Certbot was first released in 2015, and since then it has helped more than two million website administrators enable HTTPS by automatically deploying Let’s Encrypt certificates. Let’s Encrypt is a free certificate authority that EFF helped launch in 2015, now run for the public’s benefit through the Internet Security Research Group (ISRG).

HTTPS is a huge upgrade in security from HTTP. For many years, website owners chose to implement HTTPS only for a small number of pages, like those that accepted passwords or credit card numbers. However, in recent years, it has become clear that all web pages need protection. Pages served over HTTP are vulnerable to eavesdropping, content injection, and cookie stealing, which can be used to take over your online accounts.

“Securing your web browsing with HTTPS is an important part of protecting your information, like your passwords, web chats, and anything else you look at or interact with online,” said EFF Senior Software Architect Brad Warren. “However, Internet users can’t do this on their own—they need site administrators to configure and maintain HTTPS. That's where Certbot comes in. It automates this process to make it easy for everyone to run secure websites.”

Certbot is part of EFF’s larger effort to encrypt the entire Internet. Along with our browser add-on, HTTPS Everywhere, Certbot aims to build a network that is more structurally private, safe, and protected against censorship. The project is encrypting traffic to over 20 million websites, and has recently added beta support for Windows-based servers. Before the release of Let’s Encrypt and Certbot, only 40% of web traffic was encrypted. Now, that number is up to 80%.

“A secure web experience is important for everyone, but for years it was prohibitively hard to do,” said Max Hunter, EFF’s Engineering Director for Encrypting the Internet. “We are thrilled that Certbot 1.0 now makes it even easier for anyone with a website to use HTTPS.”

For more about Certbot:
https://certbot.eff.org/

Contact: Brad Warren, Senior Software Architect, bmw@eff.org; Max Hunter, Engineering Director, Encrypting the Internet, max@eff.org

EFF Report Exposes, Explains Big Tech’s Personal Data Trackers Lurking on Social Media, Websites, and Apps

EFF - Mon, 12/02/2019 - 9:26am
User Privacy Under Relentless Attack by Trackers Following Every Click and Purchase

San Francisco—The Electronic Frontier Foundation (EFF) today released a comprehensive report that identifies and explains the hidden technical methods and business practices companies use to collect and track our personal information from the minute we turn on our devices each day.

Published on Cyber Monday, when millions of consumers are shopping online, “Behind the One-Way Mirror” takes a deep dive into the technology of corporate surveillance. The report uncovers and exposes the myriad techniques—invisible pixel images, browser fingerprinting, social widgets, mobile tracking, and face recognition—companies employ to collect information about who we are, what we like, where we go, and who our friends are. Amazon, Facebook, Google, Twitter, and hundreds of lesser known and hidden data brokers, advertisers, and marketers drive data collection and tracking across the web.

“The purpose of this paper is to demystify tracking by focusing on the fundamentals of how and why it works and explain the scope of the problem. We hope the report will educate and mobilize journalists, policy makers, and concerned consumers to find ways to disrupt the status quo and better protect our privacy,” said Bennett Cyphers, EFF staff technologist and report author.

“Behind the One-Way Mirror” focuses on third-party tracking, which is often not obvious or visible to users. Webpages contain embedded images and invisible code that come from entities other than the website owner. Most websites contain dozens of these bugs that go on to record and track your browsing activity, purchases, and clicks. Mobile apps are equally rife with tracking code that can relay app activity, physical location, and financial data to unknown entities.
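To make the pixel technique concrete, here is a hedged server-side sketch of how such a tracker can work; Flask is used for illustration, and the endpoint and cookie names are invented:

```python
# Sketch of an invisible-pixel tracker's server side. Flask is illustrative;
# the endpoint and cookie names are invented. The request itself is the
# payload: cookie, referring page, IP address, and browser details.
from flask import Flask, make_response, request

app = Flask(__name__)

# A transparent 1x1 GIF: the classic "invisible pixel."
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

@app.route("/pixel.gif")
def pixel():
    # Log who loaded the pixel and from which page they loaded it.
    print(request.cookies.get("uid"), request.referrer,
          request.remote_addr, str(request.user_agent))
    resp = make_response(PIXEL)
    resp.headers["Content-Type"] = "image/gif"
    return resp
```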

With this information companies create behavioral profiles that can reveal our political affiliation, religious beliefs, sexual identity and activity, race and ethnicity, education level, income bracket, purchasing habits, and physical and mental health. The report shows how relentless data collection and profile building fuels the digital advertising industry that targets users with invasive ads and puts our privacy at risk.

“Today online shoppers will see web pages, ads, and their social media feeds. What they won’t see are trackers controlled by tech companies, data brokers, and advertisers that are secretly taking notes on everything they do,” said Cyphers. “Dominant companies like Facebook can deputize smaller publishers into installing its tracking code, so it can track individuals who don’t even use Facebook.”

"Behind the One-Way Mirror" offers tips for users to fight back against online tracking by installing EFF’s tracker-blocker extension Privacy Badger in their browser and changing phone settings. Online tracking is hard to avoid, but there are steps users can take to seriously cut back on the amount of data that trackers can collect and share.

“Privacy is often framed as a matter of personal responsibility, but a huge portion of the data in circulation isn’t shared willingly—it’s collected surreptitiously and with impunity. Most third-party data collection in the U.S. is unregulated,” said Cyphers. “The first step in fixing the problem is to shine a light, as this report does, on the invasive third-party tracking that, online and offline, has lurked for too long in the shadows.”

For the report:
https://www.eff.org/wp/behind-the-one-way-mirror

For more on behavioral tracking:
https://www.eff.org/issues/online-behavioral-tracking

Contact: Bennett Cyphers, Staff Technologist, bennett@eff.org

Video: Ruth Taylor Describes Her Win Against an Online Voting Patent

EFF - Mon, 11/25/2019 - 5:55pm

We’ve been fighting abuses of the patent system for years. Many of the worst abuses we see are committed by software patent owners who make money suing people instead of building anything. These are patent owners we call patent trolls. They demand money from people who use technology to perform ordinary activities. And they’re able to do that because they’re able to get patents on basic ideas that aren’t inventions, like running a scavenger hunt and teaching foreign languages.

Efforts at reforming this broken system got a big boost in 2014, when the Supreme Court decided the Alice v. CLS Bank case. In a unanimous decision, the high court held that you can’t get a patent on an abstract idea just by adding generic computer language. Now, courts are supposed to dismiss lawsuits based on abstract patents as early as possible.

We need an efficient way to throw out bad software patents because patent litigation is so outrageously expensive. Small businesses simply can’t afford the millions of dollars it costs to go through a full patent trial. And thanks to Alice, they haven’t had to: since the decision came down, U.S. courts have thrown out bad patents in hundreds of cases.

Our Saved by Alice project tells the stories of these businesses and the people behind them. One is the story of Ruth Taylor, a photographer who ran a website called Bytephoto.com. Bytephoto hosted forums for a passionate community of photographers, and also ran weekly photo competitions where users voted on the photos they liked best.

Today, we’re publishing a short video in which Ruth tells her story about how a company called Garfum.com, claiming that her online photo contests infringed its patent, demanded she pay $50,000 or face a lawsuit for patent infringement.

Watch the video: https://www.youtube-nocookie.com/embed/SvovyPIT32M (privacy info: this embed serves content from youtube-nocookie.com)

“I wasn’t about to hand over $50,000, because I didn’t have $50,000,” Ruth said in our interview. “And even if I did, I still wouldn’t have handed it over.”

Instead, Ruth called EFF. We were able to take her case pro bono and prepare a strong defense, arguing that the ridiculous Garfum.com patent never should have been issued in the first place. Garfum’s patent simply takes the well-known idea of a competition by popular vote and applies it to networked computers. This is exactly the type of patent Alice prohibits. After our brief was filed, and Garfum was scheduled to meet us in court, it dropped its lawsuit.

We were able to take and win Ruth’s case because of the Alice decision. Faced with a hearing in which it would have to justify its patent, Garfum backed down from its bullying behavior. Large patent holders like IBM and Qualcomm have been pushing Congress to weaken one of the few safeguards we have against bad patents, and earlier this year, U.S. Senators began considering a draft proposal that would have thrown out Alice altogether.

We asked Ruth what it would mean for small businesses if the Alice decision were overturned. “They would be sued, and probably go under,” she said. “To me, a patent was always an amazing invention that would help people—not something that’s used to extort money from people.”

We hope you’ll take time to listen to Ruth’s story, and understand why we can’t let Congress chip away at the Alice decision. Let’s defend that original vision of the patent system, as promoting real inventions that help people—not extortionate scams like the one that threatened Ruth Taylor’s livelihood. And check out our video of Justus Decher, another Saved by Alice small business.

Sanctions, Protests, and Shutdowns: Fighting to Open Iran’s Internet

EFF - Mon, 11/25/2019 - 4:44pm

Last week, Iranians took to the streets nationwide in protest after an abrupt spike in fuel prices. As the protests grew, the government disrupted the internet across Iran in an apparent attempt to quell unrest. The slowdown was, for most, experienced as a full blackout of internet and mobile connectivity. EFF joins a number of Iranian and international organizations in expressing grave concerns over the internet blackout and violence against protesters.

What happened?

A number of complicating factors led to this shutdown. Renewed US sanctions have exacerbated economic hardship for Iranians, and tech companies’ compliance—and at times over-compliance—with these sanctions has diminished reliance on international services (Amazon Web Services, Apple, and GitHub, for example, outright prohibit access to users in Iran). This trend has further isolated Iranians from the global Internet.

As Mahsa Alimardani noted in her New York Times opinion piece, Iranian national technologies have been created and promoted in reaction to the banning of these apps and services, in line with the goals of at least some parts of the Iranian government. The domestic apps include Souroush (an app created by the Islamic Republic of Iran Broadcasting to supplant Telegram), as well as proposals for a state-created Virtual Private Network. The result is a national Internet that further centralizes services, deepens censorship and surveillance, and makes a shutdown like this one feasible.

As of last Thursday, internet connectivity was being restored in parts of Iran. Wired reported on how the Iranian government achieved the nationwide shutdown. For an updated timeline, you can refer to Amir Rashidi’s tweets documenting internet service disruption beginning the evening of November 15, or follow conversations under the hashtags #Internet4Iran and #KeepItOn, the latter from Access Now’s project against internet shutdowns.

DEEP DIVE: EFF to DHS: Stop Mass Collection of Social Media Information

EFF - Mon, 11/25/2019 - 4:09pm

The Department of Homeland Security (DHS) recently released a proposed rule expanding the agency’s collection of social media information on key visa forms and immigration applications. Earlier this month, EFF joined over 40 civil society organizations that signed on to comments drafted by the Brennan Center for Justice. These comments identify the free speech and privacy risks the proposed rule poses to U.S. persons both directly, if they are required to fill out these forms, and indirectly, if they are connected via social media to friends, family, or associates required to fill out these forms.

DHS’s Proposed Rule

In the proposed rule, “Generic Clearance for the Collection of Social Media Information on Immigration and Foreign Travel Forms,” DHS claims that it has “identified the collection of social media user identifications . . . as important for identity verification, immigration and national security vetting.” The proposed rule identifies 12 forms adjudicated by the DHS agencies U.S. Customs and Border Protection (CBP) and U.S. Citizenship and Immigration Services (USCIS) that will now collect social media handles and associated social media platforms for the last five years. The applications will not collect passwords; DHS will only be able to view information that users share publicly.

U.S. Customs and Border Protection

The proposed rule mandates social media collection on three CBP forms:

  • Electronic System for Travel Authorization (ESTA, known as the visa waiver program)
  • I-94W Nonimmigrant Visa Waiver Arrival/Departure Record
  • Electronic Visa Update System (EVUS, the system used by Chinese nationals with 10-year visitor visas).

EFF previously highlighted the government’s proposals to collect social media information from visa waiver and EVUS applicants. In 2016, the government finalized CBP’s proposed rule to collect social media handles on the ESTA form as an optional question. Under DHS’s current proposed rule, this question would no longer be optional. DHS claims that answering is not “mandatory” in order to obtain or retain a benefit, such as a visa waiver, but an answer is required to submit the ESTA and EVUS forms. Applicants may choose “none” or “other” as responses.

U.S. Citizenship and Immigration Services

The proposed rule mandates collection of social media handles on nine USCIS forms, including applications for citizenship, permanent residency (green card), asylum, refugee status, and refugee and asylum family petitions. The proposed rule marks the first time that USCIS has sought to collect social media information from individuals seeking an immigration benefit. 

USCIS claims that it is not “mandatory” to provide social media information on all of these forms. But, for both CBP and USCIS, the proposed rule states that “failure to provide the requested data may either delay or make it impossible for [the agency] to determine an individual’s ability for the requested benefit.” Thus, though the agency may still process forms without a response to the social media question, applicants risk being denied if they fail to provide the information.

Civil Liberties and Privacy Concerns

As we’ve previously argued, collection of social media handles and information in public posts raises a number of First Amendment concerns.

First, the proposed rule will chill the exercise of free speech and lead to self-censorship. As we argued in the comments, social media platforms have become the de facto town square, where people around the world share news and ideas and connect with others. If individuals know that the government is monitoring their social media pages, they are likely to self-censor. Indeed, studies have shown that fears about online government surveillance lead to a chilling effect among both U.S. Muslims and broader samples of Internet users. The proposed rule may cause individuals to delete their accounts, limit their postings, and maximize privacy settings when they otherwise may have shared their social media activity more freely.

Second, the proposed rule infringes upon anonymous speech. Under the proposed rule, individuals running anonymous social media accounts could be at risk of having their true identities unmasked, despite the Supreme Court’s ruling that anonymous speech is protected by the First Amendment. Given that the proposed rule states that “[n]o assurance of confidentiality is provided,” collection of anonymous social media handles tied to their real-world identities could present a dangerous situation for individuals living under oppressive regimes who use such accounts to criticize their government or advocate for the rights of minority communities.

Third, the proposed rule threatens freedom of association. Collection of social media information implicates not just an applicant for a visa or an immigration benefit, but also any person with whom that applicant engages on social media, including U.S. citizens. This may lead applicants to disassociate from online connections for fear that others’ postings could endanger their immigration benefit. Earlier this year, CBP canceled a Palestinian Harvard student’s visa and deported him back to Lebanon, allegedly based on the social media postings of his online connections. Conversely, the proposed rule may lead family and friends to disassociate from applicants for fear of government social media surveillance.

In addition, the proposed rule raises issues around privacy. Often, people’s social media presence can reveal much more than they intend to share. A recent study demonstrated that using embedded geolocation data, researchers accurately predicted where Twitter users lived, worked, visited, and worshipped—information that many users hadn’t even known they had shared. The proposed rule’s collection of public social media information may allow the government to piece together and document users’ personal lives.
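As a rough illustration of why such inference works, consider the following TypeScript sketch of a common heuristic: posts geotagged late at night tend to cluster near home, and weekday daytime posts near work. This is a simplified, assumed technique for illustration, not the study’s actual method:

// Illustrative sketch only: inferring "home" and "work" from geotagged
// posts. A simplified heuristic, not the method used in the study.

interface GeoPost {
  lat: number;
  lon: number;
  time: Date;
}

// Round coordinates to roughly 1 km grid cells so nearby posts group together.
const cell = (p: GeoPost) => `${p.lat.toFixed(2)},${p.lon.toFixed(2)}`;

// Return the grid cell that appears most often in a set of posts.
function mostFrequentCell(posts: GeoPost[]): string | undefined {
  const counts = new Map<string, number>();
  for (const p of posts) {
    counts.set(cell(p), (counts.get(cell(p)) ?? 0) + 1);
  }
  let best: string | undefined;
  let bestCount = 0;
  for (const [c, n] of counts) {
    if (n > bestCount) { best = c; bestCount = n; }
  }
  return best;
}

function inferPlaces(posts: GeoPost[]) {
  const hour = (p: GeoPost) => p.time.getHours();
  const weekday = (p: GeoPost) => p.time.getDay() >= 1 && p.time.getDay() <= 5;
  return {
    likelyHome: mostFrequentCell(posts.filter(p => hour(p) >= 21 || hour(p) < 7)),
    likelyWork: mostFrequentCell(posts.filter(p => weekday(p) && hour(p) >= 9 && hour(p) < 17)),
  };
}

Note that this sketch needs only timestamps and coordinates, not the content of any post, to produce a profile of someone’s daily movements.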

These civil liberties concerns are why EFF and other civil society organizations signed on to the Brennan Center’s comments urging DHS to rescind the proposed rule and abandon its initiative to collect social media information from over 33 million people.

New USCIS Policy on Fake Accounts

The release of the DHS proposed rule dovetailed with the release of a USCIS Privacy Impact Assessment (PIA) on the agency’s use of fake social media accounts to conduct social media surveillance. Under the PIA, the USCIS Fraud Detection and National Security Directorate (FDNS) can create fake social media accounts to view publicly available social media information to:

  1. Identify individuals who may pose a threat to national security or public safety and are seeking an immigration benefit;
  2. Detect and pursue cases when there is an indicator of potential fraud; or
  3. Randomly select previously adjudicated cases for review to identify and remove systemic vulnerabilities.

Under the PIA, FDNS officers may use fake accounts only with supervisor approval. Officers can access only publicly available content and cannot engage on social media (for example, through “friending”).

This USCIS PIA and the DHS proposed rule together involve two separate units within USCIS that engage in social media surveillance: one through social media collection on forms, the other through fake accounts. In the first instance, the applicant knows that USCIS may monitor their social media activity; in the second, the applicant may not be aware that USCIS is engaging in such monitoring. The PIA also discusses reevaluation of previously adjudicated decisions, indicating that an applicant may remain under “review” long after their case has been adjudicated.

The PIA is concerning for several reasons. To begin, its authorization of fake accounts directly contradicts previous policy: prior USCIS and DHS guidance required any officer using social media for government purposes to identify themselves with a government identifier. Moreover, as we’ve previously highlighted, the use of fake accounts violates the terms of service of many social media platforms, such as Facebook.

In addition, the PIA provides only vague justifications for why USCIS officers need to create fake accounts to engage in this type of immigration vetting. The PIA claims that using fake accounts is an operational security measure that protects USCIS employees and DHS information technology systems. This explanation provides little clarity, especially since officers are not allowed to engage with other social media users, whether through a government-identified profile or a fake one. And while the PIA claims that any risk to users is mitigated because users control what content they make public, the use of fake accounts undermines the “block” feature—a key user tool for content control, akin to a privacy setting. Because law enforcement’s identity is hidden, users may not block accounts they otherwise would.

Finally, the PIA raises similar concerns as the proposed rule around First Amendment issues and privacy. In particular, the third category allows for social media review of someone who has already been granted an immigration benefit. This means that someone who is already a naturalized U.S. citizen or permanent resident would have to be on alert for the possibility of having their social media content reviewed—and even having their immigration benefit revoked—years after the immigration benefit is granted. The PIA also contemplates the collection of publicly available information from an associate of a person under investigation—for example, comments on a photo. These dual risks could result in the indefinite chilling of individuals’ speech online.

The DHS Privacy Office recommends three ways to limit USCIS’s use of fake social media profiles. First, the PIA states that fake accounts should not be the default, but rather should only be used when there is an “articulated need.” Second, the Privacy Office will initiate a Privacy Compliance Review within 12 months of the PIA’s publication. Third, the Privacy Office recommends that FDNS implement an efficacy review. We hope that, at minimum, USCIS follows these recommendations. We further ask that USCIS explain why its position has changed from previous guidance prohibiting the use of fake accounts.

About Face: Ending Government Use of Face Surveillance

EFF - Fri, 11/22/2019 - 5:58pm

EFF wants to help end government use of face surveillance in your community. To aid in that effort, we’ve partnered with community-based organizations in the Electronic Frontier Alliance—and other concerned civil society organizations—in launching About Face. Our About Face campaign is a way for residents in communities throughout the United States to call for an end to government use of face surveillance.

TAKE ACTION

End Face Surveillance in your community

The About Face campaign site (aboutfacenow.org) offers a petition where you can show your support for ending face surveillance in your community. The site also features helpful resources to support your local efforts, along with draft legislation that community members and local lawmakers can adapt to their community’s needs. Our model legislation addresses the most critical concerns: defining the technology, setting the scope of the ban, and ensuring that the ban is enforceable.

Why It's So Important

Many forms of biometric data collection raise a wealth of privacy, security, and ethical concerns. Face surveillance ups the ante. We expose our faces to public view every time we go outside. Paired with the growing ubiquity of surveillance cameras in our public spaces, face surveillance technology allows for the covert and automated collection of information about when and where we worship or receive medical care, and whom we associate with professionally or socially.

Many proponents of the technology argue that there is no reasonable expectation of privacy when we spend time in public, and that if we have nothing to hide, we have nothing to fear. EFF is not alone in finding this argument meritless. In his majority opinion in the watershed Carpenter v. United States (2018), Supreme Court Chief Justice John Roberts wrote: “A person does not surrender all Fourth Amendment protection by venturing into the public sphere.” In a recent Wired interview, attorney Gretchen Greene explained: “Even if I trust the government, I do care. I would rather live in a world where I feel like I have some privacy, even in public spaces.” Greene went on to identify the inherent First Amendment concerns implicated by government use of face surveillance: “If people know where you are, you might not go there. You might not do those things.”

Like many of us, Greene is particularly concerned about how the technology will impact members of already marginalized communities. “Coming out as gay is less problematic professionally than it was, in the US, but still potentially problematic. So, if an individual wants to make the choice [of] when to publicly disclose that, then they don’t want facial recognition technology identifying that they are walking down the street to the LGBTQ center.” These concerns are not limited to any one community, and the impacts will be felt regardless of intent. “We’re not trying to stop people from going to church, we’re not trying to stop them from going to community centers, but we will if they are afraid of [the consequence] in an environment that is hostile to, for instance, a certain ethnicity or a certain religion.”

A 2013 study conducted by The Muslim American Civil Liberties Coalition (MACLC), The Creating Law Enforcement Accountability & Responsibility (CLEAR) Project, and The Asian American Legal Defense and Education Fund (AALDEF) found that excessive police surveillance in Muslim communities had a significant chilling effect on First Amendment-protected activities. Specifically, people were less likely to attend mosques they thought were under government surveillance, to engage in religious practices in public, or even to dress or grow their hair in ways that might identify them as members of a targeted community. 

Law enforcement has already used face surveillance technology at political protests. In a “Case Study” obtained by the ACLU, Geofeedia, a company with a history of labeling unions and activist groups as “overt threats,” bragged that during the protests surrounding the death of Freddie Gray, the Baltimore Police Department ran social media photos of protesters against a face recognition database to identify participants and arrest them.

Unlike a driver’s license, credit card, license plate, or social security number, you can’t simply replace your face the next time a government agency or contractor fails to effectively protect the sensitive data they’ve been trusted to safeguard.

How you can help

Persuading government agencies across the United States to end their use of face surveillance is not a small undertaking. We need your help. 

TAKE ACTION

End Face Surveillance in your community

Each time 100 people sign the petition to end their city’s use of face surveillance, we will work with our local partners to make sure area lawmakers know how critically important it is to their constituents that we change course and ban this practice. There is no substitute for local, on-the-ground activism. The success of About Face relies on our network of local activists in the Electronic Frontier Alliance and beyond. If your community-based group or hackerspace would like to join us in bringing an end to government use of face surveillance, consider adding your names to the petition and joining the Alliance.

Nonprofit Community Stands Together to Protect .ORG

EFF - Fri, 11/22/2019 - 1:07pm

EFF and 26 other organizations just sent a letter to the Internet Society (ISOC) urging it to stop the sale of the Public Interest Registry (PIR)—the organization that manages the .ORG top-level domain—to private equity firm Ethos Capital. Our message is clear: .ORG is extremely important to the non-governmental organization (NGO) community, and our community should have a voice in decisions affecting the future of .ORG. From our letter:

Non-governmental organizations all over the world rely on the .ORG top-level domain. Decisions affecting .ORG must be made with the consultation of the NGO community, overseen by a trusted community leader. If the Internet Society (ISOC) can no longer be that leader, it should work with the NGO community and the Internet Corporation for Assigned Names and Numbers (ICANN) to find an appropriate replacement.

New .ORG Agreement Spells Danger for NGOs

EFF was stunned when ISOC announced the sale last week. We’ve spent the last six months voicing our concerns to the Internet Corporation for Assigned Names and Numbers (ICANN) about several terms in the 2019 .ORG Registry Agreement, urging ICANN to remove provisions that would make it easier for people in power to censor NGOs’ websites. Other organizations objected to ICANN’s removal of the price cap on .ORG domains, which allows the owners of .ORG to charge NGOs unlimited fees to keep their .ORG domains.

Throughout that six-month process, none of us knew that ISOC would soon be selling PIR to a private equity firm. Suddenly, those fears about the registry abusing the new powers ICANN was handing it became a little more palpable. Without the oversight of a trusted nonprofit organization like ISOC, a registry could abuse those rules to take advantage of the NGO sector.

As we explain in our letter:

The 2019 .ORG Registry Agreement represents a significant departure from .ORG’s 34-year history. It gives the registry the power to make several policy decisions that would be detrimental to the .ORG community:

  • The power to raise .ORG registration fees without the approval of ICANN or the .ORG community. A .ORG price hike would put many cash-strapped NGOs in the difficult position of either paying the increased fees or losing the legitimacy and brand recognition of a .ORG domain.
  • The power to develop and implement Rights Protection Mechanisms unilaterally, without consulting the .ORG community. If such mechanisms are not carefully crafted in collaboration with the NGO community, they risk censoring completely legal nonprofit activities.
  • The power to implement processes to suspend domain names based on accusations of “activity contrary to applicable law.” The .ORG registry should not implement such processes without understanding how state actors frequently target NGOs with allegations of illegal activity.

A registry could abuse these powers to do significant harm to the global NGO sector, intentionally or not. We cannot afford to put them into the hands of a private equity firm that has not earned the trust of the NGO community. .ORG must be managed by a leader that puts the needs of NGOs over profits.

Community Oversight Is Essential for .ORG

The sale is especially troubling because PIR was never supposed to be a for-profit venture. ISOC created PIR expressly for the purpose of managing .ORG, with ISOC’s continued oversight. When ISOC made its proposal to the ICANN board in 2002 to transfer management of .ORG to PIR, part of the pitch was that ISOC would continue to ensure that the NGO sector had a say in policy decisions affecting the .ORG ecosystem. In the words of ISOC’s then-president and CEO Lynn St. Amour:

We’re proposing [to] set up a separate non-profit company called Public Interest Registry that will draw upon the resources of ISOC’s extended global network to drive policy and management. [...]

We propose that the Public Interest Registry will be able to avail itself of the resources of the Internet Society, which provides an existing and globally extensive network of contacts with noncommercial Internet users.

ICANN and PIR let down that global network of noncommercial Internet users when they disregarded the NGO sector’s feedback on a major change to the .ORG Registry Agreement. The new agreement lets the registry in charge of .ORG do significant harm to NGOs—not just to their pocketbooks, but also to their right to speak out, organize, and criticize people in power.

Today, the NGO community is standing together to urge ISOC not to hand that authority to a for-profit company.

For more information, visit savedotorg.org.
