EFF

EFF's Deeplinks Blog: Noteworthy news from around the internet

EU Parliament Paves the Way for an Ambitious Internet Bill

The European Union has taken the first step toward a significant overhaul of its core platform regulation, the e-Commerce Directive.

In order to inspire the European Commission, which is currently preparing a proposal for a Digital Services Act Package, the EU Parliament has voted on three related Reports (IMCO, JURI, and LIBE reports), which address the legal responsibilities of platforms regarding user content, include measures to keep users safe online, and set out special rules for very large platforms that dominate users’ lives.

Clear EFF's Footprint

Ahead of the votes, together with our allies, we argued to preserve what works for a free Internet and innovation, such as retaining the E-Commerce Directive’s approach of limiting platforms’ liability over user content and banning Member States from imposing obligations to track and monitor users’ content. We also stressed that it is time to fix what is broken: to imagine a version of the Internet where users have a right to remain anonymous, enjoy substantial procedural rights in the context of content moderation, can have more control over how they interact with content, and have a true choice over the services they use through interoperability obligations.

It’s a great first step in the right direction that all three EU Parliament reports have considered EFF suggestions. There is an overall agreement that platform intermediaries have a pivotal role to play in ensuring the availability of content and the development of the Internet. Platforms should not be held responsible for ideas, images, videos, or speech that users post or share online. They should not be forced to monitor and censor users’ content and communication--for example, using upload filters. The Reports also make a strong call to preserve users’ privacy online and to address the problem of targeted advertising. Another important aspect of what made the E-Commerce Directive a success is the “country of origin” principle. It states that within the European Union, companies must adhere to the law of their domicile rather than that of the recipient of the service. There is no appetite on the Parliament’s side to change this principle.

Even better, the reports echo EFF’s call to stop ignoring the walled gardens big platforms have become. Large Internet companies should no longer nudge users to stay on a platform that disregards their privacy or jeopardizes their security, but enable users to communicate with friends across platform boundaries. Unfair trading, preferential display of platforms’ own downstream services and transparency of how users’ data are collected and shared: the EU Parliament seeks to tackle these and other issues that have become the new “normal” for users when browsing the Internet and communicating with their friends. The reports also echo EFF’s concerns about automated content moderation, which is incapable of understanding context. In the future, users should receive meaningful information about algorithmic decision-making and learn if terms of service change. Also, the EU Parliament supports procedural justice for users who see their content removed or their accounts disabled. 

Concerns Remain 

The focus on fundamental rights protection and user control is a good starting point for the ongoing reform of Internet legislation in Europe. However, there are also a number of pitfalls and risks. There is a suggestion that platforms should report illegal content to enforcement authorities, and there are open questions about public electronic identity systems. Also, the general focus on consumer shopping issues, such as liability provisions for online marketplaces, may clash with digital rights principles: the Commission itself acknowledged in a recent internal document that “speech can also be reflected in goods, such as books, clothing items or symbols, and restrictive measures on the sale of such artefacts can affect freedom of expression.” Then, the general idea to also include digital services providers established outside the EU could turn out to be a problem to the extent that platforms are held responsible for removing illegal content. Recent cases (Glawischnig-Piesczek v Facebook) have demonstrated the perils of worldwide content takedown orders.

It’s Your Turn Now @EU_Commission

The EU Commission is expected to present a legislative package on 2 December. During the public consultation process, we urged the Commission to protect freedom of expression and to give control to users rather than the big platforms. We are hopeful that the EU will work toward a free and interoperable Internet and not follow in the footsteps of harmful Internet bills such as the German law NetzDG or the French Avia Bill, which EFF helped to strike down. It’s time to make it right: to preserve what works and to fix what is broken.

Members of Congress Join the Fight for Protest Surveillance Transparency

Tue, 10/20/2020 - 6:54pm

Three members of Congress have joined the fight for the right to protest by sending a letter asking the Privacy and Civil Liberties Oversight Board (PCLOB) to investigate federal surveillance of protesters. We commend these elected officials for doing what they can to help ensure our constitutional right to protest and for taking the interests and safety of protesters to heart.

It often takes years, if not longer, to learn the full scope of government surveillance used against demonstrators involved in a specific action or protest movement. Four months since the murder of George Floyd began a new round of Black-led protests against police violence, there has been a slow and steady trickle of revelations about law enforcement agencies deploying advanced surveillance technology at protests around the country. For example, we learned recently that the Federal Bureau of Investigation sent a team specializing in cellular phone exploitation to Portland, site of some of the largest and most sustained protests. Before that, we learned about federal, state, and local aerial surveillance conducted over protests in at least 15 cities. Now, Rep. Anna Eshoo, Rep. Bobby Rush, and Sen. Ron Wyden have asked the PCLOB to dig deeper.

The PCLOB is an independent agency in the executive branch, created in 2004, which undertakes far-ranging investigations into issues related to privacy and civil liberties, including mass surveillance of the internet and cellular communications, facial recognition technology at airports, and terrorism watchlists. In addition to asking the PCLOB to investigate who used what surveillance where, and how it negatively impacted the First Amendment right to protest, the trio of Eshoo, Rush, and Wyden asks the PCLOB to investigate and enumerate the legal authorities under which agencies are surveilling protests, and whether agencies have followed required processes for domestic use of intelligence equipment. The letter continues: 

“PCLOB should investigate what legal authorities federal agencies are using to surveil protesters to help Congress understand if agencies’ interpretations of specific provisions of federal statutes or of the Constitution are consistent with congressional intent. This will help inform whether Congress needs to amend existing statutes or consider legislation to ensure agency actions are consistent with congressional intent.”

We agree with these politicians that government surveillance of protesters is a threat to all of our civil liberties and an affront to a robust, active, and informed democracy. With a guarantee of more protests to come in the upcoming weeks and months, Congress and the PCLOB must act swiftly to protect our right to protest, investigate how much harm government surveillance has caused, and identify illegal behavior by these agencies.

In the meantime, if you plan on protesting, make sure you’ve reviewed EFF’s surveillance self-defense guide for protesters.

Video Hearing Wednesday: Advocacy Orgs Go to Court to Block Trump’s Retaliation Against Fact-Checking

Tue, 10/20/2020 - 6:31pm
Lawsuit Challenges Executive Order Pressuring Social Media Companies to Ignore President’s False Claims

San Francisco – On Wednesday, October 21 at 11 am PT/2 pm ET, voter advocacy organizations will ask a district court to block an unconstitutional Executive Order that retaliates against online services for fact-checking President Trump’s false posts about voting and the upcoming election. Information on attending the video hearing can be found on the court’s website.

The plaintiffs—Common Cause, Free Press, MapLight, Rock the Vote, and Voto Latino—are represented by the Electronic Frontier Foundation (EFF), Protect Democracy, and Cooley LLP. At Wednesday’s hearing, Cooley partner Kathleen Hartnett will argue that the president’s Executive Order should not be enforced until the lawsuit is resolved.

Trump signed the “Executive Order on Preventing Online Censorship” in May, after a well-publicized fight with Twitter. First, the president tweeted false claims about the reliability of online voting, and then Twitter decided to append a link to “get the facts about mail-in ballots.” Two days later, Trump signed the order, which directs government agencies to begin law enforcement actions against online services for any supposedly “deceptive acts” of moderation that aren’t in “good faith.” The order also instructs the government to withhold advertising through social media companies who act in “bad faith,” and to kickstart a process to gut platforms’ legal protections under Section 230. Section 230 is the law that allows online services—like Twitter, Facebook, and others—to host and moderate diverse forums of users’ speech without being liable for their users’ content.

The plaintiffs in this case filed the lawsuit because they want to make sure that voting information found online is accurate, and they want social media companies to be able to take proactive steps against misinformation. Instead, the Executive Order chills social media companies from moderating the president’s content in a way that he doesn’t like—an unconstitutional violation of the First Amendment.

WHAT:
Rock the Vote v. Trump

WHEN:
Wednesday
October 21
2 pm

HOW:
To attend the hearing and see the guidelines for watching the video stream

Contact: Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, press@eff.org

Pioneer Award Ceremony 2020: A Celebration of Communities

Tue, 10/20/2020 - 1:30pm

Last week, we celebrated the 29th Annual—and first ever online—Pioneer Award Ceremony, which EFF convenes for our digital heroes and the folks that help make the online world a better, safer, stronger, and more fun place. Like the many Pioneer Award Ceremonies before it, the all-online event was both an intimate party with friends, and a reminder of the critical digital rights work that’s being done by so many groups and individuals, some of whom are not as well-known as they should be.    

Perhaps it was a feature of the pandemic — not a bug — that anyone could attend this year’s celebration, and anyone can now watch it online. You can also read the full transcript. More than ever before, this year’s Pioneer Award Ceremony was a celebration of online communities— specifically, the Open Technology Fund community working to create better tech globally; the community of Black activists pushing for racial justice in how technology works and is used; and the sex worker community that’s building digital tools to protect one another, both online and offline. 

But it was, after all, a celebration. So we kicked off the night by just vibing to DJ Redstickman, who brought his characteristic mix of fun, funky music, as well as some virtual visuals. 

DJ Redstickman

EFF’s Executive Director, Cindy Cohn, began her opening remarks with a reminder that this is EFF’s 30th year, and though we’ve been at it a long time, we’ve never been busier: 

EFF Executive Director Cindy Cohn

We’re busy in the courts -- including a new lawsuit last week against the City of San Francisco for allowing the cops to spy on Black Lives Matter protesters and the Pride Parade in violation of an ordinance that we helped pass. We’re busy building technologies - including continuing our role in encrypting the web. We’re busy in the California legislature -- continuing to push for Broadband for All, which is so desperately needed for the millions of Californians now required to work and go to school from home. We’re busy across the nation and around the world standing up for your right to have a private conversation using encryption and for your right to build interoperable tools. And we’re blogging, tweeting, and posting on all sorts of social media to keep you aware of what’s going on and, hopefully, occasionally amused.

Cindy was followed by our keynote speaker, longtime friend of EFF, author, and one of the top reporters researching all things tech, Cyrus Farivar. Cyrus’s recent book, Habeas Data, covers 50 years of surveillance law in America, and his previous book, The Internet of Elsewhere, focuses on the history and effects of the Internet on different countries around the world. 

Keynote speaker, Cyrus Farivar

Cyrus detailed his journey to becoming a tech reporter, from his time on IRC chats in his teenage years to his realization, in Germany in 2010, about “what it means to be private and what it means to have surveillance.” At the time, German politicians were concerned with the privacy implications of Google Street View. In Germany, Cyrus explained, every German state has its own data protection agency: “In a way, I kind of think about EFF as one of the best next things.  We don't really have a data protection agency or authority in this country.  Sure, we have the FCC.  We have other government agencies that are responsible for taking care of us, but we don't have something like that.  I feel like one of the things the EFF does probably better than most other organizations is really try to figure out what makes sense in this new reality.”

Cyrus, of course, is one of the many people helping us all make sense of this new reality, through his reporting—and we’re glad that he’s been fighting the good fight ever since encountering EFF during the Blue Ribbon Campaign. 

Following Cyrus was EFF Staff Technologist Daly Barnett, who introduced the winner of the first Barlow—Ms. Danielle Blunt, aka Mistress Blunt. Danielle Blunt is a sex worker activist and tech policy researcher, and is one of the co-founders of Hacking//Hustling, a collective of sex workers and accomplices working at the intersection of tech and social justice. Her research into sex work and equitable access to technology from a public health perspective has led her to become one of the primary experts on the impacts of the censorship law FOSTA-SESTA, and on how content moderation affects the movement work of sex workers and activists. As Daly said during her introduction, “there are few people on this planet that are as well equipped to subvert the toxic power dynamic that big tech imposes on many of us. Mistress Blunt can look at a system like that, pinpoint the weak spots, and leverage the right tools to exploit them.”

Pioneer Award Winner, Danielle Blunt

Mistress Blunt highlighted how Hacking//Hustling bridges the gaps between sex worker rights, tech policy, and academia, and pointed out the ways in which sex workers, who are often early adopters, are also exploited by tech companies:

Sex workers were some of the earliest adopters of the web.  Sex workers were some of the first to use ecommerce platforms and the first to have personal websites. The rapid growth of countless tech platforms was reliant on the early adoption of sex workers... [but] not all sex workers have equitable access to technologies or the Internet. This means that digital freedom for sex workers means equitable access to technologies. It means cultivating a deeper understanding of how technology is deployed to surveil and restrict movement of sex workers and how this impacts all of us, because it does impact all of us.  

After Mistress Blunt’s speech, EFF Director of International Freedom of Expression Jillian York joined us from Germany to introduce the next honoree, Laura Cunningham. Laura accepted the award for the Open Technology Fund community, a group which has fostered a global community and provided support—both monetary and in-kind—to more than 400 projects that seek to combat censorship and repressive surveillance. This has resulted in over 2 billion people in over 60 countries being able to access the open Internet more safely.

Unfortunately, new leadership has recently been appointed by the Trump administration to run OTF’s funder, the U.S. Agency for Global Media (USAGM). As a result, there is a chance that the organization's funds could be frozen—threatening to leave many well-established global freedom tools, their users, and their developers in the lurch. For that reason, this award was offered to the entire OTF community for their hard work and dedication to global Internet freedom—and because EFF recognizes the need to protect this community and ensure its survival despite the current political attacks. As Laura said in accepting it, the award “recognizes the impact and success of the entire OTF community,” and is “a poignant reminder of what a committed group of passionate individuals can accomplish when they unite around a common goal.”

Laura Cunningham accepted the Barlow on behalf of the Open Technology Fund Community

But because OTF is a community, Laura didn’t accept the award alone. A pre-recorded video montage of OTF community members gave voice to this principle as they described what the community means to them: 

For me, the OTF community is resourceful.  I've never met a community that does so much with so little considering how important their work is for activists and journalists across the world to fight surveillance and censorship.

I love OTF because apart from providing open-source technology to marginalized communities, I have found my sisters in struggle and solidarity in this place for a woman of color and find my community within the OTF community.  Being part of the OTF community means I'm not alone in the fight against injustice, inequality, against surveillance and censorship.  

Members of the OTF Community spoke about what it means to them

For me, the OTF community plays an important role in the work that I do because it allows me to be in a space where I see people from different countries around the world working towards a common goal of Internet freedom.  

I'm going to tell you a story about villagers in Vietnam.  The year is 2020.  An 84-year-old elder of a village shot dead by police while defending his and the villagers' land.  His two sons sentenced to death.  His grandson sentenced to life in prison.  That was the story of three generations of a family in a rural area of Vietnam.  It was the online world that brought the stories to tens of millions of Vietnamese and prompted a series of online actions.  Thanks to our fight against internet censorship, Vietnamese have access to information.  

We are one community fighting together for Internet freedom, a precondition today to enjoy fundamental rights.  

These are just a few highlights. We hope you’ll watch the video to see exactly why OTF is so important, and so appreciated globally. 

Following this stunning video, EFF’s Director of Community Organizing, Nathan Sheard, introduced the final award winners—Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji.

Pioneer Award Winners Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji

The trio have done groundbreaking research on race and gender bias in facial analysis technology, which laid the groundwork for the national movement to ban law enforcement’s use of face surveillance in American cities. In accepting the award, each honoree spoke, beginning with Deborah Raji, who detailed some of the dangers of face recognition that their work together on the Gender Shades Project helped uncover: “Technology requiring the privacy violation of numerous individuals doesn't work. Technology hijacked to be weaponized and target and harass vulnerable communities doesn't work. Technology that fails to live up to its claims to some subgroups over other subgroups certainly doesn't work at all.”

Following Deborah, Dr. Timnit Gebru described how this group came together, beginning with Joy founding the Algorithmic Justice League, Deb founding Project Include, and her own co-founding of Black in AI. Importantly, Timnit noted how these three—and their organizations—look out for each other: “Joy actually got this award and she wanted to share it with us. All of us want to see each other rise, and all of the organizations we've founded—we try to have all these organizations support each other.” 

Lastly, Joy Buolamwini closed off the acceptance with a performance—a “switch from performance metrics to performance art.” Joy worked with Brooklyn tenants who were organizing against compelled use of face recognition in their building, and her poem was an ode to them—and to everyone “resisting and revealing the lie that we must accept the surrender of our faces.” The poem is here in full: 

To the Brooklyn tenants resisting and revealing the lie that we must accept the surrender of our faces, the harvesting of our data, the plunder of our traces, we celebrate your courage. No silence.  No consent. You show the path to algorithmic justice requires a league, a sisterhood, a neighborhood, hallway gathering, Sharpies and posters, coalitions, petitions, testimonies, letters, research, and potlucks, livestreams and twitches, dancing and music. Everyone playing a role to orchestrate change. To the Brooklyn tenants and freedom fighters around the world and the EFF family going strong, persisting and prevailing against algorithms of oppression, automating inequality through weapons of math destruction, we stand with you in gratitude. You demonstrate the people have a voice and a choice.  When defiant melodies harmonize to elevate human life, dignity, and rights, the victory is ours.  

Joy Buolamwini, aka Poet of Code

There is really no easy way to summarize such a celebration as the Pioneer Award Ceremony—especially one that brought together this diverse set of communities to show, again and again, how connected we all are, and must be, to fight back against oppression. As Cindy said, in closing: 

We all know no big change happens because of a single person and how important the bonds of community can be when we're up against such great odds…The Internet can give us the tools, and it can help us create others to allow us to connect and fight for a better world, but really what it takes is us.  It takes us joining together and exerting our will and our intelligence and our grit to make it happen. But when we get it right, EFF, I hope, can help lead those fights, but also we can help support others who are leading them and always, always help light the way to a better future.  

EFF would like to thank the members around the world who make the Pioneer Award Ceremony and all of EFF's work possible. You can help us work toward a digital world that supports freedom, justice, and innovation for all people by donating to EFF. We know that these are deeply dangerous times, and with your support, we will stand together no matter how dark it gets and we will still be here to usher in a brighter day. 

Thanks again to Dropbox, No Starch Press, Ridder Costa & Johnstone LLP, and Ron Reed for supporting this year’s ceremony! If you or your company are interested in learning more about sponsorship, please contact Nicole Puller.

Coinbase’s Transparency Report Is a Welcome First Step

Mon, 10/19/2020 - 3:01pm

Coinbase has released its first transparency report, and we’re encouraged to see the company take this first step and commit to issuing future reports that go even further to provide transparency for their customers.

Last month, we renewed a call for Coinbase—one of the largest cryptocurrency exchanges in the country—to start releasing regular transparency reports that provide insight into how many government requests for information it receives and how it deals with them. Financial data can be particularly sensitive, and decisions about turning over that data or shutting down accounts should not be made in the dark.

Coinbase’s first transparency report shares some important information. It offers an aggregate number of requests received from law enforcement agencies in more than thirty countries and a breakdown of what types of agencies have asked for information in the United States. Overall, Coinbase has received 1,914 requests—the vast majority of which it categorized as relating to “criminal” investigations.

Transparency reports are important tools for accountability for companies that make decisions about when to turn over financial information or shut down accounts, which can have a huge impact on individual privacy and speech online. Publishing information about law enforcement requests that providers receive—and their responses to them—enables users to make informed decisions about the services they use. Reports also help journalists, advocates, and the public get insight into the patterns and practices of law enforcement data collection.

Coinbase’s report is an important but modest step toward the transparency reports that people should expect from their financial institutions. We're encouraged to see Coinbase acknowledge the limits of this report in its own announcement: "While we are restricted from disclosing some of the information requests we receive, over time we hope to update and improve our reports with additional information, resources, and observations to provide more granular insights into our government response process."

We have some ideas on how Coinbase can improve its reports in the future. First, it would be helpful for consumers and advocates to know how many requests Coinbase may have challenged, or how many accounts were shut down as a result of these requests. Other companies routinely provide that level of detail.

For future reports from Coinbase and other financial institutions, EFF would also like to see transparency reports that outline informal government requests that don’t come through a subpoena, warrant, or other legal process, such as when law enforcement agencies have bullied companies into shutting down accounts through coercion. We’d also like to see more information on how companies such as Coinbase handle government requests, policies that companies often make publicly available. It would also be useful for financial services such as Coinbase to start publishing how many Suspicious Activity Reports they file with the Financial Crimes Enforcement Network annually, and how many accounts those reports concern.

It’d also be valuable for payment processors to offer aggregate information about how many accounts are either frozen or terminated, broken down by the justification, such as for fraud or Terms of Service violations.

We also want to echo the call that Coinbase General Counsel Paul Grewal made asking others in the financial services industry to step up with their own reports. “[While] transparency reports have become more common in tech, they remain rare in financial services,” Grewal said. “We think it is important not just for cryptocurrency companies, but for fintechs and banks at large to shed light on financial data sharing practices and contribute to the understanding of industry trends in a meaningful way.”

We agree, and call on other payment intermediaries and custodial blockchain services to follow the lead of Kraken and Coinbase in providing this vital information to their users.

Augmented Reality Must Have Augmented Privacy

Fri, 10/16/2020 - 7:12pm

Imagine walking down the street, looking for a good cup of coffee. In the distance, a storefront glows in green through your smart glasses, indicating a well-reviewed cafe with a sterling public health score. You follow the holographic arrows to the crosswalk, as your wearables silently signal the self-driving cars to be sure they stop for your right of way. In the crowd ahead you recognize someone, but can’t quite place them. A query and response later, “Cameron” pops above their head, along with the context needed to remember they were a classmate from university. You greet them, each of you glad to avoid the awkwardness of not recalling an acquaintance. 

This is the stuff of science fiction, sometimes utopian, but often a warning against dystopia. Lurking in every gadget that can enhance your life is a danger to privacy and security. In either case, augmented reality is coming closer to being an everyday reality.  

In 2013, Google Glass stirred a backlash, but the promise of augmented reality bringing 3D models and computer interfaces into the physical world (while recording everything in the process) is re-emerging, as is the public outcry over privacy and “always-on” recording. Seven years later, companies are still pushing augmented reality glasses, which display digital images and data in the wearer’s field of view. The Chinese company Nreal, Facebook, and Apple are all experimenting with similar technology. 

Digitizing the World in 3D

Several technologies are moving to create a live map of different parts of our world, from Augmented and Virtual Reality to autonomous vehicles. They are creating “machine-readable, 1:1 scale models” of the world that are continuously updated in real time. Some implement such models through point clouds: datasets of points, captured by a scanner, that recreate the surfaces (not the interior) of objects or a space. Each point has three coordinates that position it in space. To make sense of the millions (or billions) of points, machine learning software can help recognize objects in the point clouds—yielding a digital replica of the world, or a map of your house and everything inside it.  
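
For readers who want to see how simple the underlying data structure is, here is a minimal sketch in Python (using NumPy, with invented coordinates and labels, not any company's actual format): a point cloud is essentially an N x 3 array of coordinates, optionally paired with per-point object labels produced by a recognition model.

```python
# A minimal sketch of a point cloud: N points, each with x, y, z coordinates.
# All values and labels here are made up for illustration only.
import numpy as np

# Pretend a scanner returned one million surface points (in meters).
rng = np.random.default_rng(0)
points = rng.uniform(low=-10.0, high=10.0, size=(1_000_000, 3))

# A recognition model might attach a semantic label to each point,
# e.g. 0 = floor, 1 = wall, 2 = furniture. Here we just fake the labels.
labels = rng.integers(low=0, high=3, size=points.shape[0])

# The "digital replica" is simply these arrays plus a coordinate frame:
# once aligned with real-world coordinates, any labeled point can be
# looked up, counted, or rendered.
furniture = points[labels == 2]
print(f"{len(furniture)} points classified as furniture")
```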

The promise of creating a persistent 3D digital clone of the world, aligned with real-world coordinates, goes by many names: “world’s digital twin,” “parallel digital universe,” “Mirrorworld,” “The Spatial Web,” “Magic Verse,” or “Metaverse.” Whatever you call it, this new parallel digital world will introduce a new world of privacy concerns—even for those who choose never to wear an AR device. For instance, Facebook Live Maps will seek to create a shared virtual map; LiveMaps will rely on crowd-sourced maps collected by users’ future AR devices with client-mapping functionality. Open AR, an interoperable AR Cloud, and Microsoft’s Azure Digital Twins are likewise seeking to model and create digital representations of an environment. 

Facebook’s Project Aria continues that trend and will aid Facebook in recording live 3D maps and developing AI models for Facebook’s first generation of wearable augmented reality devices. What makes Aria unique, in contrast to autonomous cars, is its “egocentric” collection of data about the environment—the recordings come from the wearer’s perspective, a more “intimate” type of data. Project Aria is a 3D live-mapping tool and software with an AI development tool, not a prototype of a product, nor an AR device, since it lacks a display. Aria’s research glasses, which are not for sale, will be worn only by trained Facebook staffers and contractors to collect data from the wearer’s point of view. For example, if an AR wearer records a building and the building later burns down, the next time any AR wearer walks by, the device can detect the change and update the 3D map in real time. 

A Portal to Augmented Privacy Threats

Aria’s sensors will include, among others, a magnetometer, a barometer, a GPS chip, and two inertial measurement units (IMUs). Together, these sensors will track where the wearer is (location), where the wearer is moving (motion), and what the wearer is looking at (orientation)—a much more precise way to pinpoint the wearer’s position. And while GPS often doesn’t work inside a building, a sophisticated IMU can allow a GPS receiver to keep working indoors when GPS signals are unavailable. 

A machine learning algorithm will build a model of the environment, based on all the input data collected by the hardware, to recognize specific objects and build a 3D map of your space and the things in it. It can estimate distances, for instance, how far the wearer is from an object. It can also identify the wearer’s context and activities: Are you reading a book? Your device might then offer you a reading recommendation. 
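
As a rough illustration of that distance-estimation step, the hypothetical Python sketch below (using SciPy's KD-tree, with invented coordinates and an invented wearer position, not Aria's actual pipeline) shows how a device could compute how far the wearer is from the nearest mapped surface point.

```python
# Hypothetical distance query against a mapped point cloud.
import numpy as np
from scipy.spatial import cKDTree

# Reuse the idea of an N x 3 point cloud (coordinates in meters, made up).
rng = np.random.default_rng(1)
room_points = rng.uniform(low=0.0, high=5.0, size=(50_000, 3))

# Index the cloud once so nearest-neighbor queries are fast.
tree = cKDTree(room_points)

# Suppose the headset estimates the wearer's position from its sensors.
wearer_position = np.array([2.5, 1.8, 1.6])

# Distance (in meters) to the closest mapped surface point.
distance, index = tree.query(wearer_position)
print(f"Nearest mapped point is {distance:.2f} m away at {room_points[index]}")
```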

The Bystanders’ Right to Private Life

Imagine a future where anyone you see wearing glasses could be recording your conversations with “always on” microphones and cameras, updating the map of where you are in precise detail and real-time. In this dystopia, the possibility of being recorded looms over every walk in the park, every conversation in a bar, and indeed, everything you do near other people. 

During Aria’s research phase, Facebook will be recording its own contractors’ interactions with the world. It is taking certain precautions. It asks for the owner’s consent before recording in privately owned venues such as a bar or restaurant. It avoids sensitive areas, like restrooms and protests. It blurs people’s faces and license plates. Yet there are still many other ways to identify individuals, from tattoos to a person’s gait, and these should be obfuscated, too. 

These blurring protections mirror those used by other public mapping mechanisms like Google Street View. These have proven reasonable—but far from infallible—in safeguarding bystanders’ privacy. Google Street View also benefits from focusing on objects, which only need occasional recording. It’s unclear if these protections remain adequate for perpetual crowd-sourced recordings, which focus on human interactions. Once Facebook and other AR companies release their first generation of AR devices, it will likely take concerted efforts by civil society to keep obfuscation techniques like blurring in commercial products. We hope those products do not layer robust identification technologies, such as facial recognition, on top of the existing AR interface. 
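
To give a sense of what blurring-style obfuscation involves, here is a minimal, hypothetical Python sketch using OpenCV's bundled Haar face detector. It is not Facebook's or Google's actual pipeline, the filename is invented, and production systems use far more robust detection than this classic cascade.

```python
# Minimal face-blurring sketch with OpenCV (input filename is hypothetical).
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("street_scene.jpg")  # hypothetical capture from a headset
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect face bounding boxes and blur each region in place.
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    region = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)

cv2.imwrite("street_scene_blurred.jpg", image)
```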

The AR Panopticon

If AR glasses with “always-on” microphones and cameras or powerful 3D-mapping sensors become massively adopted, the scope and scale of the problem changes as well. Now the company behind any AR system could have a live audio/visual window into all corners of the world, with the ability to locate and identify anyone at any time, especially if facial or other recognition technologies are included in the package. The result? A global panopticon society of constant surveillance in public or semi-public spaces. 

In modern times, the panopticon has become a metaphor for a dystopian surveillance state, where the government has cameras observing your every action. Worse, you never know if you are a target, as law enforcement looks to new technology to deepen their already rich ability to surveil our lives.

Legal Protection Against Panopticon

To fight back against this dystopia, and especially government access to this panopticon, our first line of defense in the United States is the Constitution. Around the world, we all enjoy the protection of international human rights law. Last week, we explained how police need to come back with a warrant before conducting a search of virtual representations of your private spaces. While AR measuring and modeling in public and semi-public spaces is different from private spaces, key Constitutional and international human rights principles still provide significant legal protection against police access. 

In Carpenter v. United States, the U.S. Supreme Court recognized the privacy challenges with understanding the risks of new technologies, warning courts to “tread carefully …  to ensure that we do not ‘embarrass the future.’” 

To not embarrass the future, we must recognize that throughout history people have enjoyed effective anonymity and privacy when conducting activities in public or semi-public spaces. As the United Nations' Free Speech Rapporteur made clear, anonymity is a “common human desire to protect one’s identity from the crowd...” Likewise, the Council of Europe has recognized that while any person moving in public areas may expect a lesser degree of privacy, “they do not and should not expect to be deprived of their rights and freedoms including those related to their own private sphere.” Similarly, the European Court of Human Rights has recognized that a “zone of interaction of a person with others, even in a public context, may fall within the scope of ‘private life.’” Even in public places, the “systematic or permanent recording and the subsequent processing of images could raise questions affecting the private life of individuals.” Over fifty years ago, in Katz v. United States, the U.S. Supreme Court also recognized that “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.” 

This makes sense because the natural limits of human memory make it difficult to remember details about people we encounter in the street, which effectively offers us some level of privacy and anonymity in public spaces. Electronic devices, however, can remember perfectly, and collect these memories in a centralized database to be potentially used by corporate and state actors. Already this sense of privacy has been eroded by public camera networks, ubiquitous cellphone cameras, license plate readers, and RFID trackers—requiring legal protections. Indeed, the European Court of Human Rights requires “clear detailed rules..., especially as the technology available for use [is] continually becoming more sophisticated.” 

If smartglasses become as common as smartphones, we risk losing even more of the privacy of crowds. Far more thorough records of our sensitive public actions, including going to a political rally or protest, or even going to a church or a doctor’s office, can go down on your permanent record. 

This technological problem was brought to the modern era in United States v. Jones, where the Supreme Court held that GPS tracking of a vehicle was a search, subject to the protection of the Fourth Amendment. Jones was a convoluted decision, with three separate opinions supporting this result. But within the three were five Justices – a majority – who ruled that prolonged GPS tracking violated Jones’ reasonable expectation of privacy, despite Jones driving in public where a police officer could have followed him in a car. Justice Alito explained the difference, in his concurring opinion (joined by Justices Ginsburg, Breyer, and Kagan):

In the pre-computer age, the greatest protections of privacy were neither constitutional nor statutory, but practical. Traditional surveillance for any extended period of time was difficult and costly and therefore rarely undertaken. … Only an investigation of unusual importance could have justified such an expenditure of law enforcement resources. Devices like the one used in the present case, however, make long-term monitoring relatively easy and cheap.

The Jones analysis recognizes that police use of automated surveillance technology to systematically track our movements in public places upsets the balance of power protected by the Constitution and violates the societal norms of privacy that are fundamental to human society.  

In Carpenter, the Supreme Court extended Jones to tracking people’s movement through cell-site location information (CSLI). Carpenter recognized that “when the Government tracks the location of a cell phone it achieves near perfect surveillance as if it had attached an ankle monitor to the phone's user.”  The Court rejected the government’s argument that under the troubling “third-party doctrine,” Mr. Carpenter had no reasonable expectation of privacy in his CSLI because he had already disclosed it to a third party, namely, his phone service provider. 

AR is Even More Privacy Invasive Than GPS and CSLI

Like GPS devices and CSLI, AR devices are an automated technology that systematically documents what we are doing. So AR triggers strong Fourth Amendment Protection. Of course, ubiquitous AR devices will provide even more perfect surveillance, compared to GPS and CSLI, not only tracking the user’s information, but gaining a telling window into the lives of all the bystanders around the user. 

With enough smart glasses in a location, one could create a virtual time machine to revisit that exact moment in time and space. This is the very thing that concerned the Carpenter court:

the Government can now travel back in time to retrace a person's whereabouts, subject only to the retention policies of the wireless carriers, which currently maintain records for up to five years. Critically, because location information is continually logged for all of the 400 million devices in the United States — not just those belonging to persons who might happen to come under investigation — this newfound tracking capacity runs against everyone.

Likewise, the Special Rapporteur on the Protection of Human Rights explained that a collect-it-all approach is incompatible with the right to privacy:

Shortly put, it is incompatible with existing concepts of privacy for States to collect all communications or metadata all the time indiscriminately. The very essence of the right to the privacy of communication is that infringements must be exceptional, and justified on a case-by-case basis.

AR is location tracking on steroids. AR can be enhanced by overlays such as facial recognition, transforming smartglasses into a powerful identification tool capable of providing a rich and instantaneous profile of any random person on the street, to the wearer, to a massive database, and to any corporate or government agent (or data thief) who can access that database. With additional emerging and unproven visual analytics (everything from aggression analysis to lie detection based on facial expressions is being proposed), this technology poses a truly staggering threat of surveillance and bias. 

Thus, the need for such legal safeguards, as required in Canada v. European Union, is “all the greater where personal data is subject to automated processing. Those considerations apply particularly where the protection of the particular category of personal data that is sensitive data is at stake.” 

Augmented reality will expose our public, social, and inner lives in a way that may be even more invasive than the smartphone’s “revealing montage of the user's life” that the Supreme Court protected in Riley v. California. Thus it is critical for courts, legislators, and executive officers to recognize that the government cannot access the records generated by AR without a warrant.

Corporations Can Invade AR Privacy, Too

Even more must be done to protect against a descent into AR dystopia. Manufacturers and service providers must resist the urge, all too common in Silicon Valley, to “collect it all,” in case the data may be useful later. Instead, the less data companies collect and store now, the less data the government can seize later. 

This is why tech companies should not only protect their users’ right to privacy against government surveillance but also their users’ right to data protection. Companies must, therefore, collect, use, and share their users’ AR data only as minimally necessary to provide the specific service their users asked for. Companies should also limit the amount of data transmitted to the cloud, and the period it is retained, while investing in robust security and strong encryption, with user-held keys, to give users control over the information collected. Moreover, we need strong transparency policies, explicitly stating the purposes for and means of data processing, and allowing users to securely access and port their data. 
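
As a sketch of the “user-held keys” idea, the hypothetical Python example below (using the cryptography library's Fernet recipe, with an invented record) encrypts data on the device with a key the service never sees, so only ciphertext reaches the cloud. It is an illustration of the principle, not any company's implementation.

```python
# Sketch of client-side encryption with a user-held key (illustrative only).
from cryptography.fernet import Fernet

# The key is generated and kept on the user's device; the service never sees it.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

# Hypothetical AR-derived record that the app wants to sync to the cloud.
record = b'{"room": "kitchen", "objects": ["table", "chair"], "timestamp": 1602900000}'

ciphertext = cipher.encrypt(record)   # this is all the server should receive
# ... upload ciphertext ...

# Only the user's device, holding user_key, can recover the record.
assert cipher.decrypt(ciphertext) == record
```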

Likewise, legislatures should look to the augmented reality future, and augment our protections against government and corporate overreach. Congress passed the Wiretap Act to give extra protection for phone calls in 1968, and expanded statutory protections to email and subscriber records in 1986 with the Electronic Communication Privacy Act. Many jurisdictions have eavesdropping laws that require all-party consent before recording a conversation. Likewise, hidden cameras and paparazzi laws can limit taking photographs and recording videos, even in places open to the public, though they are generally silent on the advanced surveillance possible with technologies like spatial mapping. Modernization of these statutory privacy safeguards, with new laws like CalECPA, has taken a long time and remains incomplete. 

Through strong policy, robust transparency, wise courts, modernized statutes, and privacy-by-design engineering, we can and must have augmented reality with augmented privacy. The future is tomorrow, so let’s make it a future we would want to live in.

California Is Putting Together A Broadband Plan. We Have Thoughts.

Fri, 10/16/2020 - 6:20pm

Right now, the California Public Utilities Commission and the California Broadband Council are collecting public comment to create the California Broadband Plan, per Governor Newsom’s Executive Order N-73-20. The order’s purpose is to establish a means of delivering 100 Mbps-capable Internet connections to around 2 million Californians who lack access to at least one high-speed connection. These Californians overwhelmingly live in two types of places: rural areas and low-income urban neighborhoods. 

California’s Systemic Broadband Access Failures Will Require a Lot of Work to Fix, But It Can Be Done

California has some major broadband access problems, despite the size and reach of its economy. The state has the largest number of students (1.298 million) in the country who lack high-speed Internet access. When we see kids going to fast food parking lots for Internet access, like the two little girls in Salinas, California doing their homework over Taco Bell parking lot WiFi, that is a pretty clear sign of policy failure in cities. When 2 million, mostly rural, Californian residents are dependent on the now-bankrupt Frontier Communications—which received billions in federal subsidies and spent much of it on obsolete copper DSL instead of profitable fiber to the home—that is a pretty clear sign of both market failure and policy failure. And when studies of California cities find that ISPs are more likely to bypass Black neighborhoods when deploying fiber in Los Angeles, and that high-speed deployment patterns in Oakland mirror 1930s-era redlining, we have a failure to modernize and enforce our existing non-discrimination laws. 

As bad as things are today, particularly in light of how the pandemic has shifted more of our needs online, the state has a great opportunity to fix these problems. The Electronic Frontier Foundation has been studying this issue of 21st-century access to the Internet and based on our legal and technical research, we’ve submitted the following recommendations to the state outlining how to bring fiber-to-the-home to all people. 

Here is the good news. It is already commercially feasible to deliver fiber to the home to 90% of households in the United States (and California) within a handful of years—if we adopt the right policies soon. Public sector deployments led by rural cooperatives across the country are proving that even in the hardest deployment scenarios, such as a 7,500-person cooperative serving only 2.5 people per square mile, it’s possible to deliver gigabit fiber-to-the-home. So, contrary to what a legacy industry overly reliant on limiting customers to yesterday's infrastructure may say, or the arguments of policymakers who believe that some residents deserve second-class access to the Internet, it can not only be done, it should already be happening. 

EFF’s Recommendations to the State of California to Deliver a Fiber for All Future

If our current and past policies have failed us, what is the new approach? You can read our full filing here (PDF), but the main points we make are as follows:

  1. Prioritize local private and public options and de-emphasize reliance on large national ISPs tethered to 3-to-5-year return-on-investment formulas. This short-term focus is incompatible with resolving the digital divide, which will require return-on-investment strategies of 10 years or longer (that’s fine for fiber, which will last several decades).
  2. Mandate full fiber deployment in California cities with a density of more than 1,000 people per square mile where low-income neighborhoods are being denied fiber infrastructure in violation of current law. Give Internet service providers (ISPs) an opportunity to remedy the violation, but make the consequence of failing to do so the loss of both their franchise and their right to do business in California.
  3. Promote open access fiber policies and sharing opportunities in fiber infrastructure to reduce the overall cost of deployment. Leverage existing fiber needs in non-telecom markets such as electrical utilities to jointly work with telecom providers. 
  4. Adopt San Francisco’s tenant ordinance statewide, which expanded broadband competition throughout the city’s apartment complexes and ended the monopoly arrangements between cable companies and landlords. 
  5. Have the state assist in the creation of special districts to fill the role of rural cooperatives when a rural cooperative does not exist. Provide support in feasibility studies, technical education support, long term financing, and regulatory support to ensure such networks can interconnect with the nearest fiber infrastructure. 
  6. Begin mapping public sector fiber infrastructure, and open it up to shared uses to facilitate more local public and private options. Require private ISPs to provide mapping data of their fiber deployments to facilitate sharing, and intervene to address monopoly withholding of fiber access if necessary.  
  7. Standardize and expedite the permitting process for deploying fiber. Explore ways to improve micro-trenching policy to make it easier to deploy fiber in cities.    
  8. Support local government efforts to build fiber by financially supporting their ability to access the bond market and obtain long term low interest debt financing.

We hope that state policymakers recognize the importance of fiber optics as the core ingredient for 21st-century networks. It is already a universally adopted medium for delivering high-capacity networks, with more than 1 billion gigabit fiber connections coming online within a few years (primarily led by China). And to the extent that policymakers wish to see 5G high-speed wireless in all places, the industry has made it clear that this is contingent on dense fiber networks being everywhere first.

We urge the state to start the work needed to bring broadband access to every Californian as soon as possible. It will be hard work, but we can do it—if we start with the right policies, right now.

Latin American Governments Must Commit to Surveillance Transparency

Fri, 10/16/2020 - 11:34am

This post is the second in a series about our new State of Communications Privacy Laws report, a set of questions and answers about privacy and data protection in eight Latin American countries and Spain. The series’ first post was “A Look-Back and Ahead on Data Protection in Latin America and Spain.” The reports cover Argentina, Brazil, Chile, Colombia, Mexico, Paraguay, Panama, Peru, and Spain.

Although the full extent of government surveillance technology in Latin America remains mostly unknown, media reports have revealed multiple scandals. Intelligence and law enforcement agencies have deployed powerful spying tools in Latin American presidential politics and used them against political adversaries, opposition journalists, lawmakers, dissident groups, judges, activists, and unions. These tools have also been used to glean embarrassing or compromising information on political targets. All too often, Latin America’s weak democratic institutions have failed to prevent such government abuse of power.

High Tech Spying in Latam, Past and Present

Examples abound in Latin America of documented government abuses of surveillance technologies. Surveillance rose to public prominence in Peru in the 1990s with a scandal involving the former director of the Intelligence Agency and former President Fujimori. Fujimori's conviction marked the first time in history that a democratically elected president had been tried and found guilty in his own country for human rights abuses, including illegal wiretapping of opposition figures’ phones. In the 2000s, the Colombian intelligence agency (DAS) was caught wiretapping political opponents. Ricardo A. Martinelli, Panama’s President from 2009 to 2014, was accused of using the spyware “Pegasus” to snoop on political opponents, union leaders, and journalists. (A court last year rejected illegal wiretapping charges against him because of “reasonable doubts”.) 

In Chile in 2017, civil society worked to grasp how the Intelligence Directorate of Chile's Carabineros (Dipolcar and its special intelligence unit) had "intercepted" the encrypted WhatsApp and Telegram messages of eight Mapuche indigenous community leaders. These leaders had been detained as part of the Huracán Operation. Carabineros shifted its explanation of how it had procured the messages: it had simply claimed generic "interception of messages," but later claimed to have used a keylogger and other malicious software to plant fake evidence. Expert examinations within a Prosecutor’s Office investigation and the report of a Congressional investigative committee concluded that evidence was fabricated. The Huracán Operation also engaged in fraudulent manipulation of seized devices and obtained communications without proper judicial authorization. This is but one of many abuses committed against the Mapuche by Chilean authorities.

Leaked U.S. diplomatic cables showed collaboration in communications surveillance between the U.S. Drug Enforcement Administration and Latin American governments such as Paraguay and Panama. This included "cooperation" between the U.S. government and Paraguayan telecom companies.

History repeats itself. Just a few weeks ago, a report revealed that between 2012 and 2018, the government of Mexico City operated an intelligence center that targeted political adversaries including the current Mexican President and the current mayor of Mexico City. Likewise, Brazilians learned just a few weeks ago about Cortex—the Ministry of Justice’s Integrated Operations Secretariat (SEOPI) surveillance program created to fight “organized crime.” Intercept Brazil revealed that Cortex integrates automated license plate readers (ALPRs) with other databases such as Rais, the Ministry of Economy's labor database. Indeed, Cortex reportedly cross-references Rais records about employee “address, dependents, salary, and position” with location data obtained from 6,000 ALPRs in at least 12 Brazilian states. According to the Intercept's anonymous source, around 10,000 intelligence and law enforcement agents have access to the system. The context of this new revelation recalls a previous scandal involving the same Ministry of Justice Secretariat. In July of this year, Brazil’s Supreme Court ordered the Ministry of Justice to halt SEOPI’s intelligence-gathering against political opponents. SEOPI had compiled an intelligence dossier about police officers and teachers linked to the opposition movement. The Ministry of Justice dismissed SEOPI’s intelligence director.

Sunlight is the Best Disinfectant

The European Court of Human Rights has held that “a system of secret surveillance set up to protect national security may undermine or even destroy democracy under the cloak of defending it.” In a recent report, the Inter-American Commission’s Free Expression Rapporteur reinforced the call for transparency. The report stresses that people should, at least, have information on the legal framework for surveillance and its purpose; the procedures for the authorization, selection of targets, and handling of data in surveillance programs; the protocols adopted for the exchange, storage, and destruction of the intercepted material; and which government agencies are in charge of implementing and supervising those programs. 

Transparency is vital for accountability and democracy. Without it, civil society cannot even begin to check government overreach. Surveillance powers and the interpretation of such laws must always be on the public record. The law must compel the State to provide rigorous reporting and individual notification. The absence of such transparency violates human rights standards and the rule of law. Transparency is all the more critical where, for operational reasons, certain aspects of the system remain secret. 

Secrecy prevents meaningful public policy debates on these matters of extreme importance: the public can’t respond to abuses of power if it can’t see them. There are many methods states and communication companies can implement to increase transparency about privacy, government data access requests, and legal surveillance authorities.

Policy Recommendations

States should publish transparency reports of law enforcement demands to access customers’ information.
The UN Special Rapporteur on free expression has called upon States to disclose general information about the number of requests for interception and surveillance that have been approved and rejected. Such disclosure should include a breakdown of demands by service providers, type of investigation, number of affected persons, and period covered, among other factors. Unfortunately, the culture of secrecy on states’ transparency reporting is a real problem in Latin America. 

Brazil and Mexico have regulations that compel agencies to publish transparency reports, and they do disclose statistical information. However, Argentina, Colombia, Chile, Paraguay, Peru, and Spain do not have a concrete law that requires them to do so, and in practice, they do not post such reports. Of course, the lack of a specific obligation to publish public interest data, as pointed out by the IACHR’s Freedom of Expression Rapporteur, should not prevent States from publishing this type of data. The IACHR Rapporteur states that the public has the right to access information about a surveillance agency’s functions, activities, and management of public resources.

Mexico's Transparency Law requires governmental agencies to regularly disclose statistical information about data demands made to telecom providers for interceptions, access to communications records, and access to location data in real time. Agencies also must keep the data updated. 

Brazil’s decree 8.771/2016 obliges each federal agency to publish, on its website, yearly statistical reports about its requests for access to Internet users' subscriber data. The statistical reports should include the number of demands, the list of ISPs and Internet applications from which data has been requested, the number of requests granted and rejected, and the number of users affected. Moreover, Brazil's National Council of Justice created a public database with statistics on communications interceptions authorized by courts. The system breaks the data down per month and court in the following categories: number of started and ongoing requests, number of new and ongoing criminal procedures, number of monitored phones, number of monitored VOIP communications, and number of monitored electronic addresses.

Companies should publish detailed statistical transparency reports regarding all government access to their customers’ data.
The legal frameworks in Argentina, Brazil, Colombia, Chile, Mexico, Peru, Panama, Paraguay, and Spain do not prohibit companies from publishing statistical data on government requests for user data and on other investigative techniques authorities employ. But of the countries we studied, the only one where ISPs publish this information is Chile. Large and small Chilean ISPs have published transparency reports, including Telefónica, WOM!, VTR, Entel, and most recently GTD Manquehue. We haven’t seen similar developments in other countries. While América Móvil (Claro) operates in all the Latam countries covered in our reports, only in Chile does it publish a report with statistical figures on government data requests.

Telefónica-Movistar is among the few companies to fully embrace transparency reports throughout all the Latam countries where it operates. Others should follow. In Central America, Millicom-Tigo has generally issued consolidated data for Costa Rica, El Salvador, Guatemala, Honduras, and, more recently, Panama. This is less helpful and deviates from the general standard of publishing aggregate data per country rather than per multi-country region. The company does the same for South America, where it publishes consolidated statistical data for Bolivia, Colombia, and Paraguay. In 2018, Millicom-Tigo for the first time followed the industry standard by posting aggregate data just for Colombia.

AT&T publishes detailed data for the United States, but very little information for Latam countries, except for Mexico, where more data is available. The type of data requested by governments depends on the services AT&T provides in each country (whether it is broadband, mobile, or only TV and entertainment). AT&T should provide information like the number of rejected requests or the applicable legal framework for all the countries where it operates. 

In Spain, Orange published its latest report in 2018, while Ono Vodafone’s last report refers to 2016-2017 requests.

Many local Latam telcos have failed to publish transparency reports.

  • In Argentina: Cablevision, Claro, Telecom, Telecentro, and IPLAN.
  • In Brazil: Claro, Oi, Algar, and Nextel.
  • In Colombia: Claro and EMCALI.
  • In Panama: Cable & Wireless Panamá (Más Móvil), Claro, and Digicel.
  • In Perú: Claro, Entel, Olo, Bitel, and Inkacel.
  • In Paraguay: Claro, Personal, Copaco, Vox, and Chaco Comunicaciones.
  • In México: Telmex/Telcel (América Móvil), Axtel, Megacable, Izzi, and Totalplay.

At a minimum, companies' transparency reports should disclose the number of government data requests per country, split by key types of data, the applicable legal authorities, and the number of requests challenged or denied. The reports we reviewed usually provide different numbers for content and metadata, which is important. AT&T also includes real-time access to location data for Mexico. Telefónica and AT&T's Mexico section release the number of rejected requests; Millicom doesn't provide this information. None of the reports distinguish criminal orders from national security requests; AT&T does so only for the United States. Reports should also allow readers to learn the number of affected users or devices; disclosing only the number of requests isn’t enough, since one legal demand may refer to more than one customer or device. Telefónica indicates the number of accesses affected, for both interception and metadata, in Argentina, Brazil, Chile, Mexico, and Peru. In Spain, the system security forces use to send judicial orders to obtain metadata still doesn't allow this breakdown. And in Colombia, it's not even possible to count the number of interception requests on mobile lines.
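
As a rough illustration of the per-country breakdown recommended above, the sketch below (in Python, with entirely hypothetical field names and figures of our own choosing, not drawn from any real report) shows how a single country's disclosure could be structured so that requests are split by data type, legal basis, outcome, and affected users.

    # Hypothetical sketch: one country's entry in a company transparency report.
    # All names and numbers are illustrative, not drawn from any real report.
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class CountryDisclosure:
        country: str
        # Requests split by the type of data sought (content vs. metadata, etc.)
        requests_by_data_type: Dict[str, int] = field(default_factory=dict)
        # Requests split by the legal authority invoked
        requests_by_legal_basis: Dict[str, int] = field(default_factory=dict)
        requests_rejected_or_challenged: int = 0
        # One request can cover many customers, so report affected users separately.
        affected_users_or_devices: int = 0

    example = CountryDisclosure(
        country="Example Country",
        requests_by_data_type={"communications content": 120, "metadata": 950,
                               "real-time location": 40},
        requests_by_legal_basis={"criminal court order": 1000, "national security": 110},
        requests_rejected_or_challenged=75,
        affected_users_or_devices=2300,
    )
    print(example)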

Of course, companies' transparency reports depend on their knowledge of when surveillance takes place through their systems. Such knowledge is missing--and so transparency reporting is not possible--when police and other government agencies compel providers to give law enforcement direct access to their servers. The 2018 UN High Commissioner on Human Rights report recognized that such direct access systems are a serious concern; they are particularly prone to abuse and tend to circumvent critical procedural safeguards. According to Millicom's report, direct access requirements to telecom companies' mobile networks in Honduras, El Salvador, and Colombia prevent the ISPs from knowing how often or for what periods interception occurs. Millicom points out that a similar practice exists in Paraguay. Yet, in this case, Millicom states the procedures allow them to view the judicial order required for authorities to initiate the interception. 

Companies should publish guidelines for government agencies seeking users’ data. 
It is important for the public to know how police and other government agencies obtain customer data from service providers. To ensure public access to this information, providers should publish the request guidelines they give to government agencies.

Chilean telecom companies publish their law enforcement guidelines. WOM and VTR detail the integrated systems and contact channels they use to receive government requests, and the information that requests and judicial orders should contain, such as the investigative targets and procedures. They break details down by type of interception (like emergency cases and deadline extension) and users' information (such as traffic data).  

GTD Manquehue has a similar model but doesn't specify information related to urgent interceptions and extension requests. Claro also includes contact channels and some important requirements, particularly for traffic and other associated data. Entel doesn't indicate contact channels for data requests but goes beyond others in explaining the applicable legal framework and the requirements orders must fulfill. In turn, Telefónica-Movistar's guidelines are vague when setting legal requirements, but provide great detail about the kind of metadata and subscriber information authorities can access.

Telefónica and Millicom have global guidelines for all law enforcement requests. They apply to their subsidiaries, which usually don't publish local specifications. While Telefónica's guidelines commit to relevant principles and display a flow chart for assessing government requests, Millicom outlines the five steps of its process for law enforcement assistance. Both give valuable insight into the companies' procedures. But they shouldn't supplant the release of more specific guidelines at the domestic level, showing how these global policies apply to local contexts and rules.

Secret laws—about government access to data or anything else—are unacceptable.
Law is only legitimate if people know it exists and can act based on that knowledge. It allows people the fundamental fairness of understanding when they can expect privacy from the government and when they cannot.  As we’ve noted before, it avoids the Kafkaesque situations in which people, like Joseph K in The Trial, cannot figure out what they did that resulted in the government accessing their data. The UN Report on the Right to Privacy in the Digital Age states that “Secret rules ... do not have the necessary qualities of ‘law’ … [a] law that is accessible, but that does not have foreseeable effects, will not be adequate.” The Council of Europe’s Parliamentary Assembly likewise has condemned the “extensive use of secret laws and regulations.”  

Yet the Peruvian guidelines on data sharing by ISPs with police have been declared "reserved information." In striking contrast, Peruvian wiretapping protocols are deemed public.

Service providers should notify all their customers, wherever they live, when the government seeks their data. Such notice should be immediate, unless doing so would endanger the investigation.
The notification principle is essential to restrict improper data requests from the government to service providers. Before the revolution in electronic communication, the police had to knock on a suspect’s door and show their warrant. The person searched could observe whether the police searched or seized their written correspondence, and if they felt the intrusion was improper, ask a court to intervene. 

Electronic surveillance, however, is much more surreptitious. Data can be intercepted or acquired directly from telecom or Internet providers without the individual knowing. It is often impossible for individuals to know that their data has been accessed unless the evidence leads to criminal charges. As a result, the innocent are least likely to discover the violation of their privacy rights. Indeed, new technologies have enabled covert remote searches of personal computers. Any delay in notification must be justified to a court and tied to an actual danger to the investigation or harm to a person. The UN High Commissioner for Human Rights recognized that users who have been subject to surveillance should be notified, at least ex post facto.

Peru and Chile provide the two best standards in the region for notifying affected persons. Unfortunately, notification is often delayed. Peru's Criminal Procedure Code allows informing the surveilled person once the access procedures are completed. The affected person may ask for judicial re-examination within three days of receiving notice. Such post-access notification is permitted only if the investigation's scope allows it and notification does not endanger another person.

Chilean law has a similar provision. The access procedure is secret by default. However, the state must notify the affected person after the interception has ended, if the investigative scope allows it and notice won’t jeopardize another person. If the data demand is secret, then the prosecutor must set a term of no more than 40 days, which may be extended one more time. 

Argentinian criminal law includes no obligation to inform the individual, nor any prohibition against doing so, not even once the access is over. The subject of an investigation may learn about the evidence used in a criminal proceeding. However, an individual may never know that the government accessed their data if the prosecutor did not use it.

There is no legal obligation in Brazil that compels either the State or companies to provide prior notice. The Telephone Interception Law imposes a general secrecy requirement. Another statute authorizes the judge to determine secrecy issues. Companies could voluntarily notify the user if a gag order has not been set by law or by a court, or do so after secrecy is lifted.

In Spain, secrecy is the norm. This applies to interception of communications, malware, location tracking, and access to communications data. The company compelled to carry out the investigative measures is sworn to secrecy on pain of criminal penalty.

Freedom of Information Laws and investigative reporting are needed to shine a light on governmental data requests and secret surveillance. Whistleblowers’ legal protection is required, too.

States in the region are required to respond to public record requests and must provide information ex officio. The Inter-American Court recognizes that it is “essential that State authorities are governed by the principle of maximum disclosure, which establishes the presumption that all information is accessible, subject to a limited system of exceptions.” The Court also echoed the 2004 joint declaration by the rapporteurs on freedom of expression of the UN, the OAS, and the OSCE, which stipulated that “[p]ublic authorities should be required to publish proactively, even in the absence of a request, a range of information of public interest. Systems should be put in place to increase, over time, the amount of data subject to such routine disclosure."

The Mexican Transparency Law obliges governmental agencies to automatically disclose and update information about government access to company data. In contrast, the Peruvian Transparency Law only compels the State to disclose, on request, the information it creates or has in its possession, with certain exceptions. So if aggregate information on the details of the requests existed, it could be accessible through FOIA requests. But if it does not exist, the law does not require the agency to create a new record.

In Latin America, NGOs have used these public access laws to learn more about high-tech surveillance in their countries. In Argentina, ADC filed a FOIA request after the City of Buenos Aires announced it would deploy facial recognition software across its CCTV camera infrastructure. Buenos Aires' administration disclosed responsive information about the legal basis and purposes for implementing the technology, the government body responsible for its operation, and the software purchase. ODIA made further requests about the system’s technical aspects, and Access Now followed suit in Córdoba.

In the wake of revelations about the use of "Pegasus" malware to spy on journalists, activists, lawyers, and politicians in Mexico, the digital rights NGO R3D filed a FOIA request in 2017 seeking documents about the purchase of "Pegasus". After receiving part of the agreement, R3D challenged the decision of the country's Transparency and Data Protection Authority (INAI) to classify Pegasus’ technical specifications and operation methods. In 2018, a judge overruled INAI's resolution, holding that serious human rights violations and acts of corruption must never be confidential information.

In other countries, digital media have shed light on the number of government data access demands. For example, INFOBAE in Argentina published a story reporting the leaked number of interceptions and other statistical information. Another outlet in Chile revealed the number of telephone interception requests based on the public records law. 

The IACHR Rapporteur stresses the important role of investigative journalists and whistleblowers in its new Freedom of Expression report. The Rapporteur’s recommendations underscore the need for legislation that protects the rights of journalists and others. The law should also protect their sources against direct and indirect exposure, including intrusion through surveillance. Whistleblowers who expose human rights violations or other wrongdoing should also receive legal protection against retaliation.

Conclusions
Governments often confuse a need for secrecy in a specific investigation with an overarching reticence to describe a surveillance technology’s technical capabilities, legal authorities, and aggregate uses. But civil society’s knowledge of these technologies is crucial to public oversight and government accountability. Democracy cannot flourish and persist without the capacity to learn about and provide effective remedies to abuses and violations of privacy and other rights. 

Secrecy must be the exception, not the norm. It must be limited in time and strictly necessary and proportionate to protect specific legitimate aims. We still have a long way ahead in making transparency the norm. Government practices, state regulations, and companies’ actions must build upon the transparency principles set forth in our recommendations. 

Education Groups Drop Their Lawsuit Against Public.Resource.Org, Give Up Their Quest to Paywall the Law

Thu, 10/15/2020 - 1:41pm

This week, open and equitable access to the law got a bit closer. For many years, EFF has defended Public.Resource.Org in its quest to improve public access to the law — including standards, like the National Electrical Code, that legislators and agencies have made into binding regulations. In two companion lawsuits, six standards development organizations sued Public Resource in 2013 for posting standards online. They accused Public Resource of copyright infringement and demanded the right to keep the law behind paywalls.

Yesterday, three of those organizations dropped their suit. The American Educational Research Association (AERA), the National Council on Measurement in Education (NCME), and the American Psychological Association (APA) publish a standard for writing and administering tests. The standard is widely used in education and employment contexts, and several U.S. federal and state government agencies have incorporated it into their laws.

A federal district court initially ruled that laws like the testing standard could be copyrighted, and that Public Resource could not post them without permission. But in 2018, the Court of Appeals for the D.C. Circuit threw out that ruling and sent the case back to the trial court with instructions to consider whether posting standards that are incorporated into law is a non-infringing fair use. As one member of the three-judge panel wrote [pdf], the court “put[] a heavy thumb on the scale in favor of an unrestrained ability to say what the law is.”

Also this year, in a related case, the Supreme Court held that Public Resource could post the state of Georgia’s annotated code of laws, ruling that the state could not assert copyright in its official annotations.

Yesterday, AERA, NCME, and APA asked the court to dismiss their suit with prejudice, indicating that they are no longer trying to stop Public Resource from posting the testing standards. “I’m pleased that AERA, NCME, and APA have withdrawn their claims and hope they will embrace open access to their admirable work,” said Public Resource founder Carl Malamud. “We have vigorously objected to what we believed was a baseless suit, but we are also very happy to move forward and thank them for taking this important though overdue step. It has been seven long years, let's think about the future.”

Three other standards development groups (the American Society for Testing and Materials, the National Fire Protection Association, and the American Society of Heating, Refrigerating and Air-Conditioning Engineers) continue to pursue their suit against Public Resource. We’re confident that the court will rule that laws are free for all to read, speak, and share with the world.

Related Cases: Freeing the Law with Public.Resource.Org

San Francisco Supervisors Must Rein In SFPD’s Abuse of Surveillance Cameras

Tue, 10/13/2020 - 10:00am

Black, white, or indigenous; well-resourced or indigent; San Francisco residents should be free to assemble and protest without fear of police surveillance technology or retribution. That should include the Black-led protesters of San Francisco who took to the streets in solidarity and protest, understanding that though George Floyd and Breonna Taylor were not their neighbors in the most literal sense, their deaths resulted from police violence and racism experienced across geographic and jurisdictional boundaries.

Take Action

end illegal San Francisco Police Department Spying

San Francisco residents may have believed that their protests would be protected from unwarranted government surveillance. Only a year before this year's record-breaking protests, San Francisco had passed a groundbreaking surveillance ordinance establishing a democratic process that must be followed before the San Francisco Police Department (SFPD), or any City agency, can acquire new surveillance systems or use them in ways that have not been expressly approved by the City's Board of Supervisors.

But from May 31 to June 7—when over 10,000 people came together in the streets of San Francisco to call for an end to police lawlessness and murder with impunity—the SFPD violated that surveillance ordinance to spy on protesters. EFF uncovered and revealed this unlawful surveillance. Circumventing the Board of Supervisors' authority and control over surveillance technology decisions in San Francisco, the SFPD Homeland Security Unit requested and received live, remote access to the Union Square Business Improvement District's (USBID) network of over 400 high-definition cameras. The cameras’ funder is on record that it’s against the law for the SFPD to access the cameras in real time. We agree. So in cooperation with the ACLU of Northern California, we recently filed a lawsuit against the SFPD for violating the surveillance ordinance.

This morning, with a broad coalition of civil society organizations, EFF delivered a letter to the San Francisco Board of Supervisors calling on the Board to condemn the SFPD's illegal surveillance of demonstrations against the police killings of George Floyd, Breonna Taylor, Tony McDade, and many other Black people across the nation. Our letter asks the Board to prohibit city departments from obtaining real-time use of private camera networks, or “data dumps” of footage from those networks. It also asks the Board to call on the SFPD to testify at a special hearing regarding its illegal surveillance of demonstrations, and to ask the Office of the Controller to address the SFPD's violations in its annual audit of compliance with the City's surveillance ordinance.

As we explained in our court filing, the SFPD has a long and troubling history of targeting individuals for unlawful surveillance based on their race, religion, gender identity, and political activism. Throughout the 20th century, the SFPD surveilled and conducted raids on establishments frequented by the gay community, including bars and bathhouses. By 1975, the SFPD’s Intelligence Unit had amassed files on more than 100,000 San Franciscans, including civil rights demonstrators, anti-war activists, labor union members, and students. In 1993, an SFPD inspector was caught selling intelligence information obtained through surveillance of Arab American groups and opponents of South African apartheid.

When the SFPD cut ties with the FBI's Joint Terrorism Task Force in 2017, amid concerns about warrantless surveillance of Muslim communities, SFPD Chief William Scott said: "When confidence is shaken we have to slow down for a minute and make sure that the public sees us as an organization that they can trust." The SFPD has once again shaken that trust by violating an ordinance intended to assure trust between the people of San Francisco and those whose job it is to provide public safety. The threat of police exploiting surveillance technology to spy on people exercising their First Amendment rights was a primary motivation for San Francisco's Board of Supervisors passing 2019's groundbreaking Stop Secret Surveillance Ordinance.

Disturbingly, this may not be the only way the SFPD has utilized an outside agency to violate the surveillance ordinance. The Department has been accused of using the Northern California Regional Intelligence Center (NCRIC) to circumvent the City's ban on government use of face surveillance. While investigating an illegal gun charge, SFPD officers distributed an alert that included imagery captured from surveillance footage. NCRIC then analyzed the image with its face recognition system and supplied SFPD with the results. SFPD's use of this information only came to light when the prosecutor included the results, perhaps mistakenly, in discovery materials disclosed to the suspect's attorney.

Perhaps more than at any other moment in our lifetimes, public safety demands trust between community members and the agencies sworn to keep them safe. Police lawlessness and high-tech surveillance further erode the frayed fabric of that trust. In addition to the courts, the Board of Supervisors has a responsibility to respond to violations of that trust. It must restore the public's confidence in their ability to move freely throughout our City without the threat of high-tech surveillance chilling their freedom to speak truth to power—or simply congregate in their own communities.

The Board of Supervisors must step in to ban the SFPD's real-time use of private cameras and data-dumps of footage from private cameras. San Francisco residents can help make sure that happens by contacting their District Supervisor and insisting that they take this necessary action now.

Related Cases: Williams v. San Francisco

Thank You For Your Transparency Report, Here’s Everything That’s Missing

Tue, 10/13/2020 - 4:51am

Every major social media platform—from Facebook to Reddit, Instagram to YouTube—moderates and polices content shared by users. Platforms do so as a matter of self-interest, commercial or otherwise. But platforms also moderate user content in response to pressure from a variety of interest groups and/or governments. 

As a consequence, social media platforms have become the arbiters of speech online, defining what may or may not be said and shared by taking down content accordingly. As the public has become increasingly aware and critical of the paramount role private companies play in defining the freedom of expression of millions of users online, social media companies have been facing increased pressure to stand accountable for their content moderation practices. In response to such demands, and partially to fulfill legal requirements stipulated by regulations like Germany’s NetzDG, Facebook and other social media companies publish detailed ‘transparency reports’ meant to give some insight into their moderation practices.

Transparency does not always equal transparency

Facebook’s most recent 'community standards enforcement report', which was released in August and also covers its subsidiary Instagram, is emblematic of some of the deficits of companies reporting on their own content moderation practices. The report gives a rough overview of the numbers of posts deleted, broken down according to the 10 policy areas Facebook uses to categorize speech (it is unclear whether or not Facebook uses more granular categories internally). These categories can differ between Facebook and Instagram. Per category, Facebook also reports how prevalent content of a certain type is on its platforms, what percentage of allegedly infringing content was removed before it was reported by users, and how many pieces of supposedly problematic content were restored later on.
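
To make those headline percentages concrete, here is a back-of-the-envelope sketch in Python. The numbers are invented for illustration (they are not taken from Facebook's report); the point is simply what a metric like this measures: the share of actioned content the platform says it found on its own before any user reported it.

    # Illustrative only: the figures below are invented, not taken from Facebook's report.
    total_actioned = 1_000_000          # pieces of content removed or otherwise actioned
    found_before_user_report = 940_000  # actioned before any user flagged them
    restored_later = 12_000             # actioned content later reinstated

    proactive_rate = found_before_user_report / total_actioned
    restoration_rate = restored_later / total_actioned

    print(f"Proactive rate: {proactive_rate:.1%}")      # 94.0%
    print(f"Restoration rate: {restoration_rate:.2%}")  # 1.20%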

But content moderation, and its impact, is always contextual. While Facebook’s sterile reporting of numbers and percentages can give a rough estimate of how many pieces of content of which categories get removed, it does not tell us why or how these decisions are taken. Facebook’s approach to transparency thus misses the mark, as actual transparency should allow outsiders to see and understand what actions are performed, and why. Meaningful transparency inherently implies openness and accountability, and cannot be satisfied by simply counting takedowns. That is to say that there is a difference between corporately sanctioned ‘transparency,’ which is inherently limited, and meaningful transparency that empowers users to understand Facebook’s actions and hold the company accountable.

This is especially relevant in light of the fundamental shift in Facebook’s approach to content moderation during the COVID-19 pandemic. As companies were unable to rely on their human content moderators, Facebook, Twitter and YouTube began relying much more heavily on automated moderation tools—despite the documented shortcomings of AI tools to consistently judge the social, cultural, and political context of speech correctly. As social media platforms ramp up their use of automated content moderation tools, it is especially crucial to provide actual explanations for how these technologies shape (and limit) people's online experiences.

True transparency must provide context

So what would a meaningful transparency report look like? First of all, it should clarify the basics—how many human moderators are there, and how many cover each language? Are there languages for which no native speaker is available to judge the context of speech? What is the ratio of moderators per language? Such information is important in order to help understand (and avoid!) crises like when Facebook’s inability to detect hate speech directed against the Rohingya contributed to widespread violence in Myanmar.

Real transparency should also not stop at shedding light on the black boxes that algorithmic content moderation systems appear to be from the outside. In order to give users agency vis-à-vis automated tools, companies should explain what kind of technology and inputs are used at which point(s) of content moderation processes. Is such technology used to automatically flag suspicious content? Or is it also used to judge and categorize flagged content? When users report content takedowns, to which extent are they dealing with automated chat bots, and when are complaints reviewed by humans? Users should also be able to understand the relationship between human and automated review—are humans just ‘in the loop’, or do they exercise real oversight and control over automated systems?

Another important pillar of meaningful transparency is the policies that form the basis for content takedowns. Social media companies often develop these policies without much external input, and adjust them constantly. Platforms’ terms of service or community guidelines usually also don’t go into great detail or provide examples to clearly delineate acceptable speech. Transparency reports could, for example, include information on how moderation policies are developed, whether and which external experts or stakeholders have contributed, how often the policies are amended, and to what extent.

Closely related: transparency reports should describe and explain how human and machine-based moderators are trained to recognize infringing content. In many cases, the distinction at stake is, for example, between incitement to terrorism and counterspeech against extremism. For moderators—who work quickly—the line between the two can be difficult to judge, and depends on the context of the statement in question. That’s why it’s crucial to understand how platforms are preparing their moderators to understand and correctly judge such nuances.

Real transparency is empowering, not impersonal

Meaningful transparency should empower people to understand how a social media platform is governed, to know their rights according to that governance model, and to hold companies accountable whenever they transgress it. We welcome companies’ efforts to at least offer a glimpse into their content moderation machine room. But they still have a long way to go.

This is why we have undertaken a review of the Santa Clara Principles on Transparency and Accountability in Content Moderation, a process that is currently underway. We look forward to sharing the conclusions of our research and contributing to the future of corporate transparency.

We Fight For the Users

Fri, 10/09/2020 - 6:14pm

Here at the Electronic Frontier Foundation, we have a guiding motto: "I Fight For the Users." (We even put it on t-shirts from time to time!) We didn't pick that one by accident (nor merely because we dig the 1982 classic film "Tron"), but because it provides such a clear moral compass when we sit down to work every day.

Should your boss be able to spy on you through your computer? Well, you're the user and we fight for you, so we say no.

What about your professor? Same logic applies here too.

Your spiteful, angry ex? No way.

What about tech giants, data-brokers or ad-tech companies? The user decides, not them.

Who decides who fixes your stuff? You do.

What about which ink goes in your printer? That's your business, not some giant company's.

When companies have their users' backs, we have those companies' backs. When the companies subordinate users' interests in favor of their own, we have the users' backs.

That's not switching sides, that's fighting for the user!

This summer, the Internet Engineering Task Force's Internet Architecture Board began circulating RFC 8890: The Internet is for End Users, and we think it's just terrific (RFC stands for "Request for Comments"; it's what the IETF calls its internal documents, including its standards).

The document's principal author is Mark "mnot" Nottingham, an Internet pioneer who works on core Internet standards like HTTP and co-chairs the HTTP working group. Nottingham and colleagues have produced a thoughtful manifesto for how technologists should think about the work they do.

The paper starts out by acknowledging the increasing centrality of the Internet to every realm of our lives, and asserts that this fact alone is no indicator of the Internet's success. It's not enough to "merely [advance] the measurable success of the Internet (e.g., deployment size, bandwidth, latency, number of users)"—all of these indicators can be improved by technology that is "used as a lever to assert power over users, rather than empower them."

Music to our ears! In order to build an Internet fit for human habitation, the RFC demands that we prioritize the empowerment of "end-users"—"human users whose activities IETF standards support."

But this is more complicated than it seems at first blush: end-users have different roles ("seller, buyer, publisher, reader, service provider, and consumer") and many potentially conflicting interests: "privacy, security, flexibility, reachability." And users are blended: kids who use the Internet and their parents; people who post photos to the Internet and the people pictured in those photos. The RFC notes that this complexity may make it hard to figure out who "the end-user" is at any moment, but still demands that we make the effort. (At EFF, we take the position that when it comes to surveillance, the public is the end-user we care about, even if the technology's "user" is a law enforcement agency.)

The RFC lists several ways that end-users can be involved in technical architecture decisions, and ponders the strengths and drawbacks of each: the difficulty of discussing esoteric technology with users who lack the background to understand it; the imperfection of relying on government representatives to represent the interests of their citizens (and the conflicts between those governments and the governments of other states).

The authors land on civil society groups (that's us!) as the go-to group to represent users' interests with both technical depth and a genuine ethical posture. Further, the RFC demands that IETF working groups find ways to directly engage with specialist user groups representing different priorities, meeting them where they are rather than inviting them to participate in esoteric standards-setting committees.

The paper moves on to a discussion of the term "user-agent"—the technical name for a browser in standards like HTTP. The term "user-agent" has a profound implication for the fundamental architecture of the Internet: a user-agent should take orders from the user, not anyone else. Your agent should fetch and display the content you want to see and block the content you don't want to see. It should keep your information consumption habits private unless you decide to share them. It should run the code you choose.
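
The name is visible at the protocol level: every HTTP request a browser sends includes a User-Agent header identifying the software acting on the user's behalf. Here is a minimal illustration in Python, using only the standard library (the URL and the agent string are placeholders of our own choosing):

    # Minimal sketch of the HTTP "User-Agent" header; example.com and the
    # agent string are placeholders, not a real product.
    import urllib.request

    req = urllib.request.Request(
        "https://example.com/",
        headers={"User-Agent": "ExampleBrowser/1.0"},  # the agent identifies itself
    )
    with urllib.request.urlopen(req) as resp:          # requires network access
        print(resp.status, resp.headers.get("Content-Type"))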

Alas, as the RFC points out, the latest wave of Internet of Things devices has all but abandoned the idea of serving as user-agents. Instead, these sensor-studded, actuator-connected gadgets act as outposts for the corporations that sold them, sneaking around behind our backs to spy on us and corralling us into arranging our affairs to suit the manufacturers' shareholders' interests at the expense of our own.

As we’ve pointed out, even browsers, the original “user agents,” sometimes put the interests of the monopolists who made them ahead of the user’s.

The paper concludes with some sober advice for technologists building the Internet: don't rush to the assumption that users' needs have to be traded off for technical necessities; don't sideline users' needs for "architectural purity."

The IETF is an Internet original, a 34-year-old institution that does the hard, unglamorous work of setting and updating standards. The "rough consensus and running code" ethic it defined gave birth to the Internet as it once was, and as it has become. As an organization that is nearly as old as the IETF, we're so pleased that they, too, are here to fight for the users.

EFF and ACLU Ask Ninth Circuit to Overturn Government’s Censorship of Twitter’s Transparency Report

Fri, 10/09/2020 - 4:46pm

Citing national security concerns, the government is attempting to infringe on Twitter's First Amendment right to inform the public about secret government surveillance orders. For more than six years, Twitter has been fighting in court to share information about law enforcement orders it received in 2014. Now, Twitter has brought that fight to the Ninth Circuit Court of Appeals. EFF, along with the ACLU, filed an amicus brief last week to underscore the First Amendment rights at stake.

In 2014, Twitter submitted a draft transparency report to the FBI for review. The FBI censored the report, banning Twitter from sharing the total number of foreign intelligence surveillance orders the government had served within a six-month period. In response, Twitter filed suit to assert its First Amendment right to share that information.

Over half a decade of litigation later, the trial court judge resolved the case in April by dismissing Twitter’s First Amendment claim. Among the several concerning aspects of the opinion, the judge devoted only a single paragraph to analyzing Twitter’s First Amendment right to inform the public about law enforcement orders for its users’ information.

That single paragraph was not only perfunctory, but incorrect. The lower court failed to recognize one of the most basic rules underpinning the right to free speech in this country: the government must meet an extraordinarily exacting burden in order to censor speech before that speech occurs, which the Supreme Court has called “the most serious and least tolerable infringement on First Amendment rights.”

As we explained in our amicus brief, to pass constitutional scrutiny, the government must prove that silencing speech before it occurs is necessary to avoid harm that is not only extremely serious but is also imminent and irreparable. But the lower court judge concluded that censoring Twitter’s speech was acceptable without finding that any resulting harm to national security would be either imminent or irreparable. Nor did the judge address whether the censorship was actually necessary, and whether less-restrictive alternatives could mitigate the potential for harm.

This cursory analysis was a far cry from the extraordinarily exacting scrutiny that the First Amendment requires. We hope that the Ninth Circuit will say the same.

Related Cases: Twitter v. Holder

Bar Applicants Deserve Better than a Remotely Proctored “Barpocalypse”

Fri, 10/09/2020 - 3:50pm

This week was the California Bar Exam, a grueling two-day test that determines whether or not a person can practice law in California. Despite the privacy and security risks remote proctoring apps present to users, the California Bar, like several other state bars throughout the country, is requiring that students use the proctoring and surveillance app ExamSoft to protect the “integrity” of the test. The results have been nothing short of disastrous, and test-takers have taken to calling these remotely proctored exams the “Barpocalypse.”

All of this was avoidable. 

Students are reporting significant technical issues, including difficulty installing or downloading the software, failures to upload the exam files, crashes, problems logging into the site, last-minute updates to instructions, and lengthy tech support wait times resulting in students taking the exam hours late (if at all). Additionally, we have heard from numerous examinees who are concerned that their data has been breached, adding to the numerous reports of identity, credit card, and password theft reported by previous users. 

All of this was avoidable. 

Last month EFF sent a letter to the California Supreme Court, which oversees the California Bar, asking them not to require ExamSoft for the bar exam. Our concerns were threefold: Test takers should not be forced to give their biometric data to the company, which can use it for marketing purposes, share it with third parties, or hand it over to law enforcement, without the ability to opt out and delete this information. Without a clear opt-out procedure, this data collection violates the California Consumer Privacy Act. Additionally, this data collection creates obvious security risks, as the recent data breaches of proctoring apps have shown. And finally, technical issues, as well as requirements that could disadvantage users who cannot meet them, could wreak havoc on users, forcing them to withdraw from the exam. It is unfortunate that several of our concerns have been validated.

Thankfully, the California Supreme Court has heard at least some of these concerns. Shortly after receiving our letter, the Clerk and Executive Officer of the California Supreme Court asked the state bar [pdf] to propose a timetable within 60 days for the deletion of all the 2020 bar applicants’ personally identifiable information collected via ExamSoft: 

ExamSoft’s Privacy Policy appears to permit the company to use and disclose applicants’ data for many purposes, some of which appear to be unrelated to the administration of the examination. Thus, the court shares applicants’ concern that any unnecessary retention of their sensitive PII data may increase the risk of unintentional disclosure.

This is a good step forward in protecting applicants’ data, and we are glad to see it. However, it is not enough. After receiving a concerned letter from the deans of several California law schools, the Court replied [pdf], stating that ExamSoft’s proctoring tool would not make any determination about a student’s eligibility, nor would it be used to prevent any applicant from completing their exam: 

...the proctoring software will not determine any examinee’s identity, integrity, eligibility, or passing grade, nor will the software be used to prevent any applicant from completing their exam. Instead, multiple layers of human review of the exam videos will permit human proctors to make those determinations.

Unfortunately, that was not the case. At a minimum, the software’s technical issues prevented many students from taking the exam. Additional issues with remote proctoring, from the fear of not knowing what might invalidate an exam to lost time, created stressful experiences for many, many test takers. Some even report being *told* to withdraw due to tech support issues. Additionally, the Court should consider the disproportionate impact these remotely proctored exams have on examinees of color and those with disabilities, as outlined by the Consortium for Citizens with Disabilities (CCD) Rights Task Force and the Lawyers’ Committee for Civil Rights Under Law. The Court and the state bar must also recognize that the failure of the proctoring software’s facial recognition features created a much more difficult environment for some examinees—such as those with darker skin—who were forced to alter their environment, for example by shining lights on their faces during the entirety of the test.

Facebook’s Most Recent Transparency Report Demonstrates the Pitfalls of Automated Content Moderation

Thu, 10/08/2020 - 1:17pm

In the wake of the coronavirus pandemic, many social media platforms shifted their content moderation policies to rely much more heavily on automated tools. Twitter, Facebook and YouTube all ramped up their machine learning capabilities to review and identify flagged content in efforts to ensure the wellbeing of their content moderation teams and the privacy of their users. Most social media companies rely on workers from the so-called global South to review flagged content, usually under precarious working conditions and without adequate protections from the traumatic effects of their work. While the goal to protect workers from being exposed to these dangers while working from home is certainly legitimate, automated content moderation still poses a major risk to the freedom of expression online.

Wary of the negative effects the shift towards more automated content moderation might have on users’ freedom of expression, we called on companies to make sure that this shift would be temporary. We also emphasized the importance, especially in these unusual times, of the meaningful transparency, notice, and robust appeals processes called for in the Santa Clara Principles.

While human content moderation doesn’t scale and comes with high social costs, it is indispensable. Automated systems are simply not capable of consistently identifying content correctly. Human communication and interactions are complex, and automated tools misunderstand the political, social or interpersonal context of speech all the time. That is why it is crucial that algorithmic content moderation is supervised by human moderators and that users can contest takedowns. As Facebook’s August 2020 transparency report shows, the company’s approach to content moderation during the coronavirus pandemic has been lacking in both human oversight and options for appeals. While the long-term impacts are not clear, we’re highlighting some of the effects of automated content moderation across Facebook and Instagram as detailed in Facebook’s report.

Because this transparency report omits key information, it remains largely impossible to analyze Facebook’s content moderation policies and practices.  The transparency report merely shares information about the broad categories in which deleted content falls, and the raw numbers of taken down, appealed, and restored posts. Facebook does not provide any insights on its definitions of complex phenomena like hate speech or how those definitions are operationalized. Facebook is also silent on the materials with which human and machine content moderators are trained and about the exact relationship between—and oversight of—automated tools and human reviewers.

We will continue to fight for real transparency. Without it there cannot be real accountability.

Inconsistent Policies Across Facebook and Instagram

While Facebook and Instagram are meant to share the same set of content policies, there are some notable differences in their respective sections of the report. The report, which lists data for the last two quarters of 2019 and the first two of 2020, does not consistently report the data on the same categories across the two platforms. Similarly, the granularity of data reported for various categories of content differs depending on platform.

More troubling, however, are what seem to be differences in whether users had access to appeal mechanisms. When content is removed on either Facebook or Instagram, people typically have the option to contest takedown decisions. Typically, when the appeals process is initiated, the deleted material is reviewed by a human moderator and the takedown decision can get reversed and the content reinstated. During the pandemic, however, that option has been seriously limited, with users receiving notification that their appeal may not be considered. According to the transparency report, there were zero appeals on Instagram during the second quarter of 2020 and very few on Facebook.

The Impact of Banning User Appeals

While the company also occasionally restores content of its own accord, user appeals usually account for the vast majority of content that gets reinstated. An example: in Q2, more than 380 thousand posts that allegedly contained terrorist content were removed from Instagram, fewer than in Q1 (440 thousand). While around 8,100 takedowns were appealed by users in Q1, that number plummeted to zero in Q2. Now, looking at the number of posts restored, the impact of the lack of user appeals becomes apparent: during the first quarter, 500 pieces of content were restored after an appeal from a user, compared to 190 posts that were reinstated without an appeal. In Q2, with no appeal system available to users, merely 70 of the several hundred thousand posts that allegedly contained terrorist content were restored.

Meanwhile, on Facebook, very different numbers are reported for the same category of content. In Q2, Facebook acted on 8.7 million pieces of allegedly terrorist content, and of those, 533 thousand were later restored, none of them triggered by a user appeal. In comparison, in Q1, when user appeals were available, Facebook deleted 6.3 million pieces of terrorist content. Of those takedowns, 180.1 thousand were appealed, but even more—199.2 thousand—pieces of content were later restored. In other words, far fewer posts that allegedly contained terrorist content were restored on Instagram, where users couldn't appeal takedowns, than on Facebook, where appeals were allowed.
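
A back-of-the-envelope comparison of the figures cited above makes the gap visible. The sketch below (Python) simply divides restored posts by removed posts per platform and quarter, using the approximate numbers quoted in this post; it is a rough illustration, not an official metric.

    # Approximate figures for the terrorist-content category, as cited above.
    instagram = {"Q1": {"removed": 440_000, "restored": 500 + 190},   # appeals available
                 "Q2": {"removed": 380_000, "restored": 70}}          # appeals unavailable
    facebook  = {"Q1": {"removed": 6_300_000, "restored": 199_200},
                 "Q2": {"removed": 8_700_000, "restored": 533_000}}

    for name, quarters in (("Instagram", instagram), ("Facebook", facebook)):
        for quarter, d in quarters.items():
            rate = d["restored"] / d["removed"]
            print(f"{name} {quarter}: about {rate:.3%} of removed posts restored")
    # Instagram's already small restoration rate drops further in Q2, when users could
    # not appeal, while Facebook's rates for the same category are far higher in both quarters.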

Blunt Content Moderation Measures Can Cause Real Harm

Why does this matter? Often, evidence of human rights violations and war crimes gets caught in the net of automated content moderation, as algorithms have a hard time differentiating between actual “terrorist” content and efforts to record and archive violent events. This negative impact of automated content detection is disproportionately borne by Muslim and Arab communities. The significant differences in how one company enforces its rules on terrorist or violent and extremist content across two platforms highlight how difficult it is to deal with the problem of violent content through automated content moderation alone. At the same time, this underscores the fact that users can’t expect to be treated consistently across different platforms, which may increase problems of self-censorship.

Another example of the shortcomings of automated content removal: in Q2, Instagram removed around half a million images that it considered to fall into the category of child nudity and sexual exploitation. That is a significantly lower number than in Q1, when Instagram removed about one million images. While Facebook’s report acknowledges that its automated content moderation tools struggle with some types of content, the effects seem especially apparent in this category. Whereas in Q1 many takedowns of alleged child sexual abuse images were successfully appealed by users (16.2 thousand), only 10 pieces of deleted content were restored during the period in which users could not contest takedowns. These discrepancies in content restoration suggest that much more wrongfully removed content remained deleted, imperiling the freedom of expression of potentially millions of users. They also show the fundamentally important role of appeals in guarding users’ fundamental rights and holding companies accountable for their content moderation policies.

The Santa Clara Principles on Transparency and Accountability in Content Moderation—which are currently undergoing an assessment and evaluation process following an open comment period—offer a set of baseline standards that we believe every company should strive to adopt. Most major platforms endorsed the standards in 2019, but just one—Reddit—has implemented them in full.

Facebook has yet to clarify whether its shift towards more automated content moderation is indeed temporary, or here to stay. Regardless, the company must ensure that user appeals will be reinstated. In the meantime, it is crucial that Facebook allow for as much transparency and public oversight as possible.

The Selective Prosecution of Julian Assange

Wed, 10/07/2020 - 7:32pm

As the extradition hearing for Wikileaks Editor-in-Chief Julian Assange unfolds, it is increasingly clear that the prosecution of Assange fits into a pattern of governments selectively enforcing laws in order to punish those who provoke their ire. As we see in Assange’s case and in many others before this, computer crime laws are especially ripe for this form of politicization.

The key evidence in the U.S. government’s cybercrime conspiracy allegations against Assange is a brief conversation between Julian Assange and Chelsea Manning in which the possibility of cracking a password is discussed, Manning allegedly shares a snippet of that password's hash with Assange, and Assange apparently attempts, but fails, to crack it. While breaking into computers and cracking passwords is in many contexts illegal under the Computer Fraud and Abuse Act, few prosecutors would ever bother to bring a case over something as inconsequential as a failed attempt to reverse a hash. But the government has doggedly pursued charges against Assange for 10 years, perhaps because it fears that a prosecution of Assange for publishing leaked documents, activity protected by the First Amendment, is a case it is likely to lose.

With this allegation, the government is attempting to dodge First Amendment protections by painting Assange as a malicious hacker and charging him with conspiracy to violate computer crime law. This is a pattern we’ve seen before.

Cybercrime laws are a powerful tool used by authoritarian governments to silence dissent, including going after journalists who challenge government authority. The Committee to Protect Journalists has documented how a computer crime law in Nigeria was used to harass and press charges against five bloggers who criticized politicians and businessmen. Human Rights Watch has described how the Saudi Arabian government used vague language in an anti-cybercrime law to prosecute Saudi citizens who used social media to speak out against government abuses. And in Ecuador, Amnesty International has joined EFF in raising awareness about the case of Ola Bini, a Swedish open source software developer who garnered government ire and is now facing a politically-motivated prosecution for supposed computer crime violations.

This is in alignment with EFF’s 2016 whitepaper examining the prosecution history of Arab countries such as Jordan, Saudi Arabia, and Tunisia. We found these governments selectively enforced anti-terrorism and cybercrime laws in order to punish human rights attorneys, writers, activists, and journalists. The pattern we identified was that authorities would first target an activist or journalist they wanted to silence, and then find a law to use against them. As we wrote, “The system results in a rule by law rather than rule of law: the goal is to arrest, try, and punish the individual—the law is merely a tool used to reach an already predetermined conviction.”

Cybercrime laws can turn innocent exploration and journalistic inquiry into sinister-sounding (and disproportionately punished) felonies, just because they take place in a digital environment that lawmakers and prosecutors do not understand. The Intercept’s Micah Lee described the computer crime charges against Assange as “incredibly flimsy.” The conspiracy charge is rooted in a chat conversation in which Manning and Assange discussed the possibility of cracking a password. Forensic evidence and expert testimony make it clear not only that Assange did not crack this password, but that Manning only ever provided Assange with a piece of a password hash, from which it would have been impossible to derive the original password.

Furthermore, recent testimony by Patrick Eller, a digital forensics examiner, raises questions about whether the alleged password cracking attempt had anything to do with leaking documents at all, especially since the conversation took place after Manning had already leaked the majority of the files she sent to Wikileaks.

Testimony from the Chelsea Manning court martial makes it clear that many soldiers in Manning’s unit routinely used their government computers to download music, play games, download chat software, and install other programs they found useful, none of which was permitted on these machines. This included logging into computers under an administrator account, installing whatever they wanted, and sometimes deleting the administrator account afterward, so that the military sysadmin had to wipe and reimage computers again and again. Eller noted that one of Manning’s direct supervisors even asked Manning to download and install software on her computer. Indeed, the activity Assange is accused of was not even important enough to be included in the formal CFAA charges leveled against Manning.

Prosecutors don’t go after every CFAA violation, nor do they have the resources to do so. They can choose to pursue specific CFAA cases that draw their attention. And Assange, having published a wealth of documents that embarrassed the United States government and showed widespread misconduct, has been their target for years.

Assange is charged with 18 violations of the law. The majority of these counts relate to obtaining classified government information and disclosing that information to the world. As we’ve written before, the First Amendment strongly protects the rights of journalists, including Assange, to publish truthful information of clear public interest that they merely receive from whistleblowers, even when the documents are illegally obtained. This has been upheld in the Supreme Court cases New York Times Co. v. United States (finding the government could not enjoin the New York Times from publishing Vietnam war documents from whistleblower Daniel Ellsberg) and Bartnicki v. Vopper (in which a radio journalist was not liable for publishing recordings of union conversations plotting potential violence). Indeed, Wikileaks had every right to publish the leaked documents they received, and to work directly with a source in the process just as any journalist could.

The lone allegation of conspiracy to commit a computer crime has become a major focus of attention in this case; in fact, a computer crime was the only charge against Assange when he was first arrested. The charge draws that attention because it is the only one that isn’t directly about receiving and publishing leaks. But as the court assesses the charges against Assange, we urge it to see this case within the context of a repeated, well-documented pattern of governments enforcing computer crime laws selectively and purposefully in order to punish dissenting voices, including journalists. Journalism is not a crime, and journalism practiced with a computer is not a cybercrime, no matter how U.S. prosecutors might wish it were.

Alleged chat between Chelsea Manning and Julian Assange

Related Cases: Bank Julius Baer & Co v. Wikileaks

California League of Cities Should Reject Misguided Section 230 Resolution

Wed, 10/07/2020 - 4:48pm

The past few months have seen plenty of attempts to undermine Section 230, the law that makes a free Internet possible. But now we’re seeing one from a surprising place: the California League of Cities.

To be clear, the League of Cities, an association of city officials from around the state, doesn’t have the power to change Section 230 or any other federal law. But if Congress were to actually follow their lead, the policies that the League is considering approving would be disastrous for the freedom of California residents.

Section 230 states that websites and online services can’t be sued or prosecuted based on content created by their users. This straightforward rule rests on the simple fact that you are responsible for your own speech online.

This week, the League will consider a resolution proposed by the city of Cerritos, which would effectively force website owners, large or small, to surveil their sites and remove content that “solicits criminal activity.” If they don’t, they would lose Section 230 protections and be exposed to civil suits, as well as state-level criminal prosecutions, for their users’ misdeeds. The resolution goes further, requiring websites and apps to help police with the “identification and apprehension” of people deemed (by the police) to be soliciting crime of any kind.

The Cerritos proposal is based on a crime that never happened. According to the proposal, Cerritos police responded to an anonymous posting on Instagram, inviting followers to “work together to loot Cerritos [M]all.” Nothing happened, but the city of Cerritos has now asked the League to endorse dramatic changes to federal law in order to give police vast new powers.

If the vague allegation that a website was used by city residents to "solicit criminal activity" is enough to expose that website to prosecutions and lawsuits, it will result in widespread Internet censorship. If Congress were to pass such an amendment to Section 230, it would provide a lever for government officials to eliminate protest and rally organizing via social media. Online platforms would be coerced into performing police surveillance of residents in cities throughout California. That’s the last thing we need during a year when millions of Americans have taken to the streets protesting police abuses.

Two California League of Cities committees have considered and passed the resolution, despite considerable opposition. On Sept. 29, the League’s Public Safety Committee met and passed the resolution by a 19-18 vote. EFF spoke at those committee meetings and delivered a letter [PDF] expressing our opposition to committee members.

If California municipalities want to weigh in on Internet regulation that will have national ramifications, they should do so in a way that benefits their residents—like legislation that could protect net neutrality, or reduce the digital divide.

Instead, the City of Cerritos and a few allies are urging the League to ask for a new type of Internet. It would be one in which their own residents are under constant surveillance online, and local newspapers and blogs would have to either close their online discussion sections, or patrol them for behavior that might offend local police.

We hope League members vote against this resolution, and send Congress a message that Californians want an Internet that respects users’ rights—not one focused on doing the police’s work for them.

Privacy Badger Is Changing to Protect You Better

Wed, 10/07/2020 - 3:02pm

Privacy Badger was created to protect users from pervasive non-consensual tracking, and to do so automatically, without relying on human-edited lists of known trackers. While our goals remain the same, our approach is changing. It is time for Privacy Badger to evolve.

Thanks to disclosures from Google Security Team, we are changing the way Privacy Badger works by default in order to protect you better. Privacy Badger used to learn about trackers as you browsed the Web. Now, we are turning “local learning” off by default, as it may make you more identifiable to websites or other actors. If you wish, you can still choose to opt in to local learning and have the exact same Badger experience as before. Regardless, all users will continue to benefit from Privacy Badger’s up-to-date knowledge of trackers in the wild, as well as its other privacy-preserving features like outgoing link protection and widget replacement.

Google Security Team reached out to us in February with a set of security disclosures related to Privacy Badger’s local learning function. The first was a serious security issue; we removed the relevant feature immediately. The team also alerted us to a class of attacks that were enabled by Privacy Badger’s learning. Essentially, since Privacy Badger adapts its behavior based on the way the sites you visit behave, a dedicated attacker could manipulate the way Privacy Badger acts: what it blocks and what it allows. In theory, this can be used to identify users (a form of fingerprinting) or to extract some kinds of information from the pages they visit. This is similar to the set of vulnerabilities that were disclosed in Safari’s Intelligent Tracking Prevention feature and patched late last year.

To be clear: the disclosures Google’s team shared with us are purely proof-of-concept, and we have seen no evidence that any Privacy Badger users have had these techniques used against them in the wild. But as a precaution, we have decided to turn off Privacy Badger’s local learning feature by default.

From now on, Privacy Badger will rely solely on its “Badger Sett” pre-trained list of tracking domains to perform blocking by default. Furthermore, Privacy Badger’s tracker database will be refreshed periodically with the latest pre-trained definitions. This means, moving forward, all Privacy Badgers will default to relying on the same learned list of trackers for blocking.

How does Privacy Badger learn?

From the beginning, Privacy Badger has recognized trackers by their sneaky, privacy-invading behavior. Privacy Badger is programmed to look for tracking heuristics—specific actions that indicate someone is trying to identify and track you. Currently, the things Privacy Badger looks for are third-party cookies, HTML5 local storage “supercookies” and canvas fingerprinting. When local learning is enabled, Privacy Badger looks at each site you visit as you browse the Web and asks itself, “Does anything here look like a tracker?” If so, it logs the domain of the tracker and the domain of the website where the tracker was seen. If Privacy Badger sees the same tracker on three different sites, it starts blocking that tracker.
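
To make the “three strikes” rule concrete, here is a minimal sketch of how that kind of heuristic learning can be structured. It is written in TypeScript, the names (recordTrackingAction, BLOCK_THRESHOLD) are invented for this illustration, and it is not Privacy Badger’s actual source code.

    // Illustrative sketch only: a third-party domain observed tracking on three
    // different sites gets added to the block list.
    type TrackerLog = Map<string, Set<string>>; // tracker domain -> sites it was seen tracking on

    const BLOCK_THRESHOLD = 3;            // mirrors the "three different sites" rule
    const seenOn: TrackerLog = new Map();
    const blocked = new Set<string>();

    // Called whenever a heuristic (third-party cookie, "supercookie", canvas
    // fingerprinting) fires for trackerDomain while the user is on siteDomain.
    function recordTrackingAction(trackerDomain: string, siteDomain: string): void {
      const sites = seenOn.get(trackerDomain) ?? new Set<string>();
      sites.add(siteDomain);
      seenOn.set(trackerDomain, sites);
      if (sites.size >= BLOCK_THRESHOLD) {
        blocked.add(trackerDomain);       // from now on, requests to this domain are blocked
      }
    }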

But for some time now, Privacy Badger hasn’t just learned in your browser: it also came preloaded with data about common trackers on the Web. Badger Sett is an automated version of Privacy Badger that we use daily to visit thousands of the most popular sites on the Web. Each new installation of Privacy Badger comes with the list of trackers collected from the latest Badger Sett scan. This way, when you install it for the first time, it immediately starts blocking known trackers.

What were the disclosures?

The first Google Security Team disclosure was a security vulnerability based on a feature we added in July 2019: detection of first-to-third-party cookie sharing (pixel cookie sharing). Because of the way Privacy Badger checked first-party cookie strings against outgoing third-party request URLs, it would have been possible in certain circumstances for an attacker to extract first-party cookie values by issuing thousands of consecutive requests to a set of attacker-controlled third-party domains. We immediately removed the first-to-third-party cookie heuristic from Privacy Badger’s local learning in order to patch the vulnerability. (We have continued using that heuristic for pre-training in Badger Sett, where it does not expose any sensitive information.)
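
To see roughly why that heuristic was risky, consider the hypothetical TypeScript sketch below. The function name and length threshold are invented for this illustration and are not Privacy Badger’s actual code; the point is only that matching pieces of secret cookie values against attacker-chosen request URLs turns the extension’s visible reaction into an oracle.

    // Hypothetical version of the flawed check: flag a third-party request when one of its
    // URL parameter values looks like a piece of a first-party cookie.
    function looksLikePixelCookieSharing(
      firstPartyCookieValues: string[],
      thirdPartyUrl: string
    ): boolean {
      const params = Array.from(new URL(thirdPartyUrl).searchParams.values());
      return params.some(
        (param) =>
          param.length >= 8 &&
          firstPartyCookieValues.some((cookie) => cookie.includes(param))
      );
    }

    // The leak: an attacker controlling many third-party domains can issue requests whose
    // parameter values are guesses at pieces of the cookie. Because the extension's reaction
    // to a match is observable, each of thousands of guesses confirms or rules out a candidate
    // substring, gradually reconstructing the secret cookie value.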

The second set of disclosures described a set of attacks that can be carried out against any kind of heuristic learning blocker. These attacks hinge on an adversary having the ability to force a particular user’s instance of Privacy Badger to identify arbitrary domains as trackers (setting state), as well as the ability to determine which domains a user’s Privacy Badger has learned to block (reading back the state). The disclosures were similar to the ones Google previously reported about Apple’s Intelligent Tracking Prevention (ITP) feature.

One attack could go something like this: a Privacy Badger user visits a malicious webpage. The attacker then uses a script to cause the user’s Privacy Badger to learn to block a unique combination of domains like fp-1-example.com and fp-24-example.com. If the attacker can embed code on other websites, they can read back this fingerprint to track the user on those websites.
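
The “read back” half of that attack can be sketched as follows, under the same assumptions (the fp-N-example.com domains are attacker-controlled, and this particular Privacy Badger instance has been tricked into blocking some of them). The helper names are invented; the underlying observation is simply that a blocked request fails in a way page scripts can detect.

    // Probe whether a domain is blocked: a blocked request never leaves the browser,
    // so fetch() rejects. (Ordinary network failures look the same; a real attacker
    // would repeat the probe to reduce noise.)
    async function isBlocked(domain: string): Promise<boolean> {
      try {
        await fetch(`https://${domain}/probe.gif`, { mode: "no-cors", cache: "no-store" });
        return false;
      } catch {
        return true;
      }
    }

    // Read back the per-user identifier encoded in which fp-N domains were learned as trackers.
    async function readFingerprint(bits: number): Promise<string> {
      let id = "";
      for (let i = 1; i <= bits; i++) {
        id += (await isBlocked(`fp-${i}-example.com`)) ? "1" : "0";
      }
      return id;
    }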

In some cases, the ability to detect whether a particular domain has been blocked (like a dedicated content server for a particular bank) could reveal whether a user has visited particular sites, even if the attacker doesn’t run code on those sites.

More information on this style of attack can be found in the researchers’ paper. Since Privacy Badger learns in much the same way that Safari’s ITP did, it was vulnerable to the same class of attack.

What is changing?

Since the act of blocking requests is inherently observable by websites (it’s just how the Web works), the best way to prevent this class of attacks is for Privacy Badger to disable local learning by default and use the same block list for all of its users. Websites will always be able to detect whether a given domain was blocked or not during your visit. However, websites should not be able to set Privacy Badger state, nor should they be able to distinguish between individual Privacy Badger users by default.

Before today, every Privacy Badger user would start with a set of known trackers (courtesy of Badger Sett), then continue finding information about new trackers over time. A new installation of Privacy Badger would start with data from the most recent Badger Sett scan before its release, but future updates would not modify the tracker list in any way.

Now, by default, Privacy Badger will no longer learn about new trackers based on your browsing. All users (with the default settings) will use the same tracker-blocking list, generated by Badger Sett. In future updates to Privacy Badger, we plan to update everyone’s tracker lists with new data compiled by Badger Sett. That means users who do not opt in to local learning will continue receiving information about new trackers we discover, keeping their Badgers up-to-date.

For anyone who opts back in to local learning, Privacy Badger will work exactly as it has in the past. These users will continue blocking trackers based on what their own Privacy Badger instance learns, and they will not receive automatic tracker list updates from EFF.
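
In code terms, the update policy described in the last two paragraphs might look something like the sketch below. The state shape and function name are assumptions made for illustration, not Privacy Badger’s actual storage schema.

    interface BadgerState {
      localLearningEnabled: boolean;   // the user opted back in to learning from their own browsing
      trackerList: Set<string>;        // domains currently slated for blocking
    }

    // Apply a newly shipped Badger Sett list when the extension updates.
    function applyShippedList(state: BadgerState, shippedList: Set<string>): BadgerState {
      if (state.localLearningEnabled) {
        // Opted-in users keep what their own Badger has learned; no automatic overwrite.
        return state;
      }
      // Default users follow the latest pre-trained list compiled by Badger Sett.
      return { ...state, trackerList: new Set(shippedList) };
    }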

The trackers included in the pre-trained Badger Sett list are compiled using the same techniques Privacy Badger has always used: browsing to real websites, observing the behavior of third-party domains on those sites, and logging the trackers among them. Regardless of how you choose to use Privacy Badger, it will continue to adapt to the state of trackers in the wild.

Why is local learning still an option?

Privacy Badger is meant to be a no-configuration-necessary, mostly install-and-forget kind of tool. We feel comfortable turning off local learning because we believe the majority of Privacy Badger’s protection is already captured by the pre-trained list, and we don’t want to expose users to any potential risk without informed opt-in. But we’re leaving local learning as an option because we think it presents a reasonable tradeoff that users should be able to make for themselves.

The main risk of enabling local learning is that a bad actor can manipulate Privacy Badger’s state in order to create a unique identifier, a kind of Privacy Badger-specific fingerprint. A tracker that does this can then identify the user across sites where the tracker can run JavaScript. Additionally, local learning enables a limited form of history sniffing where the attacker can try to determine whether a Privacy Badger user had previously visited a particular website by seeing how many strikes it takes for Privacy Badger to learn to block a (legitimate) third-party domain that appears only on that website. We see these as serious concerns but not showstoppers to local learning altogether.

There are already many other kinds of information the browser discloses that can be used for fingerprinting. Most common fingerprinters use a combination of techniques, often wrapped up in a single script (such as FingerprintJS). Detecting any one of the techniques in use is enough for Privacy Badger to flag the domain as a fingerprinter. Compared with existing methods available to bad actors, fingerprinting Privacy Badger’s local learning is likely to be less reliable, more resource-intensive, and more visible to users. Going forward, it will only apply to the small subset of Web users who have Privacy Badger installed and local learning enabled. Furthermore, if caught, companies will face reputational damage for exploiting users’ privacy protections.

The risk of history sniffing is also not unique to Privacy Badger. Known history sniffing attacks remain in both Firefox and Chrome. Exploiting Privacy Badger to ascertain bits of users’ history will be limited to Privacy Badger users with local learning enabled, and to websites which use unique third-party domains. This is then further limited by Privacy Badger’s pre-training (did the user visit the domain, or was the domain visited in pre-training?) and Privacy Badger’s list of domains that belong to the same entity (domains on that list will always be seen as first party by Privacy Badger and thus immune to this exploit). Existing browser history sniffing attacks are not bound by these limitations.

Some users might want to opt back in to local learning. The pre-trained list is designed to learn about the trackers present on thousands of the most popular sites on the Web, but it does not capture the “long tail” of tracking on websites that are less popular. If you regularly browse websites overlooked by ad/tracker blocker lists, or if you prefer a more hands-on approach, you may want to visit your Badger’s options page and mark the checkbox for learning to block new trackers from your browsing.

The future

Privacy Badger still comes with all of its existing privacy benefits like outgoing link tracking protections on Google and Facebook and click-to-activate replacements for potentially useful third-party widgets.

In the coming months, we will work on expanding the reach of Badger Sett beyond U.S.-centric websites to capture more trackers in our pre-trained lists. We will keep improving widget replacement, and we will add new tracker detection mechanisms.

In the longer term, we will be looking into privacy-preserving community learning. Community learning would allow users to share the trackers their Badgers learn about locally to improve the tracker list for all Privacy Badger users.

Thanks again to Artur Janc, Krzysztof Kotowicz, Lukas Weichselbaum and Roberto Clapis of Google Security Team for responsibly disclosing these issues.

Activists Sue San Francisco for Wide-Ranging Surveillance of Black-Led Protests Against Police Violence

Wed, 10/07/2020 - 1:46pm
Violating San Francisco’s Surveillance Technology Ordinance, SFPD Secretly Used Camera Network to Spy on People Protesting Police Killing of George Floyd

San Francisco—Local activists sued San Francisco today over the city police department’s illegal use of a network of more than 400 non-city surveillance cameras to spy on them and thousands of others who protested as part of the Black-led movement against police violence.

The Electronic Frontier Foundation (EFF) and the ACLU of Northern California represent Hope Williams, Nathan Sheard, and Nestor Reyes, Black and Latinx activists who participated in and organized numerous protests that crisscrossed San Francisco, following the police killing of George Floyd.

During the first week of mass demonstrations in late May and early June, the San Francisco Police Department (SFPD), in defiance of a city ordinance, tapped into a sprawling camera network run by a business district to conduct live mass surveillance without first going through a legally required public process and obtaining permission from the San Francisco Board of Supervisors.

“San Francisco police have a long and troubling history of targeting Black organizers going back to the 1960s,” said EFF Staff Attorney Saira Hussain. “This new surveillance of Black Lives Matter protesters is exactly the kind of harm that the San Francisco supervisors were trying to prevent when they passed a critical surveillance technology ordinance last year. And still, with all eyes watching, SFPD brazenly decided to break the law.”

“In a democracy, people should be able to freely protest without fearing that police are spying and lying in wait,” said Matt Cagle, Technology and Civil Liberties Attorney at the ACLU of Northern California. “Illegal, dragnet surveillance of protests is completely at odds with the First Amendment and should never be allowed. That the SFPD flouted the law to spy on activists protesting the abuse and killing of Black people by the police is simply indefensible.”

“Along with thousands of people in San Francisco, I took to the streets to protest police violence and racism and affirm that Black lives matter,” said Hope Williams, the lead plaintiff in this lawsuit and a protest organizer. “It is an affront to our movement for equity and justice that the SFPD responded by secretly spying on us. We have the right to organize, speak out, and march without fear of police surveillance.”

Records obtained and released by EFF in July show SFPD received a real-time remote link to more than 400 private surveillance cameras. The vast camera network is operated by the Union Square Business Improvement District (USBID), a non-city entity. These networked cameras are high definition, allow remote zoom and focus capabilities, and are linked to a software system that can automatically analyze content, including distinguishing between when a car or a person passes within the frame.

The lawsuit calls on a court to order San Francisco to enforce the Surveillance Technology Ordinance and bring the SFPD back under the law. San Francisco’s Surveillance Technology Ordinance was enacted in 2019 following a near unanimous vote of the Board of Supervisors.

The plaintiffs, all of whom participated in protests against police violence and racism in May and June of 2020, are:

  • Hope Williams, a Black San Francisco activist. Williams organized and participated in several protests against police violence in San Francisco in May and June 2020.
  • Nathan Sheard, a Black San Francisco activist and community organizer at EFF. In his personal capacity, Sheard attended one protest and helped connect protestors with legal support in May and June 2020.
  • Nestor Reyes, a Latinx activist, native San Franciscan, and community healer. Reyes organized and participated in several protests against police violence in San Francisco in May and June 2020.

 For the complaint: https://www.eff.org/document/williams-v-san-francisco-complaint

Link to video statement of attorneys and client

EFF case page
ACLU case page

Contact: Saira Hussain, Staff Attorney, saira@eff.org; press@aclunc.org

Announcing Global Privacy Control in Privacy Badger

Wed, 10/07/2020 - 9:00am

Today, we’re announcing that the upcoming release of Privacy Badger will support the Global Privacy Control, or GPC, by default.

GPC is a new specification that allows users to tell companies they'd like to opt out of having their data shared or sold. By default, Privacy Badger will send the GPC signal to every company you interact with alongside the Do Not Track (DNT) signal. Like DNT, GPC is transmitted through an HTTP header and a new JavaScript property, so every server your browser talks to and every script it runs will know that you intend to opt out of having your data shared or sold. Compared with ad industry-supported opt-out mechanisms, GPC is simple, easy to deploy, and works well with existing privacy tools.
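
For a sense of what honoring the signal looks like on a site’s end, here is a small TypeScript sketch. It checks the navigator.globalPrivacyControl property defined by the GPC proposal (servers can likewise look for the Sec-GPC: 1 request header); the doNotSell function is a placeholder standing in for whatever opt-out logic a site actually implements.

    // Client-side sketch: treat a visitor as opted out if GPC (or the older DNT signal) is set.
    type PrivacySignals = Navigator & { globalPrivacyControl?: boolean; doNotTrack?: string | null };

    function userHasOptedOut(): boolean {
      const nav = navigator as PrivacySignals;
      const gpc = nav.globalPrivacyControl === true;   // the GPC JavaScript property
      const dnt = nav.doNotTrack === "1";              // the legacy Do Not Track signal
      return gpc || dnt;
    }

    // Placeholder for a site's actual opt-out handling.
    function doNotSell(): void {
      console.log("Opt-out signal detected: skip data sale/sharing and third-party ad calls.");
    }

    if (userHasOptedOut()) {
      doNotSell();
    }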

DNT vs. GPC

Do Not Track is an older proposed web standard, meant to tell companies that you don't want to be tracked in any way. (Learn more about what we mean by "tracking" here). Privacy Badger was built around DNT, and will continue to send a DNT signal along with every request your browser makes. Privacy Badger gives third-party companies a chance to comply with DNT by adopting EFF’s DNT policy, and blocks those that look like they're tracking you anyway.

If DNT already expresses your intent to opt out of tracking, why do we need GPC? When DNT was developed, many websites simply ignored users’ requests not to be tracked. That's why Privacy Badger has to act as an enforcer: trackers that don't want to comply with your wishes get blocked. Today, users in many jurisdictions, including California, Nevada, and the European Economic Area, have the legal right to opt out of some kinds of tracking. That's where GPC comes in.

GPC is an experimental new protocol for communicating opt-out requests that align with privacy laws. For example, the California Consumer Privacy Act gives California residents the right to opt out of having their data sold. By sending the GPC signal, Privacy Badger is telling companies that you would like to exercise your rights. And while Privacy Badger only enforces DNT compliance against third-party domains, GPC applies to everyone—the first-party sites you visit, and any third-party trackers they might invite in.

GPC is a new proposal, and it hasn't been standardized yet, so many sites will not respect it right away. Eventually, we hope GPC will represent a legally-binding request to all companies in places with applicable privacy laws.

To stop tracking, first ask, then act

The CCPA and other laws are not perfect, and many of our users continue to live in places without strong legal protections. That’s why Privacy Badger continues to use both approaches to privacy. It asks websites to respect your privacy, using GPC as an official request under applicable laws and DNT to express what our users actually want (to opt out of all tracking). It then blocks known trackers that refuse to comply with DNT from loading at all.

Starting with this release, Privacy Badger will set the GPC signal by default. Users can opt out of sending this signal, along with DNT, in their Privacy Badger settings. In addition, users can disable Privacy Badger on individual first-party sites in order to stop sending the GPC signal to those sites.
