Electronic Frontier Foundation

The Key To Fixing Copyright Is Ending Massive, Unpredictable Damages Awards

EFF - Thu, 01/23/2020 - 6:41pm

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

What if a single parking ticket carried a fine of up to a year's salary? What if there were no way to know consistently how much the fine would be before you got it? And what if any one of thousands of private citizens could decide to write you a ticket? What would happen? People would start avoiding public parking and stay home more often. Business would decline. The number of false or unfair tickets would rise. Everyone would lose confidence in the system—and in the law—as parking became a huge gamble.

Something very close to this scenario is a reality in copyright law. Copyright holders who sue for infringement can ask for "statutory damages." That means letting a jury decide how large a penalty the defendant will have to pay—anywhere from $750 to $150,000 per copyrighted work (and as little as $200 for "innocent" infringement), without requiring any evidence of actual financial losses or illicit profits. That's a big problem for anyone who uses works in lawful but non-traditional ways. Musicians, bloggers, video creators, software developers, and others gamble with these massive damages whenever their art or technology touches another’s work. They risk unpredictable, potentially devastating penalties if a copyright holder objects and a court disagrees with their well-intentioned efforts.

On Copyright Week, we like to talk about ways to improve copyright law. One of the biggest improvements available is to fix U.S. copyright’s broken statutory damages regime. In other areas of civil law, the courts have limited jury-awarded punitive damages so that they can’t be far higher than the amount of harm caused. The Supreme Court has held that outsized jury awards for fraud, for example, can offend the Constitution’s Due Process Clause. But somehow, that’s not the case in copyright: some courts have ruled that Congress can set damages that are potentially hundreds of times greater than actual harm, if it chooses to do so.

Massive, unpredictable damages awards for copyright infringement, such as a $222,000 penalty for sharing 24 music tracks online, are the fuel that powers exploitative litigation business models: law firms and companies that bring dubious claims of infringement against thousands of Internet users, demanding cash settlements to avoid being served with a lawsuit. These businesses, often called copyright trolls, use the threat of statutory damages to coerce settlements, often without doing the work to make sure their accusations are correct.

Statutory damages also magnify other problems in copyright law, and make reform more difficult. The Music Modernization Act, passed in 2018, was the biggest overhaul of the licensing market for songwriters in a generation, addressing a market that nearly everyone agreed was broken. But a minority resisted any reform, apparently preferring the mere possibility of a multimillion-dollar statutory damages windfall to a smaller but steadier stream of royalty payments.

By turning litigation into a game of financial Russian roulette, statutory damages also discourage artistic and technological experimentation at the boundaries of fair use. None but the largest corporations can risk ruinous damages if a well-intentioned fair use crosses the fuzzy line into infringement.

Many reforms are possible. Congress could limit statutory damages to a multiple of actual harm. That would bring U.S. copyright in line with other countries, and with other civil laws like patent and antitrust. Congress could also remove statutory damages in cases where the defendant has a good-faith claim of fair use, which would encourage creative experimentation. Fixing statutory damages would make many of the other problems in copyright law more easily solvable, and create a fairer system for creators and users alike.

The Public Domain Is the Rule, Copyright Is the Exception

EFF - Thu, 01/23/2020 - 1:23pm

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Remember the monkey selfie? Animal rights organizations and a photographer went to court to fight over who owned the copyright in a picture where the photographer set up the camera but the animal took the pic, and great fun was had by all. But as our friends at Public Knowledge noted, maybe no one "owned" the picture.

And that’s just fine. Most of our culture, knowledge, and history isn’t "owned" by anyone at all—it is available for all to use in the vibrant and ever-expanding public domain. This domain is populated by formerly copyrighted material and material that was never copyrightable in the first place. The first category is what most people probably think of when they think of the public domain: things such as literature, art, music, and movies for which the copyright term has expired or the rightsholder has dedicated the work to the public domain. Under the original U.S. copyright law, each generation was largely free to use the copyrighted material of previous generations, because terms were much shorter (and so was the scope of what could be copyrighted). But terms grew longer and longer until, one year ago, material from 1923 onward finally started entering the public domain each year. There doesn’t seem to be much appetite to extend U.S. terms further (unlike in some other countries), so presumably these kinds of works will continue to enrich the U.S. public domain.

Meanwhile, the second, less glamorous, category—the one of ideas, facts, procedures, methods of operation, laws, and regulations that are deemed to belong to everyone—has become highly contested.

Justice Louis Brandeis wrote, “The general rule of law is, that the noblest of human productions—knowledge, truths ascertained, conceptions, and ideas—become, after voluntary communication to others, free as the air to common use.” Some very powerful interests would beg to disagree.

For example, a pitched battle has been raging for a decade about the copyright in Application Programming Interfaces (APIs), which are, generally speaking, specifications that allow programs to communicate with each other. It’s headed to the Supreme Court in March. The API battle follows on the heels of another dispute over whether the State of Georgia can claim a copyright in its official code, which, by legislative fiat, includes annotations. Meanwhile, EFF is representing Public.Resource.Org in a years-long fight over whether private entities can claim copyright in huge swaths of the Code of Federal Regulations.

One thing, other than copyrightability, unites these legal battles: all of them concern content developed by people who did not need a copyright incentive. In other words, they would have done the work (and mostly did do the work) without compensation from copyright royalties.

As we explained in our first amicus brief in the API litigation, for example, software developers assumed APIs were excluded from copyright protection—but that didn’t stop them from writing and using them. In fact, that exclusion was essential to the development of the home computer, operating systems, programming languages, the Internet, and cloud computing—creating a statutory monopoly in APIs would likely have created a licensing thicket that would have slowed innovation to a crawl.
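To make the distinction concrete, here is a loose sketch in Python (the actual dispute involved Java declarations, and every name below is made up for illustration) of the difference between an API and an implementation of that API:

```python
# The function name, parameters, and documented behavior form the "API":
# the specification that other programs rely on. The body is just one of
# many possible independent implementations. These names are hypothetical,
# not drawn from the actual Oracle v. Google record.

def max_num(a, b):
    """API contract: return the larger of a and b."""
    return a if a >= b else b          # one implementation

def max_num_reimplemented(a, b):
    """An independent reimplementation honoring the same contract."""
    return b if b > a else a           # different code, same behavior

# Caller code written against the API works identically with either:
assert max_num(3, 7) == 7
assert max_num_reimplemented(3, 7) == 7
```

The point of the sketch: if the contract itself (the declarations) were copyrightable, the holder could lock out every compatible reimplementation, which is why developers long assumed APIs were free to reuse.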

As for the Georgia Code Annotated, the code itself is developed like most laws, by legislators informed by lobbyists and the general public. All of those people have their own reasons for drafting laws, and none involve earning copyright royalties. The annotations themselves are developed by a private company that might enjoy the royalties, but could get them anyway by simply publishing their own version, without the state’s official imprimatur.

And the last dispute concerns standards, such as the National Fire Safety Code, that are initially developed by volunteers, government officials, and other professionals experienced in the relevant industry, and later incorporated by reference into law. They do the work to contribute to the public interest, mostly without payment, and never receive any royalties for it. The purported copyright holders are the organizations that help organize the process—but those organizations receive ample compensation through dues, selling educational materials, and trainings.

Copyright maximalists denigrate the public domain as the space that creative works “fall into” eventually, once their financial value has been thoroughly exploited. In reality, many more valuable works occupy the public domain than the private one, and its contributors are legion. Their work, and the public domain itself, remind us that the copyright monopoly, and the assumptions it embodies about how to spur creativity, represent a limited exception to the general rule: that most production of knowledge and culture has always taken place within the public domain. Chipping away at the public domain will necessarily inhibit, rather than encourage, new creativity.

Related Cases: Oracle v. Google; Freeing the Law with Public.Resource.Org

EFF Activists To Demonstrate Against Sell Out of .ORG to Private Equity at Los Angeles Protest

EFF - Thu, 01/23/2020 - 10:39am
Rally at ICANN’s LA Headquarters Will Feature EFF Special Advisor Cory Doctorow, Nonprofit Defenders

Los Angeles—Electronic Frontier Foundation (EFF) activists will join advocates for other public interest nonprofits to protest ICANN’s plans to sell out the Internet .ORG domain registry at a demonstration tomorrow outside ICANN’s board of directors meeting in Los Angeles.

EFF, nonprofit advocacy group NTEN, digital rights groups Fight for the Future and Demand Progress, and other nonprofits will participate in a rally to call on the Internet Corporation for Assigned Names and Numbers, or ICANN, to halt a transaction under which Ethos Capital, a private equity firm run by domain name industry insiders, will pay $1.135 billion to take over the lucrative .ORG registry, which collects fees for the use of the .org domain. Organizations working in the public interest around the world, in the arts, religion, culture, the environment, race, and poverty, will be affected by the sale.

“ICANN and its board are mostly invisible to the public and nonprofit world, but their power and influence over the health and well-being of public interest groups that serve the needs of hundreds of millions of people around the world cannot be overstated,” said EFF Special Advisor Cory Doctorow, who will speak at the rally. “The ICANN board needs to know that their actions are under scrutiny. They are out of touch with the people who both run and rely on .orgs around the world. ICANN should listen to the global nonprofit community and refuse to treat .ORG like a piece of real estate that can be sold to the highest bidder.”

Under the deal, ginned up with little disclosure, Ethos will acquire Public Interest Registry (PIR), itself a nonprofit that has for the past 17 years overseen the .ORG registry, and then change PIR into a for-profit entity. The sale is opposed by 700 organizations, from the Girl Scouts of America and the League of Women Voters to Farm Aid and Meals On Wheels. Some 21,000 individuals have also signed in opposition, as have six members of Congress.

ICANN’s Board of Directors* is gathering in Los Angeles tomorrow for a regular meeting at which the .ORG sale is expected to be discussed; a final vote is due by February 17. The rally is being hosted by NTEN.

“.ORG needs a steward that will stand up for the nonprofit community in the face of pressure to raise prices, or to surveil or censor nonprofits,” said EFF Activism Director Elliot Harmon, who will speak at the rally. “The nonprofit community’s overwhelming concern about the announced sale shows that the community recognizes this for what it is: a blatant money grab by a private equity firm that will inevitably have to prioritize profits over the needs of nonprofits.”

“The ICANN board has the opportunity to take action here and demand that the .org domain ownership is controlled and decided through the appropriate, multi-stakeholder process, in keeping with ICANN’s policy-making practices,” said NTEN Chief Executive Officer Amy Sample Ward, also speaking at the demonstration. “A private billion dollar deal doesn't meet that criteria.”

Save.ORG Rally and Demonstration

EFF Special Advisor Cory Doctorow
EFF Staff Attorney Cara Gagliano
EFF Activism Director Elliot Harmon
NTEN CEO Amy Sample Ward

12025 E Waterfront Dr.
Playa Vista, CA 90094

Friday, January 24
9 am – 11 am

*ICANN Board Member Sarah Deutsch is a member of EFF’s Board of Directors.


Contact:
Elliot Harmon, Activism Director, elliot@eff.org
Amy Sample Ward, CEO, NTEN, amy@nten.org

Tale of Jailbreaking Disobedient IoT Appliances Shortlisted for the National Canada Reads Prize

EFF - Wed, 01/22/2020 - 5:19pm

In Unauthorized Bread, a novella by EFF Special Advisor Cory Doctorow published in his 2019 Tor Books collection Radicalized, a refugee named Salima leads a mass jailbreaking of the locked-down Internet of Things appliances in a subsidized housing unit in Boston. With this act, Salima and others risk eviction, felony prosecution under Section 1201 of the Digital Millennium Copyright Act and deportation to the countries they fled in fear of their lives.

Radicalized has just been named a finalist in Canada Reads, the Canadian Broadcasting Corporation's national book prize. In honor of the occasion, Ars Technica has published Unauthorized Bread in full.

Unauthorized Bread is also in development for TV by Topic, parent company of The Intercept, and is being adapted as a young adult graphic novel by First Second, in collaboration with the comics creator Jennifer Doyle.

Doctorow returned to EFF in 2015 to fight Digital Rights Management, and helped with our suit to overturn the law that restricts removing or tampering with DRM. Unauthorized Bread and Radicalized represent another front in our battle for a free, fair and open tech world: using stories to make urgent but abstract policy questions real and vivid to broad audiences. Congratulations, Cory!

EFF to Supreme Court: Criminal Immigration Statute Threatens Free Speech Online

EFF - Wed, 01/22/2020 - 3:28pm

EFF is urging the U.S. Supreme Court to strike down a law that poses a serious threat to online speech by criminalizing speech that “encourages” unlawful immigration. EFF filed an amicus brief on behalf of itself and Immigrants Rising, the Internet Archive, and Daphne Keller.

The case, United States v. Sineneng-Smith, questions whether 8 U.S.C. § 1324(a)(1)(A)(iv) (“the Encouragement Provision”)—which makes it a felony to “encourage” an undocumented immigrant to enter or remain in the United States—violates the First Amendment. The accused, an immigration consultant charged under the Encouragement Provision, was convicted in the district court. However, the Ninth Circuit reversed her conviction, holding that the Encouragement Provision was facially unconstitutional. The court found that the statute was so overbroad that it would encompass speech ranging from “a loving grandmother who urges her grandson to overstay his visa” to a “post directed at undocumented individuals on social media” that encourages them to stay in the United States. As the court explained:

We do not think that any reasonable reading of the statute can exclude speech. To conclude otherwise, we would have to say that "encourage" does not mean encourage . . . At the very least, it is clear that the statute potentially criminalizes the simple words spoken to a son, a wife, a parent, a friend, a neighbor, a coworker, a student, a client: "I encourage you to stay here."

As our amicus brief explains, the Internet is full of protected speech that encourages immigrants to remain in the country, whether those immigrants are here lawfully or unlawfully. Social media users share posts that declare #HomeIsHere in support of undocumented youth. Service organizations direct immigrants to financial, educational, and health resources. Advocacy groups publish “know your rights” guides explaining that immigrants have the right to remain silent when questioned by immigration officers. Twitter, Reddit, and Facebook have each taken to their own platforms to express support for immigrants and oppose President Trump’s immigration policies.

All of this speech is exactly the kind of political advocacy that the First Amendment is designed to protect—but all of this speech may be criminalized under the Encouragement Provision.

And it is not only the speakers themselves who may be impacted by this law. Intermediaries may be chilled from even hosting this type of speech, for fear of risking criminal prosecution either as the publishers of the speech, or as aiders and abettors. Although federal law often protects online intermediaries from liability for users’ speech that the intermediaries host, intermediaries receive no immunity from federal criminal enforcement.

When faced with the risk that the speech they are transmitting may be illegal, intermediaries commonly choose to broadly restrict all speech about a topic rather than take on the impossible task of sifting through an enormous volume of user content to try to parse out the specific speech that’s prohibited. We’ve seen this exact dynamic play out before—most recently when Congress outlawed online advertisements for sex work and platforms responded with sweeping prohibitions on adult content far beyond what the law had banned.

The Encouragement Provision raises the specter that platforms seeking to minimize their own criminal exposure under the statute may censor all expression about immigration wholesale. Perhaps even more troublingly, platforms may remove all speech favoring an immigration policy based on principles of inclusion and decriminalization—because such expression is likeliest to violate the statute—while allowing speech favoring more restrictive and punitive immigration policies to remain online.

The Constitution protects our right to comment on and advocate for and against government policies. Immigration is one of the most hotly debated issues of our times, and the Supreme Court should strike down this dangerous and unlawful statute. 

Speaking Freely: An Interview with Addie Wagenknecht

EFF - Wed, 01/22/2020 - 2:18pm

Addie Wagenknecht is an artist and researcher based between the U.S. and Europe. We met a few years back when she invited me to be part of Deep Lab, a “collaborative group of cyberfeminist researchers, artists, writers, engineers, and cultural producers” that she co-founded in 2014. We’ve shared the stage together twice at re:publica in Berlin, and I always enjoy having the chance to chat with her about art and free expression.

This conversation was no exception, as it journeyed from censorship in the art world to the restrictions social media place on profanities [ed. note: this interview contains a few of those] to the impact of conspiracy theories on our societies. As a successful artist, Addie brings an important perspective to this ongoing conversation about what free expression means. 

Jillian C. York: Let’s get started. What does free expression mean to you?

I’m looking at it from the point of view of somebody who works in arts and culture; a lot of it has to do with how that’s translated within institutions and museums, commercial galleries, and the art world.

For me, it’s specifically about the right of expression creatively and being able to translate thoughts or political situations into things that can be shown in an open and public space, with the caveat that a lot of these spaces are donor and privately-run, so there are a lot of stipulations around what can be shown or how it can be shown.

York: Would you say that you identify as a defender of or advocate for free expression?

Yeah, I would say I definitely advocate for freedom of expression, freedom of speech, and the right to those freedoms both inside and outside the U.S.

York: Would you mind sharing a personal experience you’ve had with censorship or with utilizing your free expression for the common good?

I think the first thing that comes to mind is a project called Webcam Venus [ed. note: link contains nudity], which is about highbrow and lowbrow culture, what is considered art versus what is considered porn, and how you differentiate between the two. It’s a piece where sexcam workers pose as figures from traditional or classical pieces of well-known art, in an institutional sense. It takes paintings that are frequently cited within our history or the art canon and recontextualizes them in a more contemporary medium, using webcams and sex performers.

That piece was installed a few times for more institutional museum shows, but also for more commercially sponsored events. It was shown at Internet Week in New York City... I always think of New York as this kind of progressive place where you can do what you want, and it’s radical and open to new ideas. The piece was installed for this marketing or tech week in New York, and within five to ten minutes of the piece going up, someone came up to me and said they couldn’t show it, that it had to be taken down immediately. So I asked why they wanted it taken down, and they said, “Google is our sponsor, and they don’t want this up. It’s inappropriate, and it’s not something we want people seeing.” So they shut it down; they thought it was completely inappropriate to have in the context of this Internet culture week.

York: I do remember that, I think it came up in one of the talks we did together, but I didn’t know that part of the story.

It’s crazy, because I’ve always thought that imagery and pornography are what have driven so much of technology, the advancement of Wi-Fi, and higher speeds. But the fact that it’s completely siloed from the rest of the internet when you’re celebrating internet culture was really disappointing to me, especially in New York, which I thought was so open to new ideas and discourse. It being shut down just popped that balloon for me.

York: That’s really wild. What was the impetus for that project? What inspired you to create it in the first place?

I was collaborating with Pablo Garcia, and our constraint was that we wanted to create a project together while in different time zones—him at the Art Institute of Chicago, and me in Europe. So we had to come up with a project that was ethereal, in the sense that we didn’t need to create something physical.

I’d come across a talk at Transmediale in Berlin that year about the history of chatrooms and bulletin boards, told specifically through a queer lens, and the way these speakers talked about access to webcam performers really intrigued me. As they were going through the history of bulletin boards and IRC, one of the panelists brought up this interesting site that was read/write in terms of porn: you have access to the cam performers as well as to the chat, so what they’re willing to do can be communicated in both directions. He had presented it in the context of this panel, and I started to research more about it. I always liked Chatroulette, where you go on and it’s different webcams shuffled with other people all over the world. I was already interested in this sort of randomness and accessibility through the web, and in finding ways to do that with spaces of the web.

With art, sexuality is still a fraught topic within institutions, so I thought that was an interesting space to explore. I presented that idea to Pablo, I sent him the link to the talk I’d seen, and we kind of just fell into that. A lot of his work looks at art and architecture from a historical perspective, so somehow within the collaboration we came up with this idea: how do you take historical work that is defined as art through texts, academics, and institutions, and recreate it through a contemporary lens, and what does that look like? Is it still art even though it references the prior works, but in a more contemporary medium?

York: That’s really interesting. In fact, one of the interesting things to me is how platforms censor nudity when it’s in a more modern format.

Right, right.

York: Okay, so here’s a different question. Do you have a free speech hero?

I don’t know that I do. In the past few years, the Internet and political climate have changed so much and people have gone in different directions. But my longstanding activism heroes have always been the Guerrilla Girls. I think they started in the ‘80s, and they’re these women who are decently high-profile in the arts, but they go around wearing gorilla masks and do a lot of their work anonymously, giving themselves the power of that anonymity to advocate for free speech and against inequality within institutions, and other things people aren’t willing to talk about publicly.

York: I've always found their work really interesting. Okay, changing topics a little: let’s talk about social media. What concerns you at the moment?

That conspiracy theories are becoming mainstream. Ten years ago, scientific consensus was considered factual, for example, but that has been completely dismissed by those in power. Somehow we have reached a point where not wanting the world to burn is considered 'politically radical'.

See, our communities, online and off, depend on shared truths. And if you think about it, what the Internet—and the social media spaces that so many of us rely on daily—has created is a sense of identity, by over-inflating the value of our opinions while equally maximizing the sense of opposition, all while simultaneously destroying the sense of personal impact on just about anything. We are so totally unaware of our own confirmation biases on these platforms because everything can appear equally legitimate. Our personal lives are quite literally monetized and becoming public domain, but more than that, I am afraid people have lost the ability to parse facts, and our democracy and freedoms—of speech, of art and expression—literally depend on that.

Companies like Facebook will not change because their entire profit model relies on clicks, and nothing generates those like lies, conspiracy theories, and declarations of victimhood by some of the most powerful and privileged men in the world. If social media companies were held to the same basic standards as print, movies, or TV, perhaps that could alleviate some of these issues.

If, for example, Facebook had to fact-check ads before running them, they could prevent the micro-targeting and dissemination of lies and 'fake news' before it starts. But as of now they do not—instead they actually help you target your audiences for maximum engagement and views, further spreading and encouraging disinformation on a massive scale.

York: So do you think that any online speech should be regulated, and if so, how?

I’m very much an advocate for freedom of expression and speech and the right to express those things. Unfortunately, people living outside of the U.S. aren’t protected by the constitutional right to that, so protecting it would require some sort of protections for people who exist outside of the U.S., because they don’t necessarily have those rights.

I’m also kind of on the team of anti-regulation and anti-censorship. I want to think that those things can be self-regulated within communities and within spaces both online and off. So, I don’t necessarily believe that [speech] needs to be regulated. I don’t think the way it’s being regulated right now, with companies like Facebook having people who are trained in various countries with different social and cultural norms censoring content, is necessarily the way to go. I do find the flagging process, where for example content is hidden behind a warning, useful. I’ve noticed a few sites have started to do this, like Twitter and YouTube: when something gets enough downvotes, you can choose to either click through or not.

York: Yes. Twitter’s is really interesting though, because it seems to be based on a list of keywords. “Vagina,” for example, seems to be included as an “offensive” term, and I find that so strange.

Yeah, I mean, it’s interesting to me that “dick” or “vagina” or those sorts of words are deemed offensive, but you could say something really hateful about someone, like using racial slurs, and that would be tolerated, while anatomy isn’t. How you differentiate those things is the golden ticket in terms of solving some of this. How do you allow for freedom of expression without censoring those things? If you could figure that out, I think you’d have a really viable social media platform.

I think it’s interesting that Jordan Peterson is starting this anti-censorship bulletin board sort of thing. I’m really interested in how that’s going to transpose with his new fanbase. I heard they’re not going to censor anything or limit who has access to it. I’m kind of interested to see if it becomes another 4chan or 8chan sort of thing or if it becomes a space for viable discourse.

York: I’m sort of curious about that too. What you said about the golden ticket—I don’t really see anyone working that hard to make that possible. What’s really difficult is that there really isn’t yet an AI that can detect that sort of thing. If someone says “women are inferior to men,” there’s currently no way that AI detects that as a hateful statement, whereas it’s easy to plug in words that people find offensive. So words like “fuck” are really interesting in this context.

Yes—words like “fuck” or “shit” or “bitch” have this duality depending on how they’re contextualized within the language. And then it comes back to “who wrote the AI?” because the AI obviously has bias depending on how it was written, or who wrote it.

To some extent, I think people think of code and science and math as inherently neutral, but in fact it’s kind of embedded in the biases of those who write it. But if you write a computer vision program...for example, have you seen this gif on Twitter that says “why computer vision isn’t neutral” and shows a white hand going under a soap dispenser and the soap comes out, but then a black hand goes under it and the dispenser can’t detect it.

York: Wow, that’s just so blatant.

I think about it a lot because we have so much trust in these technologies and the larger systems and corporations that build them. Maybe that’s changed on a social scale, and people are beginning to realize that there are these kinds of implicit biases built into these systems, so looking at who’s building them and what’s being built is so interesting. And that’s why I think what Kate Crawford and Meredith Whittaker are doing at AI Now is so interesting.

York: Okay, let’s see...is there anything I’ve missed?

I definitely find that there’s something interesting about the implications to people who choose to be outspoken outside the norm of the art world. I recently had a show that was supposed to happen in a large city, but the space didn’t want it because they said it was too political, and they were really advocating that political stuff doesn’t sell, and that people don’t want to see it. As an artist, there’s this weird kind of conundrum—and probably for people who do free speech work as well—if you choose to do political work or you choose to speak out, there are implications to your professional well-being, your income, your sales, and all these things that determine your livelihood in some sense. So a lot of artists and curators I know, when I talk to them off the record, they’re very adamant about not showing opinions that could be seen as political, advocacy, free speech work because they don’t want to be deemed radical.

Recently, there’s been a lot of upheaval with artists, like those not showing at the Whitney Biennial because of the affiliation with the families who created OxyContin. So there’s some awareness now about the financial pipeline and what we’re complicit in supporting by showing work that they want or are comfortable with.

York: Do you see any parallels between the art institutions and Silicon Valley?

Yeah, they’re typically run by the exact same sort of people—white people who are extremely privileged. I’d have to look it up, but the Guerrilla Girls are a good resource for this. They made something that showed [the percentages of the art world that are white, female, or people of color], and if you consider that as what is forming the art canon as well as Silicon Valley’s canon, I’d say it’s pretty similar in terms of venture funding and startup culture [editor’s note: The Guerrilla Girls have also done work on the racial composition of internet users].

That’s what’s writing the history. So it’s like, there’s an entire majority of society that’s not being considered in the contemporary story, but also the historical one, and those of course inform the future. Looking at who has the right of access, who has the privilege of access, and who has the right to show their work...there’s definitely a parallel for me.

York: Thank you so much, Addie, this has been fascinating.

In Serving Big Company Interests, Copyright Is in Crisis

EFF - Tue, 01/21/2020 - 6:46pm

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright rules are made with the needs of the entertainment industry in mind, designed to provide the legal framework for creators, investors, distributors, production houses, and other parts of the industry to navigate their disputes and assert their interests.

A good copyright policy would be one that encouraged diverse forms of expression from diverse creators who were fairly compensated for their role in a profitable industry. But copyright has signally failed to accomplish this end, largely because of the role it plays in the monopolization of the entertainment industry (and, in the digital era, every industry where copyrighted software plays a role). Copyright's primary approach is to give creators monopolies over their works, in the hopes that they can use these as leverage in overmatched battles with corporate interests. But monopolies have a tendency to accumulate, piling up in the vaults of big companies, who use these government-backed exclusive rights to dominate the industry so that anyone hoping to enter it must first surrender their little monopolies to the hoards of the big gatekeepers.

Creators get a raw deal in a concentrated marketplace, selling their work into a buyer's market. Giving them more monopolies – longer copyright terms, copyright over the "feel" of music, copyright over samples – just gives the industry more monopolies to confiscate in one-sided negotiations and add to their arsenals. Expecting more copyright to help artists beat a concentrated industry is like expecting more lunch money to help your kid defeat the bullies who beat him up on the playground every day. No matter how much lunch money you give that kid, all you'll ever do is make the bullies richer.

One of the biggest problems with copyright in the digital era is that we expect people who aren't in the entertainment industry to understand and abide by its rules: it's no more realistic to expect a casual reader to understand and abide by a long, technical copyright license in order to enjoy a novel than it is to expect a parent to understand securities law before they pay their kid's allowance. Copyright law can either be technical and nuanced enough to serve as a rulebook for a vast, complex industry...or it can be simple and intuitive enough for that industry's customers to grasp and follow without years of specialized training. Decades of trying to make copyright into a system for both industrial actors and their audiences has demonstrated that the result is always a system that serves the former while bewildering and confounding the latter.

But even considered as a rulebook for the entertainment industry, copyright is in crisis. A system that is often promoted as protecting the interests of artists has increasingly sidelined creators' interests even as big media companies merge with one another, and with other kinds of companies (like ISPs) to form vertical monopolies that lock up the production, distribution and commercialization of creative work, leaving creators selling their work into a buyer's market locked up by a handful of companies.

2019 was not a good year for competition in the entertainment sector. Mergers like the $71.3B Disney-Fox deal reduced the number of big movie studios from five (already a farcical number) to four (impossibly, even worse). The Hollywood screenwriters have been locked in a record-breaking standoff with the talent agencies—there are only three major agencies, all dominated by private equity investors, and the lack of competition means that they increasingly are negotiating deals on behalf of writers in which they agree to accept less money for writers in exchange for large fees for themselves.

On top of that, the big entertainment companies are increasingly diversifying and becoming distribution channels. The Trump administration approved the AT&T/Time-Warner merger just as the Obama administration approved the Comcast/NBCUniversal merger a decade earlier. Meanwhile, Disney has launched a streaming service and is pulling the catalogs of all its subsidiaries from rival services. That means that the creators behind those works will no longer receive residual payments from Disney for the licensing fees it receives from the likes of Netflix—instead, their work will stream exclusively on Disney Plus, and Disney will no longer have to pay the creators any more money for the use of their work.

To top it all off, the DOJ is working to end the antitrust rule that bans movie studios from owning movie theater chains, 70 years after it was put in place to end a suite of nakedly anti-competitive tactics that had especially grave consequences for actors and other creative people in the film industry. Right on cue, the already massively concentrated movie theater industry got even more concentrated.

The most visible impact of the steady concentration of the entertainment industry is on big stars: think of Taylor Swift's battle to perform her own music at an awards show where she was being named "Artist of the Decade" shortly after rights to her back catalog were sold to a "tycoon" whom she has a longstanding feud with.

But perhaps the most important impact is on independent creators, those who either cannot or will not join forces with the entertainment giants. These artists, more than any other, depend on a free, fair and open Internet to connect with audiences, promote and distribute their works, and receive payments. The tech sector has undergone market concentration that makes it every bit as troubled as the entertainment industry: as the New Zealand technologist Tom Eastman wrote in 2018, "I'm old enough to remember when the Internet wasn't a group of five websites, each consisting of screenshots of text from the other four."

The monopolization of the online world means that all artists are vulnerable to changes in Big Tech policy, which can see their livings confiscated, their artistic works disappeared, and their online presences erased due to error, caprice, or as collateral damage in other fights. Here, too, independent artists are especially vulnerable: when YouTube's Content ID copyright filter incorrectly blocks a video from a major studio or label, executives at the company can get prompt action from Google -- but when an independent artist is incorrectly labeled a pirate, their only hope of getting their work sprung from content jail is to make a huge public stink and hope it's enough to shame a tech giant into action.

As online platforms become ever-more-central to our employment, family, culture, education, romance and personal lives, the tech giants are increasingly wielding the censor's pen to strike out our words and images and sounds and videos in the name of public safety, copyright enforcement, and a host of other rubrics. Even considering that it's impossible to do a good job of this at massive scale, the tech companies do a particularly bad job.

This is about to get much worse. In March 2019, the European Union passed the most controversial copyright rules in its history by a razor-thin margin of only five votes—and later, ten Members of the European Parliament stated that they were confused and had pressed the wrong button, though the damage had already been done.

One of the most controversial parts of the new European Copyright Directive was Article 17 (formerly Article 13), which will require all online platforms to implement copyright filters similar to Google's Content ID. The Directive does not contain punishments for those who falsely claim copyright over works that don't belong to them (a major problem today, with fraudsters using fake copyright claims to extort money from working artists).

Article 17 represents a bonanza for crooks who victimize creators by claiming copyright over their works—without offering any protections for the artists targeted by scammers. Artists who are under the protective wing of big entertainment companies can probably shield themselves from harm, meaning that the heavily concentrated entertainment sector will have even more leverage to use in its dealings with creators.

But that's not all: Article 17 may have snuffed out any possibility of launching a competing platform to discipline the Big Tech firms, at least in Europe. Startups might be able to offer a better product and lure customers to it (especially with the help of Adversarial Interoperability) but they won't be able to afford the massive capital expenditures needed to develop and operate the filters required by Article 17 until they've grown to giant size—something they won't get a chance to do because, without filters, they won't be able to operate at all.

That means that the Big Tech giants will likely get bigger, and, where possible, they will use their control over access to markets and customers to force both independent creators and big media companies to sell on terms that benefit them, at the expense of creators and entertainment companies.

To see what this looks like, just consider Amazon, especially its Audible division, which controls virtually the entire audiobook market. Once a minor sideline for publishing, audiobooks are now a major component of any author's living, generating nearly as much revenue as hardcovers and growing much faster.

Amazon has abused its near-total dominance over the audiobook market to force creators and publishers to consent to its terms, which include an absolute requirement that all audiobooks sold on Audible be wrapped in Amazon's proprietary "Digital Rights Management" code. This code nominally protects Audible products from unauthorized duplication, but this is a mere pretense.

It's pretty straightforward to remove this DRM, but providing tools to do so is a potential felony under Section 1201 of the Digital Millennium Copyright Act, carrying a penalty of a five-year prison sentence and a $500,000 fine for a first offense (EFF is suing the US government to overturn this law). This means that potential Audible rivals can't offer tools to import Audible purchases to run on their systems or to permit access to all your audiobooks from a single menu.

As Amazon grows in scale and ambition, it can, at its discretion, terminate authors' or publishers' access to the audience it controls (something the company has done before). Audiences that object to this will be left with a difficult choice: abandon the purchases they've made to follow the artists they love to smaller, peripheral platforms, or fragment their expensive audiobook libraries across a confusion of apps and screens. 

Copyright was historically called "the author's monopoly," but increasingly those small-scale monopolies are being expropriated by giant corporations—some tech, some entertainment, some a weird chimera of both—and wielded to corner entire markets or sectors. In 2017, EFF lost a long, bitter fight to ensure that a poorly considered project to add DRM to the standards for Web browsers didn't result in further monopolization of the browser market. Two years later, our worst fears have been realized and it is effectively impossible to launch a competitive browser without permission from Google or Microsoft or Apple (Apple won't answer licensing queries, Microsoft wants $10,000 just to consider a licensing application, and Google has turned down all requests to license for new free/open-source browsers).

Copyright has also become a key weapon in the anticompetitive arsenal wielded against the independent repair sector. More than 20 state-level Right to Repair bills have been killed by industry coalitions who cite a self-serving, incoherent mix of concerns over their copyrights and "cybersecurity" as reasons why you shouldn't be able to get your phone or car fixed in the shop of your choice.

All this is why EFF expanded its competition-related projects in 2019 and will do even more in 2020. We, too, are old enough to remember when the Internet wasn't a group of five websites, each consisting of screenshots of text from the other four. We know that, in 2020, it's foolish to expect tech companies to have their users' back unless there's a meaningful chance those users will go somewhere else (and not just to another division of the same tech company).

EFF Statement on Glenn Greenwald Charges

EFF - Tue, 01/21/2020 - 3:06pm

EFF is dismayed to learn of the decision by Brazilian prosecutors to charge journalist Glenn Greenwald under the country’s computer crime law.

EFF has long warned that cybersecurity laws in the Americas have been written and interpreted so broadly as to invite misuse. Computer crime laws should never be used to criminalize legitimate journalistic practice. Prosecutors should not apply them without considering the chilling effects on the free press and the risk of politicized prosecutions.

In free societies, journalists play an important role in challenging and criticizing governmental officials and scrutinizing their actions and policies. It is a threat to democracy when authorities use cybercrime laws to punish their critics, as the Brazilian government has done here with Glenn Greenwald, and it discourages journalists from using technology to best serve the public.  

Hearing Wednesday: EFF Urges Court To Rule That Blogger’s Opinion of Open Source Licensing Agreement is Protected by the First Amendment

EFF - Tue, 01/21/2020 - 2:26pm
Patch Maker Sued For Defamation To Shut Down Critic

San Francisco, California—On Wednesday, January 22, at 9 am, EFF Staff Attorney Jamie Williams will tell a federal appeals court that a lower court correctly dismissed a defamation lawsuit against a blogger, finding that his criticisms of a company’s business practices were opinions about an unsettled legal issue protected by the First Amendment.

EFF is representing Bruce Perens, founder of the Open Source movement, who criticized restrictions Open Source Security Inc. (OSS) placed on customers of its security patch software for the Linux Operating System. OSS sued Perens in response. The lower court found that OSS’s lawsuit not only failed to plausibly state a claim for defamation, but also that it ran afoul of a California statute that protects defendants from SLAPPs, short for Strategic Lawsuits Against Public Participation. SLAPPs are malicious lawsuits that aim to silence critics by forcing victims to spend time and money to fight the lawsuit itself—draining their resources and chilling their right to free speech.

At the hearing on Wednesday, Williams will tell a panel of Ninth Circuit Court of Appeals judges that Perens’s blog post merely expressed his opinion about an unsettled legal issue of concern to a worldwide Open Source community, and that Perens disclosed the factual basis for that opinion. OSS, which disagrees with Perens, was free to state its disagreement publicly, but it was not free to sue Mr. Perens for exercising his First Amendment right, Williams will tell the court.

Read EFF’s filing in the Perens case:

WHO: EFF Staff Attorney Jamie Williams

WHAT: Hearing in OSS v. Perens

WHERE: Ninth Circuit Court of Appeals-James R. Browning Courthouse
Courtroom 1, 3rd Floor, Room 338
95 7th Street, San Francisco CA 94103

WHEN: Wednesday, January 22
9 am

Contact: Jamie Lee Williams, Staff Attorney, jamie@eff.org

A Tool That Removes Copyrighted Works Is Not a Substitute for Fair Use

EFF - Mon, 01/20/2020 - 12:11pm

YouTube, which has become essential for video creators to build an audience, has a new tool that’s supposed to help users respond to its copyright filter. Is it something that makes fair use a priority? No, it’s a way to make it easier to remove the part of a video that someone has claimed they own.

In December, YouTube released a list of “New YouTube Studio tools to help you deal with copyright claims.” Mostly, what it’s done is make it easier for you, as a video creator, to sort through all the copyright claims that have been filed against you and see what they’ve done to your videos. That is, you can see the difference between a “copyright strike” that is the result of the takedown process—which YouTube does in order to comply with the safe harbor provisions of the DMCA—and something which has been flagged by Content ID—a copyright filter voluntarily built and deployed by YouTube and subject only to YouTube’s policies.

Content ID works by having copyright holders upload their content into a database maintained by YouTube. New uploads are compared to what’s in the database and when the algorithm detects a match, copyright holders are informed. They can decide whether to monetize someone else’s video for themselves, mute the audio, or take it down. Users whose videos are hit with Content ID can dispute the match—chancing the claim being converted to a strike—or alter their video in some way that releases the claim.

Content ID makes matches based on seconds of matching audio or video. In other words, it doesn’t just make matches when a whole thing has been copied and uploaded. It makes matches when just a short clip is found. And short clips are often present in videos making fair use.
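The behavior described above can be sketched in a few lines. This is a hedged illustration, not YouTube's code: the function names, the five-"second" window, and the integer "fingerprints" are all invented for the example, and the real system matches perceptual audio/video fingerprints rather than exact values. But it shows why a short quoted clip is enough to trigger a claim against an otherwise original video.

```python
# Minimal sketch of clip-level matching. All names, the window length, and
# the use of raw integer "fingerprint" values are illustrative assumptions;
# YouTube's actual system matches perceptual fingerprints, not exact values.

WINDOW = 5  # match granularity, in "seconds" of fingerprint data

def windows(fingerprint, size=WINDOW):
    """Yield every contiguous run of `size` per-second fingerprint values."""
    for i in range(len(fingerprint) - size + 1):
        yield tuple(fingerprint[i:i + size])

def build_index(reference_works):
    """Index every short window of every registered reference work."""
    index = {}
    for work_id, fp in reference_works.items():
        for w in windows(fp):
            index.setdefault(w, work_id)
    return index

def find_claims(upload_fp, index):
    """Return every reference work that any short window of the upload matches."""
    return {index[w] for w in windows(upload_fp) if w in index}

# A ten-"second" critique video that quotes five seconds of a registered
# song still triggers a claim, even though most of it is original:
index = build_index({"song_123": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]})
critique = [99, 98, 3, 4, 5, 6, 7, 97, 96, 95]  # quotes "seconds" 3-7
print(find_claims(critique, index))  # → {'song_123'}
```

Note that nothing in this matching step asks whether the quoted clip is commentary, criticism, or parody; the match fires on similarity alone, which is exactly the problem for fair use.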

In order to make dealing with Content ID claims “easier” for users, YouTube’s new tool list includes something called “Assisted Trim.” If you get hit by Content ID, YouTube’s interface now presets an editing tool around the disputed clip, so that video makers can easily remove it, releasing the Content ID claim.

Videos critiquing a film or song are going to include clips from that film or song, because quoting makes the point stronger. High school English classes teach students to put quotes in their essays for the same reason; people working in visual and audio formats are doing the same thing.

Moreover, fair use gives people the legal right to use copyrighted material for purposes like commentary and criticism without having to get permission or pay the copyright holder. And fair use isn’t bound by a specific number of seconds. It’s bound by whether what was used was needed for the point being made.

But Content ID isn’t based in fair use. It’s based on whatever YouTube decides. Users can technically dispute a Content ID match. But if a user’s dispute of a Content ID claim is rejected, and they appeal, the user can end up with a “copyright strike.” Every YouTuber knows that copyright strikes can lead to you losing your whole page.

Losing your YouTube page—especially since there is no video platform that comes close to offering the kind of audience YouTube does—is not something anyone wants to chance. And if you depend on YouTube for your living, the situation is even more dire. You can see why people would just go along with whatever happens rather than risk the potential consequences. YouTube's policies, and the tools it chooses to make available, all funnel creators into simply removing copyrighted material rather than encouraging them to make fair use, even when they are legally entitled to do so.

By making eliminating material flagged by Content ID so easy—just click here!—and making challenging matches so perilous, YouTube has put its thumb on the scale against fair use and in favor of copyright abuse. That thumb gets especially heavy given how few real alternatives to YouTube exist.

Hosting creative content should mean a robust commitment to fair use. Fair use enriches our culture and our understanding of it. It is what ensures that copyright doesn’t strangle free expression and creativity. Subtle reinforcement of anti-fair use ideas enacted by private companies, done by the largest players in the ecosystem, does real damage.

It’s Copyright Week 2020: Stand Up for Copyright Laws That Actually Serve Us All

EFF - Mon, 01/20/2020 - 12:00pm

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

While culture is shared, copyright law has increasingly been used to lock people out of participating in it. Although copyright law is often treated as the exclusive domain of major media and entertainment industries, it actually should be serving all of us. Because, of course, it affects all of us.

Eight years ago, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright-infringing content. These were bills that would have made censorship very easy, all in the name of copyright protection.

SOPA and PIPA have their successors. Between the Copyright Directive in the EU (previously known as Article 13, now as Article 17) and endlessly frustrating iterations of the CASE Act, there’s a push to “fix” copyright law in ways that cause harm for regular people and smaller creators, while shoring up the power of the huge entertainment companies and big tech. And that’s not to mention laws like Section 1201 of the Digital Millennium Copyright Act, which makes it difficult and expensive to tinker with or repair devices you, in theory, have bought and “own.”

But we know this play, and we know that you will stand against it the way we all did eight years ago.

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are:

  • Monday: Fair Use and Creativity
    Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.
  • Tuesday: Copyright and Competition
    Copyright should not be used to control knowledge, creativity, or the ability to tinker with or repair your own devices. Copyright should encourage more people to share, make, or repair things, rather than concentrate that power in only a few players.
  • Wednesday: Remedies
    Copyright claims should not raise the specter of huge, unpredictable judgments that discourage important uses of creative work. Copyright should have balanced remedies that also provide a real path for deterring bad-faith claims.
  • Thursday: The Public Domain
    The public domain is our cultural commons and a crucial resource for innovation and access to knowledge. Copyright should strive to promote, and not diminish, a robust, accessible public domain.
  • Friday: Copyright and Democracy
    Copyright must be set through a participatory, democratic, and transparent process. It should not be decided through back-room deals, secret international agreements, unaccountable bureaucracies, or unilateral attempts to apply national laws extraterritorially.

Every day this week, we’ll be sharing links to blog posts and actions on these topics at https://www.eff.org/copyrightweek and at #CopyrightWeek on Twitter.

As we said last year, and the year before that, if you too stand behind these principles, please join us by supporting them, sharing them, and telling your lawmakers you want to see copyright law reflect them.

ICANN Needs To Ask More Questions About the Sale of .ORG

EFF - Fri, 01/17/2020 - 7:50pm

Over 21,000 people, 660 organizations, and now six Members of Congress have asked ICANN, the organization that regulates the Internet’s domain name system, to halt the $1.135 billion deal that would hand control over PIR, the .ORG domain registry, to private equity. There are crucial reasons this sale is facing significant backlash from the nonprofit and NGO communities who make the .ORG domain their online home, and perhaps none of them are more concerning than the speed of the deal and the dangerous lack of transparency that’s accompanied it. 

Less than three months have passed from the announcement of the sale—which took the nonprofit community by surprise—to the final weeks in which ICANN is expected to make its decision, giving those affected almost no chance to have a voice, much less stop it. The process so far, including information that the buyer, Ethos Capital, provided to ICANN in late December, raises more questions than it answers. U.S. lawmakers are correct that “the Ethos Capital takeover of the .ORG domain fails the public interest test in numerous ways.”

Before any change in who operates the .ORG registry can take place, ICANN, which oversees the domain name system, needs to answer important questions about the deal from those who use .ORG domain names as the foundation of their online identity. Working with the nonprofit community, we’re asking ICANN to ask more questions to confirm how the deal will protect .ORG users—questions that are still unanswered. And next week, on January 24th, nonprofits and supporters will protest at ICANN’s headquarters in Los Angeles. You can join us to tell ICANN that it must be more than a rubber stamp.


Tell ICANN: Nonprofits Are Not For Sale

A Dangerous Deal

The Internet Society (ISOC)—which has controlled .ORG for the past 16 years—and Ethos Capital are treating the .ORG registry as an asset that can be bought and sold at will. But ISOC didn’t pay to acquire the .ORG registry—indeed, PIR, the organization that ISOC founded to run .ORG, was given $5 million to help it do so. Now, ISOC plans to profit off of the value of the registry by converting PIR into a for-profit LLC in the hands of Ethos Capital.

ICANN delegated the task of running .ORG to ISOC in 2002 because ISOC was best positioned to run the domain for the benefit of nonprofit users. The excess funds from .ORG registration fees that have supported the work of ISOC for the past 16 years were a side benefit, not a sacred entitlement. Rather than an asset, like a building, the registry should be thought of as a public function—like the assigning of street addresses. It should be (and has been) administered in the public interest. But in the hands of private equity, the registry will become something altogether different from what it’s been in the past: a tool for making profits from nonprofits.

After the sale, Ethos Capital, having paid ISOC $1.135 billion for .ORG, will have to recoup that investment on the scale expected of a private equity firm. This week, Ethos revealed for the first time that some $360 million of the purchase price will be financed with a loan. The payments on that loan will have to come out of Ethos’s profits, so it will probably need to raise more money per year than the registry currently does. While Ethos could try to simply increase the number of its “customers” for .ORGs, PIR has tried this in the past, and demand for the domains has remained largely flat. This is no surprise; the nonprofit sector just doesn’t grow at exponential rates.
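To get a feel for the scale of that pressure, here is a back-of-the-envelope amortization sketch. Only the $360 million figure comes from the disclosure; the 7% interest rate and ten-year term are assumptions invented for illustration:

```python
# Rough debt service on the disclosed $360M loan.
# The interest rate and repayment term are illustrative assumptions;
# only the principal amount was disclosed.
principal = 360_000_000
rate = 0.07   # assumed annual interest rate
years = 10    # assumed repayment term

# Standard fixed-payment amortization formula
annual_payment = principal * rate / (1 - (1 + rate) ** -years)
print(f"${annual_payment:,.0f} per year")  # roughly $51 million under these assumptions
```

Even under these made-up but plausible terms, tens of millions of dollars per year must come out of .ORG registrants before Ethos sees its first dollar of profit.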

That brings us to the myriad reasons nonprofits have criticized the deal: every other way that Ethos might increase profits is bad news for .ORG users. And these tactics aren’t farfetched: every one of them is already delivering profits in other sectors, often while harming domain registrants and their visitors.

Squeezing Profits from Nonprofits Harms Civil Society
  1. The most obvious way to profit from the registry is for Ethos to raise the annual registration fees on .ORG names. Under pressure, Ethos has promised to keep fee increases to 10% per year “on average.” But they haven’t made that promise legally binding. There won’t be any way for .ORG users to challenge future fee increases without changing domain names, an expensive and risky process well-known to any organization that has had to shift away from a trusted domain. And even more worrisome, Ethos could begin charging different rates for different domains. Other registries already charge considerably higher fees for names they designate as “premium.” Ethos could base fees entirely on an organization’s ability to pay, essentially holding nonprofits’ domain names for ransom.
  2. Ethos could also engage in censorship-for-profit. As we’ve described before, other domain registries have made deals with powerful corporate interests, like movie studios and pharmaceutical interests, to suspend the domains of websites, even if that means suppressing truthful information. But .ORGs don’t just have corporate interests hoping to control their voices: the world over, including within authoritarian regimes, .ORGs are the home of important critical speech on the Internet. Ethos would have a clear incentive to take down domains at the request of repressive governments, just as governments often demand takedowns of speech on social networks, in exchange for tax or other financial benefits.
  3. Ethos could sell the browsing data of users who visit .ORGs. The operator of a domain registry can, if it chooses, track every look-up of an address within that domain. Ethos could track visits to nonprofit organizations around the world, perhaps to target advertising on behalf of Vidmob, the advertising company they also own, invading the privacy of everyone who visits .ORG websites.
  4. Ethos could cut back on the important technical upkeep of the domains. Domain name lookups must be available worldwide, and quickly. Technical failures can mean being unable to connect to a website, or to send and receive email. This doesn’t just mean 404s: because the .ORG registry is home to relief agencies, news media, and other groups that provide life-saving services, technical failures could result in actual harm. Aid might not reach people in need during a crisis; news and information could be stopped dead during an emergency. The .ORG registry has had no downtime in over a decade. If that changes, it’s not just websites that would be in danger.
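The "10% per year" pledge in point 1 above compounds faster than it sounds. A quick sketch, assuming a starting fee of $10 (an illustrative round number, used here only to show the compounding, not the actual wholesale .ORG fee):

```python
# Compounding effect of "10% per year" fee increases over a decade.
fee = 10.00  # assumed starting annual registration fee (illustrative)
for year in range(10):
    fee *= 1.10
print(f"${fee:.2f}")  # → $25.94, i.e. the fee has more than doubled
```

Whatever the starting fee, ten years of 10% increases multiplies it by about 2.6x, and the pledge is not legally binding in any case.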
Not Enough Safeguards

In response to public pressure, Ethos has made a loose commitment about future pricing. It has also proposed adding text about acting in the public benefit into the “Certificate of Formation” for the new holding company they’re creating. And it’s promised to create a “Stewardship Council” to “help guide” the company’s management.

But there’s no force behind these words. Under corporate law, only the company itself has the power to decide whether it’s acting for the public benefit. Putting vague commitments into a “Certificate of Formation” doesn’t give the users of .ORG domains any mechanism of enforcement. And a “Stewardship Council” will not be able to override the decisions of the company’s owners and management. There’s no guarantee that the council will even be informed about what the company is doing. In fact, PIR already has an advisory council—and it wasn’t even told that the sale to Ethos was going to happen.

Luckily, there are other options on the table. If the .ORG registry needs to change hands, ICANN must take the time to consider all the alternatives, such as the Cooperative Corporation of dot-org Registrants, and determine which organization will best uphold the commitments that were made when .ORG was last re-assigned, in 2002. Instead of a rushed and secretive vote, ICANN should engage in a careful decision-making process that gives all .ORG registrants a voice in decisions around the registry in the future.

The Benefits Are Vague, At Best

In defending the deal, ISOC’s leadership has talked about the good they can do with a $1.1 billion endowment. Those good works, though, don’t excuse breaking trust with thousands of nonprofits. Several proponents of the deal, echoing Ethos’s talking points, claim that turning .ORG into a for-profit registry will lead to “new products and services” for the .ORG community. No one explains what those would be, though, or what they have to do with maintaining a reliable database of domain names. And there is no benefit at all if these vague future opportunities come at the cost of functional, censorship-free websites for millions of nonprofits, associations, and clubs around the world.

ICANN Needs To Ask More Questions

As the group that controls the top level of the domain name system, ICANN has the power to stop .ORG from changing hands, and to name a new organization to steward that important resource. Before the deal goes any further, ICANN needs to ask more questions of ISOC and Ethos. We’ve compiled a handy list.

Anyone who’s concerned about selling a public trust for private profit can sign the petition to #SaveDotOrg, which we’ll be presenting along with other nonprofits to ICANN in person next week. And if you’ll be in the Los Angeles area on Friday, January 24th, come join us at the protest at ICANN’s headquarters, organized by EFF, NTEN, and Fight for the Future. Help us tell ICANN: .ORG is not for sale.


Tell ICANN: Nonprofits Are Not For Sale

EFF Asks the Supreme Court to Put a Stop to Dangerously Broad Interpretations of the Computer Fraud and Abuse Act

EFF - Fri, 01/17/2020 - 4:21pm

At EFF, we have spent years fighting the Computer Fraud and Abuse Act (CFAA). The law was aimed at computer crime, but it is both vague and draconian—putting people at risk for prison sentences for ordinary Internet behavior. Now, we are asking the Supreme Court to step in and stop dangerous overbroad interpretations of the CFAA.

The CFAA was passed more than 30 years ago, before the invention of the World Wide Web. Consequently, the law is hard to make sense of in our increasingly digital world. Some courts have rightly interpreted the law narrowly, focusing on hacking and other illegal computer intrusions. But other courts have bought into tactics used by creative prosecutors, who argue that when the statute outlaws “exceeding authorized access” to a computer, it also covers violating the “terms of service” of websites and other apps.

Let’s be clear: violating a website’s “terms of service” is very easy to do. You’ve probably done it repeatedly. It can include things like logging into your spouse’s bank account, checking your personal email on your work computer, or sharing a social media password—all behavior that companies may not like, but should not result in criminal penalties. If violating terms of use is a crime, then private companies across the country get to decide who goes to prison for what, instead of lawmakers. That’s a dangerous result that puts us all at risk.

Now, a former Georgia police officer who was wrongly convicted under the CFAA is asking the Supreme Court to take his case. In Van Buren v. United States, the officer was accused of taking money in exchange for looking up a license plate in a law enforcement database. This was a database he was otherwise entitled to access, meaning the CFAA is the wrong law to use when prosecuting his alleged behavior. In our amicus brief filed today with the Center for Democracy and Technology and New America’s Open Technology Institute, EFF argues that Congress intended to outlaw computer break-ins that disrupted or destroyed computer functionality, not anything that the service provider simply didn’t want to have happen.

It’s time we got some clarity about the CFAA. We hope the Supreme Court takes Van Buren and agrees on a narrow interpretation of this messy and confusing law.

Tuesday Hearing: EFF Argues in New Jersey Supreme Court That Defendant Can’t Be Forced to Turn Over Password to Encrypted iPhone

EFF - Fri, 01/17/2020 - 12:53pm
U.S. Constitution Protects Rights Against Self-Incrimination

Trenton, New Jersey—On Tuesday, January 21, at 1 p.m., EFF Senior Staff Attorney Andrew Crocker will ask the New Jersey Supreme Court to rule that the state can’t force a defendant to turn over the passcode for his encrypted iPhone under the Fifth Amendment, which protects Americans’ rights against self-incrimination.

The Fifth Amendment states that people cannot be forced to incriminate themselves, and it’s well settled that this privilege against self-incrimination covers compelled “testimonial” communications, including physical acts. However, courts have split over how to apply the Fifth Amendment to compelled decryption of encrypted devices.

EFF, ACLU, and ACLU of New Jersey filed a brief in the case State v. Andrews arguing that the state can’t compel a suspect to recall and use information that exists only in his memory to aid law enforcement’s prosecution of him.

At Tuesday’s hearing, Crocker will tell the court that reciting, writing, typing or otherwise reproducing a password from memory is testimony protected by the Fifth Amendment.

Read the amicus brief EFF filed in the Andrews case:

WHO: EFF Senior Staff Attorney Andrew Crocker

WHAT: New Jersey v. Andrews

Supreme Court of New Jersey
25 Market St.
Trenton, NJ 08611
The argument will also be live-streamed.

January 21
1 pm

Iranian Tech Users Are Getting Knocked Off the Web by Ambiguous Sanctions

EFF - Fri, 01/17/2020 - 12:28pm

Between targeted killings, retaliatory air strikes, and the shooting down of a civilian passenger plane, the last few weeks have been marked by tragedy as tensions rise between the U.S. and Iranian governments. In the wake of these events, Iranians within the country and in the broader diaspora have suffered further from actions by both administrations—including violence and lethal force against protesters and internet shutdowns in Iran, as well as detention, surveillance and device seizure at the U.S. border and worsening economic conditions resulting from U.S. sanctions. And to make matters worse, American tech companies are acting on sanctions through an overbroad lens, making it much harder for Iranian people to be able to share their stories with each other and with the broader world.

The Office of Foreign Assets Control (OFAC) of the U.S. Department of the Treasury administers and enforces economic and trade sanctions that target foreign countries, groups, and individuals. Some of these sanctions impact the export to Iran (or use by residents of the country) of certain types of technology, although trying to parse precisely which types are affected appears to have left some companies puzzled.

For example, this week Instagram removed a number of accounts from its service that were affiliated with the Iranian Revolutionary Guard Corps (IRGC)—which is specially designated by OFAC—as well as some accounts praising the IRGC and some condemning the group. The platform initially justified its actions, stating:

We review content against our policies and our obligations to US sanctions laws, and specifically those related to the US government’s designation of the IRGC and its leadership as a terrorist organization.

While Instagram is indeed obligated to remove accounts affiliated with the IRGC, the law does not extend to unaffiliated accounts providing commentary on the IRGC—although some experts say that posts supporting a specially designated group could be seen as providing support to the group, thus violating sanctions.

In any case, Instagram may choose to remove accounts praising the IRGC under its own community standards. Instagram ultimately restored at least one account following media criticism.

A long hard road

EFF has long observed tech companies’ struggle with OFAC sanctions. In 2012, an Apple employee refused to sell a laptop to a customer who was overheard speaking Persian, prompting the State Department to issue a clarifying statement:

[T]here is no U.S. policy or law that prohibits Apple or any other company from selling products in the United States to anybody who’s intending to use the product in the United States, including somebody of Iranian descent or an Iranian citizen or any of that stuff.

In 2013, we spoke up when Airbnb booted an Iranian resident of Switzerland from their platform without recourse, resulting in a reversal of the decision.

And now, as tensions between the U.S. and Iran heat up, we’re seeing tech companies booting Iranians from their platforms left and right. For example:

...But are these companies correct in stripping Iranians of their accounts? The answer: It’s complicated.

Iran is subject to certain OFAC sanctions, and in addition to that, the IRGC and certain Iranian nationals are on OFAC’s list of “specially designated nationals.” OFAC sanctions can be interpreted broadly by tech companies, which is why in 2010, the Treasury Department issued a general license intended as a blanket license for the export of “certain services and software incident to the exchange of personal communications over the Internet, such as instant messaging, chat and email, social networking, sharing of photos and movies, web browsing, and blogging, provided that such services are publicly available at no cost to the user.”

In 2014, that license was amended to include even more products, including certain fee-based services “incident to the exchange of personal communications over the Internet” including social networking. The new license, General License D-1, provided greater clarity to companies on what is and is not subject to sanctions. As the National Iranian American Council pointed out in a 2017 letter, General License D-1 has been widely praised for “securing human rights, protecting access to online information, and avoiding government censors.”

As the events of this week demonstrate, companies are still struggling to understand the rules. And understandably so—as Richard Nephew, a sanctions expert and senior research scholar at Columbia University told CNN:

[T]his is a tough gray area as we also have free speech protections too.  This is why I think companies often make mistakes in this area, both by preventing such posts or activities and by allowing them …

But while the rules might be difficult, companies are making things worse by failing to properly communicate to users about why their accounts have been suspended—and by giving misleading or incorrect statements to the media.

Why does this matter?

Sanctions that prevent the free flow of communications on the internet and hamper the ability of ordinary Iranians to express themselves often harm the very people they’re intended to help. Over the years, we’ve seen how sanctions on tech—as well as misapplication or overbroad application of such sanctions—hurt individuals from all walks of life by denying them access to information and cutting them off from communication with the rest of the world.

After the 2014 issuance of General License D-1 for Iranians, Sudanese citizens embarked on a campaign for a similar license, arguing that sanctions prevented them from accessing e-books, online courses, and other information. In a country where the government bans books and at times seizes newspapers, the knowledge that can be gained online can make all the difference. For Iranians, greater access can also mean safer access—to VPNs, secure messaging apps, and other vital tools.

But it isn’t just access to information—it’s also the information coming out of Iran that’s affected. When a Ukrainian airliner was shot down in Iranian airspace, it was video taken from inside the country—as well as efforts by individuals in Iran—that led to verification that Iran’s government had struck the plane with a missile. As we’ve pointed out before, policies intended to prevent violent extremists from using online services often have the effect of silencing human rights content. And given how little access international media has to Iran, hearing from Iranians about what’s happening on the ground is vital.

Furthermore, Iran has seen fit in the past to shut down the Internet, preventing its residents from accessing the outside world. If the U.S. government truly believes in the internet freedom policy that it continues to pour millions of dollars into, it should see how its own policies are working against freedom and pushing Iranians toward local services that are likely heavily surveilled or censored. As it stands, the U.S. is just helping Iran do the job of silencing its citizens.

A clearer way forward

As moral panic and confusion set in, more and more companies are seeking to enforce sanctions law—and as they do, it’s vital that they have the best possible information at hand so ordinary citizens aren’t unduly impacted. As such, we are reiterating our ask for the Department of Treasury to update General License D-1 and provide guidance to U.S. tech companies to ensure the minimal amount of damage to users.

But although sanctions are hard, we also call on tech companies to exercise both caution and compassion as they navigate these murky waters. Companies should ensure that they’re using the best possible means to identify potentially impacted users; notify them clearly (by providing information about specific statutes and links to relevant information from the Department of Treasury); and most importantly, provide an appeals system so that users who are wrongly identified have a path of recourse to regain access to their accounts.

Rights Groups to European Commission: Prioritize Users’ Rights, Get Public Input For Article 17 Application Guidelines

EFF - Wed, 01/15/2020 - 5:12pm

The implementation of Article 17 (formerly Article 13) into national laws will have a profound effect on what users can say and share online. The controversial rule, part of the EU’s copyright directive approved last year, turns tech companies and online services operators into copyright police. Platforms are liable for any uploaded content on their sites that infringes someone’s copyright, absent authorization from rightsholders. To escape liability, online service operators have to make best efforts to ensure that infringing content is not available on their platforms, which in practice is likely to require scanning and filtering of billions of daily social media posts and content uploads containing copyrighted material.

The content moderation practices of Internet platforms are already faulty and opaque. Layering copyright enforcement onto this already broken system will censor even more speech. It’s paramount that preserving and protecting users’ rights are baked into guidelines the EC is developing for how member states should implement the controversial rule. The guidelines are non-binding but politically influential.

The commission has held four meetings with stakeholders in recent months to gather information about copyright licensing and content moderation practices. Two more meetings are scheduled for this spring, after which the EC is expected to begin drafting guidelines for the application of Article 17, which must be implemented in national laws by June 7, 2021.

The fifth meeting was held today in Brussels. The good news is EFF and other digital rights organizations have a seat at the table, alongside rightsholders from the music and film industries and representatives of big tech companies like Google and Facebook. The bad news is that the commission’s proposed guidelines probably won’t keep users’ rights to free speech and freedom of expression from being trampled as internet service providers, fearful of liability, race to over-block content.

That’s why EFF and more than 40 user advocate and digital rights groups sent an open letter to the EC asking the commissioners to ensure that implementation guidelines focus on user rights, specifically free speech, and limit the use of automated filtering, which is notoriously inaccurate. The guidelines must ensure that protecting legitimate, fair uses of copyrighted material for research, criticism, review, or parody takes precedence over content blocking measures Internet service providers employ to comply with Article 17, the letter says. What’s more, the guidelines must make clear that automated filtering technologies can only be used if content-sharing providers can show that users aren’t being negatively affected.

Further, we asked the commission to share the draft guidelines with rights organizations and the public, and allow both to comment on and suggest improvements to ensure that they comply with European Union civil and human rights requirements. As we told the EC in the letter, “This request is based on the requirement of transparency, which is a core principle of the rule of law.” EFF and its partners want to “ensure that the guidelines are in line with the right to freedom of expression and information and also data protection guaranteed by the Charter of Fundamental Rights.”

The EC is scheduled to hold the next stakeholder meeting in February in preparation for drafting guidelines. We will keep the pressure on to protect users from censorship and content blocking brought on by this incredibly dangerous directive.

Strange Bedfellows: EFF Sides with PTO in Trademark Battle Over ‘Booking.com’

EFF - Tue, 01/14/2020 - 6:59pm

EFF often criticizes the Patent and Trademark Office (PTO) for granting bad patents, but a case in the Supreme Court has us on the same side.

On Monday, EFF filed an amicus brief asking the court to reject trademark protection for “Booking.com,” pointing out that other travel companies that use variations of the word “booking” in their domain names could face legal threats if the mark were granted.

The case started in 2016, when Booking.com sued the PTO for refusing its trademark application on the basis that “Booking.com” is a generic term for the services it provides. Generic terms refer to categories or classes of things that can’t be trademarked because of the effect on free speech and competition. For example, you wouldn’t want Apple to have a trademark for the word “computer,” because other computer manufacturers should be allowed to accurately describe their products. However, a lower court judge decided that adding the “.com” to the end of the generic word “booking” made it eligible for trademark protection. Last year, an appeals court agreed.

The PTO rightly took its case to the Supreme Court. In our brief, we argued that granting a mark like this would hurt both consumer rights and competition. For example, there are a number of companies with domain names like “ebooking.com” and “bookit.com.” Even if the names are not identical to Booking.com, they could be at risk of lawsuits under trademark liability’s “likelihood of confusion” standard. Additionally, a win for Booking.com would likely kick off a flood of additional trademark requests for combinations of generic words and top-level domains, leading to even more uncertainty and drawn-out court cases.

The Supreme Court has granted certiorari and will likely hear oral arguments in the case later this year. We hope the justices recognize that the PTO had it right: generic words with “.com” at the end don’t deserve trademark protection.

Top Apps Invade User Privacy By Collecting and Sharing Personal Data, New Report Finds

EFF - Tue, 01/14/2020 - 12:43pm

A new year often starts with good resolutions. Some resolve to change a certain habit, others resolve to abandon an undesired trait. Mobile app makers, too, claim to have users and their preferences at heart. From dating to health to music, their promise is to add convenience to consumers’ lives or to offer support when needed. The bad news is that the ecosystem of the underlying ad tech industry has not changed and still does not respect user privacy. A new report published today by the Norwegian Consumer Council (NCC) looks at the hidden side of the data economy and its findings are alarming.

Discrimination, Manipulation, Exploitation

Scrutinizing 10 popular apps in Google Play Store, such as Grindr, Clue, and Perfect365, the NCC report’s technical analysis reveals comprehensive tracking and profiling practices. Personal data is systematically collected and shared with dozens of third-party companies without users’ knowledge. EFF’s recent report on third-party tracking documents additional ways that companies profit from invading our digital privacy.

The NCC’s legal analysis concludes that companies have not obtained valid consent from consumers to process their data under the EU General Data Protection Regulation (GDPR) and consumers have no practical option to avoid being tracked. The report highlights that profiling practices may not only be used to personalize advertising, but could also result in discrimination, manipulation, and exploitation of users.

Actions by Consumer and Digital Rights Organizations

Current tracking and profiling practices translate into exploitative practices in contradiction to the GDPR, the report says. While the research was carried out in the EU, the analyzed apps are available around the globe, and many are owned by companies headquartered in the U.S. Responding to the report, consumer and digital rights organizations globally are notifying their data protection authorities.

What Needs to Be Done: Strong Privacy Rights and Alternative Solutions

EFF has long advocated for critical and tangible privacy rights for users, including the right to opt-in consent, the right to know, and the right to data portability. Rules should not only exist on paper but users should also be empowered to bring their own lawsuits against companies that violate their privacy rights.

The NCC report shows that a huge surveillance industry has built up around us. Instead, we need a user-oriented tech ecosystem that does not treat user data like a free resource to be exploited. To build alternative solutions to the incumbent online advertising systems, we need new laws that create strong privacy rights.

Report and materials: https://www.forbrukerradet.no/out-of-control/

Bay Staters Continue to Lead in Right to Repair, and EFF Is There to Help

EFF - Mon, 01/13/2020 - 7:27pm

Massachusetts has long been a leader in the Right to Repair movement, thanks to a combination of principled lawmakers and a motivated citizenry that refuses to back down when well-heeled lobbyists subvert the legislative process.

In 2012, Massachusetts became the first US state to enact Right to Repair legislation, with an automotive law that protected the right of drivers to get their cars repaired by independent mechanics if they preferred them to the manufacturers' service depots. Though wildly popular, it took the threat of a ballot initiative to get the legislature to act, an initiative that ultimately garnered 86% of the vote. The initiative led to strong protections for independent repair in Massachusetts and set the stage for a compromise agreement leading to better access to repair information for most of the country.

Now Bay Staters are back in the legislature: in the years since the original automotive Right to Repair law was enacted, manufacturers have redesigned their products in ways that exploit loopholes in the 2012 law, effectively shutting out independent repair.

House Bill 4122 closes the loopholes in the 2012 law, and in-state advocates are gathering signatures for another ballot initiative should lobbyists defeat the bill in the legislature.

EFF was pleased to submit comments to the Massachusetts Legislature's Joint Committee on Consumer Protection and Professional Licensure for a hearing on January 13 in support of HB4122.

In those comments, sent to each member of the Committee, EFF Special Consultant Cory Doctorow wrote:

Auto manufacturers have argued that independent service endangers drivers' cybersecurity. In reality, the opposite is true: security is weakened by secrecy and strengthened by independent testing and scrutiny. It is an iron law of information security that "there is no security in obscurity"—that is, security cannot depend on keeping defects a secret in the hopes that "bad guys" won't discover and exploit those defects. And since anyone can design a security system that they themselves can't imagine any way of breaking, allowing manufacturers to shroud their security measures in secrecy doesn't mean that their cars can't be hacked—in fact, history has shown that vehicle computers depending on secrecy for security are, in fact, frequently vulnerable to hacking.

In 2018 and 2019, cities, hospitals, and other large institutions had their informatics systems seized by petty criminals using off-the-shelf ransomware that had combined with a defect in Windows that the NSA had discovered and kept secret—until an NSA leaker released it to the world. As these cities discovered, the NSA's decision to keep these defects secret did not put them out of reach of bad guys—it just meant that institutional Microsoft customers were put at grave risk, and that Microsoft itself did not know about the devastating bugs in its own products and so could not fix them.

Information security is absolutely reliant upon independent security researchers probing systems and disclosing what they discover. Allowing car manufacturers to monopolize service—and thus scrutiny—over their products ensures that the defects in these fast-moving, heavy machines will primarily become generally known after they are exploited to the potentially lethal detriment of drivers and the pedestrians around them.

The manufacturers' desire to monopolize bad news about design defects in their own products is especially dire because it rides on the tails of a strategy of monopolizing service and parts for those products. The uncompetitive, concentrated automotive sector has already brought itself to the brink of ruin—averted only by the infusion of $80.7B in tax-funded bailouts. More than a decade later, it remains in dire need of competitive discipline, as is evidenced by a commercial strategy dominated by reducing public choice, surveilling their own customers and selling their data, and extracting monopoly rents from luckless drivers who are locked into their proprietary ecosystems.

The German Constitutional Court Will Revisit the Question of Mass Surveillance, Will the U.S.?

EFF - Mon, 01/13/2020 - 7:21pm

On January 14 and 15, 2020, the German Federal Constitutional Court will be holding a hearing to reevaluate the Bundesnachrichtendienst (BND) Act, which gives the BND agency (similar to the National Security Agency in the United States) broad surveillance authority. The hearing comes after a coalition of media and activist organizations including the Gesellschaft für Freiheitsrechte filed a constitutional complaint against the BND for its dragnet collection and storage of telecommunications data. This new hearing continues a renewed effort on the part of countries around the world to reassess the high cost to liberty that comes with operating an invasive dragnet surveillance program, and may increase global pressure on the United States’ intelligence community.

One of the coalition’s leading arguments against massive data collection by the foreign intelligence service is the fear that sensitive communications between sources and journalists may be swept up and made accessible by the government. Surveillance which, purposefully or inadvertently, sweeps up the messages of journalists jeopardizes the integrity and health of a free and functioning press and could chill the willingness of sources or whistleblowers to expose corruption or wrongdoing in the country.

In September 2019, based on similar concerns about the surveillance of journalists, South Africa’s High Court issued a watershed ruling that the country’s laws do not authorize bulk surveillance, in part because there were no special protections to ensure that the communications of lawyers and journalists were not also swept up and stored by the government.

In EFF’s own landmark case against the NSA’s dragnet surveillance program, Jewel v. NSA, the Reporters Committee for Freedom of the Press recently filed an amicus brief making similar arguments about surveillance in the United States. “When the threat of surveillance reaches these sources,” the brief argues, “there is a real chilling effect on quality reporting and the flow of information to the public.”

This new complaint comes years after the revelations of global surveillance coalitions exposed by Edward Snowden, and only two years after a report revealed that the BND had surveilled at least 50 phone numbers, fax numbers, and email addresses of known foreign journalists starting in 1999.

In 2016, Germany’s Bundestag passed intelligence reform that many argued did not go far enough. Under the post-2016 order, an independent panel oversees the BND and any foreign intelligence collected from international communications networks must be authorized by the chancellor. However, the new reform explicitly allowed surveillance to be conducted on EU states and institutions for the purpose of “foreign policy and security,” and permitted the BND to collaborate with the NSA—both of which allow for the privacy of foreign individuals to be invaded.

It is worth noting that part of what allows a case like this to move forward is the ability of German citizens to know more about the surveillance programs their nation operates. In the United States, our lawsuit against NSA mass surveillance is being held up by the government’s argument that it cannot submit into evidence any of the requisite documents necessary to adjudicate the case. In Germany, the BND Act and its sibling, the G10 Act, as well as their technological underpinnings, are openly discussed, making it easier to confront their legality.

We eagerly await the outcome of the German hearing and hope that the BND will be another fallen domino in the movement to restore global privacy. Meanwhile, EFF will continue to litigate our constitutional challenge to the U.S. government’s mass surveillance of telephone and internet communications and will complete briefing in the Ninth Circuit in late January 2020.

Related Cases: Jewel v. NSA