Robert Faris and Urs Gasser
The stakes are getting higher. As more and more citizens rely on digital technologies in their everyday lives, governments around the world face constant political pressure to address concerns over online security and harmful speech. Concurrently, the costs of excessive regulation for innovation and civil liberties are of increasing concern. Deciding when and how to intervene in digital affairs is only getting harder for governments.
The public sector has always played an important role in the evolution of the Internet. Governments have been enablers: investing in infrastructure, encouraging private sector action, conducting training and education, and setting up legal regimes to support market environments that are ripe for innovation. Governments also have acted as constrainers: reining in illegal activity, filtering speech, and inhibiting malicious behavior online. Frequently, enabling and constraining are different facets of the same policy actions; suppressing harmful activity may facilitate beneficial interactions. However, tensions and trade-offs often accompany government interventions. For example, cracking down on cybercrime may help to stimulate online business, but it may also hurt innovation. Laws and mechanisms for combating harmful speech often come at the cost of legitimate speech. Win-win scenarios are the exception.
Government action is also shaped by strategic interests. There is no shortage of governments that seek to manipulate online environments to enhance their power and limit political opposition. Separating strategic behavior from interventions taken in the public interest is difficult, as this behavior is conveniently cloaked in the rhetoric of legitimate public sector action and commonly framed in terms of law enforcement, security, and protection.
At one end of the spectrum, a few dozen countries aggressively seek to control Internet activity. This group comprises primarily countries with a long history of tight media controls and authoritarian government. These governments have considerable experience in attempting to control Internet activity and have tried a wide variety of strategies, drawing on a few basic options: (1) identifying and pursuing authors and activist networks that reside domestically, and taking down content hosted domestically; (2) blocking content hosted overseas (often coupled with pressure on foreign countries and cyber-attacks); (3) engaging in information campaigns to disrupt online discussions and promote government-friendly messaging; and (4) limiting access to the Internet altogether.
Despite many years of concerted efforts, the difficulty of enforcing information controls on the Internet continues to vex governments that are intent on limiting online communication. The scale of the Internet, along with its distributed architecture and the ability to at least partially cloak one’s identity online, make locating the source of objectionable speech and blocking the spread of unwanted content a formidable task. In an attempt to increase enforcement capacity, countries draw on a number of common strategies, including: (1) enlisting the help of intermediaries in blocking content and accessing identifying information; (2) conducting surveillance; (3) compelling domestic hosting; (4) enacting licensing and real name requirements; and (5) passing legislation that is sufficiently broad to provide a rationale and to facilitate implementation of the above.
Ultimately, the effectiveness of these policies is manifest in self-censorship—increasing the costs and risks of engaging in digital communication discourages more and more individuals from writing about controversial topics online. Self-censorship is particularly difficult to measure; we are unable to observe that which does not occur, though we might make inferences about types of content that are unrepresented or missing online.
After witnessing a rapid increase in the number of countries that developed national-level content filtering during the first decade of the 21st century (there are currently several dozen, depending on how one counts), we have seen fewer big shifts in recent years. By and large, those that are able to garner the political power to implement Internet filtering are now doing so. Burma and Tunisia have notably scaled back their filtering regimes over the past two years. Russia has begun to block sites related to extremist thought and to pornography, drugs, and satire, and earlier this year, Jordan initiated blocking of hundreds of websites that did not comply with new online media licensing requirements. Pakistan is caught up in an ongoing policy dispute over plans to scale up filtering. The UK recently joined the ranks of countries that turned back serious attempts to enact broad-scale filtering, following a similar path to Australia several years earlier. In Iran, statements that signal a possible softening of Internet filtering, along with the fact that officials in the current administration—including the president—maintain active Facebook and Twitter accounts (both platforms are blocked in the country), highlight the diverging opinions within the government on the current filtering policy and the possibility of controls being loosened in the future. Perhaps the most interesting and potentially pernicious control strategies are China’s efforts to control speech in social media.
The challenge of enforcing content restrictions on sites hosted outside of the country is not easily surmounted. Several countries have tried with little success to force social media and content hosting platforms to maintain a domestic presence that is within the reach of local control. China continues to be a notable exception after using a combination of laws and the blocking of outside platforms to create a social media market dominated by domestic firms. Attempts to convince foreign-based platforms to adopt local content restriction policies have yielded limited success. YouTube has agreed to geographic blocking of some videos; Google has agreed to remove results from country-specific versions of its search engine; and Twitter has set up a process for blocking tweets that are illegal according to national laws. In general, these steps fall far short of the aspirations of many regulators. A handful of countries, including Thailand, Pakistan, Turkey, and Vietnam, have resorted to blocking entire platforms for long periods of time without prompting the emergence of local alternatives.
The apparent slowing of the spread of filtering does not necessarily translate into generally good news for the state of civil liberties online. Pursuing individuals through legal and extralegal means continues to be a mainstay of control strategies that all too frequently impinge on basic human rights. The number of authors behind bars for their online writing continues to grow. Over the past several months, China and Vietnam, in particular, have arrested a large number of bloggers and microbloggers.
Cyberattacks have been employed in apparent efforts to influence content hosted abroad, though their use is problematic. Given the shaky ethics and merits of this approach, governments that do support and carry out such actions are not eager to take credit and must limit their level of involvement to maintain a measure of deniability. It is unclear whether the associated service disruptions have a substantial long-term impact. Hacking into servers is potentially more serious when it uncovers sensitive personal information; this is where hacking intersects with surveillance.
In the past year, we have learned much about the mechanisms and scope of digital surveillance, particularly as carried out by the NSA. It is logical to assume that the US government has a sizable advantage over other countries in its technical expertise and access to information flows. It is also reasonable to assume that the implied principles of digital surveillance—as suggested by NSA practices—are the same around the world: capture as much information as possible, by any available means. This is due in part to structural changes that may not be reversible. In prior generations, the cost of surveillance and data acquisition constituted a useful buffer between state surveillance and privacy; resource constraints forced law enforcement to focus on a limited number of targets on a scale where judicial oversight was a practical—if imperfect—deterrent against overreach.
Both cyberattacks and surveillance represent threats to a related set of principles of democratic governance: accountability and transparency. The prospect of governments working in the shadows greatly hinders efforts to document and analyze these activities and to design governance and accountability systems that include adequate oversight.
Over the past couple of years, lawmakers have endeavored to define the contours of permissible speech online and support the development of legal and administrative mechanisms for implementing regulations. Among the troubling examples of this legislative activity are the rumor regulations enacted in China that criminalize the spread of information deemed defamatory or in some way inaccurate (the regulations do not define what constitutes a rumor, leaving interpretation open to authorities). In Vietnam, Decree 72 restricts blogs and social websites to content related to 'personal information,' leaving discussion of news and current events in the realm of forbidden speech. Recent changes to media law in Jordan require websites that include news and commentary related to Jordan to be licensed by the government. Amendments to Bangladesh’s ICT law made in August 2013 criminalize “publishing fake, obscene[,] or defaming information,” or posting materials that “prejudice the image of the State” or “hurt religious belief.”
Intrusive filtering also comes at a political cost, even for authoritarian regimes. Rather than maintaining constant filtering regimes, an increasing number of countries crank up controls for shorter periods during times of unrest or political sensitivity, such as protests or elections. China and Iran have historically dialed up content controls for periods of time. At an extreme, blacking out the Internet has become a more common short-term tool and has been implemented in Egypt, Libya, Syria, and Sudan.
In countries committed to protecting online speech, the nature of regulatory challenges is different. Drawing a clean line between protected and unprotected speech is impossible, and processes for adjudicating the difficult cases get bogged down when operating at the scale of the Internet. A core problem is that increasing the effectiveness of measures to squash unprotected speech online endangers protected speech and threatens the development of a vibrant space for collaboration and innovation. A related concern is that the legal, administrative, and technical structures used for legitimate regulatory action are easily extended to levels that trample civil liberties and blunt the benefits of economic, political, and social activity online. In many countries, largely strong democracies, an appreciation for these tensions has supported policies characterized by regulatory restraint and prompted the passage of laws and policies that affirmatively build in speech protections. This represents a stark contrast to countries that aggressively constrain online communication.
The treatment of intermediary liability is perhaps the best single indicator of the tone and general disposition to online speech—the countries that require intermediaries to police content on their platforms also tend to employ other strategies to restrict online content and activity, and those that limit intermediary liability have the most active online environments.
However, promoting productive online activity is by no means straightforward, and for governments that seek to promote greater online engagement among their citizens, a number of difficult policy challenges lie ahead. Among these is resolving a host of complex issues related to cloud computing, which will be difficult both in the West and in less open environments. Net neutrality and broadband policy debates are tangled up in the age-old ideological disputes over the proper role of government and standards for intervening in private markets. These philosophical differences extend as well to debates over privacy. Many privacy advocates expect governments to play a more proactive role in crafting online privacy protections, though others favor a hands-off approach. The EU is taking a leading role in defining mechanisms to protect privacy. The complexities of transnational data flows again come into play in the realm of privacy, as conflicting privacy regimes may impede access to outside platforms and data services. Harmonization of these regimes into a global data privacy standard is one possible solution that has gotten a boost from the NSA surveillance controversy.
The regulatory approaches of the BRIC countries—Brazil, Russia, India, and China—reflect much of the variation in Internet strategies. China continues to set the standard for applying an extensive and multi-pronged approach to keeping a lid on digital activism that employs legal, technical, and social control mechanisms. Yet online discussions and debates in China are extremely active and take on a very wide range of issues and debates not featured in traditional media. Russia has traditionally relied upon non-filtering methods, including offline intimidation of journalists and the threats of legal action and surveillance, while allowing political discussion online to flourish. In both China and Russia, we see evidence that governments are more concerned with political organizing online than they are with freedom of speech and criticism of the government, although the two are inextricably linked. While government filters can slow the diffusion of information, attempts to prevent the distribution of ideas, memes, articles, and videos online have proven to be futile. Civil society organizing online, however, is both a bigger threat to non-democratic and semi-democratic regimes and easier to disrupt. In both China and Russia, the approach appears to be focused on dismantling emergent efforts at social mobilization before they take hold. This is often achieved by targeting key hubs and leaders, while allowing a good degree of political debate to continue.
India and Brazil have much stronger commitments to freedom of speech, while diverging in interesting ways from the policies adopted in North America and Europe. India has adopted the safe harbor provisions for intermediaries that played a key part in the emergence of the Internet. The same body of law also opens up a broad range of speech to possible criminal liability and gives the government broad authority to order the blocking of Internet content. India has also taken large steps to implement a national-level government identification scheme meant in part to facilitate the provision of government services and engagement in economic activity, including to many of those most in need. This initiative collides with a host of difficult privacy issues that remain unresolved. Brazil has at times adopted policies that many would describe as heavy-handed. Yet Brazil has also experimented with some of the world’s most innovative and progressive approaches for engaging citizens in government decisions using digital tools. If passed into law, the Marco Civil da Internet in Brazil would perhaps represent the world’s most extensive legal assertion of individual online rights.
The diversity of regulatory approaches to the Internet around the world is now sufficiently broad and longstanding that lessons can be drawn with reasonable confidence. The policies adopted by many governments designed to protect online speech, enable the emergence of collective action, and promote business activity have had a strong positive influence on private sector and civil sector activity. Laws and policies meant to provide online security and limit harmful speech have been less successful and have often come at the cost of stunting the development of social capital online, although in many cases this was the very objective.