Many of the Internet’s current security and vulnerability concerns are the product of deliberate design philosophy and choices about functionality made in its early days.
This chapter demonstrates how the architecture of the Internet shapes the way we use it and the possibilities of regulation—code is law. The original design was not concerned with control and pushed complexity to the edges of the network. The trade-off between security, control, privacy, and connectivity is decided, to a certain extent, at the architecture level.
This book provides a conceptual framework through which newcomers can begin investigating the cyber-frontier. Rather than offering a technical account of the elements of the network, it poses critical questions about how cyberspace works and who makes its rules. This set of questions provides a useful framework to keep in mind while diving into the more technical sections.
This sub-section provides an overview of the network, the protocols it employs to transfer data, and the various ways computers connect to the Internet. Its purpose is to consider the different domains of cyberspace—systems, applications, and people—and provide an “under-the-hood” understanding of how they interact.
An overview of how the Internet works and why it works the way it does. It provides a technical introduction and covers some of the design principles that guided the Internet’s initial architecture. The conclusion outlines some implications of that design for policy makers: different types of service providers cannot always see the parts of the information that are not relevant to them. That is, an ISP cannot always see the higher-level information in the packets (it may be encrypted, for example), and a higher-level service provider (a Web server, for instance) cannot see the routing information held in the routers, nor determine the topology and capacity of the Internet. This article also includes a glossary of key terms.
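To make the layering point concrete, the following toy Python sketch (an illustration added here, not drawn from the reading) shows each network layer wrapping the one above it. The addresses are made up, and the “encryption” is a stand-in XOR used only to show that intermediaries see opaque bytes; real systems use TLS.

```python
# Toy illustration of protocol layering: each layer wraps the layer above
# it, and each party inspects only the headers relevant to its job.

from dataclasses import dataclass

@dataclass
class Packet:
    ip_header: dict        # what routers/ISPs use to forward the packet
    tcp_header: dict       # what the endpoints use to order and acknowledge data
    payload: bytes         # application data; may be encrypted end to end

def toy_encrypt(data: bytes, key: int) -> bytes:
    """Stand-in 'encryption' (XOR) purely for illustration."""
    return bytes(b ^ key for b in data)

packet = Packet(
    ip_header={"src": "10.0.0.5", "dst": "93.184.216.34"},
    tcp_header={"src_port": 52100, "dst_port": 443},
    payload=toy_encrypt(b"GET /index.html", key=0x2A),
)

# A router in the middle looks only at the IP header to choose the next hop...
print("Router sees destination:", packet.ip_header["dst"])
# ...while an encrypted payload is unintelligible to it.
print("Router sees payload bytes:", packet.payload)

# The receiving server decrypts the payload but never learns the route taken.
print("Server reads:", toy_encrypt(packet.payload, key=0x2A).decode())
```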
A technical yet accessible illustrated overview of the main building blocks and connection types. The first section, Understanding the Internet’s Underlying Architecture, provides an overview of the Internet and examines fundamental architectures, protocols, and general concepts. The second section, Connecting to the Internet, looks at the various ways computers can connect to the Internet and covers material not addressed by the other readings in this section. The main takeaway from this introduction is that connecting to the Internet will become increasingly easy and will occur at ever higher speeds.
Notes: For users on the Harvard network: available as an e-textbook through <a href="http://proquest.safaribooksonline.com.ezp-prod1.hul.harvard.edu/0789736268?uicode=harvard">Hollis</a>.
About 99 percent of Internet traffic travels through undersea cables maintained by private providers. Securing and monitoring these cables raises questions about how costs are shared between private and public actors, territoriality, and international cooperation.
This book traces the history of modern cryptography and how it moved from being a tool employed by governments to a publicly available technology designed and used by private actors. Chapter 3 describes how researchers sought to answer the following question: how can you create a system in which people who have never met can speak securely? The answer rests on one-way mathematical functions, now popularized as public and private keys.
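As an illustration of that idea (and not of the book’s own presentation), the toy Diffie-Hellman exchange below uses deliberately tiny numbers to show how two parties who have never met can arrive at the same secret over a public channel. Real deployments use very large primes or elliptic curves.

```python
# Toy Diffie-Hellman key exchange with small, illustrative numbers.

p, g = 23, 5            # public parameters: a small prime and a generator

alice_private = 6       # chosen secretly by Alice, never transmitted
bob_private = 15        # chosen secretly by Bob, never transmitted

alice_public = pow(g, alice_private, p)   # sent over the open network
bob_public = pow(g, bob_private, p)       # sent over the open network

# Each side combines its own secret with the other's public value.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)

assert alice_shared == bob_shared
print("Shared secret:", alice_shared)     # identical on both ends, yet never sent
```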
Public-key cryptography and related standards and techniques underlie many commonly used security features, including signed and encrypted email, form signing, object signing, single sign-on, and the Secure Sockets Layer (SSL) protocol. This document introduces the basic concepts of public-key cryptography.
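For readers who want to see one of these features in code, here is a minimal signing-and-verification sketch using the third-party Python `cryptography` package (not part of the document above); the message text, key size, and padding choices are illustrative rather than recommendations.

```python
# Sign a message with a private key and verify it with the matching public key.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"This message was written by the key holder."

# Sign with the private key, which only the author holds.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone with the public key can check the signature; verify() raises
# InvalidSignature if the message or signature has been altered.
try:
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature is valid.")
except InvalidSignature:
    print("Signature check failed.")
```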
Many of the online authentication mechanisms that enable transactions rely on faith in the Secure Sockets Layer (SSL) protocol and Certificate Authorities. Growing evidence suggests that this trust model is highly vulnerable, and there has been much discussion of alternatives.
The Secure Sockets Layer (SSL) protocol has been universally accepted on the World Wide Web for authenticated and encrypted communication between clients and servers. This article introduces key concepts and also touches on potential threats such as man-in-the-middle attacks.
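As a brief sketch of the client side of this (added here for illustration), Python’s standard ssl module with default settings validates the server’s certificate chain and host name, which is exactly the check a man-in-the-middle attacker must defeat. The host name below is only an example.

```python
# Open a TLS connection with certificate and hostname validation enabled.

import socket
import ssl

hostname = "www.example.com"   # illustrative; any HTTPS server could be used

# create_default_context() turns on certificate validation and hostname
# checking; a man-in-the-middle presenting a certificate that does not chain
# to a trusted Certificate Authority causes the handshake to fail.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Negotiated protocol:", tls.version())
        print("Server certificate subject:", cert.get("subject"))
        print("Issued by:", cert.get("issuer"))
```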
This brief blog post identifies the core problems with the Certificate Authority mechanism on which SSL relies, chiefly its lack of “trust agility,” and critically examines suggested alternatives such as DNSSEC.