ILP Lectures


Belén Luengo Palomino

LAWS0339 Internet Law and Policy


Contents
LAWS0339 Internet Law and Policy
GENERAL INFORMATION
LECTURE 1 - Welcome to the Internet: Technologies and Histories
    Internet Regulation – Guadamuz
    Internet Histories: Partial Visions of People and Packets – Cath-Speth
SEMINAR – INTERNET HISTORY
LECTURE 2 – An Unregulable Network?
    The Generative Internet – Zittrain
    Piracy, Security and Architectures of Control – Cohen

GENERAL INFORMATION
- 2h seminars on Monday 4-6pm
o Discuss material
o Look at legal concepts
- 4x 1h tutorials per semester
o Light preparation
o Apply knowledge from the lecture
o Real life examples (legislation, discussion on policy, cases…)
- Put questions on the Q&A instead of emailing them, he won’t reply by mail
o It is ok to answer other people’s questions
- Formative essays
o Mid-late Nov
o Late Feb
- Summatives
o Coursework (2500 word, due at beginning of Term 2)  50%
 We can suggest our own questions and pick questions suggested by
others
o Take-home exam (48h – 2 essays of 1250 words, Term 3)  50%
 Apply content from the whole module
 Not problem questions, but essays that you can approach from the angle
that you like most (technical, social, legal…)

LECTURE 1 - Welcome to the Internet: Technologies and Histories


Internet Regulation – Guadamuz
Introduction
- How can we regulate the Internet?
a) the Internet is a free and open space with no regulation, and it is incapable of being regulated


b) regulation is not only desirable; the Internet is now completely regulated because of
state surveillance, and all semblance of freedom is a mere illusion
The Internet
- the Internet is a ‘network of networks’ that operates using common protocols designed
to ensure resilience, distribution, decentralization and modularity.
- Started as a military program to create a communication infrastructure that could
survive a nuclear strike
o Need for decentralization  no governing node
o distribution  message needed to be broken down into pieces and put back
together by the recipient
o heterogeneity/modularity  different pieces capable of communicating using
standard communication tools
- Elements of the Internet according to their function (a short code sketch follows this list):
1. Link layer  protocols that allow connection of a host with gateways and
routers within a network (usually a local area network or LAN, e.g. Ethernet
protocols)
2. Transport layer  provides end-to-end protocols for communication between
hosts in the system (i.e. TCP, or UDP)
3. Internet layer  Because the Internet is a network of networks, every
computer connected to it must be able to find its components. Thus, it allows
data packets to reach their destinations by allowing identification of
participating computers based on their IP address (IP stands for Internet
Protocol)
4. Application layer  This is the top communication level made up of protocols
for user applications such as sending mail (SMTP), files (HTTP) or system
support (i.e. identifying servers by name instead of IP addresses, known as
DNS)
- The role of the first layers is to distribute information across the networks; they are so
fundamental that they cannot easily be regulated.
o Made up of protocols established by standard-setting bodies such as the IETF,
W3C, IESG, IAB and ISOC.
- Application layer  where most user activity happens, the actual communication. This
is the one where most regulation takes place.
o However, a country that wants to control the flow of information to its citizens
would place technical controls on the Internet layer, for instance.
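To make the layer split above concrete, here is a minimal Python sketch (my own illustration, not from the reading; example.com is just a placeholder host) of an application-layer request that silently relies on the lower layers:

```python
import socket

# Application layer: resolve a human-readable name via DNS.
ip_address = socket.gethostbyname("example.com")
print("DNS resolved example.com to", ip_address)

# Transport layer: open a reliable, end-to-end TCP connection to port 80.
# The Internet layer (IP) routes the individual packets and the link layer
# moves them across each physical network; neither is visible from here.
with socket.create_connection((ip_address, 80), timeout=5) as conn:
    # Application layer again: speak HTTP over the TCP connection.
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(conn.recv(200).decode(errors="replace"))
```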

A Tale of Two Internets


A) The Dark Web
- Visible Internet  users connect to the network using the communication layers and
connect to one another with the application layer
o It is formed by shared applications such as the World Wide Web, email, social
media apps, games, file transfers, etc.
o People use the application World Wide Web, used to surf websites created with
HTML
 But because the Internet is decentralized and modular, anyone could
create a browser that used their own application protocol
- Dark Web  uses the Internet’s transport layers but it is made of applications that only
a few know how to operate. Highly encrypted space which is seldom regulated.
o Users cannot be traced or identified; the Dark Web is often described as a realm of criminals


o Tor Browser uses the TOR Hidden Service Protocol to connect computers that
are also connected to the Internet
 Created by a group of encryption enthusiasts, it was designed to
anonymize data using voluntary relays and routers that mask the users’
identities
 With this browser, one accesses websites that are not available in
common browsers like Chrome and Firefox
o Possible to post any type of content due to anonymity, and buy almost anything
using decentralized payment methods like Bitcoin
 Trial of Ross Ulbricht, operator of the Silk Road website (marketplace
for any sort of illegal material)
- Contributes to the idea that the Internet is so vast that it cannot be regulated
B) Snowden’s Internet
- Snowden, a former NSA contractor, revealed the mass surveillance conducted by the NSA
in the US and GCHQ in the UK
o Issues he uncovered:
1. Link layer  NSA’s hacking unit known as the Tailored Access Operations
(TAO) capable of breaking into a target’s communications by tapping into
people’s connections to the network at the point of origin
2. Transport layer  communications in the transport layer are not encrypted by
default, and the NSA was able to tap into underwater cable systems
3. Internet layer  one of the fathers of the Internet, Vint Cerf, reported that the
NSA stopped him from building encryption into the core TCP/IP protocols.
Thus, there is a lack of encryption in these protocols
4. Application layer  NSA was able to collaborate with technology firms to
conduct surveillance like Skype
- This is a vision of a much more controlled Internet, relying on industry collusion and
surveillance
C) Real Internet
- Truth lies in the middle  The Internet presents a space that is difficult to regulate
despite the existence of the surveillance apparatus previously explained
o Relatively easy to identify most people, but there is still some level of
anonymity, and it would grow if faced with any attempt at exercising
enforcement online
 When regulators tried to curb file-sharing, the number of users of
anonymous services and virtual private networks rose
- There is still infringement of copyright, but a more centralized experience is emerging
for most users:
o State control is possible  China is censored
o Everyday experience of most users depends on what private enterprises show
them  experience is defined by app developers
- A minority of technically-oriented users operate in heavily encrypted spaces with near-
impunity from regulation, while a large number of people suffer constant regulation
through censorship, blocking and filtering of content
o A person’s experience of the Internet will depend on their education, resources
and geographical location


Regulation Theories
A) The more things change, the more they stay the same
- When the Internet was created, many thought that it would herald a new golden era of
prosperity and global understanding, an age of ‘computer-aided peace’ without
nationalism
o Yet, this has not been the case.
o Historically, the invention of the telegraph was followed by a misuse for
fraudulent purposes and the creation of a highly skilled technical class of users
which brought social changes
o In 1970~80s people were already concerned about the end of privacy or
whether censorship was acceptable.
o Braman  what has changed is that the Internet has democratized applications,
and its regulation has shifted from concerns about direct regulation to
discussion about intermediary liability. Before, only professional entities were
subject to regulation, now everyone uses social media.
- Many of the themes remain the same, but the Internet has some key unique features that
raise the question of whether it is possible to regulate it
B) The Return of Cyber-Libertarianism
- Idea that the Internet is a separate space subject to different laws; in words of Barlow:

Cyberspace consists of transactions, relationships, and thought itself, arrayed
like a standing wave in the web of our communications. Ours is a world that is
both everywhere and nowhere, but it is not where bodies live. [...] Our identities
have no bodies, so, unlike you, we cannot obtain order by physical coercion. We
believe that from ethics, enlightened self-interest, and the commonweal, our
governance will emerge

o Internet communities should be able to exercise self-regulatory control because
governments would not be able to intervene
o Post and Johnson’s Net Federalism  the Cyberspace is clearly a separate
entity and should be treated as an independent regulatory sphere for all legal
purposes. Its regulation would be assembled like federal states are brought
together under a unifying ideal; each one would bring its own rules
- These arguments underestimate the regulatory power of the government; yet, when new
technologies arise, most people immediately resort to these arguments
o Maybe it is because these ideas are normative and not descriptive, that is, the
Internet can be regulated but IT SHOULD NOT be regulated. Is regulation in
the Cyberspace desired?
C) Regulating the Gateways
- The reality is that there are power struggles. Up until 2000 it was a mixture of cyber-
libertarianism and half-hearted legislative solutions.
- Even if in its origin it has an open architecture, governments have been able to draw
borders online, segregating it into national intranets to filter undesired content
o While the global architecture of the Internet remains due to routers and
distributed protocols, the actual physical Internet tends to be centralized


o Countries have internal connections working with routers, and physical
connections with the exterior, which allows them to create Internet chokepoints
or controlled gateways (i.e. Great Firewall of China)
 The Great Firewall works by deploying hardware routers at each of the
entry points into the country. These routers are given lists of banned IP
addresses, so when an Internet host within China makes a request to
access a banned site, the router does not forward the request to the
target host, so the site appears not to exist, and a network error message
is returned to the client (a toy sketch of this filtering follows at the end of this section).
 Disconnection of Egyptian Internet during Arab Spring 2011 
possible due to a national firewall consisting of an extra layer of
Internet servers that mediate all traffic in and out of the country through
servers running the appropriately named Border Gateway Protocol
(BGP).
- It has become clear then that the most effective regulatory solution to online content is
to exercise control at the access points = Internet is decreasingly distributed
- Moreover, private enterprises also function as regulators of their own environment,
controlling the flow of information and gatekeeping, being able to unilaterally remove
content
o They do so either prompted by the government or on behalf of other private
actors
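To make the chokepoint idea concrete, here is a toy Python sketch (my own illustration; the IP addresses are documentation/test addresses, and real national firewalls are far more complex) of a border gateway that drops requests to banned addresses so that the site simply appears not to exist:

```python
# Toy border gateway: drop requests whose destination is on a banned list.
BANNED_IPS = {"203.0.113.7", "198.51.100.42"}   # hypothetical banned destinations

def forward_request(destination_ip):
    """Return the gateway's response to a host requesting destination_ip."""
    if destination_ip in BANNED_IPS:
        # The request is never forwarded; the client only sees a network error,
        # so the banned site simply appears not to exist.
        return "network error: host unreachable"
    return f"forwarding request to {destination_ip}"

print(forward_request("203.0.113.7"))   # blocked destination
print(forward_request("192.0.2.10"))    # allowed destination
```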
D) Code
- Lessig’s ‘Code and Other Laws of Cyberspace’ has become one of the more influential
books on Internet regulation
o There are four main modes of regulation  markets, norms, law and
architecture
 He introduced the 4th one, architectural regulation in technological
settings
o Whether the Internet can be subject to regulatory control will depend entirely
on its underlying architecture.
 The code is open to everyone and thus hard to subject to government
regulation
 But the plumbing, the architecture, the protocols and communication
tools that build the online world can be controlled
- The invisible hand of the cyberspace introduces an ordering force in the architecture of
the Internet, but it is shaped by the code made consciously by programmers and
policymakers
o This code shapes the underlying architecture, that is, the invisible hand is
constrained by conscious decisions when writing the code
o In summary, architectural decisions made at the outset of the development of a
technology have huge implications about how it is going to be regulated.
 i.e. the TCP/IP authentication vulnerability (it is possible to perform IP
spoofing and pretend to be sending data from a different address
because of a lack of authentication; see the toy sketch after this section)
- The code model relies on coders and policymakers to make informed decisions, but
they might be wrong sometimes. However, we can also introduce new code to create
solutions for things that were not working
o Regulation by code can increase the efficacy of internet regulation, yet it will
tend to undervalue the public interest and has a lack of democratic legitimacy
- Internet Code Pyramid:


1. Foundational protocols, TCP/IP (need for a conscious architectural decision)


2. All the other protocols are built on that one, responding to the characteristics of
that existing network; developers are constrained
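To illustrate the TCP/IP authentication vulnerability mentioned above, here is a toy Python sketch (my own illustration; this is not the real IP header format): because nothing in the packet proves who sent it, a spoofed source address is indistinguishable from an honest one.

```python
from dataclasses import dataclass

@dataclass
class ToyPacket:
    source_ip: str        # claimed sender; nothing in the protocol verifies this field
    destination_ip: str
    payload: bytes

def receiver_view(packet):
    # The receiver can only report what the packet claims about its origin.
    return f"packet appears to come from {packet.source_ip}"

honest = ToyPacket("192.0.2.10", "198.51.100.5", b"hello")
spoofed = ToyPacket("192.0.2.10", "198.51.100.5", b"hello")  # forged by a third party

print(receiver_view(honest))
print(receiver_view(spoofed))   # indistinguishable from the honest packet
```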
Complexity and Self-Organization
- The Internet is a complex network that displays self-organizing characteristics
o It is necessary to understand these characteristics in order to regulate it
o The network is made up of nodes and links that grow according to power laws.
The older nodes attract more nodes, and that reinforces them = creates large
hubs that explain the apparent ordered nature of the system (a short simulation
follows at the end of this section)
o Scale-free nature  the network is fractal, meaning that it has the same
architectural features at large and small scales, which makes it resilient to
attacks
 This also means that undesired networks are also robust
- Luhmann’s theory of autopoiesis  social systems that respond to internal stimuli
instead of relying on external elements; these elements come together to generate
stability in the system.
o Chaotic systems tend to stability in the long run  the Internet is an autopoietic
system which becomes organized because its elements favor clustering and
stability to manage the complexity
o Regulators can…
 The network responds to its own self-organizing elements and cannot
be governed
 Cyber-libertarian view
 It responds to self-organizing elements and can still be governed
 Optimistic view
 Build the systems to fit the regulatory goals, so that self-
regulation grows from that seed (engineered self-organization)
- Self-organization theory of Internet regulation  the Internet is a complex system
that displays self-organization. In order to regulate the digital environment efficiently
and successfully, it is imperative that one understands how it is organized, what
characteristics are present, what elements act as fitness peaks and how architectural
decisions affect its emergent features.
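As a rough illustration of the power-law growth described above (my own sketch, not from the reading), the following Python simulation of preferential attachment shows how a few heavily linked hubs emerge when new nodes attach in proportion to existing nodes' degrees:

```python
import random
from collections import Counter

random.seed(1)

# Preferential attachment: each new node links to an existing node chosen in
# proportion to its current degree, so early and popular nodes become hubs.
endpoints = [0, 1]                     # one edge between nodes 0 and 1 to start
for new_node in range(2, 2000):
    target = random.choice(endpoints)  # sampling this list weights nodes by degree
    endpoints.extend([new_node, target])

degrees = Counter(endpoints)           # a node's count here equals its degree
print("Top 5 hubs (node, degree):", degrees.most_common(5))
print("Median degree:", sorted(degrees.values())[len(degrees) // 2])
```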
Epilogue: The Centrality Menace
- In 2013 hundreds of thousands of websites failed due to increasing centralization of the
nodes, which is undermining the original resilience of the Internet
o Distributed and open Internet is important, must be protected against…
 Government surveillance
 Profit-seeking monopolies
 Public-private conglomerates
o Regulations are needed to tackle these issues

Internet Histories: Partial Visions of People and Packets – Cath-Speth


Introduction
- Internet history will shortly be indistinguishable from human history – Crocker
- Point of the author  Internet engineers reject responsibility for the rights-eroding
properties of the protocols they created
o Most research focuses on the technical backbone of the Internet, rather than its
social side


- Visions about how the Internet should be are known as ‘sociotechnical imaginaries’; these
tend to be ‘libertarian imaginaries of an identity-free and bodiless sociotechnical future
shaped the Internet’ (Paris 2020, 8)
o The history of the internet has been shaped by the imaginaries of just a few
people, mostly white male scientists = insensitive to human rights concerns
o At least, that’s the liberatory history that is mostly told but author believes there
is not a single history, but many parallel ones
- Author takes an anthropological stance of the history of the Internet, telling the stories
of the people who created it who were influenced by the culture they were living in
A Brief History of the Internet
- the Internet’s technical functions are inextricably linked to its social functions
o force of the Internet’s sociotechnical imaginaries of individual liberation and
optimism in shaping its material design
- Networking protocols are the rules, or blueprints, that enable interoperability among
technologies made by different manufacturers (the Internet is made of hundreds of
them)
o These are developed in industry-led standard-setting bodies, like the IETF.
o Made by predominantly white, male and Western, technically savvy and
university educated individuals who favored matters of surveillance, individual
freedom and liberty over social justice, equity or anti-discrimination
The Moonshot Internet: The intergalactic computer network
- The Internet was born within ARPA (Advanced Research Projects Agency), working
for the USA Department of Defense during the Cold War
o Licklider was the first director of the Information Processing Technology
Office (IPTO) there, where he highlighted some barriers to creating a network
of computers such as the creation of norms regarding a shared language for the
network
o He recognized that the aspirations of the individuals involved and the network
were intertwined, blurring technology’s social and technical functions
o There were discussions about the function it would fulfill, proving that its
origin is social rather than technically constructed
- Moonshot thinking  a utopian view of technology as able to foment positive change
in society by solving current and future problems through technology
o It makes engineers overlook how their technologies might exacerbate social
inequities on earth.
o Licklider mentioned that the Internet reflected research and military interests,
but he overlooked the social aspect
- It was decided that the Internet would work based on packet-switching technology and
protocols because…
o Baran wanted to create a technology that would survive a nuclear attack
o Davies wanted to make accessing time-sharing computer resources more
efficient (universities and other institutions were sharing computers at the time)
o Kleinrock published the first paper on packet-switching theory

The Military Internet: Advanced Research projects agency network


- They wanted to create a network for ARPA, known as ARPANET
o ARPANET’s governance body later evolved into IETF


o ARPANET opted for packet-switching instead of circuit-switching because it


was more efficient, resilient and harder for messages to be intercepted entirely
 these were all social needs of the time
Efficiency responded not only to war, but also to cost reduction through
resource sharing
 Roberts was asked to build a network that would allow multiple
computer science departments to share their computing resources
(emphasis tends to be placed on the resilience, but this was the ultimate
reason)
 This tendency of sharing resources is a product of the era
Modularity
- ARPANET has 4 layers (top to bottom)  the application (or process) layer, the host-
to-host layer, the Internet layer, and the network interface (or local network layer).
o Separation allowed engineers to develop the network with specific protocols,
each responsible for a specific limited set of tasks without knowledge of the
entire network
o This modularity affects the vision of the IETF on regulation  “that is
somebody else’s problem”; engineers are not expected to consider the social
impact of their work
Protocols
- Modularity requires Internet protocols because layers need a shared language to
exchange data
o The Network Working Group (NWG), which later became the IETF, was responsible
for developing a set of informal norms and relationships to manage the
development of the network
 Initially formulated as the Request for Comments (RFC) series (the
IETF still has these, which set the technical requirements for
interoperability and socialize participants)
 The IETF is made of friends who push each other to do better
o 1973 DARPA (research agency of DOD) developed a new protocol called
Transmission Control Protocol (TCP) which enabled 2 hosts to connect and
facilitated the reliable delivery of data between them
 TCP was expanded with the Internet Protocol (IP), which addresses
and routes packets (TCP just ensures transmission is reliable)
 TCP  provides a single language that allows computers to
connect over the network (Host-to-host layer)
 IP  enables information to be routed between them (Internet
layer)
-
The Science Internet: National Science foundation network
-
The Mainstream Internet
-
The Modern Internet: Oh well, we tried
-


Advocacy in the Computing and Internet Industry


- Efforts have been made between the 1960s and now to include urgent societal concerns
in the development of the Internet.
Conclusion
-

SEMINAR – INTERNET HISTORY


- 1950s timesharing on expensive mainframe computers
o People would share them (they were very expensive) to compute one after
another, but this was very hard to manage
o More efficiency was required, which led to ARPA’s idea of an “intergalactic
network”
 The name ‘Internet’ itself comes from ‘internetworking’, but this ‘intergalactic network’ idea seeded it!
o Core idea was that you could connect these computers to do research
- Stable Connections and Packets (a short sketch follows this item)
o Accessing a mainframe remotely required a stable connection, but if it dropped
everything would be lost
o Instead, they decided to split up the data into packets and give each one an
address so that they would get reconstructed when they reached their
destination
o Thus, if one of them got lost, that specific one could be sent without having to
resend the entire data
 In fact, when you are video calling someone, there is a small gap
(larger if you are very far away) because the packets have to be
reassembled at the destination
o Packet switching simultaneously invented in the National Physical Lab in
London by Davies, and RAND (Research and Development) in the US by
Baran
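A tiny Python sketch of the packet idea above (my own illustration): the message is cut into numbered packets, they can arrive out of order or get lost, and only the missing packet needs to be resent before reassembly.

```python
import random

random.seed(7)
MESSAGE = b"Packets let a message survive an unreliable network."
PACKET_SIZE = 8

# Split the message into numbered packets.
packets = {seq: MESSAGE[i:i + PACKET_SIZE]
           for seq, i in enumerate(range(0, len(MESSAGE), PACKET_SIZE))}

# Simulate transmission: packets arrive out of order and one gets lost.
in_transit = list(packets.items())
random.shuffle(in_transit)
in_transit.pop()                       # one packet is lost on the way
received = dict(in_transit)

# The receiver spots the gap and asks for just the missing packet again.
for seq in packets:
    if seq not in received:
        received[seq] = packets[seq]   # retransmission of that packet only

# Reassemble by sequence number.
reassembled = b"".join(received[seq] for seq in sorted(received))
assert reassembled == MESSAGE
print(reassembled.decode())
```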
- ARPANET went Global
o The first place with internet outside of the US was UCL, and everyone was
connected to the internet through this university (1st engineering computer in
the UK)
 Through the lab of Peter Kirstein in 1973
o However, this was the precursor of the Internet
 The Internet is a network of networks which are connected to each
other; this was made possible by Vint Cerf at Stanford and Kahn at
ARPA, proposing TCP/IP in 1977 (still used as the foundation for
everything we use today). Once this idea took off, the internet we know
today was created
- The Internet is a Layered Network of Networks
o Application Layer  These include protocols for user applications: standards
for what is in the data packets. The web is HTTP, email is SMTP, DNS is a
protocol that turns web domains into IP addresses…
o Internet Layer  IP does not give a route for packets, but allows data packets
to reach their destination by identifying participating computers
o Transport Layer  end to end protocols to correctly move data between
networks
o Link Layer  physical, frequencies…


- USENET started in the late 1970s and it was a precursor to internet forums
o Users would post articles to their local server, which would periodically be
synced with other servers as a decentralized system
 IMDB started on USENET before the Web even existed
o Emails are basically the same, invented in 1971 by Tomlinson, they already
used the @ character to separate username and host
o Whole Earth ‘Lectronic Link (WELL) was used by people involved in 1960-
70s countercultural movements, computer enthusiasts, journalists…
- Most internet users were researchers
o They were not trying to create something that would be used by the entire
world, but simply make their tasks more efficient

LECTURE 2 – An Unregulable Network?


The Generative Internet – Zittrain
Introduction
- The Internet was also built to be open to any sort of device
- Personal Computers (PCs)  affordable, multifunctional devices that are readily available
at retail outlets and easily reconfigurable by their users for any number of purposes
o People freely write code in them and spur innovation (openness to third-party
innovation)
 Resisted control so far, but increasingly at odds with itself
 Threatened by its vulnerability to malicious code since it can run code
from third parties
o Author suggests greater constraints on that creativity to guarantee stability and
safety
 But many of today’s uses would have never been possible if that was
the case
- If complete fidelity to end-to-end network neutrality persists, our PCs may be replaced
by information appliances or may undergo a transformation from open platforms to
gated communities to prisons
o It is not a matter of Linux v Microsoft, since both are open (allow users to use
other people’s programs); it should be generative vs nongenerative
- Suggestion:
o Collective regulation might also entail new OS designs that chart a middle path
between a locked-down PC on the one hand and an utterly open one on the
other, such as an OS that permits users to shift their computers between “safe”
and “experimental” modes, classifying new or otherwise suspect code as suited
for only the experimental zone of the machine
A Mapping of Generative Technologies
- Generativity denotes a technology’s overall capacity to produce unprompted change
driven by large, varied, and uncoordinated audiences.
o Right now, and since the beginning, the Internet has a lot of it
o Connected to a grid without centralized control = nearly completely open to
innovation
A) Generative Technologies Defined
1. Capacity for leverage
 A generative technology makes difficult jobs easier


 The more effort a device or technology saves the more generative it is


2. Adaptability
 Adaptability refers to both the breadth of a technology’s use without
change and the readiness with which it might be modified to broaden its
range of uses.
 Adaptability in a tool better permits leverage for previously unforeseen
purposes.
3. Ease of Mastery
 reflects how easy it is for broad audiences both to adopt and to adapt it:
how much skill is necessary to make use of its leverage for tasks they
care about, regardless of whether the technology was designed with
those tasks in mind.
 Ease of mastery also refers to the ease with which people might deploy
and adapt a given technology without necessarily mastering all possible
uses.
4. Accessibility
 The more readily people can come to use and control a technology,
along with what information might be required to master it, the more
accessible the technology is
 Barriers to accessibility can include the sheer expense of producing it,
taxes and regulations imposed on its adoption or use, and the use of
secrecy and obfuscation by its producers to maintain scarcity without
necessarily relying upon an affirmative intellectual property interest
5. Generativity Revisited
 As defined by these four criteria, generativity increases with the ability
of users to generate new, valuable uses that are easy to distribute and
are in turn sources of further innovation.
B) The Generative PC
- The modern PC’s generativity flows from a separation of hardware from software
o Hardware  hard to change once it leaves the factory
 Calculators have their software bolted into the hardware (the
manufacturer knows what use will be given to it)
o Software  it allows unmodified hardware to execute a new algorithm
 User can input new code
 the manufacturer can write new software after the computer leaves the
factory, and a consumer needs to know merely how to load the cassette

o PCs make use of Operative Systems (OS) which makes it easy to run new
software
 PCs were made to run software not foreseen by the manufacturer
 High-level programming increases accessibility  PCs are not more
leveraged but they make PCs more accessible to a wider audience of
programmers = a PC all programmers can program
o Market in third party software developed
 Such a range of developers enhanced the variety of applications that
were written not only because accessibility arguably increased the sheer
number of people coding, but also because people coded for different
reasons


 This diversity benefited manufacturers since greater availability of


software enhanced the value of PCs
o The true value of the PC lies in its availability and stability as a platform for
further innovation
 The hardware and OS are the “less important” parts
o Debate between free and proprietary OSs
 the generativity of the PC depends little on whether the OS may be
modified by end users and third-party software developer
 Windows is closed and still led to a lot of generativity  its
application programming interfaces enable a programmer to
rework nearly any part of the PC’s functionality and give
external developers ready access to the PC’s hardware inputs
and outputs)
o The technology and market structures that account for the highly generative PC
have endured despite the roller coaster of hardware and software
C) The Generative Internet
- The Internet today is exceptionally generative
o Thus, programmers independent of the Internet’s architects and service
providers can offer, and consumers can accept, new software or services.
- History
o Generativity not designed  those individuals thinking about the Internet in the
1960s and 1970s planned a network that would cobble together existing
networks and then wring as much use as possible from them
o They just wanted to connect networks, that was their concern. They wanted to
do so easily, without a central hub = stateless
o Introduction of commercial purposes = great generativity started
o the dominance of the Internet as the network to which PCs connected, rather
than the emergence of proprietary networks analogous to the information
appliances that PCs themselves beat
 First Internet existed through self-contained “walled garden” networks
(a bunch of universities sharing their content)
 Each network connected its members only to other subscribing
members and to content managed and cleared through the
network proprietor
o Peter Tattam (hobbyist) wrote Trumpet Winsock, a program that allowed
owners of PCs running Microsoft Windows to forge a point-to-point Internet
connection with the servers run by the nascent ISP industry (internet service
provider)
 marked the beginning of the end of proprietary information services
and peer-to-peer telephone-networked environments
 After recognizing the popularity of Tattam’s software, Microsoft
bundled the functionality of Winsock
o The greater generativity of the Internet compared to that of proprietary
networked content providers created a network externality:
 as more information consumers found their way to the Internet, there
was more reason for would-be sources of information to set up shop
there, in turn attracting more consumers


 Dispersed third parties could and did write clients and servers
for instant messaging, web browsing, e-mail exchange, and
Internet searching
o Internet remained broadly accessible
 anyone could cheaply sign on, immediately becoming a node on the
Internet equal to all others in information exchange capacity, limited
only by bandwidth
 The resulting Internet is a network that no one in particular owns and
that anyone can join.
- Hourglass figure  a simple set of narrow and slow-changing protocols in the middle,
resting on an open stable of physical carriers at the bottom and any number of
applications written by third parties on the top
o It is better to keep the basic network operating protocols simple because error
correction in data transmission (for instance) are best executed by client-side
applications
 Rather than changing the way routers on the Internet work
D) The Generative Grid
- Both noncommercial and commercial enterprises have taken advantage of open PC and
Internet technology, developing a variety of Internet-enabled applications and services
o Significantly, the last several years have witnessed a proliferation of PCs
hosting broadband Internet connections
o The generative PC has become intertwined with the generative Internet, and the
whole is now greater than the sum of its parts
- generativity is vulnerability in the current order
o millions of machines are connected to networks that can convey reprogramming
in a matter of seconds means that those computers stand exposed to near-
instantaneous change
 i.e. opens PCs to the prospect of mass infection by a computer virus
o Eliminating those vulnerabilities comes at the expense of the generativity of the grid

Generative Discontent
- identify three powerful groups that may find common cause in seeking a less generative
grid:
a) regulators (in part driven by threatened economic interests, including those of
content providers)
b) mature technology industry players
c) consumers
A) Generative Equilibrium
- Professors such as David Johnson and David Post maintained that cyberspace is
different and therefore best regulated by its own sets of rules
o Professor Lessig argued that policymakers should typically refrain from using
the powers of regulation through code (that is, they should not regulate through code)
 “Code is Law” as it deeply affects people’s behavior
 Fear that it will constraint people’s freedom
o The three branches of cyberlaw (first one is simply more regulation) can be
reconciled with:
 Locked-down PCs are possible but undesirable.
- Trusted systems  systems that can be trusted by outsiders against the people who use
them.


o In the consumer information technology context, such systems are typically
described as “copyright management” or “rights management” systems
 US government protects them
o These are the central fears of the “code is law” theory as applied to the Internet
and PCs
 technical barriers prevent people from making use of intellectual works,
and nearly any destruction of those barriers, even to enable lawful use,
is unlawful
 Lessig and Cohen raise alarm about such possibilities as losing the
ability to read anonymously, to lend a copyrighted work to a friend, and
to make fair use of materials that are encrypted for mere viewing
o These fears have remained hypothetical (piracy still happens to the same
degree)
- So long as code can be freely written by anyone and easily distributed to run on PC
platforms, trusted systems can serve as no more than inconveniences (can be overcome)
B) Generativity as Vulnerability: The cybersecurity fulcrum
- Consumers hold the key to the balance of power for tomorrow’s Internet
o They drive the market
 They should not tolerate a locked-down Internet because tolerating it will lead to more of it
- A Threat Unanswered and Unrealized
o Morris worm affected 1000-6000 computers
 Network engineers and government officials did not take any
significant preventive actions to forestall another
 The decentralized, nonproprietary ownership of the Internet and its
computers made it difficult to implement any structural revisions to the
way they worked.
- The PC/Network Grid and a Now-Realized Threat
o Now more people online = 24/7 surveillance, easier to detect viruses faster
 But today’s viruses are highly and nearly instantly communicable,
capable of sweeping through a substantial worldwide population in a
matter of hours
A Postdiluvian Internet
- Changes that may occur to the Internet and will negatively affect many of the
generative features of today
A) Elements of a Postdiluvian Internet
- Information appliances
o An “information appliance” is one that will run only those programs designated
by the entity that built or sold it.
 i.e. appliances to connect your TV to the Internet and watch shows
online
o Limited  inability to create standardized digital
output (recorded programs cannot be uploaded online)
o It gives the appliance complete control over the
combined product/service that it provides + consumers
find it trustworthy
o = less incentives for coders to program because people
prefer this
- The Appliancized PC


o The PC is heading in the direction of these information appliances.


 caution users before they run unfamiliar code for the first time = users
might be scared of running perfectly safe code
o Consumers are confused, they are scared of viruses and might not lead the
market towards protecting a generative Internet even if it benefits society
 The user is simply not in the best position to determine what software is
good and what software is bad
o Third-party vendors can write their own drivers and leave them unsigned, they
can sign them on their own authority, or they can submit them to Microsoft for
approval
 The third option is a good idea  OS creator can promise assistance if
anything goes wrong
 Microsoft could charge a fee to approve that code
 Small coders must decide either to fight for approval or to forgo it, but
then risk being installed on fewer PCs
o Automatic updating
 So far Apple and Microsoft install automatically only security related
updates that they deem “critical”; updates to the “regular” functioning
of the OS or auxiliary applications still require the consumer’s approval
 the OS and attendant applications become services rather than products
o Software makers can require consumers to purchase
regular updates and deactivate from a distance the
software of those who did not purchase the license =
reduce piracy
 this development is generatively neutral, merely shifting the locus of
generative software writing to a server on the Internet rather than on the
PC itself
B) Implications of a Postdiluvian Internet: More Regulability, Less Generativity
- Consumers deciding between security-flawed generative PCs and safer but more limited
information appliances (or appliancized PCs) may consistently undervalue the benefits
of future innovation (and therefore of generative PCs).
o benefits of future innovation are difficult to perceive in present-value terms
- From the regulators’ point of view, automatic updating presents new gatekeeping
opportunities
o Updates can be used to take away functionality due to “legal” concerns
o Professor Picker argues: once it becomes easy to revise distributed products to
make them less harmful (in the eyes of regulators), why not encourage such
revisions?
 Author: this can have an impact on generativity (iTunes never
succeeded in the podcast industry because it took them away too soon)
 Consider the consequences if OS makers were held responsible for all
applications running on their systems
o Shift of power from audiences towards OS developers
when deciding what to restrict
o frustrate the vast creativity and energy of “edge”
contributors
o License to code
 would exist, at first, simply to require a verifiable identity behind the
production of software


 However, different governments might make different judgments about
the software and thus could ask OS makers to block the offending
software only on PCs in their respective jurisdictions
o False dichotomy  we can make the grid more secure without sacrificing its
essential generative characteristics
 generative PCs and information appliances can complement each other
— the former providing a fertile soil for innovation, the latter providing
a stable, user-friendly instantiation of innovation.
How to Temper a Postdiluvian Internet
- some specific projects that could help solve some of the Internet’s most pressing
problems with as little constriction of its generative capacities as possible
A) Refining Principles of Internet Design and Governance
- Superseding the End-to-End Argument
o end-to-end neutrality does not fully capture the overall project of maintaining
generativity (which preserves freedom and creativity)
 According to end-to-end theory, placing control and intelligence at the
edges of a network (instead of middle/central intervention) maximizes
network flexibility and user choice
o However, consumers don’t know how to defend
against attacks and might demand (and manufacturers
provide) locked-down endpoint environments which
grant security and stability
o complete fidelity to end-to-end may cause users to
embrace the digital equivalent of gated communities
(prisons)
o they block generative possibility  the ability of
outsiders to offer code and services to users, giving
users and producers an opportunity to influence the
future without a regulator’s permission
o highly skilled users will still be able to enjoy
generative computing on platforms that are not locked
down, but the rest of the public will not
 New generativity principle  a rule that asks that modifications to the
PC/Internet grid be made when they will do the least harm to its
generative possibilities
o it may be preferable in the medium term to screen out
viruses through ISP operated network gateways
o Although such network screening theoretically opens
the door to additional filtering that may be undesirable,
this risk should be balanced against the very real risks
to generativity
- Reframing the Internet Governance Debate
o Those who care about the Internet’s future are unduly occupied with domain
names and ICANN, or the digital divide (ensuring Internet access to as many
people as possible)  This view is too narrow, threats to generativity are way
more serious
 A worthy Internet governance project to retain consumer choice
without creating a new bottleneck


o This project could set up a technical architecture to label applications and
fragments of executable code + an organization to apply such labels
nondiscriminatorily
o Or where previous consumer decisions on whether to
run a code could guide other consumers’ choices
- Recognizing Interests in Tension with Generativity
o Some regulation is necessary
 In the beginning, it was unregulated because it was experimental
 Now, too widespread and we cannot allow regulators or consumer
choices to take away generativity
o truly harmful applications are the known exceptions to
be carved out from the universe of potential
applications the technology enables
- “Dual Machine” Solutions
o Combine locked-down information appliances and generative PCs within one
hardware
 build PCs with physical switches on the keyboard — switching
between “red” and “green” = maximize user choice
o The consumer could then switch between the two
modes, attempting to ensure that valuable or sensitive
data is created and stored in green mode and leaving
red mode for experimentation.
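A very rough Python sketch of the red/green idea (my own illustration; no real OS works exactly like this): in green mode the machine only runs approved code, while red mode runs anything.

```python
# Toy model of the red/green switch: green mode only runs approved code,
# red mode runs anything, so experiments stay away from sensitive data.
APPROVED_PROGRAMS = {"word_processor", "tax_return_app"}   # hypothetical whitelist

def run_program(name, mode):
    if mode == "green" and name not in APPROVED_PROGRAMS:
        return f"refused: '{name}' is not approved for green (safe) mode"
    return f"running '{name}' in {mode} mode"

print(run_program("tax_return_app", "green"))
print(run_program("random_download", "green"))   # blocked in safe mode
print(run_program("random_download", "red"))     # allowed in experimental mode
```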
B) Policy Interventions
- Enabling Greater Opportunity for Direct Liability
o As the capacity to inflict damage increases with the Internet’s reach and with
the number of valuable activities reliant upon it, the imperatives to take action
will also increase
 Intermediaries will be called to supervise because they provide a
service capable of filtering user behavior = Preemptive reductions in
PC or Internet generativity may also arise
 Reduce pressure on institutional and technological gatekeepers by
making direct responsibility easier
o Sender authentication makes ISPs more accountable
for the email accounts they allow to be created, instead of
putting the pressure on the ones responsible for
processing incoming email, who might classify
legitimate emails as spam
o When connecting to a WIFI each individual is
responsible for their actions since they have identified
themselves
o PROBLEM I SEE  user authentication, easier to
know what you are doing
- Establishing a Choice Between Generativity and Responsibility
o maintainers of technology platforms — ISPs and newly service-like OS makers
— should be encouraged to keep their platforms generative, rather than
narrowing their offerings to facilitate regulatory control


o In turn, the more service-oriented and less generative the platform, the more
legal responsibility we should impose on the technology provider to guarantee a
functioning system.
Conclusions
- Generativity has produced extraordinary progress in information technology =
extraordinary progress in the development of forms of artistic and political expression
o Now concerned with too much generativity; two alternatives
 Create two separate internets with distinct audiences = appliancized
fate = little innovation since less competitive pressure
 Maintain fundamental generativity while controlling threats
o Some limitations are required but make them without
creating centralized gatekeepers
o Create accountability for software creators while
keeping the Internet open

Piracy, Security and Architectures of Control – Cohen


- The new tools introduced to avoid piracy are not only legal changes, but they are
changing society, same with anti-terrorism surveillance
o Leading to Architectures of Control  configurations that define in a highly
granular fashion ranges of permitted conduct
 These have mostly been studied through the prism created by Lessig in
“Code and Other Laws of Cyberspace”
o code shapes behavior across a variety of domains 
code is law
 But not the only one  law, code, norms, and
the market shape behavior (four modalities)
 Three types of scholars analyzing them
1. Code is law  use legal standards applied to laws
affecting freedom of expression to assess digital
architectures
2. Code is NOT law  code originates in private behavior =
exercise of economic liberty, the market in action
3. Code is DIFFERENT from law  analyze it with its
own set of tools
 They all have a defect  too simplistic
o The architectures of control are embedded within
broader changes in patterns of social ordering in the
emerging information society
o These actors deploy more strategies not considered
within Code’s 4 categories
The Emergence of Architectures of Control
- We are not yet completely controlled by them
o Emerging gradually
 Reason  the desire to use information and information technologies
to manage risk and structure risk taking.


Prologue: “Computer Fraud and Abuse”


- 1980s first efforts to develop laws regulating access to computers and computerized
information
o Control revolution  made regulators go beyond the limited framework of
intellectual property laws to regulate information
 Some forms of information could be regulated by analogy to existing
common-law privacy protections
 BUT the threat of unauthorized access that might compromise the
security of the system was NOT regulated before
o CFAA 1980s prohibited access to computers
designated as protected
o It defines the limits of appropriate behavior with
respect to publicly available data according to the data
provider’s dictates
 i.e. people accessing information on a public
website through deep-link instead of home
page and owner disliking this
o But it didn’t regulate online conduct

Pervasively Distributed Copyright Enforcement


- to prevent online copyright infringement  enforce control of copyrighted content at
multiple points in the network; 6 types of strategies
1. Surface level implementation of automated restrictions on digital content
 technical protection measures (TPMs), and digital rights management
(DRM)
 Restrict the actions users may take with the files (a toy sketch follows at the end of this list)
o Recording industry (music and DVDs block copying)
2. Third-party technology companies whose products and services are perceived
to facilitate particularly high levels of infringement
 Keep protected content protected
o US DMCA  penalizes circumvention of
technological measures that control access to
copyrighted works + bans the manufacture and
distribution of technologies that might enable
copyrighted content to be stripped free of its protection
 Minimize the availability of tools for reproducing, distributing, and
manipulating unprotected content
o Equipment and services that give users that freedom 
secondary liability for facilitating copyright
infringement
3. Trusted system efforts to move automated enforcement functions progressively
deeper into the logical and physical layers of the user’s electronic environment
 Making it way harder to hack systems, but hard to implement
 Debate regarding role of government in coordinating their
implementation
 Some made by Intel, some collaborative like the Trusted Computing
Group (TCG) focusing on personal computers
4. Third-party providers of network services (ISP and search engines) that play a
vital role in the distribution of online communications, including both protected
and unprotected content


 They should notice and takedown illicit content


5. Changing end-user behavior
 Through litigation, but it was too costly and was cancelled
6. Public awareness of copyright issues
 Campaigns designed to position online copyright infringement, and
particularly P2P file sharing, as morally objectionable and socially
insidious
- These methods shift the locus of control over intellectual consumption and communication
away from individuals and independent technology vendors
o and toward purveyors of copyrighted entertainment goods
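As a toy illustration of strategy 1 above (my own sketch; real DRM/TPM systems are far more elaborate), the content object itself decides which actions the user may perform:

```python
# Toy rights-management check: the content object itself decides which
# actions the user is allowed to perform on it.
class ProtectedTrack:
    def __init__(self, title, permissions):
        self.title = title
        self.permissions = permissions          # e.g. {"play"} but not "copy"

    def request(self, action):
        if action in self.permissions:
            return f"{action} '{self.title}': allowed"
        return f"{action} '{self.title}': blocked by rights management"

track = ProtectedTrack("Some Song", {"play"})
print(track.request("play"))    # permitted use
print(track.request("copy"))    # the technical measure forbids copying
```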

(In)Security Everywhere
- Architectures designed to promote security are driven by a shared logic
o security is promoted by pervasively embedding technologies and protocols for
identification and authentication by cross-linking those capabilities with
pervasive, large-scale information collection and processing and by promoting
related (though arguably inconsistent) norms of ready disclosure and unceasing
vigilance.
 monitoring of movement in physical space
 trend of privatization of public spaces (surveilled gated communities)
o surveillance of networked digital communications
 inspecting data packets in transit, for monitoring wireless
transmissions, and for locating wireless users
 required telecommunications carriers to implement surveillance
capabilities that could be activated “expeditiously” following receipt of
a properly authorized request from law enforcement
 ISPs voluntarily do so too (without government push)
o processing of information about individuals and groups
 governments routinely use data mining and profiling technologies to
identify suspected threats
o racial and ethnic profiling
 huge scope of data-processing activities occurring in the private sector
o growing market for personal information
o an industry devoted to data mining and “behavioral
advertising” has arisen
 In Europe, where data-protection laws are
stricter, there is less private-sector trade in
personal information, but also more
government freedom to collect and store data
about citizens.
o real-time identification and authentication of individuals across a wide range of
devices
 This strategy gains added momentum as it becomes linked with
strategies in the first three groups
o ordinary people who are the subjects of enhanced security measures
 inculcate appropriate beliefs about personal information management
(continue providing information)
o shape public opinion on issues related to terrorism, identity theft, and other
security threats


 if copyright infringement is a pandemic, global terrorism is a “cancer”
or “virus” that demands comprehensive, drastic immunotherapy
Technology as/and Regulation: Is Code the Answer?
- the regulatory framework outlined in Code remains situated squarely within the conceptual
landscape of liberal political theory
o the subject of regulation is the liberal subject
 a solitary, undifferentiated dot who interacts with regulatory forces that
stand out in sharp relief against an empty background
o liberty as the absence of constraint is not particularly useful for describing the
ways in which different digital architectures affect the experiences of network
users
 do not describe the conditions that actually exist in markets for the
technologies that constitute architectures of control
Code, Law, and Liberty
- If Code = law  what is its effect on protected liberties?
o Property rights
 architectures of control simply reinforce prerogatives of ownership
 circumventing a copy-protection device is no different from breaking
into a locked house
 owners of digital property may legitimately impose terms that involve
collection, retention, use, and sale of personal information as conditions
of licensed access
o Expression and personal liberties
 architectures of control stifle individual freedom of expression
 artificially restrict uses of digital content and foreclose the possibility of
anonymous self-expression
- Neither property theory nor speech theory definitively resolves questions about the
permissible extent of architectural control
- Code libertarians
o They agree that the decentralized, loosely coordinated strategies that I have
described evidence intent to restrict freedom of expression
 individual liberty will prove impervious to architectural control (people
will find a way around it)
- Darknet hypothesis
o any widely distributed object will be available to some fraction of users in a
form that permits copying
 At a minimum, some copy protection will be broken
o For similar reasons, technically sophisticated commentators also tend to believe
that efforts to impose perfect surveillance are doomed to failure
o Ordinary people experience freedom spatially
 Users who have the technical capability to do so may retreat to darknets
or take refuge in black spaces not because they are up to no good but
rather because architectures of control allow no other refuge
o A society divided between controlled nets and darknets is not the same as one in
which a broader variety of authorized spaces is subject to less rigid control
- Tinkering
o taking something apart to see how it works or to make it work better


o tinkering may enable network users to alter their presentation of identity in
some way, enabling them to use information resources without generating data
trails
Code and Markets
- Since code is produced, for the most part, via market-driven processes,
o then maybe regulation by code is most appropriately understood as a variant of
regulation by the market
- Market libertarian argument
o Some legal scholars argue that in a decentralized market economy, whatever
modes of social ordering emerge from the market will be modes that are chosen
by market participants, including both information vendors and information
consumers
o PROBLEM  market intimately bound with the actions of the government as
customer and regulator = choices available may lead to restrictive regimes
 code-based regulation is problematic when government attempts to
impose technology mandates or when market actors capture regulatory
processes and bend those processes to their own ends
 State and private interests are deeply and inevitably intertwined, and
architectures of control are emerging at the points of convergence
Code as Itself
- Some argue that code represents a unique mode of governance that is wholly new  perfect
panoptic surveillance
- Zittrain tackles the latter question, arguing that the move toward digital lockdown is
motivated principally by fear of the unknown
o networked information technologies should be prized to the extent that they
foster generativity
o Problem  maintaining current levels of generativity may be incompatible with
the kinds of security that people want
- Panopticon
o organizing metaphor for a group of disciplinary strategies embedded in the
operation of ordinary social institutions and coordinated by the everyday
routines and interactions of a variety of public actors – Foucault
o requires neither constant visual observation nor centralization of authority;
instead
 arrangement of social space that enables but simultaneously obviates
the need for continual surveillance
 proceeds from and is reinforced by the ordinary operation of social
institutions.
 accompanied by discourses that establish parameters of normal
behavior
 the institutionally embedded arrangements of spaces and discourses in
turn foster the widespread internalization of disciplinary norms
o

Challenges for a Theory of Code and Law


- Analyzing these architectures depends on the capacity to see them as socially driven
solutions to socially constructed problems
- The four-part Code framework is not enough, too simplistic


- Understanding the technical, social, and institutional changes now underway requires a
theoretical tool kit that encompasses the regulatory functions of institutions, artifacts, and
discourses
