Catanduanes State University

COLLEGE OF INFORMATION AND COMMUNICATIONS TECHNOLOGY
Virac, Catanduanes

LEARNING MATERIALS AND COMPILATION OF LECTURES/ACTIVITIES

GEC3
LIVING IN THE I.T. ERA
DISCLAIMER

This learning material is used in compliance with the flexible teaching-learning approach espoused by
CHED in response to the pandemic that has globally affected educational institutions. Authors and
publishers of the contents are well acknowledged. As such, the college and its faculty do not claim
ownership of all sourced information. This learning material is solely for instructional purposes and not for
commercialization. Moreover, copying and/or sharing part/s of this learning material in any form (such as,
but not limited to, social media like Facebook, Instagram, etc.) is strictly prohibited.



CHAPTER 8: FUTURE OF THE INTERNET

LEARNING OUTCOMES
1. Evaluate the evolution of the Internet and the underlying factors that helped its growth;
2. Determine the capabilities of the Internet of Things in current and future generations;
3. Compare the advantages and disadvantages of too much interconnectivity for millennials; and
4. Identify the opportunities and capabilities of the Internet.

KEY TERMS
1. RAND Corp.
2. DARPA
3. Packet switching
4. ARPANET
5. TCP
6. E-mail
7. Telenet
8. Usenet
9. GeoCities
10. Hypertext
11. AOL
12. Chat rooms
13. World Wide Web
14. Netscape Navigator
15. Web 2.0
16. IoT
LESSON 1: ONLINE SAFETY AND SECURITY

The world would not be what it has become today without the internet. It touches just about every aspect
of how we live, work, socialize, shop, and play. But access to the internet is a recent phenomenon that’s
reshaped the world in a stunningly short amount of time. In just a few decades, the internet has gone from
a novel way for the US military to keep in touch to the always-connected heartbeat of the human race. With
each passing year, more and more people have gained access to the internet—here’s how they’ve logged
on.

As with most technologies, the roots of the Internet go back a long way, mostly to the post-World War II
era, but in some respects to the late 1930s. The evolution of the network to date can be summarized in
terms of two main phases: its development from a military experiment to a civilian utility, and the
commercialization of the network.

1.1 Phase one: from military experiment to civilian utility (1967–1995)


1.1.1 Pre-history: 1956–1966
The near indestructibility of information on the Internet derives from a military principle used in secure voice
transmission: decentralization. In the early 1960s, the RAND Corporation developed a technology (later
called “packet switching”) that allowed users to send secure voice messages. In contrast to a system
known as the hub-and-spoke model, where the telephone operator (the “hub”) would patch two people (the
“spokes”) through directly, this new system allowed for a voice message to be sent through an entire
network, or web, of carrier lines, without the need to travel through a central hub, allowing for many different
possible paths to the destination.

During the Cold War, the U.S. military was concerned about a nuclear attack destroying the hub in its hub-
and-spoke model; with this new web-like model, a secure voice transmission would be more likely to endure
a large-scale attack. A web of data pathways would still be able to transmit secure voice “packets,” even if
a few of the nodes—places where the web of connections intersected—were destroyed. Only through the
destruction of all the nodes in the web could the data traveling along it be completely wiped out—an unlikely
event in the case of a highly decentralized network.

This decentralized network could only function through common communication protocols. Just as we use
certain protocols when communicating over a telephone—“hello,” “goodbye,” and “hold on for a minute”
are three examples—any sort of machine-to-machine communication must also use protocols. These
protocols constitute a shared language enabling computers to understand each other clearly and easily.

1.1.2 The Building Blocks of the Internet


In 1973, the U.S. Defense Advanced Research Projects
Agency (DARPA) began research on protocols to allow
computers to communicate over a distributed network. This work
paralleled work done by the RAND Corporation, particularly in the
realm of a web-based network model of communication. Instead
of using electronic signals to send an unending stream of ones
and zeros over a line (the equivalent of a direct voice connection),
DARPA used this new packet-switching technology to send small
bundles of data. This way, a message that would have been an
unbroken stream of binary data—extremely vulnerable to errors
and corruption—could be packaged as only a few hundred
numbers.
Centralized versus distributed communication networks
Imagine a telephone conversation in which any static in the signal would make the message
incomprehensible. Whereas humans can infer meaning from “Meet me [static] the restaurant at 8:30” (we
replace the static with the word at), computers do not necessarily have that logical linguistic capability. To
a computer, this constant stream of data is incomplete—or “corrupted,” in technological terminology—and
confusing. Considering the susceptibility of electronic communication to noise or other forms of disruption,
it would seem like computer-to-computer transmission would be nearly impossible.

However, the packets in this packet-switching technology have something that allows the receiving
computer to make sure the packet has arrived uncorrupted. Because of this new technology and the shared
protocols that made computer-to-computer transmission possible, a single large message could be broken
into many pieces and sent through an entire web of connections, speeding up transmission and making
that transmission more secure.
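
The mechanics described here, splitting a message into numbered packets, checking each one for
corruption, and reassembling them at the destination, can be sketched in a few lines of Python. This is
purely an illustration of the idea, not a real network protocol:

```python
# A minimal, illustrative sketch (not a real protocol): a message is broken into
# numbered packets, each carrying a checksum so the receiver can detect
# corruption, and the packets are reassembled in order on arrival.
import zlib

PACKET_SIZE = 8  # bytes of payload per packet (tiny, for illustration)

def split_into_packets(message: bytes) -> list[dict]:
    packets = []
    for seq, start in enumerate(range(0, len(message), PACKET_SIZE)):
        payload = message[start:start + PACKET_SIZE]
        packets.append({
            "seq": seq,                       # position in the original message
            "payload": payload,
            "checksum": zlib.crc32(payload),  # lets the receiver spot corruption
        })
    return packets

def reassemble(packets: list[dict]) -> bytes:
    # Packets may arrive out of order over different paths; sort by sequence number.
    ordered = sorted(packets, key=lambda p: p["seq"])
    for p in ordered:
        if zlib.crc32(p["payload"]) != p["checksum"]:
            raise ValueError(f"packet {p['seq']} arrived corrupted; request a resend")
    return b"".join(p["payload"] for p in ordered)

message = b"Meet me at the restaurant at 8:30"
packets = split_into_packets(message)
packets.reverse()  # simulate out-of-order arrival over different paths
assert reassemble(packets) == message
```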

One of the necessary parts of a network is a host. A host is a physical node that is directly connected to
the Internet and “directs traffic” by routing packets of data to and from other computers connected to it. In
a normal network, a specific computer is usually not directly connected to the Internet; it is connected
through a host. A host in this case is identified by an Internet protocol, or IP, address (a concept that is
explained in greater detail later). Each unique IP address refers to a single location on the global Internet,
but that IP address can serve as a gateway for many different computers. For example, a college campus
may have one global IP address for all of its students’ computers, and each student’s computer might then
have its own local IP address on the school’s network. This nested structure allows billions of different
global hosts, each with any number of computers connected within their internal networks. Think of a
campus postal system: All students share the same global address (1000 College Drive, Anywhere, VT
08759, for example), but they each have an internal mailbox within that system.
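
To make the nested-address idea concrete, here is a small, purely illustrative sketch in Python; the
gateway address, host names, and private addresses below are invented for the example:

```python
# A toy sketch of the "campus mailbox" idea: one public (global) address for the
# gateway, plus a table mapping internal recipients to private addresses.
CAMPUS_GATEWAY = "203.0.113.10"          # one global IP address for the whole campus

internal_hosts = {
    "alice-laptop": "192.168.1.23",      # private addresses, visible only on campus
    "bob-desktop": "192.168.1.57",
}

def deliver(global_ip: str, internal_name: str) -> str:
    if global_ip != CAMPUS_GATEWAY:
        return "not addressed to this campus; forward elsewhere"
    local = internal_hosts.get(internal_name)
    return f"deliver to {local}" if local else "unknown recipient on this network"

print(deliver("203.0.113.10", "alice-laptop"))   # deliver to 192.168.1.23
```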

The early Internet was called ARPANET, after the U.S.
Advanced Research Projects Agency (which added
“Defense” to its name and became DARPA in 1973), and
consisted of just four hosts: UCLA, Stanford, UC Santa
Barbara, and the University of Utah. Now there are over
half a million hosts, and each of those hosts likely serves
thousands of people (Central Intelligence Agency). Each
host uses protocols to connect to an ever-growing
network of computers. Because of this, the Internet does
not exist in any one place in particular; rather, it is the
name we give to the huge network of interconnected
computers that collectively form the entity that we think of as the Internet. The Internet is not a physical
structure; it is the protocols that make this communication possible.

A TCP gateway is like a post office because of the way that it directs information to the correct location.

One of the other core components of the Internet is the Transmission Control Protocol (TCP) gateway.
Proposed in a 1974 paper, the TCP gateway acts “like a postal service (Cerf, et. al., 1974).” Without
knowing a specific physical address, any computer on the network can ask for the owner of any IP address,
and the TCP gateway will consult its directory of IP address listings to determine exactly which computer
the requester is trying to contact. The development of this technology was an essential building block in
the interlinking of networks, as computers could now communicate with each other without knowing the
specific address of a recipient; the TCP gateway would figure it all out. In addition, the TCP gateway checks
for errors and ensures that data reaches its destination uncorrupted. Today, this combination of TCP
gateways and IP addresses is called TCP/IP and is essentially a worldwide phone book for every host on
the Internet.
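
The “phone book” lookup described above survives today mainly in the form of DNS, the Internet’s
name-to-address directory. A minimal sketch using Python’s standard library shows the idea; the hostname
is only an example:

```python
# Resolve a human-readable host name to the IP address of the machine that
# answers for it. Today this lookup is done by DNS, which plays the "directory"
# role described above. The hostname here is only an example.
import socket

hostname = "example.com"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```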

1.1.3 You’ve Got Mail: The Beginnings of the Electronic Mailbox


E-mail has, in one sense or another, been around for quite a while. Originally, electronic messages were
recorded within a single mainframe computer system. Each person working on the computer would have
a personal folder, so sending that person a message required nothing more than creating a new document
in that person’s folder. It was just like leaving a note on someone’s desk (Peter, 2004), so that the person
would see it when he or she logged onto the computer.
However, once networks began to develop, things became slightly more complicated. Computer
programmer Ray Tomlinson is credited with inventing the naming system we have today, using
the @ symbol to denote the server (or host, from the previous section). In other words, name@gmail.com
tells the host “gmail.com” (Google’s e-mail server) to drop the message into the folder belonging to “name.”
Tomlinson is credited with writing the first network e-mail using his program SNDMSG in 1971. This
invention of a simple standard for e-mail is often cited as one of the most important factors in the rapid
spread of the Internet, and is still one of the most widely used Internet services.
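
The convention is mechanical enough to be taken apart by a few lines of code; a small illustrative sketch,
with a made-up address:

```python
# Split an e-mail address into the mailbox name and the host (server) part,
# following the @ convention described above. The address is only an example.
address = "name@gmail.com"
mailbox, _, host = address.rpartition("@")
print(f"ask the host '{host}' to drop the message into the folder for '{mailbox}'")
```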

The use of e-mail grew in large part because of later commercial developments, especially America Online,
that made connecting to e-mail much easier than it had been at its inception. Internet service providers
(ISPs) packaged e-mail accounts with Internet access, and almost all web browsers (such as Netscape,
discussed later in the section) included a form of e-mail service. In addition to the ISPs, e-mail services like
Hotmail and Yahoo! Mail provided free e-mail addresses paid for by small text ads at the bottom of every
e-mail message sent. These free “webmail” services soon expanded to comprise a large part of the e-mail
services that are available today. Far from the original maximum inbox sizes of a few megabytes, today’s
e-mail services, like Google’s Gmail service, generally provide gigabytes of free storage space.

E-mail has revolutionized written communication. The speed and relatively inexpensive nature of e-mail
makes it a prime competitor of postal services—including FedEx and UPS—that pride themselves on
speed. Communicating via e-mail with someone on the other end of the world is just as quick and
inexpensive as communicating with a next-door neighbor. However, the growth of Internet shopping and
online companies such as Amazon.com has in many ways made the postal service and shipping
companies more prominent—not necessarily for communication, but for delivery and remote business
operations.

1.1.4 How Did We Get Here? The Late 1970s, Early 1980s, and Usenet
Almost as soon as TCP stitched the various networks together, a former DARPA scientist named Larry
Roberts founded the company Telenet, the first commercial packet-switching company. Two years later, in
1977, the invention of the dial-up modem (in combination with the wider availability of personal computers
like the Apple II) made it possible for anyone around the world to access the Internet. With availability
extended beyond purely academic and military circles, the Internet quickly became a staple for computer
hobbyists.

One of the consequences of the spread of the Internet to hobbyists was the founding of Usenet. In 1979,
University of North Carolina graduate students Tom Truscott and Jim Ellis connected three computers in a
small network and used a series of programming scripts to post and receive messages. In a very short
span of time, this system spread all over the burgeoning Internet. Much like an electronic version of
community bulletin boards, anyone with a computer could post a topic or reply on Usenet.

The group was fundamentally and explicitly anarchic, as outlined by the posting “What is Usenet?” This
document says, “Usenet is not a democracy…there is no person or group in charge of Usenet …Usenet
cannot be a democracy, autocracy, or any other kind of ‘-acy (Moraes, et. al., 1998).’” Usenet was not used
only for socializing, however, but also for collaboration. In some ways, the service allowed a new kind of
collaboration that seemed like the start of a revolution: “I was able to join rec.kites and collectively people
in Australia and New Zealand helped me solve a problem and get a circular two-line kite to fly,” one user
told the United Kingdom’s Guardian (Jeffery, et. al., 2009).

1.2 Phase two: the commercial Internet (1995–present)


1.2.1 GeoCities: Yahoo! Pioneers
Fast-forward to 1995: The president and founder of Beverly Hills Internet, David Bohnett, announces that
the name of his company is now “GeoCities.” GeoCities built its business by allowing users
(“homesteaders”) to create web pages in “communities” for free, with the stipulation that the company
placed a small advertising banner at the top of each page. Anyone could register a GeoCities site and
subsequently build a web page about a topic. Almost all of the community names, like Broadway (live
theater) and Athens (philosophy and education), were centered on specific topics.
This idea of centering communities on specific topics may have come from Usenet. In Usenet, the domain
alt.rec.kites refers to a specific topic (kites) within a category (recreation) within a larger community
(alternative topics). This hierarchical model allowed users to organize themselves across the vastness of
the Internet, even on a large site like GeoCities. The difference with GeoCities was that it allowed users to
do much more than post only text (the limitation of Usenet), while constraining them to a relatively small
pool of resources. Although each GeoCities user had only a few megabytes of web space, standardized
pictures—like mailbox icons and back buttons—were hosted on GeoCities’s main server. GeoCities was
such a large part of the Internet, and these standard icons were so ubiquitous, that they have now become
a veritable part of the Internet’s cultural history. The Web Elements category of the site Internet
Archaeology is a good example of how pervasive GeoCities graphics became (Internet Archaeology, 2010).

GeoCities built its business on a freemium model, where basic services are free but subscribers pay extra
for things like commercial pages or shopping carts. Other Internet businesses, like Skype and Flickr, use
the same model to keep a vast user base while still profiting from frequent users. Since loss of online
advertising revenue was seen as one of the main causes of the dot-com crash, many current web startups
are turning toward this freemium model to diversify their income streams (Miller, 2009).

GeoCities’s model was so successful that the company Yahoo! bought it for $3.6 billion at its peak in 1999.
At the time, GeoCities was the third-most-visited site on the web (behind Yahoo! and AOL), so it seemed
like a sure bet. A decade later, on October 26, 2009, Yahoo! closed GeoCities for good in every country
except Japan.

Diversification of revenue has become one of the most crucial elements of Internet businesses; from The
Wall Street Journal online to YouTube, almost every website is now looking for multiple income streams to
support its services.

1.2.2 Hypertext: Web 1.0


In 1989, Tim Berners-Lee, a graduate of Oxford University and software engineer at CERN (the European
particle physics laboratory), had the idea of using a new kind of protocol to share documents and
information throughout the local CERN network. Instead of transferring regular text-based documents, he
created a new language called hypertext markup language (HTML). Hypertext was a new word for text
that goes beyond the boundaries of a single document. Hypertext can include links to other documents
(hyperlinks), text-style formatting, images, and a wide variety of other components. The basic idea is that
documents can be constructed out of a variety of links and can be viewed just as if they are on the user’s
computer.

This new language required a new communication protocol so that
computers could interpret it, and Berners-Lee decided on the name
hypertext transfer protocol (HTTP). Through HTTP, hypertext
documents can be sent from computer to computer and can then be
interpreted by a browser, which turns the HTML files into readable
web pages. The browser that Berners-Lee created, called World
Wide Web, was a combination browser-editor, allowing users to
view other HTML documents and create their own (Berners-Lee,
2009).
Tim Berners-Lee’s first web browser was also a web
page editor.

Modern browsers, like Microsoft Internet Explorer and Mozilla Firefox, only allow for the viewing of web
pages; other increasingly complicated tools are now marketed for creating web pages, although even the
most complicated page can be written entirely from a program like Windows Notepad. The reason web
pages can be created with the simplest tools is the adoption of certain protocols by the most common
browsers. Because Internet Explorer, Firefox, Apple Safari, Google Chrome, and other browsers all
interpret the same code in more or less the same way, creating web pages is as simple as learning how to
speak the language of these browsers.
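
To see this division of labour, a server sending HTML over HTTP and any standards-following browser
interpreting it, one can serve a single hypertext document with nothing more than Python’s standard library.
A minimal sketch; the port number and page content are arbitrary:

```python
# Serve one tiny hypertext document over HTTP. Point any browser at
# http://localhost:8000/ and it will interpret the same HTML in much the same way.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<html>
  <body>
    <h1>Hello, hypertext</h1>
    <p>This page contains a <a href="https://example.com">hyperlink</a>.</p>
  </body>
</html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```
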
In 1991, the same year that Berners-Lee created his web browser, the Internet connection service Q-Link
was renamed America Online, or AOL for short. This service would eventually grow to employ over 20,000
people, on the basis of making Internet access available (and, critically, simple) for anyone with a telephone
line. Although the web in 1991 was not what it is today, AOL’s software allowed its users to create
communities based on just about any subject, and it only required a dial-up modem—a device that connects
any computer to the Internet via a telephone line—and the telephone line itself.

In addition, AOL incorporated two technologies—chat rooms and Instant Messenger—into a single program
(along with a web browser). Chat rooms allowed many users to type live messages to a “room” full of
people, while Instant Messenger allowed two users to communicate privately via text-based messages.
The most important aspect of AOL was its encapsulation of all these once-disparate programs into a single
user-friendly bundle. Although AOL was later disparaged for customer service issues like its users’ inability
to deactivate their service, its role in bringing the Internet to mainstream users was instrumental (Zeller Jr.,
2005).

In contrast to AOL’s proprietary services, the World Wide Web had to be viewed through a standalone
web browser. The first of these browsers to make its mark was the program Mosaic, released by the
National Center for Supercomputing Applications at the University of Illinois. Mosaic was offered for free
and grew very quickly in popularity due to features that now seem integral to the web. Things like
bookmarks, which allow users to save the location of particular pages without having to remember them,
and images, now an integral part of the web, were all inventions that made the web more usable for many
people (National Center for Supercomputing Applications).

Although the web browser Mosaic has not been
updated since 1997, developers who worked on it went
on to create Netscape Navigator, an extremely popular
browser during the 1990s. AOL later bought the
Netscape company, and the Navigator browser was
discontinued in 2008, largely because Netscape
Navigator had lost the market to Microsoft’s Internet
Explorer web browser, which came preloaded on
Microsoft’s ubiquitous Windows operating system.
However, Netscape had long been converting its
Navigator software into an open-source program called Mozilla Firefox, which is now the second-most-
used web browser on the Internet (detailed in Table 1. “Browser Market Share (as of February 2010)”)
(NetMarketshare). Firefox represents about a quarter of the market—not bad, considering its lack of
advertising and Microsoft’s natural advantage of packaging Internet Explorer with the majority of personal
computers.

Table 1. Browser Market Share (as of February 2010)

1.2.3 The Early Days of Social Media


The shared, generalized protocols of the Internet have allowed it to be easily adapted and extended into
many different facets of our lives. The Internet shapes everything, from our day-to-day routine—the ability
to read newspapers from around the world, for example—to the way research and collaboration are
conducted. There are three important aspects of communication that the Internet has changed, and these
have instigated profound changes in the way we connect with one another socially: the speed of
information, the volume of information, and the “democratization” of publishing, or the ability of anyone to
publish ideas on the web.

One early dot-com startup, theGlobe.com, provided one of the earliest social networking services that exploded
in popularity. When theGlobe.com went public, its stock shot from a target price of $9 to a close of $63.50
a share (Kawamoto, 1998). The site itself was started in 1995, building its business on advertising. As
skepticism about the dot-com boom grew and advertisers became increasingly skittish about the value of
online ads, theGlobe.com ceased to be profitable and shut its doors as a social networking site (The Globe,
2009). Although advertising is pervasive on the Internet today, the current model—largely based on the
highly targeted Google AdSense service—did not come around until much later. In the earlier dot-com
years, the same ad might be shown on thousands of different web pages, whereas now advertising is often
specifically targeted to the content of an individual page.
However, that did not spell the end of social networking on the Internet. Social networking had been going
on since at least the invention of Usenet in 1979, but the recurring problem was always the same:
profitability. This model of free access to user-generated content departed from almost anything previously
seen in media, and revenue streams would have to be just as radical.

1.2.4 ‘Web 2.0’: 2000–2003


The Web was originally conceived as a means of sharing information among particle physicists who were
scattered across the world. Since most of that information was in the form of documents, the design was
therefore for a system that would make it possible to format these documents in a standardised way, publish
them online, and make them easy to access. So the first ‘release’ of the Web (to use a software term)
created a worldwide repository of linked, static documents held on servers distributed across the Internet.

Given that it was intended as a system for academic researchers, the original Web design was probably fit
for purpose in its first two years. But once the Mosaic browser appeared in 1993 and the commercial
possibilities of the technology became obvious to the corporate world, the limitations of the original concept
began to grate. The early Web did not make provisions for images, for example. And it was a one-way,
read-only medium with no mechanism for enabling people to interact with web pages, which meant that it
was unsuitable for e-commerce. There was no way for users to talk back to authors or publishers; no way
to change or personalise web pages; no way to find other readers of the same page; and no way to share
or collaborate over the Web.

From 1993 onwards therefore, there was a steady accretion of innovative technologies designed to extend
Berners-Lee's creation and to overcome some of its perceived limitations. The main driver behind this was
e-commerce, which desperately needed to transform the Web into a medium that facilitated transactions.

In order to make transactions possible, a whole range of problems had to be solved. For example, ways
had to be found to allow interactivity between browsers and servers; to facilitate personalisation of web
content; and to overcome the problem that the http protocol was both insecure (in that communications
between browser and server could be intercepted and monitored by third parties) and stateless (i.e. unable
to support multistep transactions).

In time, solutions to these problems emerged in the forms of: ‘cookies’; HTTPS (an encrypted version of
the basic http protocol); the evolution of browsers with capabilities added by specialised ‘plug-ins’ which
enabled them to handle audio and video and other kinds of file; and, eventually, JavaScript, which
effectively turned web pages into small virtual machines. Many of these technologies had an ad hoc feel to
them, which was hardly surprising, given that they had been grafted onto a system rather than being
designed into it. But they nevertheless proved extraordinarily powerful in supporting the dramatic expansion
of the Web from 1995 onwards.
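
The statelessness problem, and the cookie workaround, can be seen in a few lines using the widely used
third-party requests package; the demo service httpbin.org is used here purely as an example:

```python
# HTTP itself is stateless: each request is independent. Cookies are the
# workaround: the server hands the client a token, and the client sends it back
# with later requests so a multi-step interaction can be stitched together.
# Assumes the third-party 'requests' package; httpbin.org is only a demo service.
import requests

with requests.Session() as session:              # a Session stores cookies between calls
    session.get("https://httpbin.org/cookies/set?cart=abc123")
    reply = session.get("https://httpbin.org/cookies")
    print(reply.json())                          # the 'cart' cookie came back with us
```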

In pondering the Web 1.0 enterprises that had survived the crash, and the new ones that had arisen
afterwards, it became clear that they had several important features in common, an observation which
eventually led to them being dubbed ‘Web 2.0’ by one prominent observer of the technology. One of these
features was that they harnessed the collective intelligence available on the Web, either via software such
as Google's PageRank algorithm (which ranks web pages using a kind of automated peer-review) or by
exploiting the willingness of users to engage with the enterprise (as, for example, in Amazon's utilisation of
product reviews by customers). Another example of collective intelligence at work was Wikipedia – an
enterprise made possible by Ward Cunningham's invention of the ‘wiki’ – a web page that could be edited
by anyone who read it. Cunningham's software transformed the Web from a one-way, read-only medium,
into what Tim Berners-Lee later called the ‘read–write Web’.
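
The PageRank idea mentioned above, that a page matters if other pages that matter link to it, can be
illustrated with a toy sketch; the link graph below is entirely made up, and the loop is a simple power
iteration rather than Google’s actual implementation:

```python
# A toy illustration of the PageRank idea: a page is important if important pages
# link to it. Tiny, hypothetical link graph, scored by simple power iteration.
links = {                       # page -> pages it links to (made-up graph)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

damping = 0.85
ranks = {page: 1 / len(links) for page in links}

for _ in range(50):             # repeat until the scores settle
    new_ranks = {}
    for page in links:
        incoming = sum(ranks[p] / len(links[p]) for p in links if page in links[p])
        new_ranks[page] = (1 - damping) / len(links) + damping * incoming
    ranks = new_ranks

print(ranks)                    # C scores highest: both A and B link to it
```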

A second distinguishing feature of the ‘new’ Web was ‘user-generated content’ or ‘peer production’ – that
is, material created and published freely by people who do it for no apparent economic motive.

Another distinctive feature of the ‘new’ Web was that many of the emerging services on it were dynamically
interconnected by means of software tools like the syndication tool RSS and Application Programming
Interfaces (APIs). The latter provide the ‘hooks’ on which other pieces of software can hang. What was
distinctive about some of the web services that evolved after 1999 was that they used APIs to specify how
entire web services could work together. A typical example is the API published by Google for its Maps
service. This made it possible for people to create other services – called ‘mashups’ – which linked Google
Maps with other Internet-accessible data sources.
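
RSS, one of the syndication tools mentioned above, is ordinary XML with a conventional structure, so a
feed can be read with Python’s standard library alone. A minimal sketch; the feed URL is a placeholder:

```python
# Fetch an RSS feed and list its item titles. RSS is plain XML, so the standard
# library is enough; the feed URL below is only a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.rss"  # substitute any real RSS feed

with urllib.request.urlopen(FEED_URL) as response:
    root = ET.fromstring(response.read())

# In RSS 2.0, items live under <channel><item>, each with a <title>.
for item in root.findall("./channel/item"):
    print(item.findtext("title"))
```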

Fourthly, many of the new Web services were distinctive by never being ‘finished’ – by being in what
programmers would call a ‘perpetual Beta’ stage. This intrinsic, experimental ethos of the emerging Web
was exemplified by the Google search engine which, when it launched, and for a considerable time
afterwards, carried the subscript ‘BETA’. What was significant about this was that it signalled its designers’
philosophy of regarding their web-based service as a work in progress – subject to continual and sometimes
rapid change – rather than as something fixed and immutable. What made this possible of course, was the
fact that it was a cloud-based service, so every user's version of the software could be upgraded at a stroke,
and without any effort on their part, beyond occasionally upgrading their browser software or installing
some (free) plug-ins designed to take advantage of whatever new features Google had decided to add.

A final distinguishing characteristic of the post-1999 Web was that the enterprises and services that were
becoming dominant were effectively using the Web as a programming platform. So while the Internet was
the platform on which Web 1.0 was built, Web 1.0 in turn became the platform on which the iconic services
of Web 2.0 were constructed. This was made possible firstly by the fact that the Web provided a common
standard, and secondly by the fact that if a service was provided via the http protocol, it could bypass the
firewalls used by organisations to prevent unauthorised intrusions (since most firewalls were programmed
to allow ‘web pages’ to pass through).

1.2.5 Mobile connectivity, surveillance, cybercrime, corporate power, changing patterns of use
and their implications (2004–present)
The most recent phase in the evolution of the Internet has been characterised by significant changes in the
ways that people access and use the network and by the ways in which the infrastructure of the network
has evolved to cope with these changes.

Mobile connectivity

In many respects, the most significant moment in the recent history of the Internet was the arrival of the
‘smartphone’ – i.e. a mobile phone that can access the Internet – in 2007. Adoption of smartphones (and
related mobile devices, like tablet computers) has increased rapidly, to the point where it is clear that most
of the next few billion Internet users, mostly from developing countries, will access the network via a
smartphone. The implications of this development are profound. On the one hand, access to the network
– and all the good things that could flow from that – will come within the reach of communities that have
hitherto found themselves on the wrong side of the ‘digital divide’. On the other hand, ubiquitous mobile
connectivity will increase further the power and influence of corporations over Internet users because of (i)
the latter's dependence on companies for both connectivity and content, and (ii) mobile devices’
dependence on cloud computing resources for much of their functionality.

Social media

Online social networking services have quite a venerable pedigree in Internet terms, but in the last few
years the market has been dominated by Facebook (founded in 2004), LinkedIn (2003) and Twitter (2006).
Of these, Facebook is by far the most dominant. As of Autumn 2015, it had 1.55 billion ‘monthly active
users’, 90% of whom access the service from mobile devices. Given that Facebook was the brainchild of
a single individual, a Harvard sophomore, its current prominence is an impressive demonstration of the
capacity of the Internet to enable ‘permissionless innovation’.

Pervasive surveillance

‘Surveillance is the business model of the Internet. We build systems that spy on people in exchange for
services. Corporations call it marketing.’ This statement from a noted computer security expert is a
hyperbolic way of encapsulating the symbiotic relationship between Internet users and companies. On the
one hand, users clearly value online services like search and social networking, but they have traditionally
been reluctant to pay for them; on the other hand, Internet companies wanted to ‘get big fast’ in order to
harness network effects, and the quickest way to do that was to offer services for free. The business model
that emerged from this symbiotic relationship is advertising-based: users agree that the service providers
may gather data about them based on their online behaviour and use the resulting knowledge to target
advertising at them, hence the trope that ‘if the service is free, then you are the product’.

Up to now, this surveillance-based model has worked well for the Googles and Facebooks of the online
world. But its long-term sustainability is not assured; there are signs, for example, that users are becoming
resistant to targeted advertising, and use of ad-blocking software is on the rise.

The last 15 years have also seen massive expansion in state surveillance of Internet and mobile
communications, stimulated in large part by the ‘state of exception’ necessitated by the so-called war on
terror. There was probably a vague awareness among the general public that security and intelligence
services were monitoring people's communications, but it took the revelations by the former National
Security Agency (NSA) contractor, Edward Snowden, in 2013, to demonstrate the scale and intrusiveness
of this surveillance.

Snowden's revelations have provoked much controversy, prompted a number of official inquiries (notably
in the US and the UK) and the publication, in the UK, of a draft new Investigatory Powers Bill which is
scheduled to become law before the end of 2016. At the time of writing, the Bill is on its passage through
Parliament, but it seems unlikely that current surveillance practices will be abandoned, though oversight
arrangements may change. And although public attitudes to covert surveillance seem to be culturally
dependent, at least as measured by opinion polling, all the indications are that extensive surveillance of
communications has become a fixture in liberal democracies, with unpredictable long-term consequences
for privacy, human rights, and civil liberties.

Corporate power

Two aspects of ‘power’ are important in a networked world. One is the coercive, surveillance, and other
power exercised by states. The other is that wielded by the handful of large digital corporations that has
come to dominate the Internet over the last two decades. This raises a number of interrelated questions.
What exactly is the nature of digital corporations’ power? How does it differ from the kinds of power wielded
by large, non-digital companies? In what ways is it – or might it be – problematic? And are the legislative
tools possessed by states for the regulation of corporate power fit for purpose in a digital era?

Six companies – Apple, Google, Facebook, Yahoo, Amazon, and Microsoft – have acquired significant
power and influence and play important roles in the everyday lives of billions of people. In three of these
cases – Apple, Amazon, and Microsoft – the power they wield mostly takes a familiar form: market
dominance in relatively conventional environments, those of retail commerce and computer software and/or
hardware respectively. In that sense, their market dominance seems relatively unproblematic, at least in
conceptual terms: all operate in well-understood market environments and in one case (Microsoft) antitrust
legislation has been brought to bear on the company by both US and European regulators. So although
the market power of the trio raises interesting legal and other questions, it does not appear to be
conceptually challenging.

The same cannot be said, however, of the power wielded by ‘pure’ Internet companies like Google,
Facebook (and to a lesser extent, Yahoo). Their power seems just as significant but is harder to
conceptualise.

Take Google, for example. Between January 2012 and January 2015, its global share of the search market never
dropped below 87.72% (the lowest point, reached in October 2013). In Europe, its share is even higher:
around 93%. The global market share of its nearest rival, Microsoft's Bing, in January 2015, was 4.53%.

This raises several questions. The first is whether such dominance results in – or might lead to – abuses
of corporate power in ways that have become familiar since the 1890s in the United States, and for which
legal remedies exist, at least in theory.

But there is another aspect of Google's power that raises a more puzzling question. It is posed by a ruling
of the European Court of Justice in May 2014 in the so-called right to be forgotten case. The essence of
the matter is that individuals within the European Union now have a legal right to petition Google to remove
from its search results, links to online references to them that are in some way damaging or inaccurate.
Such online references are not published by Google itself, and even if Google accedes to the requests, the
offending references continue to be available online, so in that sense the phrase ‘right to be forgotten’ is
misleading. All that happens is that they disappear from Google searches for the complained-of information.
It would perhaps be more accurate, therefore, to describe this as the right not to be found by Google
searches.

One could say, therefore, that Google has the power to render people or organisations invisible – to
‘disappear’ them, as it were. This effect may not be intentional, but it is nevertheless real. And the capacity
to make that happen through what one might call ‘algorithmic airbrushing’ could be seen as analogous to
a power which was hitherto the prerogative of dictatorships: to airbrush opponents from the public record.
So this capacity to render people ‘invisible’ is clearly a kind of power. But what kind of power is it? Are there
analytical or theoretical tools that would enable us to assess and measure it?

This is just one example of the uncharted territory that societies are now trying to navigate. Similar
questions can be asked about Facebook's documented power to affect its users’ moods and to influence
their voting behaviour.

Cybercrime

The term ‘cybercrime’ covers a multitude of online misdeeds, from sophisticated attacks on government
and corporate websites, to spam emails offering fake prizes. Its rise seems correlated – at least in countries
like the UK – with a fall in reported offline crime. This might be a coincidence, but a more plausible
hypothesis is that it reflects the reality that the chances of being apprehended and convicted for online
crime are alarmingly low. There is general agreement that cybercrime is widespread and growing but few
authoritative estimates of its real scale. (One estimate puts the annual global cost at €750 billion). A study
carried out in October 2014 reported that fully one half of Britons had experienced crime online, with
offences ranging from identity theft and hacking to online abuse.

As far as companies are concerned, cybercrime is a real and growing threat and one that is chronically
under-reported. According to a 2014 study by PricewaterhouseCoopers, 69% of UK companies had
experienced a cybersecurity ‘incident’ in the previous year, but an earlier government inquiry found that
businesses reported only 2% of such incidents to police.

The widespread public perception that cybercrime is carried out by opportunistic hackers is misguided. In
fact, it is now a sophisticated global industry with its own underground economy, in which stolen personal
and financial data are freely traded in covert online marketplaces. Stolen credit card details, for example,
are available at prices in the £1 range in such marketplaces and personally identifiable information, like
social security numbers, fetches 10 times that. Stolen data fuel a range of criminal activities (phishing, hacking
of corporate systems, extortion via denial-of-service attacks) which are supported by ‘a fully fledged
infrastructure of malicious code writers, specialist web hosts, and individuals able to lease networks of
many thousands of compromised computers to carry out automated attacks’.

Since cybercrime is now a global industry, effective measures to deal with it require a co-ordinated
international response. Although in some areas (e.g. child pornography) law enforcement agencies have
shown that such co-operation can work, arrangements for dealing more generally with cybercrime are slow
and patchy. A solution to the problem lies some distance ahead in the future.

1.2.6 It’s possible a completely new paradigm will be invented for our super-fast, mobile future.
According to a recent consumer report commissioned by networking hardware company Ericsson,
the average smartphone owner in the US currently uses around 8GB of data each month. The company
expects that number to balloon up to possibly 200GB per month by 2025. Mobile devices will likely not look
like they do now: In the same way using a smartphone to access the web in 2019 is nothing like using a
laptop to get online in 2003, or a desktop in 1993, it’s possible a completely new paradigm will be invented
for our super-fast, mobile future. The future of the web will likely be increasingly mobile, but probably won’t
be dominated by the devices of today.
As 5G wireless networks are deployed around the world today, many with the promise of download speeds
over 1 gigabit per second (compared to LTE, which maxes out at around 25 Mbps in the US) and
connections so seamless it will feel like you’re in the same room as someone thousands of miles away, it is
easy to see how the internet could progress from its simple roots, but not what form it will take.
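
The gap between those headline speeds is easier to appreciate with a rough calculation; a
back-of-the-envelope sketch, where the 2 GB file size is arbitrary:

```python
# Rough download-time comparison for a 2 GB file at the speeds quoted above.
# These are headline figures; real-world throughput varies widely.
FILE_SIZE_BITS = 2 * 8 * 10**9           # 2 gigabytes expressed in bits

LTE_MBPS = 25                            # ~25 megabits per second
FIVE_G_MBPS = 1000                       # ~1 gigabit per second

lte_seconds = FILE_SIZE_BITS / (LTE_MBPS * 10**6)
five_g_seconds = FILE_SIZE_BITS / (FIVE_G_MBPS * 10**6)

print(f"LTE: about {lte_seconds / 60:.1f} minutes")   # roughly 10.7 minutes
print(f"5G:  about {five_g_seconds:.0f} seconds")     # roughly 16 seconds
```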

It’s possible that the next iteration of the internet, powered by 5G, could introduce some fantastical-
sounding scenarios: surgeries performed remotely in real time; fleets of autonomous trucks all monitored
from afar; augmented reality glasses that overlay holographic information in front of us as we move through
the world; computers hosted in the cloud.

But for now, these are still pipe dreams. While the new networking technology is being rolled out, adoption
for the expensive networks will likely be slow, and there’s no guarantee that real-world infrastructure will
ever live up to the promise of what engineers can pull off in a lab. But then, people probably said the same
things about those early messages pinging back and forth from UCLA in the early 1970s.
