Unit III Internet and Extranet
Automotive Network Exchange, The Largest Extranet, Architecture of the Internet, Intranet and Extranet,
Intranet software, Applications of Intranets, Intranet Application Case Studies, Considerations in Intranet
Deployment, The Extranets, The structures of Extranets, Extranet products & services, Applications of
Extranets, Business Models of Extranet Applications, Managerial Issues.
Electronic Payment Systems: Is SET a failure, Electronic Payments & Protocols, Security Schemes in
Electronic payment systems, Electronic Credit card system on the Internet, Electronic Fund transfer and
Debit cards on the Internet, Stored – value Cards and E- Cash, Electronic Check Systems, Prospect of
Electronic Payment Systems, Managerial Issues.
Automotive Network Exchange
The ANX Network (ANX) is a private network, or extranet, that was initially set up and
maintained through the Automotive Industry Action Group
[http://www.aiag.org/] by the big three automakers: General Motors, Ford, and Chrysler. It was built as a private
network for the auto industry around 1995 to provide consistent, reliable speed and
guaranteed security for data transmissions between the automakers and the companies that
they do business with. Since the time of its introduction over 4000 companies have joined
the "ANX Network".
To become a part of the ANX Network, a company needs to subscribe via a Certified
Service Provider (CSP). Currently there are only five CSPs, due to the strict regulations
and high specifications. The short list of CSPs includes AT&T, Verizon, Bell Canada,
Cavalier Communications, and SBC. ANXeBusiness manages and operates the
network. A CSP must meet these minimum criteria: Packet loss <1% & Latency <125ms.
A connection to the ANX Network consists of a dedicated circuit.
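The two CSP minimums quoted above can be captured in a small check. A minimal sketch, assuming hypothetical measurement values; the function name and thresholds-as-constants are illustrative and not part of any real ANX tooling:

```python
# Illustrative check of the ANX CSP minimum criteria described above.
# Thresholds come from the text: packet loss < 1% and latency < 125 ms.

MAX_PACKET_LOSS_PCT = 1.0   # packet loss must stay below 1%
MAX_LATENCY_MS = 125.0      # latency must stay below 125 ms

def meets_csp_minimums(packet_loss_pct: float, latency_ms: float) -> bool:
    """Return True only if both measurements satisfy the CSP thresholds."""
    return (packet_loss_pct < MAX_PACKET_LOSS_PCT
            and latency_ms < MAX_LATENCY_MS)
```

For example, a provider measuring 0.4% loss at 90 ms latency passes, while 1.5% loss fails regardless of latency.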
The AIAG sold the ANX Network to [http://www.saic.com SAIC] which formed
ANXeBusiness to grow the network and support the ANX Community. SAIC spun off
[http://www.anx.com/ ANXeBusiness], which is now owned by
[http://www.oneequity.com/ One Equity Partners].
An extranet is a private network that uses Internet protocols, network connectivity, and
possibly the public telecommunication system to securely share part of an organization's
information or operations with suppliers, vendors, partners, customers, or other businesses.
An extranet can be viewed as part of a company's intranet that is extended to users outside
the company (e.g., normally over the Internet). It has also been described as a “state of mind”
in which the Internet is perceived as a way to do business with a pre-approved set of other
companies (B2B), in isolation from all other Internet users. In contrast, B2C involves known
server(s) of one or more companies communicating with previously unknown consumer
users.
For example, an e-commerce site integrates with a major retail partner to automatically
exchange inventory data over an extranet.
Features of an extranet:
1. The use of Internet technologies and standards: these include the standardized
techniques for transmitting and sharing information and the methods for encrypting and
storing information, otherwise known as the Internet protocol.
2. The use of web browsers: users access extranet information using a web browser such as
Microsoft Internet Explorer, Netscape Navigator, or, more recently, Mozilla's Firefox.
3. Security: By their very nature, extranets are embroiled in concerns about security. To
protect the privacy of the information that is being transmitted, most extranets use
either secure communication lines or proven security and encryption technologies that
have been developed for the internet.
4. Central server/repository: Extranets usually have a central server where documents or
data reside. Members can access this information from any computer that has internet
access.
Extranet applications
An extranet application is a software application that provides limited access to your
company's internal data by outside users such as customers and suppliers. This limited access
typically includes the ability to order products and services, check order status, request customer
service, and more.
A properly developed extranet application provides the supply chain connection needed with
customers and suppliers to dramatically lessen routine and time-consuming communications.
Doing so frees up resources to concentrate on customer service and expansion as opposed to
administrative office tasks such as data entry.
Just as intranets provide increased internal collaboration, extranets provide increased efficiencies
between your company and its customers and/or suppliers. Developing and implementing an
extranet application can give you a competitive edge in the eyes of your customers and better
leverage when negotiating prices with your suppliers.
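The "limited access" idea above can be sketched in code: the extranet exposes only a whitelisted subset of an internal record to outside partners. The field names and sample order below are hypothetical, chosen only to illustrate the filtering:

```python
# Hypothetical sketch: an extranet endpoint shows partners only a
# whitelisted subset of an internal order record.

PARTNER_VISIBLE_FIELDS = {"order_id", "status", "ship_date"}

def partner_view(order: dict) -> dict:
    """Filter an internal order record down to partner-visible fields."""
    return {k: v for k, v in order.items() if k in PARTNER_VISIBLE_FIELDS}

internal_order = {
    "order_id": "SO-1042",
    "status": "shipped",
    "ship_date": "2024-05-01",
    "unit_cost": 12.37,       # internal-only pricing data
    "supplier_margin": 0.18,  # internal-only
}

print(partner_view(internal_order))
# → {'order_id': 'SO-1042', 'status': 'shipped', 'ship_date': '2024-05-01'}
```

A real extranet would enforce this server-side, per authenticated partner, but the principle is the same: internal fields never leave the company's network.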
Disadvantages
1. Extranets can be expensive to implement and maintain within an organization (e.g.:
hardware, software, employee training costs) — if hosted internally instead of via an ASP.
2. Security of extranets can be a big concern when dealing with valuable information. System
access needs to be carefully controlled to avoid sensitive information falling into the wrong
hands.
3. Extranets can reduce personal contact (face-to-face meetings) with customers and business
partners. This can weaken the personal connections between people and a company, hurting
the loyalty of its business partners and customers.
E-Commerce Business Models
Since eCommerce consists of doing business online or electronically, the business or revenue
models are somewhat different from those of a “brick and mortar” business. Common eCommerce
models are direct online sales, selling online advertising space, and online commissions.
What is the Internet?
The Internet is a worldwide, publicly accessible series of interconnected computer networks that
transmit data by packet switching using the standard Internet Protocol (IP). It is a “network of
networks” that consists of millions of smaller domestic, academic, business, and government
networks, which together carry various information and services, such as electronic mail, online
chat, file transfer, and the interlinked web pages and other resources of the World Wide Web
(WWW).
The Internet and the World Wide Web are not synonymous. The Internet is a collection of
interconnected computer networks, linked by copper wires, fiber-optic cables, wireless
connections, etc. In contrast, the Web is a collection of interconnected documents and other
resources, linked by hyperlinks and URLs. The World Wide Web is one of the services accessible
via the Internet, along with various others including e-mail, file sharing, online gaming and others
described below.
America Online, Comcast, Earthlink, etc. are examples of Internet service providers. They make
it physically possible for you to send and access data from the Internet. They allow you to send
and receive data to and from their computers or routers which are connected to the Internet.
World Wide Web is an example of an information protocol/service that can be used to send and
receive information over the Internet. It supports:
• Multimedia Information (text, movies, pictures, sound, programs . . . ).
• Hypertext Information (information that contains links to other information resources)
• Graphic User Interface (so users can point and click to request information instead of typing
in text commands).
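The hypertext capability listed above, links embedded in a document that point to other resources, can be made concrete with a few lines of standard-library Python. This is a minimal sketch; the HTML fragment and its URLs are invented for illustration:

```python
# Extract the hyperlink targets (href attributes) from an HTML fragment
# using Python's standard-library parser.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Anchor tags (<a href="...">) carry the hypertext links.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = ('<p>See the <a href="http://example.com/docs">docs</a> '
        'or the <a href="/faq">FAQ</a>.</p>')
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # → ['http://example.com/docs', '/faq']
```

A browser does essentially this on every page it renders, turning each collected target into something the user can point and click.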
The server software for the World Wide Web is called an HTTP server (or informally a Web
server). Examples are Apache and IIS. The client software for World Wide Web is called a Web
browser. Examples are: Netscape, Internet Explorer, Safari, Firefox, and Mozilla. These
examples are particular “brands” of software that have a similar function, just like Lotus 1-2-3 and
Excel are both spreadsheet software packages.
Internet Structure
The Internet is an international network of computers connected by wires such as telephone lines.
Schools, businesses, government offices, and many homes use the Internet to communicate with
one another. You have access to the Internet when you work in one of this university’s computer
labs. You also may have access at home or in your residence hall. If not, you can obtain access
once you have three things. First, you need a computer and a modem, a device that allows you to
connect your computer with the Internet. Many new computers have built-in modems. Second,
you need a browser, a piece of software that allows you to view information on the Internet. Many
new computers also come with a browser, usually Internet Explorer. You also can download
another popular browser, Netscape Navigator, from the Internet for free. Finally, you need to
subscribe to an Internet Service Provider, or ISP, such as America Online or Carolina Online.
One popular component of the Internet is electronic mail, or e-mail, which people at separate
locations can use to send messages to one another. In general, each of these people has an e-mail
address, which usually looks something like this: mark.canada@uncp.edu. The first part of the
address (mark.canada) specifies the individual user, and the rest of the address refers to the server
(uncp.edu), which is a computer that can store a lot of information.
In addition to allowing people to send e-mail messages to one another, the Internet also allows
organizations and individuals to post information about themselves so that others can see it. For
example, many companies post pictures and descriptions on World Wide Web sites. In fact, you
can set up your own World Wide Web site by reserving space on a server. To understand how this
process works, imagine that you wanted to store some articles you have written at a library so that
people could come and read them. First, you would need to obtain permission from the librarians,
who would assign you a folder where they would store your articles. Whenever you finished a new
article, you would put a name on it and send it to the librarians, who would then place it in your
folder. When people wanted to read one of these articles, they would need to know the address of
the library, the name of your folder, and the name of the specific article they want to read. When
they supplied this information, the librarian would give them the article they want.
The World Wide Web works the same way. First you need to identify an Internet company
(librarian) and ask permission to save Web pages (articles) on its server (library). The company
(librarian) then assigns you a directory (folder) where it will store your Web pages (articles). As
you create each Web page (article), you give it a filename (name) and publish it on the server (send
it to the library). When people want to read your Web page (article), they need your Web address,
sometimes called a Uniform Resource Locator, or URL. The URL consists of the domain name of
the server (address of the library), name of your directory (name of your folder), and the filename
of the particular Web page (name of article).
The Internet and its Characteristics
By the late 1990s the Internet had evolved into a complex environment. Originally a military
communications network, it is now routinely used for five types of operations: (i) long-distance
transactions (e.g. e-commerce, form-filling, remote work, entertainment); (ii) interpersonal
communication; (iii) data storage; (iv) research (i.e. data finding); (v) remote data access and
downloading.
The Internet is a dynamic and mercurial system endowed with a number of traits.
These are:
1. Technological neutrality. The Internet joins together computers of various sizes and
architectures. They may run on various operating systems and utilize a great variety of
communication links.
2. Built-in piecemeal change and evolution. The Internet is not a one-off development. It is an
energetic, polycentric, complex, growing, and self-refining system. It is a network which is
geared to expansion and growth. It is a system which scales up extremely well.
3. Robustness and reliability. All basic technical features of the Net such as TCP/IP (transmission
control protocol/internet protocol) (Kessler and Shepard 1997), the multiplicity of routes
followed by the packet-switched data, and the sturdiness of related software are designed to
eliminate errors, to handle unexpected interruptions and interferences, to advise users of
encountered difficulties and to recover gracefully from any disasters and down-times.
4. Low cost. The Internet makes new uses of old technologies (standalone computers, operating
systems, telecommunication networks). Whenever possible, Internet operations piggyback on
already existing solutions. They rely on modularised, configurable, easy-to-replace, and easy-
to-upgrade off-the-shelf software and hardware.
5. Ubiquity. The robustness, modularisation and low cost of the system are coupled with the
growing densities of dedicated computer lines, network backbones, as well as wired and
wireless phone networks. This means that Internet-enabled tools are deployed in ever growing
numbers in an ever widening range of environments
The Internet Tools and their Characteristics
The evolution of the Internet is punctuated by the introduction and mass acceptance of such key
resources and tools as Unix, Email, Usenet newsgroups, Telnet, Listserv Mailing List Software,
File Transfer Protocol, Internet Relay Chat, WAIS, Gopher, WWW, and more recently by the
AltaVista search engine and the Java language.
UNIX
The foundations of an operating system called Unix were laid at AT&T Bell Laboratories in 1969.
Unix is not a product of Internet culture. It is its catalyst and cornerstone. Internet culture owes
Unix a major debt in four areas. These conceptual and procedural debts are: multitasking,
community fostering, openness and extensibility, and public access to the source code. Let’s
briefly look at each of these debts.
Unix was one of the first operating systems which embodied the principle of multitasking (time-
sharing). In most general terms it means that several users could simultaneously operate within a
single environment and that the system as a whole coped well with this complicated situation. Unix
was the first operating system which demonstrated in practical terms robustness and tolerance for
the variety of its users' simultaneous activities.
Email
Email is the first of the Internet’s tools dedicated to the provision of fast, simple and global
communication between people. This revolutionary client/server software implied for the first time
that individuals (both as persons and roles) could have their unique electronic addresses. Within
this framework messages were now able to chase their individual recipients anywhere in the world.
The initial format of email communication was that of a one-to-one exchange of electronic
messages. This simple function was subsequently augmented by email’s ability to handle various
attachments, such as documents with complex formatting, numbers and graphic files. Later, with
the use of multi-recipient mailing lists electronic mail could be used for simple multicasting of
messages in the form of one-to-many transmissions.
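The progression described above, one-to-one messages, then attachments, then one-to-many delivery, is visible in how a modern message is built. A sketch using Python's standard email library to construct (not send) a message; all addresses and the attachment bytes are fictitious:

```python
# Build a one-to-many email with an attachment, illustrating the
# capabilities the text describes. Nothing is sent over the network.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.org"
msg["To"] = "alice@example.org, bob@example.org"   # one-to-many recipients
msg["Subject"] = "Quarterly report"
msg.set_content("The report is attached.")

# Attach a small binary payload (fake bytes standing in for a real file).
msg.add_attachment(b"\x00\x01", maintype="application",
                   subtype="octet-stream", filename="report.bin")

print(msg["To"])
print(msg.is_multipart())  # → True once an attachment is added
```

Handing this message to an SMTP server (e.g. via `smtplib`) would then deliver a copy to each recipient's unique electronic address.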
Usenet Newsgroups
Usenet (Unix Users Network), the wide-area array of sites collating and swapping UUCP-based
messages was pioneered in 1979. Usenet was originally conceived as a surrogate for the Internet
(then called ARPANET). It was to be used by people who did not have ready access to the TCP/IP
protocol and yet wanted to discuss their various Unix tools. It was only in 1987 that the NNTP
(Network News Transfer Protocol) was established in order to enable Usenet to be carried on the
Internet (i.e. TCP/IP) networks (Laursen 1997).
Telnet
The networking tool called Telnet was invented in 1980 (Postel 1980). It allowed people (with
adequate access rights) to log in remotely to any networked computer in the world and to employ
the usual gamut of computer commands. Thereby files and directories could be established,
renamed and deleted; electronic mail read and dispatched; Usenet flame wars indulged in; and
statistical packages run against numeric data - all at a distance. Moreover, results of all these and
other operations could be remotely directed to a printer or via FTP to another networked computer.
In short, Telnet gave us the ability to engage in long-distance man-machine transactions, that is,
the ability to work as telecommuters.
File Transfer Protocol
The FTP client/server technology was first introduced in 1985 (Barnes 1997). Its usefulness to
Internet culture is three-fold. Firstly, the FTP was the first widely accepted tool for systematic
permanent storage and world-wide transmission of substantial electronic information (e.g.
programs, text files, image files).
Secondly, FTP archives promoted the use of anonymous login (i.e. limited public access)
techniques as a way of coping with the mounting general requests for access to the archived
information. That novel technique placed electronic visitors in a strictly circumscribed work
environment. There they could browse through data subdirectories, copy relevant files, as well as
deposit (within the context of a dedicated area) new digital material. However, the FTP software
would not let them wander across other parts of the host, nor did the visitors have the right to
change any component part of the accessed electronic archive.
Thirdly, the rapid proliferation in the number of public access FTP archives all over the world
necessitated techniques for keeping an authoritative, up-to-date catalogue of their contents. This
was accomplished through the Archie database (Deutsch et al. 1995) and its many mirrors. Archie
used an automated process which periodically scanned the entire contents of all known
“anonymous FTP” sites and reported its findings back to its central database.
This approach, albeit encumbered by the need to give explicit instructions as to which of the FTP
systems need to be monitored, nevertheless integrated a motley collection of online resources into
a single, cohesive, distributed information system.
Web based Client/Server
Gopher
Gopher client/server software was used for the first time in 1991 (La Tour nd; Liu, C. et al. 1994).
It was a ground-breaking development on two counts. Firstly, it acted as a predictable, unified
environment for handling an array of other electronic tools, such as Telnet, FTP and WAIS.
Secondly, Gopher acted as electronic glue which seamlessly linked together archipelagos of
information tracked by and referenced by other gopher systems. In short, Gopher was the first ever
tool capable of the creation and mapping of a rich, large-scale, and infinitely extendable
information space.
World Wide Web Server
The first prototype of the WWW server was built in 1991 (Cailliau 1995; Berners-Lee, nd;
Berners-Lee 1998). The WWW server is an invention which has redefined the way the Internet is
visualized by its users.
Firstly, the WWW server introduced to the Internet the powerful point-and-click hypertext
capabilities. The hypertext notions of a home page and links spanning the entire body of data was
first successfully employed on a small, standalone scale in 1986 in the Macintosh software called
Hypercard (Goodman 1987). The WWW however, was the first hypertext technology applied to
distributed online information. This invention had been theoretically anticipated by a
number of writers, including Vannevar Bush of Memex fame in 1945, and again in 1965 by
Theodor Nelson, who embarked on the never-completed Project Xanadu (Nielsen 1995,
Gilster 1997:267). Hypertext itself is not a new idea. It is already implicitly present (albeit in an
imperfect, because paper-based, form) in the first alphabetically ordered dictionaries, such as the
Grand dictionnaire historique compiled in 1674 by Louis Moréri, or John Harris's Lexicon
Technicum, which was published in 1704 (PWN 1964). It is also evident in the apparatus, such as footnotes,
commentaries, appendices and references, of a 19th century scholarly monograph.
The hypertext principle as employed by the WWW server meant that any part of any text (and
subsequently, image) document could act as a portal leading directly to any other nominated
segment of any other document anywhere in the world.
Secondly, the WWW server introduced an explicit address for subsets of information. Common
and simple addressing methodology (the Uniform Resource Locator [URL] scheme) enabled users to
uniquely identify AND access any piece of networked information anywhere in the document, or
anywhere on one’s computer, or - with the same ease - anywhere in the world.
Thirdly, the WWW provided a common, simple, effective and extendable language for document
markup. The HTML language could be used in three different yet complementary ways: (a) as a
tool for establishing the logical structure of a document; (b) as a tool for shaping the size,
appearance and layout of lines of text on the page; (c) as a tool for building the internal (i.e. within
the same document) and external (to a different document residing on the same or totally different
server) hypertext connections.
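The three complementary uses of HTML named above can be seen side by side in one small fragment. A sketch, held here as a Python string so it can be inspected; the URLs and anchor names are placeholders:

```python
# An illustrative HTML fragment showing the three uses of the markup:
# (a) logical structure, (b) appearance/layout, (c) internal and
# external hypertext connections.
doc = """<html><body>
<h1>Report</h1>                                  <!-- (a) logical structure -->
<p style="font-size:12pt">Summary text.</p>      <!-- (b) appearance and layout -->
<a href="#notes">See notes below</a>             <!-- (c) internal link -->
<a href="http://example.com/other.html">Related document</a>  <!-- (c) external link -->
<h2 id="notes">Notes</h2>
</body></html>"""

# The internal link target (#notes) resolves within the same document,
# while the http:// link points at a different server entirely.
print("#notes" in doc and 'id="notes"' in doc)  # → True
```

The same `<a href>` element serves both the within-document and the cross-server case, which is precisely the "any part of any document as a portal" principle.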
The interlocking features of the hypertext, URLs and the markup language, have laid foundations
for today’s global, blindingly fast and infinitely complex cyberspace. Moreover, the World Wide
Web, like Gopher before it, was also a powerful electronic glue which smoothly integrated not only
most of the existing Internet tools (Email, Usenet, Telnet, Listservs, FTP, IRC, and Gopher, but
surprisingly not WAIS), but also the whole body of online information which could be accessed by
all those tools. However, the revolutionary strengths of the Web were not immediately
obvious to most of the Internet community, who initially regarded the WWW as a mere (and
possibly clumsy) variant of the then popular Gopher technology. This situation changed only
with the introduction of PC-based Web browsers with user-friendly graphical interfaces.
World Wide Web Browsers
The principle of a client/server division of labour was put to work yet again in the form of a series
of WWW browsers such as Mosaic (built in 1993), Lynx (which is an ASCII, Telnet-based client
software), Erwise, Viola, Cello, as well as, since 1994, several editions of Netscape and Explorer.
Each of the Web browsers, except for Lynx (a deliberately simplified and thus
very fast piece of software), provided Internauts with a series of novel capabilities.
These are: (a) an ability to handle multi-format, or multimedia (numbers, text, images, animations,
video, sound) data within the framework of a single online document; (b) the ability to configure
and modify the appearance of received information in a manner which best suits the preferences
of the reader; (c) the ability to use the browser as a WYSIWYG (“what you see is what you get”)
tool for crafting and proofreading of the locally created HTML pages on a user’s PC; (d) ability to
acquire, save and display the full HTML source code for any and all of the published web
documents.
Elements of Internet Architecture
• Protocol Layering
• Networks
• Routers
• Addressing Architecture
Protocol Layering
To communicate using the Internet system, a host must implement the layered set of protocols
comprising the Internet protocol suite. A host typically must implement at least one protocol from
each layer.
The protocol layers used in the Internet architecture are as follows:
Application Layer
The Application Layer is the top layer of the Internet protocol suite. The Internet suite does not
further subdivide the Application Layer, although some application layer protocols do contain
some internal sub-layering. The application layer of the Internet suite essentially combines the
functions of the top two layers - Presentation and Application – of the OSI Reference Model
[ARCH:8]. The Application Layer in the Internet protocol suite also includes some of the functions
relegated to the Session Layer in the OSI Reference Model.
We distinguish two categories of application layer protocols: user protocols that provide service
directly to users, and support protocols that provide common system functions. The most common
Internet user protocols are:
• Telnet (remote login)
• FTP (file transfer)
• SMTP (electronic mail delivery)
There are a number of other standardized user protocols and many private user protocols. Support
protocols, used for host name mapping, booting, and management include SNMP, BOOTP, TFTP,
the Domain Name System (DNS) protocol, and a variety of routing protocols.
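The host-name-mapping function DNS provides is exposed through the standard socket API. A minimal sketch; "localhost" is deliberately used so the lookup works even without a network connection:

```python
# Resolve a host name to its addresses, the service the Domain Name
# System provides. "localhost" resolves locally, without a network.
import socket

infos = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print(addresses)  # typically includes '127.0.0.1' and/or '::1'
```

For a public host name the same call would consult the configured DNS resolver; applications rarely deal with raw addresses directly for exactly this reason.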
Transport Layer
The Transport Layer provides end-to-end communication services. This layer is roughly
equivalent to the Transport Layer in the OSI Reference Model, except that it also incorporates
some of OSI’s Session Layer establishment and destruction functions.
There are two primary Transport Layer protocols at present:
• Transmission Control Protocol (TCP)
• User Datagram Protocol (UDP)
TCP is a reliable connection-oriented transport service that provides end-to-end reliability,
resequencing, and flow control. UDP is a connectionless (datagram) transport service. Other
transport protocols have been developed by the research community, and the set of official Internet
transport protocols may be expanded in the future.
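UDP's connectionless, datagram character is easy to demonstrate: one socket simply sends a self-contained message to another, with no connection setup and no reliability added by the transport. A sketch over the loopback interface, so it runs on a single machine:

```python
# Contrast with TCP: a UDP datagram is sent with no handshake and no
# delivery guarantee from the transport itself.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data)  # → b'hello'

sender.close()
receiver.close()
```

A TCP version of the same exchange would first require `connect()`/`accept()` to establish the connection, after which the kernel, not the application, handles acknowledgement, resequencing, and retransmission.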
Internet Layer
All Internet transport protocols use the Internet Protocol (IP) to carry data from source host to
destination host. IP is a connectionless or datagram internetwork service, providing no end-to-end
delivery guarantees. IP datagrams may arrive at the destination host damaged, duplicated, out of
order, or not at all. The layers above IP are responsible for reliable delivery service when it is
required. The IP protocol includes provision for addressing, type-of-service specification,
fragmentation and reassembly, and security.
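IP's fragmentation provision can be illustrated with a deliberately simplified model: a payload larger than the link MTU is split into fragments whose offsets are counted in 8-byte units, as in real IP headers. This sketch ignores header overhead and the actual packet format for clarity:

```python
# Simplified model of IP fragmentation: offsets are in 8-byte units and a
# "more fragments" flag marks every fragment except the last.
def fragment(payload: bytes, mtu: int):
    """Split payload into (offset_in_8_byte_units, more_fragments, chunk)."""
    assert mtu % 8 == 0, "fragment sizes must be multiples of 8 bytes"
    frags = []
    for start in range(0, len(payload), mtu):
        chunk = payload[start:start + mtu]
        more = start + mtu < len(payload)   # True for all but the last
        frags.append((start // 8, more, chunk))
    return frags

frags = fragment(b"x" * 20, mtu=8)
print([(off, more, len(chunk)) for off, more, chunk in frags])
# → [(0, True, 8), (1, True, 8), (2, False, 4)]
```

Reassembly at the destination host is the inverse: fragments are placed at `offset * 8` until one arrives with the flag cleared, which is why a single lost fragment forces the upper layers, not IP, to recover the whole datagram.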
The datagram or connectionless nature of IP is a fundamental and characteristic feature of the
Internet architecture. The Internet Control Message Protocol (ICMP) is a control protocol that is
considered to be an integral part of IP, although it is architecturally layered upon IP - it uses IP to
carry its data end-to-end. ICMP provides error reporting, congestion reporting, and first-hop router
redirection.
The Internet Group Management Protocol (IGMP) is an Internet layer protocol used for
establishing dynamic host groups for IP multicasting.
Link Layer
To communicate on a directly connected network, a host must implement the communication
protocol used to interface to that network. We call this a Link Layer protocol. Some older Internet
documents refer to this layer as the Network Layer, but it is not the same as the Network Layer in
the OSI Reference Model.
This layer contains everything below the Internet Layer and above the Physical Layer (which is
the media connectivity, normally electrical or optical, which encodes and transports messages). Its
responsibility is the correct delivery of messages, among which it does not differentiate.
Protocols in this Layer are generally outside the scope of Internet standardization; the Internet
(intentionally) uses existing standards whenever possible. Thus, Internet Link Layer standards
usually address only address resolution and rules for transmitting IP packets over specific Link
Layer protocols.
Networks
The constituent networks of the Internet system are required to provide only packet
(connectionless) transport. According to the IP service specification, datagrams can be delivered
out of order, be lost or duplicated, and/or contain errors.
For reasonable performance of the protocols that use IP (e.g., TCP), the loss rate of the network
should be very low. In networks providing connection-oriented service, the extra reliability
provided by virtual circuits enhances the end-end robustness of the system, but is not necessary
for Internet operation.
Constituent networks may generally be divided into two classes:
• Local-Area Networks (LANs) LANs may have a variety of designs. LANs normally cover a
small geographical area (e.g., a single building or plant site) and provide high bandwidth with
low delays. LANs may be passive (similar to Ethernet) or they may be active (such as ATM).
• Wide-Area Networks (WANs) Geographically dispersed hosts and LANs are interconnected by
wide-area networks, also called long-haul networks.
• These networks may have a complex internal structure of lines and packet switches, or they may
be as simple as point-to-point lines.
Routers
In the Internet model, constituent networks are connected together by IP datagram forwarders
which are called routers or IP routers. In this document, every use of the term router is equivalent
to IP router. Many older Internet documents refer to routers as gateways. Historically, routers have
been realized with packet-switching software executing on a general-purpose CPU. However, as
custom hardware development becomes cheaper and as higher throughput is required, special
purpose hardware is becoming increasingly common. This specification applies to routers
regardless of how they are implemented.
A router connects to two or more logical interfaces, represented by IP subnets or unnumbered
point-to-point lines. Thus, it has at least one physical interface. Forwarding an IP datagram generally
requires the router to choose the address and relevant interface of the next-hop router or (for the
final hop) the destination host. This choice, called relaying or forwarding, depends upon a route
database within the router. The route database is also called a routing table or forwarding table.
The term “router” derives from the process of building this route database; routing protocols and
configuration interact in a process called routing. The routing database should be maintained
dynamically to reflect the current topology of the Internet system. A router normally accomplishes
this by participating in distributed routing and reachability algorithms with other routers.
Routers provide datagram transport only, and they seek to minimize the state information
necessary to sustain this service in the interest of routing flexibility and robustness.
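The forwarding decision described above is a longest-prefix match against the route database. A sketch using the standard `ipaddress` module; the prefixes and next-hop names are invented for illustration:

```python
# Longest-prefix match over a tiny route database, as an IP router's
# forwarding step does against its routing table.
import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "next-hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop-B",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
}

def forward(dst: str) -> str:
    """Return the next hop for dst: the most specific matching prefix wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(forward("10.1.2.3"))   # → next-hop-B (the /16 beats the /8)
print(forward("192.0.2.7"))  # → default-gateway (only /0 matches)
```

Real routers compute this with specialized data structures (and often hardware), but the rule is the same: among all matching prefixes, forward via the longest one.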
Packet switching devices may also operate at the Link Layer; such devices are usually called
bridges. Network segments that are connected by bridges share the same IP network prefix,
forming a single IP subnet. These other devices are outside the scope of this document.
Common uses of the Internet
E-mail
The concept of sending electronic text messages between parties in a way analogous to mailing
letters or memos predates the creation of the Internet. Even today it can be important to distinguish
between Internet and internal e-mail systems. Internet e-mail may travel and be stored unencrypted
on many other networks and machines out of both the sender’s and the recipient’s control. During
this time it is quite possible for the content to be read and even tampered with by third parties, if
anyone considers it important enough. Purely internal or intranet mail systems, where the
information never leaves the corporate or organization’s network, are much more secure, although
in any organization there will be IT and other personnel whose job may involve monitoring, and
occasionally accessing, the e-mail of other employees not addressed to them.
The World Wide Web
Many people use the terms Internet and World Wide Web (or just the Web) interchangeably, but,
as discussed above, the two terms are not synonymous.
The World Wide Web is a huge set of interlinked documents, images and other resources, linked
by hyperlinks and URLs. These hyperlinks and URLs allow the web servers and other machines
that store originals, and cached copies, of these resources to deliver them as required using HTTP
(Hypertext Transfer Protocol). HTTP is only one of the communication protocols used on the
Internet. Web services also use HTTP to allow software systems to communicate in order to share
and exchange business logic and data.
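The request half of the HTTP exchange can be illustrated with a short sketch. The host name, path, and header values are illustrative; the function simply composes the plain text a user agent sends to a web server:

```python
def build_get_request(host: str, path: str = "/") -> str:
    """Compose the plain-text HTTP/1.1 GET request a user agent sends
    to a web server when a hyperlink is followed."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "User-Agent: sketch-agent/0.1\r\n"   # illustrative agent string
        "Connection: close\r\n"
        "\r\n"                                # blank line ends the headers
    )

print(build_get_request("example.com", "/index.html"))
```

The server's reply carries a status line and headers in the same plain-text style, followed by the requested resource.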
Software products that can access the resources of the Web are correctly termed user agents. In
normal use, web browsers, such as Internet Explorer and Firefox, access web pages and allow
users to navigate from one to another via hyperlinks. Web documents may contain almost any
combination of computer data including photographs, graphics, sounds, text, video, multimedia
and interactive content including games, office applications and scientific demonstrations.
Through keyword-driven Internet research using search engines like Yahoo! and Google, millions
of people worldwide have easy, instant access to a vast and diverse amount of online information.
Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden
and extreme decentralization of information and data.
It is also easier, using the Web, than ever before for individuals and organizations to publish ideas
and information to an extremely large audience. Anyone can find ways to publish a web page or
build a website for very little initial cost. Publishing and maintaining large, professional websites
full of attractive, diverse and up-to-date information is still a difficult and expensive proposition,
however.
Many individuals and some companies and groups use “web logs” or blogs, which are largely used
as easily updatable online diaries. Some commercial organizations encourage staff to fill them with
advice on their areas of specialization in the hope that visitors will be impressed by the expert
knowledge and free information, and be attracted to the corporation as a result. One example of
this practice is Microsoft, whose product developers publish their personal blogs in order to pique
the public’s interest in their work.
Collections of personal web pages published by large service providers remain popular, and have
become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have
existed since the early days of the Web, newer offerings from, for example, Facebook and
MySpace currently have large followings. These operations often brand themselves as social
network services rather than simply as web page hosts. Advertising on popular web pages can be
lucrative, and e-commerce or the sale of products and services directly via the Web continues to
grow.
In the early days, web pages were usually created as sets of complete and isolated HTML text files
stored on a web server. More recently, websites are more often created using content management
system (CMS) or wiki software with, initially, very little content. Contributors to these systems,
who may be paid staff, members of a club or other organization or members of the public, fill
underlying databases with content using editing pages designed for that purpose, while casual
visitors view and read this content in its final HTML form. There may or may not be editorial,
approval and security systems built into the process of taking newly entered content and making
it available to the target visitors.
Remote access
The Internet allows computer users to connect to other computers and information stores easily,
wherever they may be across the world. They may do this with or without the use of security,
authentication and encryption technologies, depending on the requirements. This is encouraging
new ways of working from home, collaboration and information sharing in many industries. An
accountant sitting at home can audit the books of a company based in another country, on a server
situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts
could have been created by home-working bookkeepers, in other remote locations, based on
information e-mailed to them from offices all over the world. Some of these things were possible
before the widespread use of the
Internet, but the cost of private leased lines would have made many of them infeasible in practice.
An office worker away from his desk, perhaps on the other side of the world on a business trip or
a holiday, can open a remote desktop session into his normal office PC using a secure Virtual
Private Network (VPN) connection via the Internet. This gives the worker complete access to all
of his or her normal files and data, including e-mail and other applications, while away from the
office.
This concept is also referred to by some network security people as the Virtual Private Nightmare,
because it extends the secure perimeter of a corporate network into its employees’ homes; this has
been the source of some notable security breaches, but also provides security for the workers.
Collaboration
The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made
collaborative work dramatically easier. Not only can a group cheaply communicate and test, but
the wide reach of the Internet allows such groups to easily form in the first place, even among
niche interests. An example of this is the free software movement in software development, which
produced GNU and Linux from scratch and has taken over development of Mozilla and
OpenOffice.org (formerly known as Netscape Communicator and StarOffice, respectively).
Films such as Zeitgeist, Loose Change and Endgame have had extensive coverage on the Internet,
while being virtually ignored in the mainstream media. Internet “chat”, whether in the form of IRC
“chat rooms” or channels, or via instant messaging systems, allow colleagues to stay in touch in a
very convenient way when working at their computers during the day. Messages can be sent and
viewed even more quickly and conveniently than via e-mail. Extension to these systems may allow
files to be exchanged, “whiteboard” drawings to be shared as well as voice and video contact
between team members.
Version control systems allow collaborating teams to work on shared sets of documents without
either accidentally overwriting each other’s work or having members wait until they get “sent”
documents to be able to add their thoughts and changes.
File sharing
A computer file can be e-mailed to customers, colleagues and friends as an attachment. It can be
uploaded to a website or FTP server for easy download by others. It can be put into a “shared
location” or onto a file server for instant use by colleagues. The load of bulk downloads to many
users can be eased by the use of “mirror” servers or peer-to-peer networks.
In any of these cases, access to the file may be controlled by user authentication; the transit of the
file over the Internet may be obscured by encryption, and money may change hands before or after
access to the file is given. The price can be paid by the remote charging of funds from, for example,
a credit card whose details are also passed— hopefully fully encrypted—across the Internet. The
origin and authenticity of the file received may be checked by digital signatures or by MD5 or
other message digests. These simple features of the Internet, operating on a worldwide basis, are changing
the basis for the production, sale, and distribution of anything that can be reduced to a computer
file for transmission. This includes all manner of print publications, software products, news,
music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts
in each of the existing industries that previously controlled the production and distribution of these
products.
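The digest check mentioned above can be sketched with Python's standard `hashlib`; the payload bytes are invented for the example:

```python
import hashlib

def file_digests(data: bytes) -> dict:
    """Compute MD5 and SHA-256 digests of a file's bytes, so a recipient
    can compare them against the digests published alongside the file."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

# Illustrative payload; a real check would read the downloaded file's bytes.
payload = b"example file contents"
published = file_digests(payload)
assert file_digests(payload) == published       # unmodified file verifies
assert file_digests(b"tampered!") != published  # altered file is detected
```

A digest only detects accidental or naive tampering; origin authenticity additionally requires a digital signature over the digest.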
Internet collaboration technology enables business and project teams to share documents,
calendars and other information. Such collaboration occurs in a wide variety of areas including
scientific research, software development, conference planning, political activism and creative
writing.
Streaming media
Many existing radio and television broadcasters provide Internet “feeds” of their live audio and
video streams (for example, the BBC). They may also allow time-shift viewing or listening such
as Preview, Classic Clips and Listen Again features. These providers have been joined by a range
of pure Internet “broadcasters” who never had on-air licenses.
This means that an Internet-connected device, such as a computer or something more specific, can
be used to access on-line media in much the same way as was previously possible only with a
television or radio receiver. The range of material is much wider, from pornography to highly
specialized, technical webcasts. Podcasting is a variation on this theme, where—usually audio—
material is first downloaded in full and then may be played back on a computer or shifted to a
digital audio player to be listened to on the move. These techniques using simple equipment allow
anybody, with little censorship or licensing control, to broadcast audio-visual material on a
worldwide basis.
Webcams can be seen as an even lower-budget extension of this phenomenon. While some
webcams can give full-frame-rate video, the picture is usually either small or updates slowly.
Internet users can watch animals around an African waterhole, ships in the Panama Canal, the
traffic at a local roundabout or their own premises, live and in real time. Video chat rooms, video
conferencing, and remote controllable webcams are also popular. Many uses can be found for
personal webcams in and around the home, with and without two-way sound.
YouTube, sometimes described as an Internet phenomenon because of its vast number of users
and how rapidly the site’s popularity has grown, was founded on February 15, 2005. It is now the
leading website for free streaming video. It uses a flash-based web player which streams video
files in the format FLV. Users are able to watch videos without signing up; however, if users do
sign up they are able to upload an unlimited amount of videos and they are given their own personal
profile. It is currently estimated that there are 64,000,000 videos on YouTube, and it is also
currently estimated that 825,000 new videos are uploaded every day.
Voice telephony (VoIP)
VoIP stands for Voice over IP, where IP refers to the Internet Protocol that underlies all Internet
communication. This phenomenon began as an optional two-way voice extension to some of the
instant messaging systems that took off around the year 2000. In recent years many VoIP systems
have become as easy to use and as convenient as a normal telephone. The benefit is that, as the
Internet carries the actual voice traffic, VoIP can be free or cost much less than a normal telephone
call, especially over long distances and especially for those with always-on Internet connections
such as cable or ADSL.
Thus, VoIP is maturing into a viable alternative to traditional telephones. Interoperability between
different providers has improved and the ability to call or receive a call from a traditional telephone
is available. Simple, inexpensive VoIP modems are now available that eliminate the need for a PC.
Voice quality can still vary from call to call but is often equal to and can even exceed that of
traditional calls. Remaining problems for VoIP include emergency telephone number dialling and
reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally
available. Traditional phones are line-powered and operate during a power failure; VoIP does not
do so without a backup power source for the electronics.
Most VoIP providers offer unlimited national calling, but the direction in VoIP is clearly toward
global coverage with unlimited minutes for a low monthly fee. VoIP has also become increasingly
popular within the gaming world, as a form of communication between players. Popular gaming
VoIP clients include Ventrilo and Teamspeak, and there are others available also. The PlayStation
3 and Xbox 360 also offer VoIP chat features.
Internet access
Common methods of home access include dial-up, landline broadband (over coaxial cable, fiber
optic or copper wires), Wi-Fi, satellite and 3G technology cell phones. Public places to use the
Internet include libraries and Internet cafes, where computers with Internet connections are
available. There are also Internet access points in many public places such as airport halls and
coffee shops, in some cases just for brief use while standing. Various terms are used, such as
“public Internet kiosk”, “public access terminal”, and “Web payphone”. Many hotels now also
have public terminals, though these are usually fee-based. These terminals are widely accessed for
various usage like ticket booking, bank deposit, online payment etc. Wi-Fi provides wireless
access to computer networks, and therefore can do so to the Internet itself.
Hotspots providing such access include Wi-Fi cafes, where would-be users need to bring their
own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to
customers only, or fee-based. A hotspot need not be limited to a confined location. A whole campus
or park, or even an entire city can be enabled. Grassroots efforts have led to wireless community
networks.
Commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San
Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from such
places as a park bench. Apart from Wi-Fi, there have been experiments with proprietary mobile
wireless networks like Ricochet, various high-speed data services over cellular phone networks,
and fixed wireless services. High-end mobile phones such as smartphones generally come with
Internet access through the phone network. Web browsers such as Opera are available on these
advanced handsets, which can also run a wide variety of other Internet software. More mobile
phones have Internet access than PCs, though this is not as widely used. An Internet access
provider and protocol matrix differentiates the methods used to get online.
Marketing
The Internet has also become a large market for companies; some of the biggest companies today
have grown by taking advantage of the efficient nature of low-cost advertising and commerce
through the Internet, also known as e-commerce. It is the fastest way to spread information to a
vast number of people simultaneously. The Internet has also subsequently revolutionized
shopping—for example, a person can order a CD online and receive it in the mail within a couple
of days, or download it directly in some cases. The Internet has also greatly facilitated personalized
marketing which allows a company to market a product to a specific person or a specific group of
people more so than any other advertising medium.
Examples of personalized marketing include online communities such as MySpace, Friendster,
Orkut, Facebook and others which thousands of Internet users join to advertise themselves and
make friends online. Many of these users are young teens and adolescents ranging from 13 to 25
years old. In turn, when they advertise themselves they advertise interests and hobbies, which
online marketing companies can use as information as to what those users will purchase online,
and advertise their own companies’ products to those users.
• Extranet Applications
– Supply Chain Management
• Example: Dell Computers
– Real-Time Access to Information
• Example: CSX railroad
– Collaboration
• Example: Caterpillar
Secure Electronic Transaction (SET) is a protocol that ensures the security and integrity of
electronic transactions made using credit cards. SET is not itself a payment system; it is a security
protocol applied to those payments. It uses encryption and hashing techniques to secure credit
card payments made over the internet. Development of the SET protocol was supported by major
organizations such as Visa, Mastercard, Microsoft (which provided its Secure Transaction
Technology, STT) and Netscape (which provided its Secure Socket Layer, SSL, technology).
SET protocol restricts revealing of credit card details to merchants thus keeping hackers and
thieves at bay. SET protocol includes Certification Authorities for making use of standard Digital
Certificates like X.509 Certificate.
Before discussing SET further, let’s look at the general scenario of an electronic transaction, which
includes the client, the payment gateway, the client’s financial institution, the merchant, and the
merchant’s financial institution.
Requirements
SET protocol has several requirements to meet; the most important are confidentiality of payment
and order information, integrity of all transmitted data, and authentication of the cardholder and
the merchant.
Participants
In the general scenario of online transaction, SET includes similar participants:
1. Cardholder – customer
2. Issuer – customer financial institution
3. Merchant
4. Acquirer – merchant financial institution
5. Certificate authority – authority which follows certain standards and issues
certificates (like X.509v3) to all other participants.
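One well-known construction from SET, the dual signature, shows how its hashing techniques keep payment details away from the merchant. The sketch below covers only the hashing half (SET used SHA-1 and then RSA-signed the result; SHA-256 stands in here and the signing step is omitted), and the payment/order strings are invented for illustration:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Message digest; SET used SHA-1, SHA-256 stands in here."""
    return hashlib.sha256(data).digest()

def dual_hash(payment_info: bytes, order_info: bytes) -> bytes:
    """H(H(PI) || H(OI)): the digest the cardholder signs, letting the
    merchant verify the order without seeing payment details and the
    bank verify the payment without seeing the order."""
    return h(h(payment_info) + h(order_info))

# Illustrative data; the real protocol RSA-signs this digest.
pi = b"card=4111...;amount=49.99"
oi = b"order=12345;item=book"
digest = dual_hash(pi, oi)
```

Each party receives only its own half in cleartext plus the digest of the other half, yet both can verify the same signature.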
Electronic Payment Systems
E-commerce sites use electronic payment, where electronic payment refers to paperless monetary
transactions. Electronic payment has revolutionized the business processing by reducing the
paperwork, transaction costs, and labor cost. Being user friendly and less time-consuming than
manual processing, it helps business organization to expand its market reach/expansion. Listed
below are some of the modes of electronic payments −
• Credit Card
• Debit Card
• Smart Card
• E-Money
• Electronic Fund Transfer (EFT)
Credit Card
Payment by credit card is one of the most common modes of electronic payment. A credit card is
a small plastic card with a unique number attached to an account. It also has a magnetic strip
embedded in it, which is used to read the credit card via card readers. When a customer purchases
a product via credit card, the credit card issuer bank pays on behalf of the customer, and the
customer has a certain time period after which he/she can pay the credit card bill, usually on a
monthly payment cycle. The process involves the customer, the merchant, the card brand
company, the issuer bank, and the acquirer bank, and proceeds in the following steps.
Step 1 − The bank issues and activates a credit card to the customer on his/her request.
Step 2 − The customer presents the credit card information to the merchant site or to the merchant
from whom he/she wants to purchase a product/service.
Step 3 − The merchant validates the customer's identity by asking for approval from the card
brand company.
Step 4 − The card brand company authenticates the credit card and pays the transaction by credit.
The merchant keeps the sales slip.
Step 5 − The merchant submits the sales slip to the acquirer bank and gets the service charges
paid to him/her.
Step 6 − The acquirer bank requests the card brand company to clear the credit amount and gets
the payment.
Step 7 − The card brand company asks the issuer bank to clear the amount, and the amount gets
transferred to the card brand company.
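The seven steps above can be sketched as a simple simulation. The event strings are paraphrases of the steps, not the output of any real payment API:

```python
# Illustrative walk-through of the credit-card settlement steps; the
# amount and event wording are invented for the example.
def credit_card_flow(amount: float) -> list:
    events = []
    events.append("issuer bank activates card for customer")        # step 1
    events.append("customer presents card details to merchant")     # step 2
    events.append("merchant requests approval from card brand")     # step 3
    events.append(f"card brand authorizes {amount:.2f} on credit")  # step 4
    events.append("merchant submits sales slip to acquirer bank")   # step 5
    events.append("acquirer bank clears amount with card brand")    # step 6
    events.append("card brand settles amount with issuer bank")     # step 7
    return events

for event in credit_card_flow(49.99):
    print(event)
```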
Debit Card
Debit card, like credit card, is a small plastic card with a unique number mapped with the bank
account number. It is required to have a bank account before getting a debit card from the bank.
The major difference between a debit card and a credit card is that in case of payment through
debit card, the amount gets deducted from the card's bank account immediately and there should
be sufficient balance in the bank account for the transaction to get completed; whereas in case of
a credit card transaction, there is no such compulsion.
Debit cards free the customer from having to carry cash and cheques. Even merchants accept a
debit card readily. Having a restriction on the amount that can be withdrawn in a day using a debit
card helps the customer to keep a check on his/her spending.
Smart Card
Smart card is again similar to a credit card or a debit card in appearance, but it has a small
microprocessor chip embedded in it. It has the capacity to store a customer’s work-related and/or
personal information. Smart cards are also used to store money and the amount gets deducted after
every transaction.
Smart cards can only be accessed using a PIN that every customer is assigned. Smart cards are
secure, as they store information in encrypted format; they are also less expensive and provide
faster processing. Mondex and Visa Cash cards are examples of smart cards.
E-Money
E-Money transactions refer to situations in which payment is done over the network and the
amount gets transferred from one financial body to another without any involvement of a
middleman. E-money transactions are faster and more convenient, and save a lot of time.
Online payments done via credit cards, debit cards, or smart cards are examples of e-money
transactions. Another popular example is e-cash. In the case of e-cash, both customer and merchant
have to sign up with the bank or company issuing e-cash.
Electronic Fund Transfer (EFT)
It is a very popular electronic payment method to transfer money from one bank account to another
bank account. Accounts can be in the same bank or different banks. Fund transfer can be done
using ATM (Automated Teller Machine) or using a computer.
Nowadays, internet-based EFT is getting popular. In this case, a customer uses the website
provided by the bank, logs in to the bank's website and registers another bank account. He/she then
places a request to transfer a certain amount to that account. The customer's bank transfers the
amount to the other account if it is in the same bank; otherwise, the transfer request is forwarded
to an ACH (Automated Clearing House) to transfer the amount, and the amount is deducted from
the customer's account. Once the amount is transferred, the customer is notified of the fund transfer
by the bank.
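The flow above can be sketched as follows. The account records and bank names are invented for illustration, and the in-bank versus ACH distinction is reduced to a single comparison:

```python
# Hedged sketch of an internet-EFT transfer: intra-bank transfers settle
# directly, inter-bank transfers are routed through an ACH.
def transfer(accounts: dict, src: str, dst: str, amount: int) -> str:
    if accounts[src]["balance"] < amount:
        raise ValueError("insufficient funds")
    accounts[src]["balance"] -= amount      # debit the sender
    accounts[dst]["balance"] += amount      # credit the recipient
    same_bank = accounts[src]["bank"] == accounts[dst]["bank"]
    return "settled in-bank" if same_bank else "routed via ACH"

# Illustrative accounts at two different banks.
accounts = {
    "alice": {"bank": "A", "balance": 500},
    "bob":   {"bank": "B", "balance": 100},
}
```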
Electronic Funds Transfer (EFT) is a system of transferring money from one bank account directly
to another without any paper money changing hands. One of the most widely-used EFT programs
is direct deposit, through which payroll is deposited straight into an employee's bank account.
However, EFT refers to any transfer of funds initiated through an electronic terminal, including
credit card, ATM, Fedwire and point-of-sale (POS) transactions. It is used for both credit transfers,
such as payroll payments, and for debit transfers, such as mortgage payments.
Transactions are processed by the bank through the Automated Clearing House (ACH) network,
the secure transfer system that connects all U.S. financial institutions. For payments, funds are
transferred electronically from one bank account to the billing company's bank, usually less than
a day after the scheduled payment date.
The ACH Network operates as a batch processing system. Financial institutions accumulate ACH
transactions throughout the day, which are handled via batch processing later on. According to
NACHA, which creates payment and financial messaging rules and standards, the ACH Network
handles 24 billion EFTs each year, accounting for more than $41 trillion transferred. The ACH
Network is one of the largest and most reliable payment systems in the world, according to the
association.
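The batch model described above can be sketched as a small aggregation; the transaction records and bank names are illustrative:

```python
from collections import defaultdict

# ACH-style batch processing sketch: transactions accumulate during the
# day and are settled per receiving bank in one later pass.
def settle_batch(transactions):
    totals = defaultdict(int)
    for txn in transactions:
        totals[txn["receiving_bank"]] += txn["amount_cents"]
    return dict(totals)

# Illustrative day's worth of queued transfers.
day = [
    {"receiving_bank": "A", "amount_cents": 1000},
    {"receiving_bank": "B", "amount_cents": 250},
    {"receiving_bank": "A", "amount_cents": 500},
]
```

Batching is why ACH payments post later than the initiating click: nothing moves until the accumulated file is processed.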
To complete an EFT, the receiving party must provide the following information:
• The name of the bank receiving funds
• The type of account receiving funds (e.g., checking or savings)
• The bank’s ABA routing number
• The recipient’s account number
The growing popularity of EFT for online bill payment is paving the way for paperless transactions
where checks, stamps, envelopes and paper bills are obsolete. The benefits of EFT include reduced
administrative costs, increased efficiency, simplified bookkeeping, and greater security. However,
the number of companies who send and receive bills through the Internet is still relatively small.
Types of EFTs
• Direct deposit: Enables businesses to pay employees. During the employee onboarding
process, new employees typically specify the financial institution to receive the direct
deposit payments.
• Wire transfers: Used for non-regular payments, such as the down payment on a house.
• Automated Teller Machines (ATMs): Allows cash withdrawals and deposits, fund
transfers and checking of account balances at multiple locations, such as branch locations,
retail stores, shopping malls and airports.
• Debit cards: Allows users to pay for transactions and have those funds deducted from the
account linked to the card.
• Pay-by-phone systems: Allows users to pay bills or transfer money over the phone.
• Online banking: Available via personal computer, tablet or smartphone. Using online
banking, users can access accounts to make payments, transfer funds and check balances.
Security is an essential part of any transaction that takes place over the internet. Customers will
lose his/her faith in e-business if its security is compromised. Following are the essential
requirements for safe e-payments/transactions −
• Encryption − It is a very effective and practical way to safeguard the data being
transmitted over the network. Sender of the information encrypts the data using a secret
code and only the specified receiver can decrypt the data using the same or a different
secret code.
• Digital Signature − Digital signature ensures the authenticity of the information. A digital
signature is an e-signature authenticated through encryption and password.
• Security Certificates − Security certificate is a unique digital id used to verify the identity
of an individual website or user.
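The integrity and authenticity requirements above can be illustrated with a small sketch. An HMAC over a shared secret stands in here for the public-key digital signatures real payment protocols use; the secret and messages are invented for the example:

```python
import hashlib
import hmac

SECRET = b"demo-shared-secret"  # illustrative only; never hard-code secrets

def sign(message: bytes) -> str:
    """Produce an authentication tag over the message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check that the message was not altered and came from a key holder."""
    return hmac.compare_digest(sign(message), tag)

order = b"pay 49.99 to merchant-42"
tag = sign(order)
```

Any change to the message invalidates the tag, which is the property the payment protocols in the next subsections rely on.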
We will discuss here some of the popular protocols used over the internet to ensure secured online
transactions.
Secure Socket Layer (SSL)
It is the most commonly used protocol and is widely used across the industry. It meets the
following security requirements −
• Authentication
• Encryption
• Integrity
• Non-repudiation
"https://" is to be used for HTTP URLs with SSL, whereas "http://" is to be used for HTTP URLs
without SSL.
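On the client side, Python's standard `ssl` module shows the guarantees SSL/TLS establishes before any https:// request is made. This is a minimal sketch of context setup, not a full HTTPS client:

```python
import ssl

# A client-side TLS ("SSL") context as prepared before an https:// request.
# Certificate verification provides authentication; the negotiated cipher
# suite provides encryption and integrity of the exchanged data.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED  # server certificate is checked
assert context.check_hostname                    # certificate must match the host
```

Passing this context to a connection (for example via `http.client.HTTPSConnection`) is what turns a plain "http://" exchange into an "https://" one.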
Secure Hypertext Transfer Protocol (SHTTP)
SHTTP extends the HTTP internet protocol with public key encryption, authentication, and digital
signatures over the internet. Secure HTTP supports multiple security mechanisms, providing
security to the end-users. SHTTP works by negotiating the encryption scheme types used between
the client and the server.
Stored-Value Card
A stored value card is a type of electronic bank debit card. Stored-value cards have a specific
dollar value programmed into them. Banks provide these cards as a service for customers who
cannot open checking or other deposit accounts.
Stored-Value Card Definition
Stored-value cards come in two major categories. Closed-loop cards have a one-time limit, such
as with merchant gift cards and prepaid phone cards. Holders of open-loop cards, on the other
hand, may reload these with cash and use them again.
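The closed-loop versus open-loop distinction above can be sketched as a small class; the balances and behavior are illustrative, not a real card system:

```python
# Illustrative model of the two stored-value card categories.
class StoredValueCard:
    def __init__(self, balance_cents: int, open_loop: bool):
        self.balance_cents = balance_cents
        self.open_loop = open_loop

    def spend(self, amount_cents: int) -> None:
        if amount_cents > self.balance_cents:
            raise ValueError("insufficient stored value")
        self.balance_cents -= amount_cents

    def reload(self, amount_cents: int) -> None:
        # Only open-loop cards may be topped up with cash and reused.
        if not self.open_loop:
            raise ValueError("closed-loop cards cannot be reloaded")
        self.balance_cents += amount_cents

gift_card = StoredValueCard(2500, open_loop=False)  # e.g. a merchant gift card
prepaid = StoredValueCard(2500, open_loop=True)     # e.g. a reloadable prepaid card
```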
Stored Value Card Versus Debit Card
A stored value card differs from a debit card in that a debit card does not have a specific value of
money attached to it. Rather, it is a payment card that deducts money directly from a consumer’s
checking account when making a purchase. In this regard, its value directly correlates with the
value of the attached checking account.
Unlike a credit card, however (see below), debit cards generally do not allow a user to go into
debt. In addition, the cards often have daily limits for purchases (i.e. consumers might not be
able to spend large sums of money with simply a debit card). At times, if a user has signed up for
overdraft coverage, it may be possible to extend the amount of funds after a checking account
reaches zero. Overdraft allowance lets the individual continue withdrawing money – similar to a
credit card.
Some financial institutions offer overdraft protection, in which they bar an individual from
withdrawing once the account falls to a set limit, such as $100. This ensures the account never
goes below zero, which would trigger a service fee.
Stored Value Card Versus Credit Card
A credit card may also be used to make purchases in person at a store, over the phone or online.
Unlike a debit card or stored value card, however, a credit card allows the user to carry a balance.
In exchange for this privilege of using loaned funds, users often pay interest on an existing
balance. Credit cards may even charge higher interest rates than other personal loans, such as
auto loans, home equity loans, student loans, and mortgage loans (although rates are generally
lower than payday loans).
Unlike closed loop stored value cards, credit card loans are open-ended. A user can borrow
repeatedly as long as they stay below their credit limit.
Electronic Check
An electronic check, or e-check, is a form of payment made via the Internet, or another data
network, designed to perform the same function as a conventional paper check. Since the check is
in an electronic format, it can be processed in fewer steps.
Additionally, it has more security features than standard paper checks including authentication,
public key cryptography, digital signatures, and encryption, among others.
Key Takeaways
• An electronic check is a form of payment made via the internet that is designed to perform
the same function as a conventional paper check.
• One of the more frequently used versions of the electronic check is the direct deposit
system offered by many employers.
• Generally, the costs associated with issuing an electronic check are notably lower than
those associated with paper checks.
• An electronic check has more security features than standard paper checks.
An electronic check is part of the larger electronic banking field and part of a subset of transactions
referred to as electronic fund transfers (EFTs). This includes not only electronic checks but also
other computerized banking functions such as ATM withdrawals and deposits, debit card
transactions and remote check depositing features. The transactions require the use of various
computer and networking technologies to gain access to the relevant account data to perform the
requested actions.
Electronic checks were developed in response to the transactions that arose in the world of
electronic commerce. Electronic checks can be used to make a payment for any transaction that a
paper check can cover, and are governed by the same laws that apply to paper checks. This was
the first form of Internet-based payment used by the U.S. Treasury for making large online
payments.
Generally, the costs associated with issuing an electronic check are notably lower than those
associated with paper checks. Not only is there no requirement for a physical paper check, which
costs money to produce, but also electronic checks do not require physical postage in cases of
payments being made to entities outside the direct reach of the entity issuing the funds.
Prospect of Electronic Payment Systems
E-commerce is undergoing huge growth in terms of the volume of goods and services being traded
on-line. New areas such as B2B and the related business-to-government (B2G) e-commerce are
developing, and the potential for large numbers of people engaging in m-commerce from wireless
handsets is increasing. Even the most optimistic estimations of e-commerce still place the goods
value at less than 1% of the total value of goods and services traded in the conventional economy,
so as larger numbers of people come on-line there is plenty of scope for growth. In order to bring
an on-line transaction to completion, payment must be fully integrated into the on-line dialogue.
Banks will find a demand from their large business clients to effect high-value bank-mediated
transfers of funds easily and efficiently. Similar demand will be experienced in Europe and Asia
and, to a lesser extent, the developing world. Developments such as the Worldwide Automated
Clearing House (WATCH) may eventually lead to a situation in which individuals and
organizations transacting on the Internet can easily move funds to and from any country in the
world. These new payment systems providers may be more agile in responding to customer needs
and may supplant banks for certain classes of payments. This is particularly relevant in countries
whose banking infrastructure is less developed than that of advanced countries.
A large number of companies have developed universal payment portals offering a whole host of
ostensibly free information and services to consumers. The use of real micropayments, though, is
clearly more flexible and allows a much clearer link between the content delivered and the amount
paid. M-commerce is undoubtedly the most active area in electronic payments. As
telecommunications manufacturers and network operators seek to define the shape of the mobile
Internet, startup companies are busy coming up with new ways to make payments on-line. One
very large area of uncertainty is the degree to which the mobile Internet will resemble the fixed-
line Internet. With the advent of modern technologies in telecommunications, infrastructure and
protocols, future payments will increasingly be made through e-payments in Business-to-Business,
Business-to-Customer, and Customer-to-Government settings.
Managerial Issues
Managerial issues for electronic payment systems vary depending upon the business position.
• Security solution providers can cultivate the opportunity of providing solutions for secure
electronic payment systems. Typical ones include authentication, encryption, integrity, and
nonrepudiation.
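Two of these services, integrity and (shared-key) authentication, can be illustrated with a minimal sketch using Python's standard `hmac` and `hashlib` modules. The payment message, key, and field names below are illustrative assumptions, not part of any real payment protocol. Note that an HMAC alone does not give nonrepudiation, since both parties hold the same key; that service requires a public-key signature backed by a certificate, as in SET.

```python
import hmac
import hashlib

# Illustrative shared secret between the payer's bank and the merchant
# gateway; a real system would provision and rotate keys securely.
SECRET_KEY = b"demo-shared-secret"

def sign_payment(message: bytes) -> str:
    """Attach an HMAC-SHA256 tag: provides integrity and origin
    authentication for the payment message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_payment(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time, so any
    tampering with the message (or tag) is detected."""
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# Hypothetical payment instruction and its tag.
order = b"PAY merchant=42 amount=19.99 currency=USD"
tag = sign_payment(order)

print(verify_payment(order, tag))            # untampered message verifies
print(verify_payment(order + b"0", tag))     # any alteration is detected
```

Encryption (confidentiality) would be layered on separately, for example by carrying the signed message over TLS, since an HMAC authenticates the message but does not hide its contents.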
• Electronic payment systems solution providers can offer various types of electronic
payment systems to e-stores and banks. The SET solution of having the certificate on the
smart card is an emerging issue to be resolved.
• Electronic stores should select an appropriate set of electronic payment systems. Until
electronic payment methods become popular among customers, it is necessary to offer
traditional payment methods as well.
• Banks need to develop cyber-banks compatible with the various electronic payment
systems (credit card, debit card, stored-value card, and e-check) that will be used by
customers at e-stores. Watch for the development of consistent standards in certificates and
stored-value-card protocols.
• Credit card brand companies need to develop standards like SET and watch their
acceptance by customers. It is necessary to balance security with efficiency. Careful
attention is needed to determine when the SSL-based solution will be replaced by the
SET-based solution, and whether to combine the credit card with the open or closed
stored-value card.
• Smart card brands should develop a business model in cooperation with application
sectors (such as transportation and pay phones) and banks. Standards are the key to
expanding interoperable applications. In designing business models, it is important to
keep the number of smart cards a customer must carry reasonable.
• Certificate authorities need to identify all types of certificates to be provided. Banks and
credit card companies need to consider whether they should become a clearing agent.