
INFORMATION SYSTEMS FOR BUSINESS

Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar
OERI
Information Systems for Business
Revised First Edition (2021)

LY-HUONG T. PHAM, PH.D., MBA

TEJAL DESAI-NAIK

LAURIE HAMMOND

WAEL ABDELJABBAR, PH.D.


This text is disseminated via the Open Education Resource (OER) LibreTexts Project (https://LibreTexts.org) and, like the hundreds
of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all,
pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully
consult the applicable license(s) before pursuing such efforts.
Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their
students. Unlike traditional textbooks, LibreTexts' web-based origins allow powerful integration of advanced features and new
technologies to support learning.

The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform
for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our
students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-
access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource
environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being
optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are
organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields)
integrated.
The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot
Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions
Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant No. 1246120,
1525057, and 1413739.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not
necessarily reflect the views of the National Science Foundation or the US Department of Education.
Have questions or comments? For information about adoptions or adaptations contact [email protected]. More information on our
activities can be found via Facebook (https://facebook.com/Libretexts), Twitter (https://twitter.com/libretexts), or our blog
(http://Blog.Libretexts.org).
This text was compiled on 12/21/2023
An Open Educational Resource Supported by the Academic Senate for
California Community Colleges Open Educational Resources Initiative

The Academic Senate for California Community Colleges Open Educational Resources Initiative (OERI)
was funded by the California legislature in trailer bill language during the summer of 2018. The OERI’s
mission is to reduce the cost of educational resources for students by expanding the availability and
adoption of high quality Open Educational Resources (OER). The OERI facilitates and coordinates the
curation and development of OER texts, ancillaries, and support systems. In addition, the OERI supports
local OER implementation efforts through the provision of professional development, technical support,
and technical resources.

The information in this resource is intended solely for use by the user who accepts full responsibility for
its use. Although the author(s) and ASCCC OERI have made every effort to ensure that the information
in this resource is accurate, openly licensed, and accessible at press time, the author(s) and ASCCC
OERI do not assume and hereby disclaim any liability to any party for any loss, damage, or disruption
caused by errors or omissions, whether such errors or omissions result from negligence, accident, or any
other cause.

Please bring all such errors and changes to the attention of the Academic Senate for California Community
Colleges OER Initiative via e-mail ([email protected]).

Academic Senate for California Community Colleges


One Capitol Mall, Suite 230
Sacramento, CA 95814
TABLE OF CONTENTS
ASCCC OERI
About the Book
Licensing
Preface

1: What is an Information System?


1: What Is an Information System?
1.1: Introduction
1.2: Identifying the Components of Information Systems
1.3: The Role of Information Systems
1.4: Can Information Systems Bring Competitive Advantage?
1.5: Summary
1.6: Study Questions
2: Hardware
2.1: Introduction
2.2: Tour of a Digital Device
2.3: Sidebar- Moore’s Law
2.4: Removable Media
2.5: Other Computing Devices
2.6: Summary
2.7: Study Questions
3: Software
3.1: Introduction to Software
3.2: Types of Software
3.3: Cloud Computing
3.4: Software Creation
3.5: Summary
3.6: Study Questions
4: Data and Databases
4.1: Introduction to Data and Databases
4.2: Examples of Data
4.3: Structured Query Language
4.4: Designing a Database
4.5: Sidebar- The Difference between a Database and a Spreadsheet
4.6: Big Data
4.7: Data Warehouse
4.8: Data Mining
4.9: Database Management Systems
4.10: Enterprise Databases
4.11: Knowledge Management
4.12: Sidebar- What is data science?
4.13: Summary
4.14: Study Questions
5: Networking and Communication

5.1: Introduction to Networking and Communication
5.2: A Brief History of the Internet
5.3: Networking Today
5.4: How has the Human Network Influenced you?
5.5: Providing Resources in a Network
5.6: LANs, WANs, and the Internet
5.7: Network Representations
5.8: The Internet, Intranets, and Extranets
5.9: Internet Connections
5.10: The Network as a Platform Converged Networks
5.11: Reliable Network
5.12: The Changing Network Environment Network Trends
5.13: Technology Trends in the Home
5.14: Network Security
5.15: Summary
5.16: Study Questions
6: Information Systems Security
6.1: Introduction
6.2: The Information Security Triad- Confidentiality, Integrity, Availability (CIA)
6.3: Tools for Information Security
6.4: Threat Impact
6.5: Fighters in the War Against Cybercrime- The Modern Security Operations Center
6.6: Security vs. Availability
6.7: Summary
6.8: Study Questions

2: Information Systems for Strategic Advantage


7: Leveraging Information Technology (IT) for Competitive Advantage
7.1: Introduction
7.2: The Productivity Paradox
7.3: Competitive Advantage
7.4: Using Information Systems for Competitive Advantage
7.5: Investing in IT for Competitive Advantage
7.6: Summary
7.7: Study Questions
8: Business Processes
8.1: Introduction
8.2: What Is a Business Process?
8.3: Summary
8.4: Study Questions
9: The People in Information Systems
9.1: Introduction
9.2: The Creators of Information Systems
9.3: Information-Systems Operations and Administration
9.4: Managing Information Systems
9.5: Emerging Roles
9.6: Career Path in Information Systems
9.7: Information-Systems Users – Types of Users
9.8: Summary
9.9: Study Questions

10: Information Systems Development
10.1: Introduction
10.2: Systems Development Life Cycle (SDLC) Model
10.3: Software Development
10.4: Implementation Methodologies
10.5: Summary
10.6: Study Questions
10.7: Summary

3: Information Systems Beyond the Organization


11: Information Systems Beyond the Organization
11.1: Introduction
11.2: The Global Firm
11.3: The Digital Divide
11.4: Summary
11.5: Study Questions
12: The Ethical and Legal Implications of Information Systems
12.1: Introduction
12.2: Intellectual Property
12.3: The Digital Millennium Copyright Act
12.4: Summary
12.5: Study Questions
13: Future Trends in Information Systems
13.1: Introduction
13.2: Collaborative
13.3: Internet of Things (IoT)
13.4: Future of Information Systems
13.5: Study Questions

Index

Glossary
Detailed Licensing

About the Book

An Open Educational Resource Supported by the Academic Senate for California Community
Colleges Open Educational Resources Initiative
The Academic Senate for California Community Colleges Open Educational Resources Initiative (OERI) was funded by the
California legislature in trailer bill language during the summer of 2018. The OERI’s mission is to reduce the cost of educational
resources for students by expanding the availability and adoption of high quality Open Educational Resources (OER). The OERI
facilitates and coordinates the curation and development of OER texts, ancillaries, and support systems. In addition, the OERI
supports local OER implementation efforts through the provision of professional development, technical support, and technical
resources.
The information in this resource is intended solely for use by the user who accepts full responsibility for its use. Although the
author(s) and ASCCC OERI have made every effort to ensure that the information in this resource is accurate, openly licensed, and
accessible at press time, the author(s) and ASCCC OERI do not assume and hereby disclaim any liability to any party for any loss,
damage, or disruption caused by errors or omissions, whether such errors or omissions result from negligence, accident, or any
other cause.
Please bring all such errors and suggested changes to the resource to the attention of the Academic Senate for California Community
Colleges OER Initiative via e-mail ([email protected]).
Academic Senate for California Community Colleges
One Capitol Mall, Suite 230
Sacramento, CA 95814

Book Contributors
This book is written for a general business audience and for the California Community Colleges course C-ID BUS 140, Business
Information Systems/Computer Information Systems.
Information Systems for Business and Beyond was originally developed in 2014 by David T. Bourgeois Ph.D., and is licensed
under CC BY 4.0.
The book was updated in 2019 by James L. Smith Ph.D., Shouhong Wong, Ph.D., and Joseph Mortati, MBA, and is licensed under
CC BY-NC-SA 3.0.
This Revised First Edition (2021) was edited by:
Ly-Huong T. Pham, MBA, Ph.D. (all chapters)
Tejal Desai-Naik (chapters 7, 8, 9, and 12)
Laurie Hammond (chapters 2, 4, and 11)
Wael Abdeljabbar, Ph.D. (chapters 5 and 6)
Renee N. Albrecht is acknowledged for her early contribution to our editorial process.
This Revised First Edition is licensed under CC BY-NC 4.0.

Licensing
A detailed breakdown of this resource's licensing can be found in Back Matter/Detailed Licensing.

Preface
Introduction
Welcome to Information Systems for Business. In this book, you will be introduced to the concept of information systems, their use
in business, and emerging trends. You will gain insights into how firms can use information systems to sustain their competitive
advantages, how these systems help connect people globally, and how you may use them for your personal and professional career development.

Audience
This book is written as an introductory text, meant for those with little or no experience with computers or information systems.
While the descriptions can sometimes get a little technical, every effort has been made to convey the information essential to
understanding a topic without getting bogged down in detailed terminology.

Chapter Outline
The text is organized around thirteen chapters divided into three major parts, as follows:

Part 1: What Is an Information System?


Chapter 1: What Is an Information System? – This chapter provides an overview of information systems and their components,
including the history of how we got where we are today.
Chapter 2: Hardware – We discuss hardware and how it works. We will look at different types of computing devices and computer
parts, learn how they interact, and examine the effect of the commoditization of these devices.
Chapter 3: Software – Software and hardware cannot function without each other. Without software, hardware is useless;
without hardware, software has nothing to run on. This chapter discusses the types of software, their purpose, and how
they support different hardware devices, individuals, groups, and organizations.
Chapter 4: Data and Databases – This chapter explores how organizations use information systems to turn data into information
and knowledge to be used for competitive advantage. We will discuss how different types of data are captured and managed,
different types of databases, and how individuals and organizations use them.
Chapter 5: Networking and Communication – Today's computing and smart devices are expected to be always connected
to support the way we learn, communicate, do business, work, and play – in any place, on any device, and at any time.
In this chapter, we review the history of networking, how the Internet works, and the use of multiple networks in organizations
today.
Chapter 6: Information Systems Security – We discuss the information security triad of confidentiality, integrity, and
availability. We will review different types of threats and associated costs for individuals, organizations, and nations. We will
discuss different security tools and technologies, how security operation centers can secure organizations’ resources and assets,
and a primer on personal information security.

Part 2: Information Systems for Strategic Advantage


Chapter 7: Leveraging Information Technology (IT) for Competitive Advantage – This chapter examines the impact that
information systems have on organizations, how organizations can use IT to develop and sustain competitive advantages, and how
they can improve operational effectiveness in their value-chain decision-making processes. We will discuss seminal works by Brynjolfsson, Carr,
and Porter related to IT and competitive advantage.
Chapter 8: Business Processes – Business processes are the essence of what a business does, and information systems play an
important role in making them work. This chapter will discuss business process management, business process reengineering,
and ERP systems.
Chapter 9: The People in Information Systems – This chapter will provide an overview of the different types of people involved
in information systems. This includes the people (and machines) who create information systems, those who operate and administer
information systems, those who manage or support information systems, those who use information systems, and the IT job
outlook.
Chapter 10: Information Systems Development – People build information systems for people to use. This chapter will look at
different methods to manage an information system's development process, with special attention to software development;
it will also review mobile application development and discuss end-user computing. We will look at key trade-offs that organizations face
in making critical decisions to "build vs. buy or subscribe," and the balancing act between scope, cost, and time while delivering a
high-quality project and obtaining buy-in from users.

Part 3: Information Systems beyond the Organization
Chapter 11: Globalization and the Digital Divide – The rapid rise of the Internet has made it easier than ever to do business
worldwide. This chapter will look at the impact that the Internet is having on the globalization of business. Firms will need to
manage challenges and leverage opportunities due to globalization and digitalization. It will discuss the digital divide concept,
what steps have been taken to date to alleviate it, and what more needs to be done.
Chapter 12: The Ethical and Legal Implications of Information Systems – The rapid changes in all the components of
information systems in the past few decades have brought a broad array of new capabilities and powers to governments,
organizations, and individuals alike. This chapter will discuss the effects that these new capabilities have had, the legal and
regulatory changes that have been put in place in response, and the ethical issues organizations and IT communities need to
consider when using or developing emerging solutions and services for which regulations are not yet fully developed.
Chapter 13: Future Trends in Information Systems – This final chapter will present an overview of some new or
recently introduced technologies. From wearable technology, virtual reality, the Internet of Things, and quantum computing to artificial
intelligence, this chapter will provide a look forward at what the next few years may bring and how these technologies could transform how we
learn, communicate, do business, work, and play.

For the Student


Each chapter in this text begins with a list of the relevant learning objectives and ends with a chapter summary. Following the
summary is a list of study questions that highlight key topics in the chapter, along with suggested exercises to apply what you learn from
each chapter to the current environment. To get the best learning experience, you would be wise to begin by reading the learning
objectives, the summary, and the questions at the end of the chapter, and then reflect on how your personal or professional growth can be
enhanced.

For the Instructor


Learning objectives can be found at the beginning of each chapter. Of course, all chapters are recommended for use in an
introductory information systems course. However, for courses on a shorter calendar or courses using additional textbooks, a
review of the learning objectives will help determine which chapters can be omitted.
At the end of each chapter, there is a set of study questions and exercises. The study questions can be assigned to help focus
students' reading on the learning objectives. The exercises are meant to be a more in-depth, experiential way for students to learn
chapter topics and reflect on how what they have learned in each chapter can help them in their chosen interest or career. It is
recommended that you review any exercise before assigning it, adding any detail needed (such as length, due date, extra resources,
etc.) for students to complete the assignments.

SECTION OVERVIEW

1: What is an Information System?


1: What Is an Information System?
1.1: Introduction
1.2: Identifying the Components of Information Systems
1.3: The Role of Information Systems
1.4: Can Information Systems Bring Competitive Advantage?
1.5: Summary
1.6: Study Questions

2: Hardware
2.1: Introduction
2.2: Tour of a Digital Device
2.3: Sidebar- Moore’s Law
2.4: Removable Media
2.5: Other Computing Devices
2.6: Summary
2.7: Study Questions

3: Software
3.1: Introduction to Software
3.2: Types of Software
3.3: Cloud Computing
3.4: Software Creation
3.5: Summary
3.6: Study Questions

4: Data and Databases


4.1: Introduction to Data and Databases
4.2: Examples of Data
4.3: Structured Query Language
4.4: Designing a Database
4.5: Sidebar- The Difference between a Database and a Spreadsheet
4.6: Big Data
4.7: Data Warehouse
4.8: Data Mining
4.9: Database Management Systems
4.10: Enterprise Databases
4.11: Knowledge Management
4.12: Sidebar- What is data science?
4.13: Summary
4.14: Study Questions

5: Networking and Communication
5.1: Introduction to Networking and Communication
5.2: A Brief History of the Internet
5.3: Networking Today
5.4: How has the Human Network Influenced you?
5.5: Providing Resources in a Network
5.6: LANs, WANs, and the Internet
5.7: Network Representations
5.8: The Internet, Intranets, and Extranets
5.9: Internet Connections
5.10: The Network as a Platform Converged Networks
5.11: Reliable Network
5.12: The Changing Network Environment Network Trends
5.13: Technology Trends in the Home
5.14: Network Security
5.15: Summary
5.16: Study Questions

6: Information Systems Security


6.1: Introduction
6.2: The Information Security Triad- Confidentiality, Integrity, Availability (CIA)
6.3: Tools for Information Security
6.4: Threat Impact
6.5: Fighters in the War Against Cybercrime- The Modern Security Operations Center
6.6: Security vs. Availability
6.7: Summary
6.8: Study Questions

This page titled 1: What is an Information System? is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong
T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

1: What Is an Information System?


Learning Objectives

Upon successful completion of this chapter, you will be able to:


Define what an information system is by identifying its major components;
Describe the basic history of information systems;
Discuss the role and purpose of information systems; and
Explain why IT matters

This chapter provides an overview of information systems and their components, including the history of how we got where we are
today.
1.1: Introduction
1.2: Identifying the Components of Information Systems
1.3: The Role of Information Systems
1.4: Can Information Systems Bring Competitive Advantage?
1.5: Summary
1.6: Study Questions

This page titled 1: What Is an Information System? is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong
T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

1.1: Introduction
Introduction
In the course of a given day, think of activities that you do to entertain yourself, deliver a work product, purchase something, or
interact with your family, friends, or co-workers. How many times do you snap a picture, post a text, or email your friends? Can
you even remember the number of times you used a search engine in a day? Consider what you are using to do these activities.
Most likely, many, if not all, of these activities involve using technologies such as a smartphone, a laptop, a website, or an app.
These activities are also enabled by Wi-Fi networks that surround us everywhere, be it on the school’s campus, workplace, the
airport, or even cars. You are already a user of one or more information systems, using one or more electronic devices and different
software or apps, and connecting globally through different networks. Welcome to the world of information systems!
Information systems affect our personal lives, careers, society, and the global economy by evolving to change businesses and the way we
live. To prepare yourself to participate in developing or using information systems, building a business, or advancing your career, you must
be familiar with an information system's fundamental concepts.

Defining Information Systems


Students from diverse disciplines, including business, are often required to take a course to learn about information systems. Let’s
start with the term Information System (IS). What comes to your mind? Computers? Devices? Apps? Here are a few definitions
from a few sources:
“Information Systems is an academic study of systems with a specific reference to information and the complementary
networks of hardware and software that people and organizations use to collect, filter, process, create and also distribute data.”
(Wikipedia Information Systems, 2020)
“Information systems are combinations of hardware, software, and telecommunications networks that people build and use to
collect, create, and distribute useful data, typically in organizational settings.” (Valacich et al., 2010)
“Information systems are interrelated components working together to collect, process, store, and disseminate information to
support decision making, coordination, control, analysis, and visualization in an organization.” (Laudon et al., 2012)
They sound similar, yet there is something different in each as well. In fact, these authors define the terms from these perspectives:
What are the components that make up an information system? How do they work together?
What is the role of IS in providing value to businesses and to individuals in solving their needs?
Let’s examine each perspective.

References
Information Systems. (2017, June 05). Retrieved July 28, 2020, from https://en.wikipedia.org/wiki/Information_system
Laudon, K.C. and Laudon, J. P. (2012). Management Information Systems, twelfth edition. Upper Saddle River, New Jersey:
Prentice-Hall.
Valacich, J. and Schneider, C. (2010). Information Systems Today – Managing in the Digital World, fourth edition. Upper Saddle
River, New Jersey: Prentice-Hall.

This page titled 1.1: Introduction is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

1.2: Identifying the Components of Information Systems
Let's use your experience as users to understand the above definitions. For example, let's say you work for a small business, and
your manager asks you to track the expenses of the business and send her the list so that she can see where the money has gone.
You decide to use a spreadsheet on your laptop to enter the list of expenses you have collected and then email the spreadsheet to
her once you are done. You will need a laptop, a spreadsheet program, an email program, and an internet
connection. All these components must work together perfectly! In essence, you are using the interrelated components in an IS to
allow it to collect, process, store, and disseminate information. The role of this IS is to enable you to create new value (i.e., the
expense tracker) and for your manager to use the information you disseminate "to support decision making, coordination, control,
analysis, and visualization in an organization" (Laudon et al., 2011). You and your manager have achieved your goals through the
processes you created to capture the data, calculate and check it, and determine how and when your manager receives the new information
you created so she can make decisions to manage her company.
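To make this scenario concrete, here is a minimal, hypothetical Python sketch of the same expense-tracking workflow; the categories, amounts, and file name are invented for illustration, and the comments map each step to the components of an information system discussed next.

```python
# A minimal sketch of the expense-tracking scenario above (all names and
# figures are hypothetical). Each step maps to a component of an
# information system.
import csv

# Data: raw facts collected for the business, entered on your laptop
# (the hardware) using spreadsheet-like software.
expenses = [
    {"category": "Travel", "amount": 120.00},
    {"category": "Supplies", "amount": 45.50},
    {"category": "Travel", "amount": 80.25},
]

# Process + software: the steps you defined -- capture, calculate, check.
total = sum(row["amount"] for row in expenses)
assert all(row["amount"] >= 0 for row in expenses), "expenses must be non-negative"

# Disseminate: produce a report your manager (the people component) receives,
# for example as an email attachment sent over the Internet (the network component).
with open("expense_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["category", "amount"])
    for row in expenses:
        writer.writerow([row["category"], row["amount"]])
    writer.writerow(["TOTAL", total])
```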
Hence, information systems can be viewed as having six major components: hardware, software, network communications, data,
people, and processes.

Figure 1.2.1 : Components of Information Systems. Image by Ly-Huong Pham is licensed under CC BY NC
Each has a specific role, and all roles must work together to have a working information system. In this book, we group the first
four components as Technology. People and Processes are the two components that deliver value to organizations in how they use
the collection of technologies to meet specific organizations’ goals.

Technology
Technology can be thought of as the application of scientific knowledge for practical purposes. From the invention of the wheel to
the harnessing of electricity for artificial lighting, technology is a part of our lives in so many ways that we tend to take it for
granted. As discussed before, the first four components of information systems – hardware, software, network communication, and
data – are all technologies that must integrate well together. Each of these will get its own chapter and a much lengthier discussion,
but we will take a moment to introduce them to give you a big picture of what each component is and how they work together.

Hardware
Hardware represents the physical components of an information system. Some can be seen or touched easily, while others reside
inside a device that can only be seen by opening up the device's case. Keyboards, mice, pens, disk drives, iPads, printers, and flash
drives are all visible examples. Computer chips, motherboards, and internal memory chips are the hardware that resides inside a
computer case and is not usually visible from the outside. Chapter 2 will go into more detail about how they function and work
together. For example, users use a keyboard to enter data or a pen to draw a picture.

Figure 1.2.2 : Keyboard and iPad by Firmbee from Pixabay, Pen by athree23 from Pixabay, Printer by Steve Buissinne from
Pixabay, Keyboard by Gerd Altmann from Pixabay. All images are licensed under CC BY 2.0

Software
Software is a set of instructions that tell the hardware what to do. Software is not tangible – it cannot be touched. Programmers
create software programs by following a specific process to enter a list of instructions that tell the hardware what to do. There are
several categories of software, with the two main categories being operating-system and application software.

Figure 1.2.3 : This image is a derivative work from David Bourgeois is licensed under CC BY 2.0. This work “Hardware, Software,
Users - Interrelated” by Ly-Huong Pham is licensed under CC BY-NC
Operating system software provides an interface between the hardware and applications, shielding programmers from having to learn
the specifics of the underlying hardware. Chapter 3 will discuss software more thoroughly. Here are a few examples:
Examples of Operating Systems and Applications by Device

Device  | Operating Systems              | Applications
Desktop | Apple macOS, Microsoft Windows | Adobe Photoshop, Microsoft Excel, Google Maps
Mobile  | Google Android, Apple iOS      | Texting, Google Maps
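As a small illustration of the point above (our own sketch, not an example from the text), the Python code below saves a note to disk without the programmer knowing anything about the disk drive or file system; the operating system, through the language runtime, handles those details on any of the operating systems listed in the table.

```python
# Application-level code: it asks the operating system to store some text.
# The OS (and the Python runtime on top of it) hides the details of the
# disk hardware, so the same code runs on Windows, macOS, or Linux.
from pathlib import Path

def save_note(text: str, filename: str = "note.txt") -> Path:
    path = Path(filename)
    path.write_text(text, encoding="utf-8")  # the OS decides how the bytes reach the hardware
    return path.resolve()

print(save_note("Hardware, software, and users are interrelated."))
```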

Data
The third component is data. You can think of data as a collection of non-disputable raw facts. For example, your first name,
driver's license number, the city you live in, a picture of your pet, a clip of your voice, and your phone number are all pieces of raw
data. You can see or hear your data, but by themselves, they don’t give you any additional meanings beyond the data itself. For
example, you can read a person's driver's license number and recognize it as such, but you know nothing else about that person.
These raw facts are typically what an IS would need to collect from you or other sources. However, once these
raw data are aggregated, indexed, and organized in a logical fashion using software such as a spreadsheet or a database,
the collection of organized data will present new information and insights that a single raw fact can't convey. The example of
collecting all expenses (i.e., raw data) to create an expense tracker (new information derived) discussed earlier is also a good
example. In fact, all of the definitions presented at the beginning of this chapter focused on how information systems manage data.
Organizations collect all kinds of data, process and organize them in some fashion, and use them to make decisions. These
decisions can then be analyzed for their effectiveness, and the organization can be improved. Chapter 4 will focus on data and
databases and their uses in organizations.
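As a small, hypothetical illustration of how organizing raw data produces information, the Python sketch below loads individual sales records (the store names and amounts are invented) into an in-memory SQLite database and aggregates them into a summary that a manager could act on.

```python
# Raw facts by themselves say little; organized and aggregated in a
# database, they become information (here, total sales by store).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("Store A", "Widget", 19.99),   # raw data: individual transactions
     ("Store B", "Widget", 19.99),
     ("Store A", "Gadget", 5.49)],
)

# Information: a summary that supports a decision (e.g., where to restock).
for store, total in conn.execute(
    "SELECT store, SUM(amount) AS total FROM sales GROUP BY store ORDER BY total DESC"
):
    print(f"{store}: {total:.2f}")
```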

Networking Communication
The components of hardware, software, and data have long been considered the core technology of information systems. However,
networking communication is another component of an IS that some believe should be in its own category. An information system
can exist without the ability to communicate. For instance, the first personal computers were stand-alone machines that did not
have access to the Internet. Information systems, however, have evolved since they were developed. For example, we used to have
only desktop operating systems and desktop hardware; in today's environment, operating system software also includes mobile
operating systems, and hardware includes many devices besides desktops. It is extremely rare today to find a computing
device that does not connect to another device or to a network. Chapter 5 will go into this topic in greater detail.

Figure 1.2.4 : Network by Gerd Altmann from Pixabay is licensed under CC BY-SA 2.0

People
People built computers for people to use. This means that there are many different categories of people involved in the development and management
of information systems to help organizations create value and improve productivity, such as:
Users: these are the people who actually use an IS to perform a job function or task. Examples include a student using a
spreadsheet or a word processing program.
Technical Developers: these are the people who actually create the technologies used to build an information system. Examples
include a computer chip engineer, a software programmer, and an application programmer.
Business Professionals: these are the CEOs, owners, managers, entrepreneurs, and employees who use IS to start or expand their
business and to perform job functions such as accounting, marketing, sales, human resources, and customer support, among
others. Examples include famous CEOs such as Jeff Bezos of Amazon, Steve Jobs of Apple, Bill Gates of Microsoft, and Marc
Benioff of Salesforce.

Figure 1.2.5 : Jeff Bezos, by Seattle City Council via Flicker, Steve Jobs and Bill Gates by Joi Ito via Flickr, Marc Benioff by
Global Climate Action Summit 2018 via Flicker, All images are licensed under CC BY-SA 2.0
IT Support: These specialized professionals are trained to keep the information systems running smoothly to support the
business and keep it safe from illegal attacks. Examples include network analysts, data center support, help-desk support.
These are just some of the key people; more details will be covered in Chapters 9 and 10.

Process
The last component of information systems is Process. A business process is a series of steps undertaken to achieve a desired
outcome or goal. Businesses have to continually innovate to either create more revenues through new products and services that
fulfill customers’ needs or to find cost-saving opportunities in the ways they run their companies. Simply automating activities
using technology is not enough. Information systems are becoming more and more integrated with organizational processes to
deliver value in revenue-generating and cost-saving activities that can give companies competitive advantages over their
competitors. Specialized standards or processes such as “business process reengineering,” “business process management,”
“enterprise resource planning,” and “customer relationship management” all have to do with the continued improvement of these
business procedures and the integration of technology with them to improve internal efficiencies and to gain a deeper
understanding of customers’ needs. Businesses hoping to gain an advantage over their competitors are highly focused on this
component of information systems. We will discuss processes in Chapter 8.
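As a simple, hypothetical sketch of what "a series of steps" can look like when an information system supports it, the Python example below encodes a tiny order-fulfillment process as an ordered list of functions; real business processes are far richer and are usually modeled and automated with dedicated tools.

```python
# A business process expressed as an ordered series of steps; an
# information system can automate, enforce, and measure each step.
def receive_order(order):
    order["status"] = "received"
    return order

def check_inventory(order):
    order["in_stock"] = True  # assumption: a real system would query inventory data
    return order

def ship_order(order):
    order["status"] = "shipped" if order["in_stock"] else "backordered"
    return order

process = [receive_order, check_inventory, ship_order]  # the defined sequence of steps

order = {"id": 1001, "item": "Widget"}
for step in process:
    order = step(order)
print(order)  # {'id': 1001, 'item': 'Widget', 'status': 'shipped', 'in_stock': True}
```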

Reference
Laudon, K. C., & Laudon, J. P. (2011). Management information systems. Upper Saddle River, NJ: Prentice-Hall.

This page titled 1.2: Identifying the Components of Information Systems is shared under a CC BY 3.0 license and was authored, remixed, and/or
curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative
(OERI)) .

1.3: The Role of Information Systems
Now that we have explored the different components of information systems (IS), we need to turn our attention to IS's role in an
organization. From our definitions above, we see that these components collect, store, organize, and distribute data throughout the
organization, which is the first half of the definition. To address the second part of the definition, we can now ask what these components
actually do for an organization "to support decision making, coordination, control, analysis, and visualization in
an organization." Earlier, we discussed how an IS collects raw data and organizes them to create new information that aids in the running of
a business. To help management make informed, critical decisions, an IS has to take the information further by transforming it into
organizational knowledge. In fact, we could say that one of the roles of IS is to take data, turn it into information, and then
transform that into organizational knowledge. As technology has developed and the business world has become more data-driven,
IS's role has evolved from a tool to run an organization efficiently to a strategic tool for competitive advantage. To get a full appreciation
of IS's role, we will review how IS has changed over the years to create new opportunities for businesses and address evolving
human needs.

The Early Years (1930s-1950s)


We may say that computer history came to public view in the 1930s when George Stibitz developed the “Model K” Adder on his
kitchen table using telephone company relays and proved the viability of the concept of ‘Boolean logic,’ a fundamental concept in
the design of computers. From 1939 on, we saw the evolution from special-purpose equipment to general-purpose computers by
companies that are now iconic in the computing industry, such as Hewlett-Packard, whose first product, the HP200A Audio Oscillator,
was used in Disney's Fantasia. The 1940s gave us the first computer program running on a computer through the work of John von
Neumann, Frederic Williams, Tom Kilburn, and Geoff Toothill. The 1950s gave us the first commercial computer, the UNIVAC 1,
made by Remington Rand and delivered to the US Census Bureau; it weighed 29,000 pounds and cost more than $1,000,000 each.
(Computer History Museum, n.d.)

Figure 1.3.1 : Model K Adder, Image by Arnold Reinhold is licensed under CC BY 4.0
Software evolved along with the hardware evolution. Grace Hopper completed A-0, the program that allowed programmers to enter
instructions to hardware with English-like words on the UNIVAC 1. With the arrival of general and commercial computers, we
entered what is now referred to as the mainframe era. (Computer History Museum, n.d.)

Figure 1.3.2 : Univac 1, U.S. Census Bureau employees are licensed under CC-PD (right) Commodore Grace M. Hopper, Image by
James S. Davis is licensed under CC-PD

The Mainframe Era
From the late 1950s through the 1960s, computers were seen as a way to do calculations more efficiently. These first business computers
were room-sized monsters, with several refrigerator-sized machines linked together. These devices' primary work was to organize
and store large volumes of information that were tedious to manage by hand. More companies were founded to expand the
computer hardware and software industry, such as Digital Equipment Corporation (DEC), RCA, and IBM. Only large businesses,
universities, and government agencies could afford them, and they took a crew of specialized personnel and specialized facilities to
install them.
IBM introduced System/360 with five models. It was hailed as a major milestone in computing history because it was targeted at
business customers in addition to the existing scientific customers and, equally important, all models could run the same software (Computer
History, n.d.). These models could serve up to hundreds of users at a time through the technique called time-sharing. Typical
functions included scientific calculations and accounting under the broader umbrella of “data processing.”

Figure 1.3.3 : Registered trademark of International Business Machines


In the late 1960s, the Manufacturing Resources Planning (MRP) systems were introduced. This software, running on a mainframe
computer, gave companies the ability to manage the manufacturing process, making it more efficient. From tracking inventory to
creating bills of materials to scheduling production, the MRP systems (and later the MRP II systems) gave more businesses a
reason to integrate computing into their processes. IBM became the dominant mainframe company. Nicknamed “Big Blue,” the
company became synonymous with business computing. Continued software improvement and the availability of cheaper hardware
eventually brought mainframe computers (and their little sibling, the minicomputer) into most large businesses.

The PC Revolution
The 1970s ushered in an era of growth in both smaller computers – microcomputers – and faster big machines –
supercomputers. In 1975, the first microcomputer was announced on the cover of Popular Electronics: the Altair 8800, invented by
Ed Roberts, who coined the term "personal computer." The Altair sold for $297–$395, came with 256 bytes of memory,
and licensed Bill Gates and Paul Allen's BASIC programming language. Its immediate popularity sparked entrepreneurs'
imagination everywhere, and there were quickly dozens of companies making these “personal computers.” Though at first just a
niche product for computer hobbyists, improvements in usability and practical software availability led to growing sales. The most
prominent of these early personal computer makers was a little company known as Apple Computer, headed by Steve Jobs and
Steve Wozniak, with the hugely successful "Apple II." (Computer History Museum, n.d.)

Figure 1.3.4 : Altair 8800 Computer with 8 inch floppy disk system - Image by Swtpc6800 is licensed under CC-PD. (right) Apple
II Computer - Image by Rama is licensed under CC BY-SA 2.0 FR
Hardware companies such as Intel and Motorola continued to introduce faster and faster microprocessors (i.e., computer chips).
Not wanting to be left out of the revolution, in 1981, IBM (teaming with a little company called Microsoft for their operating
system software) released their own version of the personal computer, called the "PC." Businesses, which had used IBM
mainframes for years to run their businesses, finally had the permission they needed to bring personal computers into their
companies, and the IBM PC took off. The IBM PC was named Time magazine's "Machine of the Year" in 1982.
Because of the IBM PC’s open architecture, it was easy for other companies to copy or “clone” it. During the 1980s, many new
computer companies sprang up, offering less expensive versions of the PC. This drove prices down and spurred innovation.
Microsoft developed its Windows operating system and made the PC even easier to use. Common uses for the PC during this
period included word processing, spreadsheets, and databases. These early PCs were not connected to any network; for the most
part, they stood alone as islands of innovation within the larger organization. The price of PCs became more and more affordable
with the entry of new companies such as Dell.
Today, we continue to see PCs' miniaturization into a new range of hardware devices such as laptops, Apple iPhone, Amazon
Kindle, Google Nest, and the Apple Watch. Not only did the computers become smaller, but they also became faster and more
powerful; the big computers, in turn, evolved into supercomputers, with IBM Inc. and Cray Inc. among the leading vendors.

Client-Server
By the mid-1980s, businesses began to see the need to connect their computers to collaborate and share resources. This networking
architecture was referred to as “client-server” because users would log in to the local area network (LAN) from their PC (the
“client”) by connecting to a powerful computer called a “server,” which would then grant them rights to different resources on the
network (such as shared file areas and a printer). Software companies began developing applications that allowed multiple users to
access the same data at the same time. This evolved into software applications for communicating, with the first prevalent use of
electronic mail appearing at this time.

Figure 1.3.5 : Registered trademark of SAP


This networking and data sharing all stayed within the confines of each business, for the most part. While there was sharing of
electronic data between companies, this was a very specialized function. Computers were now seen as tools to collaborate
internally within an organization. In fact, these computers' networks were becoming so powerful that they were replacing many of
the functions previously performed by the larger mainframe computers at a fraction of the cost.
During this era, the first Enterprise Resource Planning (ERP) systems were developed and run on the client-server architecture. An
ERP system is a software application with a centralized database that can be used to run a company’s entire business. With separate
modules for accounting, finance, inventory, human resources, and many more, ERP systems – with Germany's SAP leading the way –
represented the state of the art in information systems integration. We will discuss ERP systems as part of the chapter on business processes
(Chapter 8).

The Internet, World Wide Web, and Web 1.0


Networking communication, along with software technologies, evolved through all of these periods: the modem in the 1940s, the clickable link in
the 1950s, email (the "killer app" with its now-iconic "@") and mobile networks in the 1970s, and the early rise of online
communities through companies such as AOL in the early 1980s. First invented in 1969 as part of a US government-funded project
run by the Advanced Research Projects Agency (ARPA), the Internet was confined to use by universities, government agencies, and researchers for many years. However, the
complicated way of using the Internet made it unsuitable for mainstream use in business.
One exception to this was the ability to expand electronic mail outside the confines of a single organization. While the first email
messages on the Internet were sent in the early 1970s, companies who wanted to expand their LAN-based email started hooking up
to the Internet in the 1980s. Companies began connecting their internal networks to the Internet to communicate between their
employees and employees at other companies. With these early Internet connections, the computer truly began to evolve from a
computational device to a communications device.
In 1989, Tim Berners-Lee of the CERN laboratory developed an application – a browser – to give a simpler and more
intuitive graphical user interface to existing technologies such as the clickable link, making the ability to share and locate vast amounts
of information easily available to the masses in addition to researchers (CERN, n.d.). This is what we now call the World Wide Web. This
invention became the launching point of the growth of the Internet as a way for businesses to share information about themselves
and for consumers to find them easily.
As web browsers and Internet connections became the norm, companies worldwide rushed to grab domain names and create
websites. Even individuals would create personal websites to post pictures to share with friends and family. For the first time, users
could create content on their own and join the global economy.
In 1991, the National Science Foundation, which governed how the Internet was used, lifted restrictions on its commercial use.
These policy changes ushered in new companies establishing new e-commerce industries such as eBay and Amazon.com. The fast
expansion of the digital marketplace led to the dot-com boom through the late 1990s and then the dot-com bust in 2000. An
important outcome of the Internet boom period was that thousands of miles of Internet connections were laid around the world
during that time. The world became truly “wired” heading into the new millennium, ushering in the era of globalization, which we
will discuss in Chapter 11.

Figure 1.3.6 : Registered trademark of Amazon Technologies, Inc.


The digital world also became a more dangerous place as more companies and users were connected globally. Once slowly
propagated through the sharing of computer disks, computer viruses and worms could now grow with tremendous speed via the
Internet and the proliferation of new hardware devices for personal or home use. Operating and application software had to evolve
to defend against this threat, and a whole new industry of computer and Internet security arose as the threats kept increasing and
became more sophisticated. We will study information security in Chapter 6.

Web 2.0 and e-Commerce


Perhaps you noticed that in the Web 1.0 period, users and companies could create content but could not interact with each other
directly on a website. Despite the dot-com bust, technologies continued to evolve due to customers' increased need to
personalize their experience and engage directly with businesses.
Websites became interactive; instead of just visiting a site to find out about a business and purchase its products, customers can
now interact with companies directly and, most profoundly, customers can also interact with each other to share their experiences
without undue influence from companies, or even buy things directly from each other. This new type of interactive website, where
users did not have to know how to create a web page or do any programming to put information online, became known as Web 2.0.
Web 2.0 is exemplified by blogging, social networking, bartering, purchasing, and posting interactive comments on many websites.
This new web-2.0 world, in which online interaction became expected, had a big impact on many businesses and even whole
industries. Some industries, such as bookstores, found themselves relegated to niche status. Others, such as video rental chains and
travel agencies, began going out of business as online technologies replaced them. This process of technology replacing an
intermediary in a transaction is called disintermediation. One such successful company is Amazon, which has disintermediated
intermediaries in many industries and is one of the leading e-commerce websites.
As the world became more connected, new questions arose. Should access to the Internet be considered a right? What is legal to
copy or share on the internet? How can companies keep data (held about or given by users) private? Are there laws that need to be
updated or created to protect people’s data, including children’s data? Policymakers are still catching up with technology advances
even though many laws have been updated or created. Ethical issues surrounding information systems will be covered in Chapter
12.

The Post PC and Web 2.0 World


After thirty years as the primary computing device used in most businesses, sales of the PC are now beginning to decline as tablets
and smartphones are taking off. Just as the mainframe before it, the PC will continue to play a key role in business but will no
longer be the primary way people interact or do business. The limited storage and processing power of these mobile devices is
being offset by a move to “cloud” computing, which allows for storage, sharing, and backup of the information on a massive scale.
Users continue to push for faster and smaller computing devices. Historically, we saw that microcomputers displaced mainframes
and laptops (almost) displaced desktops. We now see that smartphones and tablets are displacing laptops in many situations. Will
hardware vendors hit the physical limitations due to the small size of devices? Is this the beginning of a new era of invention of
new computing paradigms such as Quantum computing, a trendy topic that we will cover in more detail in Chapter 13?
Tons of content has been generated by the users in the web 2.0 world, and businesses have been monetizing this user-generated
content without sharing any of their profits. How will the role of users change in this new world? Will the users want a share of this
profit? Will the users finally have ownership of their own data? What new knowledge can be created from the massive user-
generated and business-generated content?
Below is a chart showing the evolution of some of the advances in information systems to date.
The Eras of Business Computing

Era | Hardware | Operating System | Applications
Early years (1930s) | Model K, HP's test equipment, Calculator, UNIVAC 1 | — | The first computer program was written to run and store on a computer.
Mainframe (1970s) | Terminals connected to a mainframe computer, IBM System 360 | Time-sharing (TSO) on MVS | Custom-written MRP software
PC (mid-1980s) | IBM PC or compatible, sometimes connected to the mainframe computer via an expansion card; Intel microprocessor | MS-DOS | WordPerfect, Lotus 1-2-3
Client-Server (the late 80s to early 90s) | IBM PC "clone" on a Novell Network; Apple's Apple-1 | Windows for Workgroups, MacOS | Microsoft Word, Microsoft Excel, email
World Wide Web (the mid-90s to early 2000s) | IBM PC "clone" connected to the company intranet | Windows XP, macOS | Microsoft Office, Internet Explorer
Web 2.0 (mid-2000s to present) | Laptop connected to company Wi-Fi; smartphones | Windows 7, Linux, macOS | Microsoft Office, Firefox, social media platforms, blogging, search, texting
Post-Web 2.0 (today and beyond) | Apple iPad, robots, Fitbit, watch, Kindle, Nest, cars, drones | iOS, Android, Windows 10 | Mobile-friendly websites, more mobile apps, eCommerce

We seem to be at a tipping point where many technological advances have come of age. The miniaturization of devices such as
cameras and sensors, faster and smaller processors, and software advances in fields such as artificial intelligence, combined with the
availability of massive data, have begun to bring in new types of computing devices, small and big, that can do things that were
unheard of in the last four decades. A robot the size of a fly is already in limited use, and a driverless car is in the 'test-drive' phase in a
few cities, among other new advances that meet customers' needs today and anticipate new ones for the future. "Where do we go
from here?" is a question in which you are now part of the conversation as you go through the rest of the chapters. We may not know
exactly what the future will look like, but we can reasonably assume that information systems will touch almost every aspect of our
personal lives, work lives, and local and global social norms. Are you prepared to be an even more sophisticated user? Are you preparing
yourself to be competitive in your chosen field? Are there new norms to be embraced?

References
Timeline of Computer History: Computer History Museum. (n.d.). Retrieved July 10, 2020, from
https://www.computerhistory.org/timeline/computers/
CERN. (n.d.). The Birth of the Web. Retrieved from http://public.web.cern.ch/public/en/about/web-en.html

This page titled 1.3: The Role of Information Systems is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

1.4: Can Information Systems Bring Competitive Advantage?
It has long been assumed that the implementation of information systems will, in and of itself, bring a business competitive advantage, especially in cost savings or improved efficiency. The more a company invests in information systems, the more efficiencies management expects.
In 2003, Nicholas Carr wrote an article, “IT Doesn’t Matter,” in the Harvard Business Review (Carr, 2003) and raised the idea that
information technology has become just a commodity. Instead of viewing technology as an investment that will make a company
stand out, it should be seen as something like electricity: It should be managed to reduce costs, ensure that it is always running, and
be as risk-free as possible.
This article was both hailed and scorned at the time. While it is true that IT should be managed to reduce costs and improve efficiencies, history has shown us that many companies have leveraged information systems to build wildly successful businesses, such as Amazon, Apple, and Walmart. Chapter 7 will discuss competitive advantage in great detail.

 Sidebar: Walmart Uses Information Systems to Become the World’s Leading Retailer
Walmart is the world's largest retailer, with gross revenue of $534.6 billion and a market capitalization of $366.7 billion in the fiscal year that ended on January 31, 2020 (source: Yahoo Finance on 7/13/2020). Walmart currently has approximately 11,500 stores and e-commerce websites in 27 countries, serving nearly 265 million customers every week worldwide (Wal-Mart, 2020). Walmart's
rise to prominence is due in no small part to its use of information systems.

Figure 1.4.1 : Registered Trademark of Walmart, Inc.


One of the keys to this success was the implementation of Retail Link, a supply-chain management system. This system,
unique when initially implemented in the mid-1980s, allowed Walmart’s suppliers to directly access the inventory levels and
sales information of their products at any of Walmart’s more than ten thousand stores. Using Retail Link, suppliers can analyze
how well their products are selling at one or more Walmart stores, with a range of reporting options. Further, Walmart requires
the suppliers to use Retail Link to manage their own inventory levels. If a supplier feels that their products are selling out too
quickly, they can use Retail Link to petition Walmart to raise their inventory levels. This has essentially allowed Walmart to
“hire” thousands of product managers, all of whom have a vested interest in managing products. This revolutionary approach
to managing inventory has allowed Walmart to continue driving prices down and responding to market forces quickly.
However, Amazon's fast rise as the leader in eCommerce has given Walmart a formidable new competitor. Walmart continues to innovate with information technology combined with its physical stores to compete with Amazon, locking the two in a fierce battle to retain the title of largest retailer. Because of Walmart's tremendous market presence, any technology that Walmart requires its suppliers to implement quickly becomes a business standard.

References
Carr, Nicholas (2003). Retrieved from https://hbr.org/2003/05/it-doesnt-matter
Wal-Mart Stores Inc. (2020). Retrieved July 13, 2020, from www.annualreports.com/Compan...art-stores-inc
Yahoo Finance - Stock Market Live, Quotes, Business & Finance News. (2020). Retrieved July 13, 2020, from
https://finance.yahoo.com/

This page titled 1.4: Can Information Systems Bring Competitive Advantage? is shared under a CC BY 3.0 license and was authored, remixed,
and/or curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative
(OERI)) .

1.5: Summary
Summary
In this chapter, you have been introduced to the concept of information systems. We have reviewed several definitions, focusing on
information systems components: technology (hardware, software, data, networking communication), people, and process. We have
reviewed the evolution of the technology and how the business use of information systems has evolved over the years, from the use
of large mainframe computers for number crunching, through the introduction of the PC and networks for business applications, all
the way to the era of mobile computing for both business and personal applications. During each of these phases, innovations in
technology allowed businesses and individuals to integrate technology more deeply.
It is a foregone conclusion that almost all, if not all, companies are using information systems. Yet, history has also shown us that some companies are very successful and some are failures. By the time you complete this book, you should understand the important role of IS in helping improve efficiencies and know how to leverage IS to develop sustained competitive advantages for any company or for your own career.

This page titled 1.5: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

1.6: Study Questions
Study Questions
1. What are the components that make up an information system?
2. List three examples of information system hardware.
3. Identify which component of information systems includes Microsoft Windows.
4. What is application software?
5. Describe the different roles people play in information systems.
6. Describe what a process is and its purpose.
7. What was invented first, the personal computer or the Internet?
8. Which came first, the Internet or the World Wide Web?
9. What helped make the Internet usable for the masses, not just researchers?
10. What does it mean to say we are in a “post-PC and Web 2.0 world”?
11. What is Carr’s main argument about information technology? Is it true then, and is it true now?

Exercises
1. Suppose you had to explain to a member of your family or one of your closest friends the concept of an information system.
How would you define it? Write a one-paragraph description in your own words that you feel would best describe an
information system to your friends or family.
2. Of the six components of an information system (hardware, software, data, network communications, people, process), which
do you think is the most important to a business organization's success? Write a one-paragraph answer to this question that
includes an example from your personal experience to support your answer.
3. We all interact with various information systems every day: at the grocery store, at work, at school, even in our cars (at least
some of us). Make a list of the different information systems you interact with every day. See if you can identify the
technologies, people, and processes involved in making these systems work.
4. Do you agree that we are in a post-Web 2.0 stage in the evolution of information systems? Some people argue that we will
always need the personal computer, but it will not be the primary device used to manipulate information. Others think that a
whole new era of mobile, biological, or even neurological computing is coming. Do some original research and make your
prediction about what business computing will look like in the next three to five years.
5. The Walmart case study introduced you to how that company used information systems to become the world’s leading retailer.
Walmart has continued to innovate and is still looked to as a leader in the use of technology. Do some original research and
write a one-page report detailing a new technology that Walmart has recently implemented or is pioneering to stay competitive.

This page titled 1.6: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

2: Hardware
 Learning Objectives

Upon successful completion of this chapter, you will be able to:


Describe information systems hardware.
Identify the primary components of a computer and the functions they perform.
Explain the effect of the commoditization of the personal computer.

In this chapter, we discuss hardware and how it works. We will look at different types of computing devices and computer parts, learn how they interact, and examine the effect of the commoditization of these devices.
2.1: Introduction
2.2: Tour of a Digital Device
2.3: Sidebar- Moore’s Law
2.4: Removable Media
2.5: Other Computing Devices
2.6: Summary
2.7: Study Questions

This page titled 2: Hardware is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal Desai-
Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

2.1: Introduction
Information systems are made up of six components: hardware, software, data, communication, people, and process. In this chapter, we will review hardware. Hardware is the tangible, physical part of a computing device that the device needs to function. We will review the hardware components of information systems, learn how they work, and discuss some of the current trends.
As stated above, computer hardware encompasses digital devices that you can physically touch. This includes devices such as the
following:
desktop computers
laptop computers
mobile phones
smartphones
smartwatches
tablet computers
e-readers
storage devices, such as flash drives
input devices, such as keyboards, mice, and scanners
output devices, such as 3D printers and speakers
Besides these more traditional computer hardware devices, many items that were once not considered digital devices are now
becoming computerized. Digital technologies are now being integrated into many everyday objects, so the days of a device being
labeled categorically as computer hardware may be ending. Examples of these types of digital devices include automobiles,
refrigerators, and even soft-drink dispensers. In this chapter, we will also explore digital devices, beginning with defining the term.

Digital Devices
A digital device is any equipment containing a computer or microcontroller; included in these devices are smartphones, watches,
and tablets. A digital device processes electronic signals that represent either a one ("on") or a zero ("off"). The presence of an electronic signal represents the "on" state; the absence of an electronic signal represents the "off" state. Each one or zero is referred to as a bit (a contraction of binary digit); a group of eight bits is a byte. The first personal computers could process 8 bits of data at once; modern PCs can now process 128 bits of data at a time. The more bits a device can process at once, the faster it can process information.

Sidebar: Understanding Binary


As you know, the system of numbering we are most familiar with is base-ten numbering. In base-ten numbering, each column in
the number represents a power of ten, with the far-right column representing 10^0 (ones), the next column from the right
representing 10^1 (tens), then 10^2 (hundreds), then 10^3 (thousands), etc. For example, the number 1010 in decimal represents: (1
x 1000) + (0 x 100) + (1 x 10) + (0 x 1).
Computers use the base-two numbering system, also known as binary. In this system, each column in the number represents a power of two, with the far-right column representing 2^0 (ones), the next column from the right representing 2^1 (twos), then 2^2 (fours), then 2^3 (eights), etc. For example, the number 1010 in binary represents (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1). In base ten, this evaluates to 10.
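To make the conversion concrete, here is a minimal Python sketch (our own illustration, not part of the original text) that applies the powers-of-two rule described above to the binary string 1010; Python's built-in int() conversion is shown only to verify the hand calculation.

```python
# A minimal sketch of the base-two conversion described above.
binary_number = "1010"

decimal_value = 0
for digit in binary_number:
    # Shift the running total one column to the left (multiply by 2),
    # then add the current bit (0 or 1).
    decimal_value = decimal_value * 2 + int(digit)

print(decimal_value)            # prints 10, matching the hand calculation
print(int(binary_number, 2))    # Python's built-in conversion agrees: 10
```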
As digital devices' capacities grew, new terms were developed to identify the capacities of processors, memory, and disk storage space. Prefixes were applied to the word byte to represent different orders of magnitude. Since these are digital specifications, the prefixes were originally meant to represent multiples of 1024 (which is 2^10) but have more recently been rounded to mean multiples of 1000.
The following table contains a listing of Binary prefixes:
Binary Prefixes and Examples
Prefix Represents Example

kilo one thousand kilobyte=one thousand bytes

mega one million megabyte=one million bytes


Giga one billion gigabyte=one billion bytes

tera one trillion terabyte=one trillion bytes

Peta one quadrillion petabyte=one quadrillion bytes

exa one quintillion exabyte=one quintillion bytes

Zetta one sextillion zettabytes=one sextillion bytes

yotta one septillion yottabytes=one septillion bytes
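As a worked illustration of the 1000-versus-1024 distinction mentioned above, the short sketch below (our own example, not from the original text) computes how large one "gigabyte" is under each interpretation.

```python
# Decimal (powers of 1000) vs. binary (powers of 1024) interpretations of "giga".
decimal_gigabyte = 1000 ** 3   # 1,000,000,000 bytes
binary_gigabyte = 1024 ** 3    # 1,073,741,824 bytes (sometimes called a "gibibyte")

difference = binary_gigabyte - decimal_gigabyte
print(decimal_gigabyte, binary_gigabyte, difference)
# The roughly 7% gap explains why a "500 GB" drive shows up as about 465 GB in some tools.
```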

This page titled 2.1: Introduction is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

2.2: Tour of a Digital Device
We will begin with the personal computer, which consists of the following basic components:
Motherboard (circuit board)
Central Processing Unit (CPU)
Random Access Memory (RAM)
Video Card
Power Supply
Hard Drive (HDD)
Solid-State Drive (SSD)
Optical Drive (DVD/CD drive)
Card Reader (SD/SDHC, CF, etc.)
It also turns out that almost every digital device uses the same set of components, so examining the personal computer will give us insight into the structure of various digital devices. So let's take a "tour" of a personal computer and see what makes it function.

Processing Data: The CPU


As stated in the previous section, most computing devices have a similar architecture. The core of this architecture is the central
processing unit or CPU. The CPU can be thought of as the “brain” of the device or main processor. Back in the day, the CPU was
made up of hundreds of wires that carried information.

Figure 2.2.1 : Personal Computer by Green Chameleon on Unsplash is licensed under CC BY-SA 2.0
These wires carried out the commands sent to it by the software and returned results to be acted upon. The earliest CPUs were large
circuit boards with limited functionality. Today, a CPU is generally on one chip and can perform a large variety of functions. There
are two primary manufacturers of CPUs for personal computers: Intel and Advanced Micro Devices (AMD).
The speed ("clock speed") of a CPU regulates how quickly it executes instructions and synchronizes the various computer components. The faster the clock, the more instructions the CPU can execute per second. The clock is measured in hertz. A hertz is defined as one cycle per second. Using the binary prefixes mentioned above, we can see that a kilohertz (abbreviated kHz) is one thousand cycles per second, a megahertz (MHz) is one million cycles per second, and a gigahertz (GHz) is one billion cycles per second. The CPU's processing power increases at an amazing rate (see the sidebar about Moore's Law). Besides a faster clock speed, many CPU chips now contain multiple processors per chip.
A multi-core processor is a single integrated circuit that contains multiple processing units, commonly known as cores. A multi-core processor can read and run instructions on several cores at the same time, increasing speed. A processor with two cores is known as dual-core, and one with four cores as quad-core; each additional core increases the processing power of the computer by providing the capability of multiple CPUs.
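To see how clock speed and core count combine, here is a rough back-of-the-envelope sketch (our own illustration with a hypothetical CPU; real processors execute a varying number of instructions per cycle, so treat the result as an upper-bound estimate only).

```python
# Rough upper-bound estimate of instructions per second for a hypothetical CPU.
clock_speed_ghz = 3.0        # 3 GHz = 3 billion cycles per second per core
cores = 4                    # a quad-core processor
instructions_per_cycle = 1   # simplifying assumption; real CPUs vary widely

instructions_per_second = clock_speed_ghz * 1e9 * cores * instructions_per_cycle
print(f"{instructions_per_second:,.0f} instructions per second")  # 12,000,000,000
```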
When computers run with multiple cores, additional heat is generated; this is why manufacturers build fans on top of the CPU. Macs have a built-in fail-safe: the computer will shut itself down to avoid damage when the temperature rises too rapidly. Smartphones are also susceptible to high temperatures. As our devices get smaller, more parts are packed into a compact area, and in turn, devices generate more heat. Running many apps on your phone simultaneously is another way to increase the phone's heat; this is why it is important to close applications after use.

Figure 2.2.2 : (a) Bottom view of an Intel central processing unit Core i7 Skylake type core, model 6700K. LGA 1151 socket, 14
nm process, core frequency 4.00 GHz. Manufactured in Vietnam. Image by Eric Gaba is licensed under CC BY-SA. (b) Top view
of an Intel central processing unit Core i7 Skylake type core, model 6700K. LGA 1151 socket, 14 nm process, core frequency 4.00
GHz. Manufactured in Vietnam. Image by Eric Gaba is licensed under CC BY-SA
A graphics processing unit (GPU) is an electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer for output. Devices that use GPUs include personal computers, smartphones, and game consoles. Nvidia is one of the powerhouse companies that manufacture HD graphics cards. Nvidia has been a leader in GPU chips; one of its most popular products is the Nvidia GeForce line, which is integrated into laptops, PCs, and virtual reality systems. Nvidia has also expanded its GPU chip market with other product lines, such as Tesla, Quadro, and GRID.

Figure 2.2.3 : NVIDIA GeForce 6800 Ultra & NVIDIA GeForce 7950 GX2. Image by Hyins is licensed under CC PD

This page titled 2.2: Tour of a Digital Device is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

2.3: Sidebar- Moore’s Law
Technology is advancing, and computers are getting faster every year. Consumers often are unsure of buying today’s smartphone,
tablet, or PC model because a more advanced model will be out shortly, leaving them with regret that it won’t be the most
advanced anymore. Gordon Moore, the co-founder of Fairchild and one of Intel's founders, recognized this phenomenon in 1965,
noting that microprocessor transistor counts had been doubling every year. His insight eventually evolved into Moore’s Law, which
states that the number of transistors on a chip will double every two years. (Moore, 1965). This has been generalized into the
concept that computing power will double every two years for the same price point. Another way of looking at this is to think that
the same computing power price will be cut in half every two years. Though many have predicted its demise, Moore’s Law has
held for over fifty-five years. Technology is changing with innovation in design and AI support. Experts now believe,

“The name of the game now is the technology may not be traditional silicon transistors;
now it may be quantum computing, which is a different structure and nano-biotechnology,
which consists of proteins and enzymes that are organic."
Therefore, it is likely that the emphasis of Moore's Law will change in the next five years. Experts believe that Moore's Law will not be able to go on indefinitely because of physical limits on continually shrinking the size of components on a chip. Currently, the billions of transistors on chips are not visible to the naked eye. It is thought that if Moore's Law were to continue through 2050, engineers would have to design transistors from components that are smaller than a single atom of hydrogen.

Figure 2.3.1 : Moore's Law over 120 years. Image by Jurvetson is licensed under CC BY-SA 2.0
This figure illustrates Moore's Law: the empirical observation that the number of transistors in a dense integrated circuit doubles about every two years.
There will come a point, someday, when we hit the apex of processing technology, as the challenge of continuing to shrink circuits at an exponential pace becomes more and more expensive. Moore's Law may then be superseded by new technological innovations. Engineers will continue to strive for new ways to increase performance (Moore, 1965).
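The doubling rule is easy to turn into numbers. The sketch below (our own illustration, starting from roughly 2,300 transistors, a commonly cited count for an early 1971 microprocessor) projects transistor counts forward assuming one doubling every two years, which is all Moore's Law claims.

```python
# Project transistor counts under Moore's Law: a doubling every two years.
start_year = 1971
start_transistors = 2_300          # commonly cited count for an early microprocessor

for year in range(start_year, 2031, 10):
    doublings = (year - start_year) / 2
    count = start_transistors * 2 ** doublings
    print(f"{year}: ~{count:,.0f} transistors")
```

Running it shows the count reaching into the tens of billions by the 2020s, which is the right order of magnitude for today's largest chips.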

Motherboard
The motherboard is the main circuit board of the computer; it acts as a hub that connects the computer's inputs and components. It also controls the power received by the hard drive and video card. The motherboard is a crucial component, housing the central processing unit (CPU), memory, and input and output connectors. The CPU, memory, and storage components, among other things, all connect to the motherboard. Motherboards come in different shapes and sizes; the prices of motherboards also vary depending on complexity. Complexity depends on how compact or expandable the computer is designed to be. Most modern motherboards have many integrated components, such as video and sound processing, which previously required separate components.

Figure 2.3.2 : Computer Motherboard by MH Rhee is licensed under CC BY-SA 2.0

Random-Access Memory
When a computer starts up, it begins to load information from the hard disk into its working memory. Your computer's short-term
memory is called random-access memory (RAM), which transfers data much faster than the hard disk. Any program that you are
running on the computer is loaded into RAM for processing. RAM is a high-speed component that stores all the information the
computer needs for current and near-future use. Accessing data in RAM is much quicker than retrieving it from the hard drive. For a computer to work effectively, a minimum amount of RAM must be installed. In most cases, adding more RAM will allow the computer to run faster, because increasing the amount of RAM reduces the number of times the slower hard drive must be accessed. Another characteristic of RAM is that it is volatile or temporary memory. This means that it can store data as
long as it receives power; when the computer is turned off, any data stored in RAM is lost. This is why we need hard drives and
SSDs that hold the information when we shut off the system.
RAM is generally installed in a personal computer by using a dual-inline memory module (DIMM). The type of DIMM accepted
into a computer is dependent upon the motherboard. As described by Moore’s Law, the amount of memory and speeds of DIMMs
have increased dramatically over the years.

Hard Disk and Hard Drive


While the RAM is used as working memory, the computer also needs a place to store data for the longer term. Most of today's personal computers use a hard disk for long-term data storage. A hard disk is a disk coated with magnetic material; a hard disk drive (HDD) is the device that stores data on and retrieves data from a hard disk. The disk is where data is stored when the computer is turned off and retrieved from when the computer is turned on. The HDD provides lots of storage at an inexpensive cost compared to the SSD.

Solid-State Drives
The solid-state drive (SSD) is a new generation of storage device that is replacing hard disks. SSDs are much faster, and they utilize flash-based memory: semiconductor chips, not magnetic media, are used to store the data. An embedded processor (or brain), called a controller, reads and writes the data and is an important factor in determining the drive's read and write speed. SSDs are decreasing in price, but they are still more expensive than hard drives. SSDs have no moving parts, unlike HDDs, which suffer from the wear and tear of spinning platters and eventually break down.

Comparison of SSD vs. HDD

Comparison of Solid State Drives and Hard Disk Drives
Attribute | SSD (Solid State Drive) | HDD (Hard Disk Drive)
Power Draw / Battery Life | Less power draw, averages 2-3 watts, resulting in a 30+ minute battery boost. | More power draw, averages 6-7 watts, and therefore uses more battery.
Cost | Expensive, roughly $0.20 per gigabyte (based on buying a 1TB drive). | Only around $0.03 per gigabyte, very cheap (based on buying a 4TB model).
Capacity | Typically not larger than 1TB for notebook-size drives; 4TB max for desktops. | Typically around 500GB and 2TB maximum for notebook-size drives; 10TB max for desktops.
Operating System Boot-Time | Around 10-13 seconds average bootup time. | Around 30-40 seconds average bootup time.
Noise | There are no moving parts and, as such, no sound. | Audible clicks and spinning can be heard.
Vibration | No vibration, as there are no moving parts. | The spinning of the platters can sometimes result in vibration.
Heat Produced | Lower power draw and no moving parts, so little heat is produced. | Doesn't produce much heat, but will have a measurable amount more heat than an SSD due to moving parts and higher power draw.
Failure Rate | Mean time between failure rate of 2.0 million hours. | Mean time between failure rate of 1.5 million hours.
File Copy / Write Speed | Generally above 200 MB/s and up to 550 MB/s for cutting-edge drives. | The range can be anywhere from 50 - 120 MB/s.
Encryption | Full Disk Encryption (FDE) supported on some models. | Full Disk Encryption (FDE) supported on some models.
File Opening Speed | Up to 30% faster than HDD. | Slower than SSD.
Magnetism Affected? | An SSD is safe from any effects of magnetism. | Magnets can erase data.
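Using the sequential write speeds quoted in the table, a quick sketch (our own illustration; actual drive speeds vary by model) shows why copying a large file feels so different on the two drive types.

```python
# Time to copy a 2 GB file at write speeds within the ranges quoted in the table.
file_size_mb = 2_000          # a 2 GB file, measured in megabytes

ssd_speed_mb_per_s = 500      # within the "200 MB/s and up to 550 MB/s" SSD range
hdd_speed_mb_per_s = 100      # within the "50 - 120 MB/s" HDD range

print(f"SSD: {file_size_mb / ssd_speed_mb_per_s:.0f} seconds")   # ~4 seconds
print(f"HDD: {file_size_mb / hdd_speed_mb_per_s:.0f} seconds")   # ~20 seconds
```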

Reference
Moore, Gordon E. (1965). "Cramming more components onto integrated circuits" (PDF). Electronics Magazine. p. 4. Retrieved
2012-10-18.

This page titled 2.3: Sidebar- Moore’s Law is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

2.4: Removable Media
Removable Media
Removable storage has changed greatly over the four decades of PCs. CD-ROM drives replaced floppy disks, and they in turn were replaced by USB (Universal Serial Bus) drives. USB drives are now standard on all PCs, with capacities approaching 512 gigabytes. Speeds have also increased, from 480 megabits per second in USB 2.0 to 10 gigabits per second in USB 3.1. USB devices also use EEPROM technology. Since USB is a cross-platform technology, it is supported by most operating systems. This helps it connect to other devices such as printers, TVs, and external hard drives, and the list goes on. "There are now by one count six billion USB devices in the world." (Johnson, 2019)
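The jump from USB 2.0 to USB 3.1 is easier to appreciate with a quick calculation. The sketch below (our own illustration) converts the quoted link speeds from bits to bytes and estimates the best-case time to move a 10 GB file; real transfers are slower because of protocol overhead and drive limits.

```python
# Best-case time to transfer a 10 GB file over USB 2.0 vs. USB 3.1.
file_size_gb = 10
file_size_megabits = file_size_gb * 1000 * 8       # gigabytes -> megabits

usb2_speed_mbps = 480        # USB 2.0: 480 megabits per second
usb31_speed_mbps = 10_000    # USB 3.1: 10 gigabits per second

print(f"USB 2.0: {file_size_megabits / usb2_speed_mbps:.0f} seconds")   # ~167 s
print(f"USB 3.1: {file_size_megabits / usb31_speed_mbps:.0f} seconds")  # ~8 s
```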

Figure 2.4.1 : USB Connections. Image by Bruno /Germany is licensed under CC BY-SA 2.0

Network Connection
When personal computers were first developed, they were stand-alone units, which meant that data was brought into the computer or removed from the computer via removable media, such as the floppy disk. Engineers as early as 1965 saw merit in being able to connect computers and share information among them. The term used was networking; as the connections grew to span multiple users and multiple networks, it became known as inter-networking, whose abbreviated form is now called the internet. In the mid-1980s, organizations began to see the value in connecting computers together via a digital network. Because of this, personal computers needed the ability to connect to these networks. Initially, this was done by adding an expansion card to the computer that enabled the network connection. By the mid-1990s, network ports were standard on most personal computers.
For a personal computer to be useful, it must have channels for receiving input from the user and channels for delivering output to the user. These input and output devices connect to the computer via various connection ports, which generally are part of the motherboard and are accessible outside the computer case. In early personal computers, specific ports were designed for each type of device. The configuration of these ports has evolved over the years, becoming more and more standardized over time. Today, almost all devices plug into a computer through the use of a USB port. This port type, first introduced in 1996, has increased in its capabilities, both in its data transfer rate and the power supplied.

Bluetooth
Besides USB, some input and output devices connect to the computer via a wireless-technology standard called Bluetooth.
Bluetooth was first invented in the 1990s and exchanges data over short distances using radio waves.

Figure 2.4.2 : Bluetooth by Ranjith Alingal on Unsplash is licensed under CC BY-SA 2.0
Bluetooth generally has a range of 100 to 150 feet. It was not until 1999 that it reached its first general public users. Two devices communicating with Bluetooth must both have a Bluetooth communication chip installed. Common Bluetooth uses include pairing your phone to your car, as well as computer keyboards, speakers, headsets, and home security devices, to name just a few.

Input Devices
All personal computers need components that allow the user to input data. Early computers used simply a keyboard to allow the
user to enter data or select an item from a menu to run a program. With the advent of the graphical user interface, the mouse
became a standard component of a computer. These two components are still the primary input devices to a personal computer,
though variations of each have been introduced with varying levels of success over the years. For example, many new devices now
use a touch screen as the primary way of entering data. Besides the keyboard and mouse, additional input devices are becoming
more common. Scanners allow users to input documents into a computer, either as images or as text. Microphones can be used to
record audio or give voice commands. Webcams and other video cameras can be used to record video or participate in a video chat
session. The list continues to grow with devices such as joysticks used for gaming, digital cameras, and touch screens. Smartwatches are wearable compact computers worn on the wrist. A smartwatch's functionality is similar to a smartphone's, offering mobile apps and Wi-Fi/Bluetooth connectivity. Specialized watches for health and sports enthusiasts have also emerged, offering counts of steps taken, heart rate, and blood pressure monitoring; a popular brand is Fitbit.

Figure 2.4.3 : (a) Barcode scanner by PublicDomainPictures from Pixabay is licensed under CC BY-SA 2.0 (b) Fitbit. Image by
Andres Urena on Unsplash is licensed under CC BY-SA 2.0 (c) Smartphone. Image by Selwyn van Haaren on Unsplash is licensed
under CC BY-SA 2.0.

Output Devices
Output devices are essential as well. The most obvious output device is a display, visually representing the state of the computer. In
some cases, a personal computer can support multiple displays or be connected to larger-format displays such as a projector or
large-screen television. Besides displays, other output devices include speakers for audio output and printers for printed output. 3D printers have changed the way we build toys, tools, homes, and even body parts. The process that differentiates 3D printing from regular printing is called additive manufacturing.

Figure 2.4.4 : 3D Printer. Image by Rob Wingate on Unsplash CC BY-SA 2.0
Additive manufacturing breaks an object down into layers and then builds it up layer by layer, making three-dimensional objects. The most popular material used is plastic, but other materials can be used, such as gold and bio-material to make human parts such as a nose or ear. 3D printers have proven themselves in many different industries and offer an inexpensive route for prototyping.

 Sidebar: What Hardware Components Contribute to the Speed of My Computer?

A computer's speed is determined by many elements, some related to hardware and some related to software. In hardware,
speed is improved by giving the electrons shorter distances to traverse to complete a circuit. Since the first CPU was created in
the early 1970s, engineers have constantly worked to figure out how to shrink these circuits and put more and more circuits
onto the same chip. And this work has paid off – the speed of computing devices has been continuously improving ever since.
The hardware components that contribute to a personal computer's speed are the CPU, the motherboard, RAM, and the hard
disk. In most cases, these items can be replaced with newer, faster components. In the case of RAM, simply adding more RAM
can also speed up the computer.
The table below shows how each of these components contributes to the speed of a computer. Besides upgrading hardware,
many changes can be made to the software to enhance the computer's speed.
How Components Impact the Speed of a Computer
Component | Speed measured by | Units | Description
CPU | Clock speed | GHz | The time it takes to complete a circuit. Memory also affects computer speed, since the CPU moves information to and from memory while running applications.
Motherboard | Bus speed | MHz | How much data can move across the bus simultaneously.
RAM | Data transfer rate | MB/s | The time it takes for data to be transferred from the memory to the system.
Hard Disk | Access time | ms | The time it takes before the disk can transfer data.
Hard Disk | Data transfer rate | MBit/s | The time it takes for data to be transferred from the disk to the system.
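To tie the table's units together, here is a small sketch (our own illustration with made-up but plausible numbers) that estimates how long a disk takes to deliver a file: the access time is paid once, and then the transfer rate governs the rest.

```python
# Estimated time for a disk to deliver a 500 MB file (illustrative numbers only).
access_time_ms = 10          # time before the disk can begin transferring (ms)
transfer_rate_mb_s = 120     # sustained data transfer rate (MB/s)
file_size_mb = 500

total_seconds = access_time_ms / 1000 + file_size_mb / transfer_rate_mb_s
print(f"~{total_seconds:.2f} seconds")   # roughly 4.18 seconds
```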

Reference
Johnson, J. (2019). The unlikely origins of USB, the port that changed everything. FastCompany. Retrieved August 6, 2020, from
https://www.fastcompany.com/3060705/an-oral-history-of-the-usb

This page titled 2.4: Removable Media is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

2.5: Other Computing Devices
A personal computer is designed to be a general-purpose device. That is, it can be used to solve many different types of problems.
As the technologies of the personal computer have become more commonplace, many of the components have been integrated into
other devices that previously were purely mechanical. We have also seen an evolution in what defines a computer. Ever since the
invention of the personal computer, users have clamored for a way to carry them around. Here we will examine several types of
devices that represent the latest trends in personal computing.

Portable Computers
In 1983, Compaq Computer Corporation developed the first commercially successful portable personal computer. By today’s
standards, the Compaq PC was not very portable: weighing in at 28 pounds, this computer was portable only in the most literal
sense – it could be carried around. But this was no laptop; the computer was designed like a suitcase, to be lugged around and laid
on its side to be used. Besides portability, the Compaq was successful because it was fully compatible with the software being run
by the IBM PC, which was the standard for business.
In the years that followed, portable computing continued to improve, giving us laptop and notebook computers. The “luggable”
computer has given way to a much lighter clamshell computer that weighs from 4 to 6 pounds and runs on batteries. In fact, the
most recent advances in technology give us a new class of laptops that is quickly becoming the standard: these laptops are
extremely light and portable and use less power than their larger counterparts. The screens are larger, and the weight of some can
be less than three pounds.
The ACER SWIFT 7 is a good example of this. Its specification is:
CPU: Intel Core i7-7Y75
Graphics: Intel HD Graphics 615
RAM: 8GB
Screen: 14-inch Full HD
Storage: 256GB SSD
Weight: 1.179 kg (2.6 pounds)
This is simply amazing!
Finally, as more and more organizations and individuals are moving much of their computing to the Internet or cloud, laptops are
being developed that use “the cloud” for all of their data and application storage. These laptops are also extremely light because
they have no need for a hard disk at all! A good example of this type of laptop (sometimes called a netbook) is Samsung’s
Chromebook.

Smartphones
The first modern-day mobile phone was invented in 1973. Resembling a brick and weighing in at two pounds, it was priced out of
reach for most consumers at nearly four thousand dollars. Since then, mobile phones have become smaller and less expensive;
today, mobile phones are a modern convenience available to all levels of society. As mobile phones evolved, they became more like small handheld computers. These smartphones have many of the same characteristics as a personal computer, such as an
operating system and memory. The first smartphone was the IBM Simon, introduced in 1994.

Figure 2.5.1 : Smartphone. Image by Syaibatul Hamdi from Pixabay is licensed under CC BY-SA 2.0
In January of 2007, Apple introduced the iPhone. Its ease of use and intuitive interface made it an immediate success and solidified
the future of smartphones. Running on an operating system called iOS, the iPhone was really a small computer with a touch-screen
interface. In 2008, the first Android phone was released, with similar functionality.
Consider the following data regarding mobile computing :
There are 4.57 billion global mobile Internet users as of April 2020. (Statista, 2020)
It is expected by 2024, approximately 187.5 million U.S. users will have made at least one purchase via a web browser or
mobile app on their mobile device.(Clement, 2020)
In 2020, U.S. mobile retail revenues were expected to amount to 339.03 billion U.S. dollars.(Clement, 2019)
The average order value for online orders placed on Smartphones in the second quarter of 2019 is $86.47, while the average
order value for orders placed on Tablets is $96.88.(Clement, 2020)
As of 2020, there are 4.5 billion active social media users in the world; As of July 2019, there were an estimated 3.46 billion
actively using their mobile devices for social media-related activities. (Clement, 2020)
90 percent of the time spent on mobile devices is spent on apps. (Saccomani, 2019)
Mobile traffic is responsible for 51.9 percent of Internet traffic in the first quarter of 2020 — compared to 50.3 percent from
2017. (Clement, 2020)
While the total percentage of mobile traffic is more than desktop, engagement on the desktop is 46.51 percent in 2020. (Petrov,
2020)
In 2020, mobile traffic is at 51.3 percent and desktop engagement at 48.7 percent; over the years, users have been moving away from the desktop. (Broadband Search, 2020)

Tablet Computers
The tablet is larger than a smartphone and smaller than a notebook. A tablet uses a touch screen as its primary input and is small
enough and light enough to be easily transported. They generally have no keyboard and are self-contained inside a rectangular case.
Apple set the standard for tablet computing with the introduction of the iPad in 2010 using iOS, the operating system of the iPhone.
After the success of the iPad, computer manufacturers began to develop new tablets that utilized operating systems that were
designed for mobile devices, such as Android.
Global market share for tablets has changed since the early days of Apple’s dominance. Today the iPad has about 58.66%,
Samsung at 21.73%, and Amazon at 5.55% as of June 2020 (Statistica: E-commerce, 2020). The market popularity of the tablet has
been steadily declining in recent years.

Integrated Computing and Internet of Things (IoT)


Along with advances in computers themselves, computing technology is being integrated into many everyday products such as
security systems, thermostats, refrigerators, airplanes, cars, electronic appliances, lights in the household, alarm clocks, speaker
systems, vending machines, and commercial environments, just to name a few. Integrated computing technology has enhanced the
capabilities of these devices and adds capabilities into our everyday lives, thanks in part to IoT.
These three short videos highlight some of the latest ways computing technologies are being integrated into everyday products
through the Internet of Things (IoT):

The first video is about the Internet of Things: The Internet of Things [video file: 3:21 minutes] Closed Captioned
The second video shows how to update your home to a smart home: How to start a Smart Home in 2020 [video file: 2:01 minutes] Closed Captioned
The third video takes you for a drive in Tesla's autopilot mode: How Tesla's Auto-pilot Mode Works [video file: 10:04 minutes] Closed Captioned

The Commoditization of the Personal Computer


Since the late 1970’s the personal computer has gone from a technical marvel to part of our everyday lives; it has also become a
commodity. The PC has become a commodity in the sense that there is very little differentiation between computers, and the
primary factor that controls their sale is their price. Hundreds of manufacturers all over the world now create parts for personal
computers. Dozens of companies buy these parts and assemble the computers. As commodities, there are essentially no differences
between computers made by these different companies. Profit margins for personal computers are razor-thin, leading hardware
developers to find the lowest-cost manufacturing.
Apple has differentiated itself from the pack and achieved a competitive advantage in a challenging market. The cost of their
product is significantly higher, but you are buying a high-quality product and design. Apple designs both the hardware as well as
their software in-house. The hardware and software design of the Mac works seamlessly with its other products such as the iPhone
and iPad. The engineers at Apple are constantly updating software apps and updating hardware in order to remain a leader in the
PC world.
This is an interesting article on the newest innovation for smartphones (Stuff, 2020).
Smartphone shipments are forecast to grow from 304.7 million units in 2010 to an estimated 1.484 billion units in 2023 (Statista, 2019).

The Problem of Electronic Waste


Personal computers have become a common fixture in households since the early eighties. The average life span of many of these
devices is between three to five years. Recycling has become a hot subject for companies who want to be viewed by consumers as
Green companies. Consumers are demanding companies make a commitment to the environment. Worldwide, almost 45 million
tons of electronics were tossed out in 2016. Out of that staggering amount of electronic waste, only 20% has been recycled in some
shape or form. The remaining 80% made its way to a more environmentally damaging end at the landfill. Mobile phones are now
available in even the remotest parts of the world and, after a few years of use, they are discarded. Where does this electronic debris
end up?

Figure 2.5.2 : Electronic Waste. Image by George Hotelling from Flicker is licensed under CC BY-SA 2.0
Many developing nations accept this e-waste. Abroad, these recyclers re-purpose parts and extract minerals, gold, and cobalt from
these devices. These dumps have become health hazards for those living near them.
Proper safety practices are ignored, and whatever waste is not usable is dumped improperly. Consumers are trying to change this common practice by demanding that companies be transparent as to how they are addressing e-waste. Though many manufacturers have made strides in using materials that can be recycled, electronic waste is a problem with which we must all deal.
In 2006, the Green Electronics Council launched the Electronic Product Environmental Assessment Tool (EPEAT). This tool helps purchasers of electronics evaluate the effect of products on the environment. It gives a ranking of how products are doing at gold, silver, and bronze levels. When the program first began, three manufacturers of PCs and electronic equipment participated, with 60 products. In 2007, the U.S. government added EPEAT to the U.S. Federal Acquisition Regulations (FAR), requiring federal agencies to make purchases based on EPEAT status. In 2015, EPEAT added Imaging Equipment and Television categories. Today many large companies, such as Amazon and Apple, use EPEAT standards. EPEAT is widely accepted, with over 43 countries participating, and the number continues to grow.

References
Broadband Search (2020). Mobile Vs. Desktop Internet Usage. Retrieved September 1, 2020, from
https://www.broadbandsearch.net/blog/mobile-desktop-internet-usage-statistics
Statista (2019). Mobile share of website visits worldwide 2018. Retrieved September 1, 2020, from
https://www.statista.com/statistics/241462/global-mobile-phone-website-traffic-share
Clement, J. (2020, July 16). U.S. mobile buyers 2020. Retrieved September 1, 2020, from
https://www.statista.com/statistics/241471/number-of-mobile-buyers-in-the-us
Coldfusion. (2015). How Tesla’sAuto-pilot Mode Works. Youtube. [video file: 10:04 minutes] Closed Captioned
Edureka! (2020). The Internet of Things. Youtube. [video file: 3:21 minutes] Closed Captioned
Petrov, C. (2020, August 11). 55+ Mobile Vs. Desktop Usage Stats You Should Know in 2020. Retrieved September 1, 2020, from
https://techjury.net/blog/mobile-vs-desktop-usage/
Six Months later reviews. (2020). How to start a Smart Home in 2020. Youtube. [video file: 2:01 minutes] Closed Captioned
Statista (2020). Key Figures in E-Commerce. Retrieved September 1, 2020, from https://www.statista.com/search/?
q=+Key+Figures+of+E-Commerce&qKat=search
Striapunina, K. (2020, June 08). E-commerce revenue in China 2017-2024. Retrieved September 1, 2020, from
https://www.statista.com/forecasts/246041/e-commerce-revenue-forecast-in-china

This page titled 2.5: Other Computing Devices is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

2.6: Summary
Summary
Information systems hardware consists of the components of digital technology that you can touch. In this chapter, we focused on
the personal computer and its components. We reviewed the personal computer configuration because it has many of the same
attributes as other digital computing devices. A personal computer comprises many components, most importantly the CPU,
motherboard, RAM, hard disk, removable media, and input/output devices. We also reviewed some related devices and technologies, such as the tablet computer, Bluetooth, and the smartphone. In line with Moore's Law, these technologies have improved quickly over the years, making today's computing devices much more powerful than devices from just a few years ago. Finally, we discussed two of the
consequences of this evolution: the commoditization of the personal computer and the problem of electronic waste.

This page titled 2.6: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

2.7: Study Questions
Study Questions
1. Write your own description of what the term information systems hardware means.
2. Explain why Moore’s Law may not be a valid theory in the next five years.
3. Write a summary of one of the items linked to in the “Integrated Computing” section.
4. Explain why the personal computer is now considered a commodity.
5. What is the difference between a USB drive and a USB port, and what was the reason each was needed?
6. List the following in increasing order (slowest to fastest): megahertz, kilohertz, gigahertz.
7. What are the differences between HDD and SSD?
8. Why are desktops declining in popularity?
9. What is IoT?
10. Why is Apple a leader in the computer industry?

Exercises
1. Review the sidebar on the binary number system. How would you represent the number 16 in binary? How about the number
100? Besides decimal and binary, other number bases are used in computing and programming. One of the most used bases is
hexadecimal, which is base-16. In base-16, the numerals 0 through 9 are supplemented with the letters A (10) through F (15).
How would you represent the decimal number 100 in hexadecimal?
2. Go to Old-Computer.com - Pick one computer from the listing and write a brief summary. Include the specifications for CPU,
memory, and screen size. Now find the specifications of a computer being offered for sale today and compare. Did Moore’s
Law hold?
3. Under the category of IoT, pick two products and explain how IoT has changed the product. Review the price before and after the technology was introduced. Has this new technology increased the item's popularity?
4. Go on the web and compare and contrast two smartphones on the market. Is one better than the other, and if so, why. Be sure to
include the price.
5. Review the e-waste policies in your area. Do you feel they are helping or ignoring this growing crisis?
6. Now find at least two more scholarly articles on this topic. Prepare a PowerPoint of at least 10 slides that summarize the issue
and recommend a possible solution based on your research.
7. As with any technology text, there have been advances in technologies since publication. What technology that has been
developed recently would you add to this chapter?
8. What is the current state of solid-state drives vs. hard disks? Describe the ideal user for each. Do original research online where
you can compare prices on solid-state drives and hard disks. Be sure you note the differences in price, capacity, and speed.

This page titled 2.7: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

3: Software
 Learning Objectives

Upon successful completion of this chapter, you will be able to:


Define the term software;
Describe the two primary categories of software;
Describe the role ERP software plays in an organization;
Describe the process to write a computer program;
Describe cloud computing and its advantages and disadvantages for use in an organization; and
Define the term open-source and identify its primary characteristics.

Software and hardware cannot function without each other. Without software, hardware is useless. Without hardware, software has nothing to run on. This chapter discusses the types of software, their purpose, and how they support different hardware devices, individuals, groups, and organizations.
3.1: Introduction to Software
3.2: Types of Software
3.3: Cloud Computing
3.4: Software Creation
3.5: Summary
3.6: Study Questions

This page titled 3: Software is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal Desai-
Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

3.1: Introduction to Software
The second component of an information system is software. Software is the means to take a user’s data and process it to perform
its intended action. Software translates what users want to do into a set of instructions that tell the hardware what to do. A set of
instructions is also called a computer program. For example, when a user presses the letter 'A' key on the keyboard while using a word processing app, it is the word processing software that recognizes that the 'A' key was pressed and fetches the image of the letter A to display on the screen as feedback to the user that the input was received correctly.
Software is created through the process of programming. We will cover the creation of software in this chapter and more detail in
chapter 10. In essence, hardware is the machine, and software is the intelligence that tells the hardware what to do. Without
software, the hardware would not be functional.

This page titled 3.1: Introduction to Software is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

3.2: Types of Software
The software component can be broadly divided into two categories: system software and application software.
The system software is a collection of computer programs that provide a software platform for other software programs. It also
insulates the hardware's specifics from the applications and users as much as possible by managing the hardware and the networks.
It consists of
1. Operating System
2. Utilities
Application software is a computer program that delivers a specific activity for the users (e.g., create a document, draw a picture). It can be for either
1. a general purpose (e.g., Microsoft Word, Google Docs) or
2. a particular purpose (e.g., weather forecasting, CAD engineering)

Figure 3.2.1 : Overview of software types. Image by Ly-Huong Pham is licensed under CC BY-NC

System Software
Operating Systems
The operating system provides several essential functions, including:
1. Managing the hardware resources of the computer
2. Providing the user-interface components
3. Providing a platform for software developers to write applications.
An operating system (OS) is a key component of the system software. Examples of popular operating systems are Google Android™, Microsoft Windows™, and Apple iOS™.
An OS is a set of programs that coordinate hardware components and other programs and acts as an interface with application
software and networks. Some examples include getting input from a keyboard device, displaying output to a screen, storing or
retrieving data from a disk drive.

Figure 3.2.2 : Operating System Role. Image by Ly-Huong T. Pham is licensed by CC BY NC
The picture above shows the operating system at the center: it accepts input from various input devices such as a mouse, a keyboard, a digital pen, or speech recognition; outputs to various output devices such as a monitor or a printer; acts as an intermediary between applications and apps; and accesses the internet via network devices such as a router or a web server.
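As a small illustration of the OS acting as the intermediary described above (our own example, not from the original text, and the file name greeting.txt is purely hypothetical), the Python program below asks for keyboard input and writes a file; the application never talks to the keyboard or the disk directly, it simply makes requests that the operating system carries out on the hardware.

```python
# The application makes high-level requests; the operating system handles the
# keyboard, screen, and disk hardware on its behalf.
name = input("Type your name and press Enter: ")    # OS delivers the keyboard input

with open("greeting.txt", "w") as f:                # OS finds space on the disk
    f.write(f"Hello, {name}!\n")                    # OS writes the bytes to storage

print("Saved greeting.txt")                         # OS draws the output on screen
```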
In 1984, Apple introduced the Macintosh computer, featuring an operating system with a graphical user interface, now known as
macOS. Apple has different names for its OS running on different devices such as iOS, iPadOS, watchOS, and tvOS.
In 1985, as a response to Apple, Microsoft introduced the Microsoft Windows operating system, commonly known as Windows, as a new graphical user interface for its then command-based operating system, MS-DOS, which Microsoft had developed for the IBM PC (where it was sold as IBM PC-DOS). By the 1990s, Windows dominated the desktop personal computer market as the top OS and had overtaken Apple's OS.

Figure 3.2.3 : Tux, Linux’s Mascot. Image by [email protected] Larry Ewing and The GIMP is licensed under Creative
Commons CC0 1.0 Universal Public Domain Dedication
A third personal-computer operating system family that is gaining in popularity is Linux. Linux is a version of the Unix operating
system that runs on a personal computer. Unix is an operating system used primarily by scientists and engineers on larger
minicomputers. These computers, however, are costly, and software developer Linus Torvalds wanted to find a way to make Unix
run on less expensive personal computers: Linux was the result. Linux has many variations and now powers a large percentage of
web servers in the world. It is also an example of open-source software, a topic we will cover later in this chapter.
In 2007, Google introduced Android specifically to support mobile devices such as smartphones and tablets. It is based on the Linux kernel and other open-source software developed by a consortium of developers. Android quickly became the top OS for mobile devices, overtaking Microsoft's mobile offerings.
Operating systems have continuously improved, adding more and more features to increase speed and performance, process more data at once, and access more memory. Features such as multitasking, virtual memory, and voice input have become standard in most modern operating systems.
All computing devices run an operating system, as shown in the table below. The most popular operating systems for personal computers are Microsoft's Windows, Apple's operating system, and various versions of Linux. Smartphones and tablets run operating systems as well, such as Apple's iOS and Google's Android.
Computing devices and operating systems

Operating System | Desktop | Mobile
Microsoft Windows | Windows 10 | Windows 10
Apple OS | Mac OS | iOS
Various versions of Linux | Ubuntu | Android (Google)

According to netmarketshare.com (2020), from August 2019 to August 2020, Windows retained its dominant position on the desktop with over 87% market share. In the mobile market, however, Android leads with over 70% market share, followed by Apple's iOS with over 28%.

Sidebar: Why Is Microsoft Software So Dominant in the Business World?

As we learned in chapter 1, almost all businesses used IBM mainframe computers back in the 1960s and 1970s. These same businesses shied away from personal computers until IBM released the PC in 1981. Initially, choosing IBM was a low-risk decision, since IBM's dominance made it the safe choice. Another reason is that once a business selects an operating system as its standard solution, it invests in additional software, hardware, and services built for that OS. The cost of switching to another OS then becomes a hurdle, both financially and in retraining the workforce.

Utility
Utility software is special-purpose software focused on keeping the computing infrastructure healthy. Examples include antivirus software that scans for and stops computer viruses and disk defragmentation software that optimizes how files are stored. Over time, some popular utilities have been absorbed as features of the operating system.

Application or App Software


The second major category of software is application software. While system software focuses on running the computer, application software allows the end-user to accomplish a goal or purpose. Examples include a word processor, a photo editor, a spreadsheet, or a browser. Application software is grouped into many categories, including:
Killer app
Productivity
Enterprise
Mobile
The “Killer” App

Figure 3.2.4 :VisiCalc. Image by Gortu is licensed under Public Domain

When a new type of digital device is invented, there is generally a small group of technology enthusiasts who will purchase it just for the joy of figuring out how it works. A "killer" application is one that runs on only one OS platform and becomes so essential that many people will buy a device on that platform just to run that application. For the personal computer, the killer application was the spreadsheet. In 1979, VisiCalc, the first personal-computer spreadsheet package, was introduced. It was an immediate hit and drove sales of the Apple II. It also solidified the value of the personal computer beyond the relatively small circle of technology geeks.
When the IBM PC was released, another spreadsheet program, Lotus 1-2-3, was the killer app for business users. Today, Microsoft
Excel dominates as the spreadsheet program, running on all the popular operating systems.
Productivity Software

Along with the spreadsheet, several other software applications have become standard tools for the workplace. These applications,
called productivity software, allow office employees to complete their daily work. Many times, these applications come packaged
together, such as in Microsoft’s Office suite. Here is a list of these applications and their basic functions:
Word processing: This class of software provides for the creation of written documents. Functions include the ability to type
and edit text, format fonts and paragraphs, and add, move, and delete text throughout the document. Most modern word-
processing programs also have the ability to add tables, images, voice, videos, and various layout and formatting features to the
document. Word processors save their documents as electronic files in a variety of formats. The most popular word-processing
package is Microsoft Word, which saves its files in the Docx format. This format can be read/written by many other word-
processor packages or converted to other formats such as Adobe’s PDF.
Spreadsheet: This class of software provides a way to do numeric calculations and analysis. The working area is divided into
rows and columns, where users can enter numbers, text, or formulas. The formulas make a spreadsheet powerful, allowing the
user to develop complex calculations that can change based on the numbers entered. Most spreadsheets also include the ability
to create charts based on the data entered. The most popular spreadsheet package is Microsoft Excel, which saves its files in the
XLSX format. Just as with word processors, many other spreadsheet packages can read and write to this file format.
Presentation: This software class provides for the creation of slideshow presentations that can be shared, printed, or projected
on a screen. Users can add text, images, audio, video, and other media elements to the slides. Microsoft’s PowerPoint remains
the most popular software, saving its files in PPTX format.
Office Suite: Microsoft popularized the idea of the office-software productivity bundle with their release of Microsoft Office.
Some office suites include other types of software. For example, Microsoft Office includes Outlook, its e-mail package, and
OneNote, an information-gathering collaboration tool. The professional version of Office also includes Microsoft Access, a
database package. (Databases are covered more in chapter 4.) This package continues to dominate the market, and most
businesses expect employees to know how to use this software. However, many competitors to Microsoft Office exist and are
compatible with Microsoft's file formats (see table below). Microsoft now has a cloud-based version called Microsoft Office
365. Similar to Google Drive, this suite allows users to edit and share documents online utilizing cloud-computing technology.
Cloud computing will be discussed later in this chapter.

Figure 3.2.5 : Comparison of office application software suites. Image by David Bourgeois, Ph.D. is licensed under CC BY 4.0

 Sidebar: “PowerPointed” to Death

As presentation software, specifically Microsoft PowerPoint, has gained acceptance as the primary method to formally present
information in a business setting, the art of giving an engaging presentation is becoming rare. Many presenters now just read
the bullet points in the presentation and immediately bore those in attendance who can already read it for themselves.
The real problem is not with PowerPoint as much as it is with the person creating and presenting. The book Presentation Zen
by Garr Reynolds is highly recommended to anyone who wants to improve their presentation skills.
New tools have emerged to make presentation software more effective. One example is Prezi, a presentation tool that uses a single canvas for the presentation, allowing presenters to place text, images, and other media on the canvas and then navigate between these objects as they present.

Enterprise Software

As the personal computer proliferated inside organizations, control over the information generated by the organization began
splintering. For example, the customer service department creates a customer database to track calls and problem reports. The sales
department also creates a database to keep track of customer information. Which one should be used as the master list of
customers? As another example, someone in sales might create a spreadsheet to calculate sales revenue, while someone in finance
creates a different one that meets their department's needs. However, the two spreadsheets will likely come up with different totals
for revenue. Which one is correct? And who is managing all this information? Situations like these make it difficult for management to make effective decisions.
Enterprise Resource Planning
In the 1990s, the need to bring the organization’s information back under centralized control became more apparent. The enterprise
resource planning (ERP) system (sometimes just called enterprise software) was developed to bring together an entire organization
in one software application. Key characteristics of an ERP include:
An integrated set of modules: Each module serves a different function in the organization, such as marketing, sales, or manufacturing.
A consistent user interface: The ERP provides a common interface across all of its modules, which an organization's employees use to access information.
A common database: All users of the ERP edit and save their information in the same data source. This means that there is only one customer database, only one calculation for revenue, and so on.
Integrated business processes: All users must follow the same business rules and processes throughout the entire organization.
ERP systems include functionality that covers all of the essential components of a business, such as how the organization tracks cash, invoices, purchases, payroll, product development, and its supply chain.

Figure 3.2.6 : ERP Modules. Image by Shing Hin Yeung is licensed under CC-BY-SA
ERP systems were originally marketed to large corporations, given that they are costly. However, as more and more large
companies began installing them, ERP vendors began targeting mid-sized and even smaller businesses. Some of the more well-
known ERP systems include those from SAP, Oracle, and Microsoft.
To effectively implement an ERP system in an organization, the organization must be ready to make a full commitment, including
the cost to train employees as part of the implementation.
All aspects of the organization are affected as old systems are replaced by the ERP system. In general, implementing an ERP
system can take two to three years and several million dollars.
So why implement an ERP system? If done properly, an ERP system can bring an organization a good return on its investment. By
consolidating information systems across the enterprise and using the software to enforce best practices, most organizations see an
overall improvement after implementing an ERP. Business processes as a form of competitive advantage will be covered in chapter
9.
Customer Relationship Management
A customer relationship management (CRM) system is a software application designed to manage customer interactions, including customer service, marketing, and sales. It collects all data about an organization's customers. The objectives of a CRM are to:
Personalize customer relationships to increase customer loyalty
Improve communication
Anticipate needs in order to retain existing customers or acquire new ones
Some ERP software systems include CRM modules. An example of a well-known CRM package is Salesforce.

Figure 3.2.7 : Components in the different types of CRM. Image by Bgrigorov is licensed under CC-BY-SA

Supply Chain Management


Many organizations must deal with the complex task of managing their supply chains. At its simplest, a supply chain is a linkage
between an organization’s suppliers, its manufacturing facilities, and its products' distributors. Each link in the chain has a
multiplying effect on the complexity of the process. For example, if there are two suppliers, one manufacturing facility, and two
distributors, then there are 2 x 1 x 2 = 4 links to handle. However, if you add two more suppliers, another manufacturing facility,
and two more distributors, then you have 4 x 2 x 4 = 32 links to manage.
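As a quick illustration of that multiplication, the short sketch below computes the number of links for the two scenarios described above (purely a worked example, not part of any SCM product):

```python
def supply_chain_links(suppliers: int, facilities: int, distributors: int) -> int:
    """Number of supplier-facility-distributor paths to manage."""
    return suppliers * facilities * distributors

print(supply_chain_links(2, 1, 2))  # 4 links
print(supply_chain_links(4, 2, 4))  # 32 links
```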

Figure 3.2.8 : A supply and demand network. Image by Andreas Wieland is licensed under CC-BY-SA 3.0
A supply chain management (SCM) system manages the interconnections between these links and the inventory of products in their various stages of development. The Association for Operations Management provides a full definition of a supply chain management system: "The design, planning, execution, control, and monitoring of supply chain activities to create net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand, and measuring performance globally." Most ERP systems include a supply chain management module.
Mobile Software

A mobile application, commonly called a mobile app, is a software application programmed to run specifically on a mobile device
such as a smartphone or tablet.
As we saw in chapter 2, smartphones and tablets are becoming a dominant form of computing, with many more smartphones being
sold than personal computers. This means that organizations will have to get smart about developing software on mobile devices to
stay relevant. With the rise in adoption of mobile devices, the number of apps has exploded into the millions (Forbes.com, 2020), and there is an app for just about anything a user wants to do. Examples include flashlight apps, step counters, plant identifiers, and games.
We will discuss the question of building a mobile app in Chapter 10.

References
There Are Now 8.9 Million Mobile Apps, And China Is 40% Of Mobile App Spending. (2020, Feb 28). Retrieved September 4, 2020, from https://www.forbes.com/

This page titled 3.2: Types of Software is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

3.3: Cloud Computing
Historically, for software to run on a computer, an individual copy of the software had to be installed on the computer, either from a
disk or, more recently, after being downloaded from the Internet. The concept of “cloud” computing changes this model.
“The cloud” refers to applications, services, and data stored in data centers, server farms, and storage servers and accessed by users
via the Internet. In most cases, the users don’t know where their data is actually stored. Individuals and organizations use cloud
computing.
You probably already use cloud computing in some form. For example, if you access your email via your web browser, you are using a form of cloud computing. If you use Google Drive's applications, you are using cloud computing. While these are free examples of cloud computing, there is big business in providing applications and data storage over the web. Commercial, large-scale applications can also run in the cloud; for example, Salesforce offers its entire CRM suite via the cloud. Cloud computing is not limited to web applications: it can also be used for services such as phone or video streaming.
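As a minimal sketch of the idea, assuming a purely hypothetical cloud storage URL, the snippet below retrieves a document over the Internet; the program neither knows nor cares which physical server or data center holds the file.

```python
from urllib import request

# Hypothetical cloud storage endpoint; in a real service this URL would be
# provided by the cloud vendor (e.g., a file-sharing or storage API).
DOCUMENT_URL = "https://cloud.example.com/files/quarterly-report.txt"

# Fetch the document over the Internet; the data center location is invisible
# to the user, who only needs a working connection.
with request.urlopen(DOCUMENT_URL) as response:
    contents = response.read().decode("utf-8")

print(contents[:200])  # show the first part of the document
```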

Advantages of Cloud Computing


No software to install or upgrades to maintain.
Available from any computer that has access to the Internet.
Can scale to a large number of users easily.
New applications can be up and running very quickly.
Services can be leased for a limited time on an as-needed basis.
Your information is not lost if your hard disk crashes or your laptop is stolen.
You are not limited by the available memory or disk space on your computer.

Disadvantages of Cloud Computing


Your information is stored on someone else’s computer
You must have Internet access to use it. If you do not have access, you’re out of luck.
You are relying on a third party to provide these services.
You don’t know how your data is protected from theft or sold by your own cloud service provider.
Cloud computing can greatly impact how organizations manage technology. For example, why is an IT department needed to
purchase, configure, and manage personal computers and software when all that is really needed is an Internet connection?

Using a Private Cloud


Many organizations are understandably nervous about giving up control of their data and applications using cloud computing. But
they also see the value in reducing the need for installing software and adding disk storage to local computers. A solution to this
problem lies in the concept of a private cloud. While there are various private cloud models, the basic idea is for the cloud service
provider to rent a specific portion of their server space exclusive to a specific organization. The organization has full control over
that server space while still gaining some of the benefits of cloud computing.

Virtualization
One technology that is utilized extensively as part of cloud computing is “virtualization.” Virtualization is using software to create
a virtual machine that simulates a computer with an operating system. For example, using virtualization, a single computer that
runs Microsoft Windows can host a virtual machine that looks like a computer with a specific Linux-based OS. This ability
maximizes the use of available resources on a single machine. Companies such as EMC provide virtualization software that allows
cloud service providers to provision web servers to their clients quickly and efficiently. Organizations are also implementing
virtualization to reduce the number of servers needed to provide the necessary services. For more detail on how virtualization
works, see this informational page from VMWare.

This page titled 3.3: Cloud Computing is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

3.4: Software Creation
We just discussed different types of software and now can ask: How is software created? If the software is the set of instructions
that tells the hardware what to do, how are these instructions written? If a computer reads everything as one and zero, do we have
to learn how to write software that way? Thankfully, another type of software exists especially for this purpose: programming languages, which software developers use to write system software and applications. The people who can program are called computer programmers or software developers.
Analogous to a human language, a programming language consists of keywords, comments, symbols, and grammatical rules for constructing statements that the computer can understand as valid instructions to perform certain tasks. Using this language, a programmer writes a program (called the source code). Another piece of software then processes the source code to convert the programming statements into a machine-readable form, the ones and zeros the CPU can execute. This conversion process is known as compiling, and the software that performs it is called a compiler. Most of the time, programming is done inside a programming environment; for example, a copy of Visual Studio from Microsoft provides developers with an editor to write the source code, a compiler, and help for many of Microsoft's programming languages. Examples of well-known programming languages today include Java, PHP, and the various flavors of C (Visual C, C++, C#).

Figure 3.4.1 : Convert a computer program to an executable. Image by Ly-Huong T. Pham is licensed under CC-BY-NC
Thousands of programming languages have been created since the first programming language, written in 1843 by a woman named Ada Lovelace. One of the earlier English-like languages, COBOL, has been in use from the 1950s to the present day in services we still rely on, such as payroll and reservation systems. The C programming language was introduced in the 1970s and remains a top choice. Newer languages such as C# and Swift are gaining momentum as well. Programmers select the language best matched to the problem to be solved and the target OS platform. For example, languages such as HTML and JavaScript are used to develop web pages.
It is hard to determine which language is the most popular since it varies. However, according to TIOBE Index, one of the
companies that rank the popularity of the programming languages monthly, the top five in August 2020 are C, Java, Python, C++,
and C# (2020). For more information on this methodology, please visit the TIOBE definition page. For those who wish to learn
more about programming, Python is a good first language to learn because not only is it a modern language for web development, it
is simple to learn and covers many fundamental concepts of programming that apply to other languages.
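As a small illustration of the building blocks described above (keywords, comments, symbols, and statements), here is a short Python program; the function and the numbers are invented purely for demonstration:

```python
# A tiny Python program illustrating keywords (def, for, if, return),
# comments (lines like this one), symbols (:, =, %), and statements.

def count_even(numbers):
    """Return how many values in the list are even."""
    total = 0
    for n in numbers:
        if n % 2 == 0:   # the % symbol gives the remainder of a division
            total += 1
    return total

print(count_even([3, 8, 14, 27, 42]))  # prints 3
```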
One person can write some programs. However, most software programs are written by many developers. For example, it takes
hundreds of software engineers to write Microsoft Windows or Excel. To ensure teams can deliver timely and quality software with
the least amount of errors, also known as bugs, formal project management methodologies are used, a topic that we will discuss in
chapter 10.

Open-Source vs. Closed-Source Software


When the personal computer was first released, computer enthusiasts immediately banded together to build applications and solve
problems. These computer enthusiasts were happy to share any programs they built and solutions to problems they found; this
collaboration enabled them to innovate more quickly and fix problems.
As software began to become a business, however, this idea of sharing everything fell out of favor for some. When a software
program takes hundreds of hours to develop, it is understandable that the programmers do not want to give it away. This led to a
new business model of restrictive software licensing, which requires payment to the software's owner, a model that is still dominant today. This model is sometimes referred to as closed source, as the source code remains private property and is not made available to others. Microsoft Windows, Microsoft Excel, and Apple iOS are examples of closed-source software.

There are many, however, who feel that software should not be restricted. Like those early hobbyists in the 1970s, they feel that
innovation and progress can be made much more rapidly if we share what we learn. In the 1990s, with Internet access connecting
more and more people, the open-source movement gained steam.
Open-source software is software whose source code is available for anyone to copy and use. For non-programmers, the source code itself won't be of much use unless a compiled version is also made available. For programmers, however, the open-source movement has led to the development of some of the world's most widely used software, including the Firefox browser, the Linux operating system, and the Apache web server.
Some people are concerned that open-source software can be vulnerable to security risks since the source code is available. Others counter that, because the source code is freely available, many programmers have contributed to open-source projects, making the code less buggy, adding features, and fixing bugs much faster than closed-source software.
Many businesses are wary of open-source software precisely because the code is available for anyone to see. They feel that this
increases the risk of an attack. Others counter that this openness decreases the risk because the code is exposed to thousands of
programmers who can incorporate code changes to patch vulnerabilities quickly.
In summary, some benefits of the open-source model are:
The software is available for free.
The software source code is available; it can be examined and reviewed before it is installed.
The large community of programmers who work on open-source projects leads to quick bug-fixing and feature additions.
Some benefits of the closed-source model are:
Providing a financial incentive for software developers or companies
Technical support from the company that developed the software.
Today there are thousands of open-source software applications available for download. An example of open-source productivity
software is Open Office Suite. One good place to search for open-source software is sourceforge.net, where thousands of software
applications are available for free download.

Software Licenses
The companies or developers own the software they create. The software is protected by law either through patents, copyright, or
licenses. It is up to the software owners to grant their users the right to use the software through the terms of the licenses.
For closed-source vendors, the terms vary depending on the price the users are willing to pay. Examples include single-user, single-installation, multi-user, multi-installation, per-network, or per-machine licenses.
Open-source vendors grant specific permission levels for using the source code and set conditions for modified versions. Examples include the freedom to distribute, remix, and adapt the software for non-commercial use, with the condition that the newly revised source code must also be licensed under identical terms. While open-source vendors don't make money by charging for their software, they generate revenue through donations or by selling technical support or related services. For example, Wikipedia is a widely popular, free online encyclopedia used by millions of users, yet it relies mainly on donations to sustain its staff and infrastructure.

Reference
TIOBE Index for August 2020. Retrieved September 4, 2020, from https://www.tiobe.com

This page titled 3.4: Software Creation is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

3.5: Summary
Software gives the instructions that tell the hardware what to do. There are two basic categories of software: operating systems and applications. Operating systems provide access to the computer hardware and make system resources available. Application software is designed to meet a specific goal. Productivity software is a subset of application software that provides basic business functionality to a personal computer: word processing, spreadsheets, and presentations. An ERP system is a software application with a centralized database that is implemented across the entire organization. Cloud computing is a method of software delivery in which applications run on any computer with a web browser and access to the Internet. Software is developed through a process called programming, in which a programmer uses a programming language to put together the logic needed to create the program. Software can follow an open-source or a closed-source model, and users or developers are granted different licensing terms.

This page titled 3.5: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

3.6: Study Questions
Study Questions
1. Give your own definition of software. Explain the key terms in your definition.
2. Identify the key functions of the operating system.
3. Identify which of the following are operating systems and which are applications: Microsoft Excel, Google Chrome, iTunes, Windows, Android, Angry Birds.
4. List your favorite software application and explain what tasks it helps you accomplish.
5. Explain what a "killer" app is and identify the killer app for the PC.
6. List at least three basic categories of mobile apps and give an example of each.
7. Explain what an ERP system does.
8. Explain the difference between open-source software and closed-source software. Give an example of each.
9. Describe what a software license is.
10. Explain the process of creating a software program.

Exercises
1. Go online and find a case study about the implementation of an ERP system. Was it successful? How long did it take? Does the
case study tell you how much money the organization spent?
2. What ERP system does your university or place of employment use? Find out which one they use and see how it compares to
other ERP systems.
3. If you were running a small business with limited funds for information technology, would you consider using cloud
computing? Find some web-based resources that support your decision.
4. Download and install Open Office. Use it to create a document or spreadsheet. How does it compare to Microsoft Office? Does
the fact that you got it for free make it feel less valuable?
5. Go to sourceforge.net and review their most downloaded software applications. Report back on the variety of applications you
find. Then pick one that interests you and report back on what it does, the kind of technical support offered, and the user
reviews.
6. Go online to research the security risks of open-source software. Write a short analysis giving your opinion on the different
risks discussed.
7. What are three examples of programming languages? What makes each of these languages useful to programmers?

This page titled 3.6: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

4: Data and Databases


Learning Objectives

Upon successful completion of this chapter, you will be able to:


Explain the differences between data, information, and knowledge;
Define the term database and identify the steps to creating one;
Describe the role of a database management system;
Describe the characteristics of a data warehouse; and
Define data mining and describe its role in an organization.

This chapter explores how organizations use information systems to turn data into information and knowledge to be used for
competitive advantage. We will discuss how different types of data are captured and managed, different types of databases, and
how individuals and organizations use them.
4.1: Introduction to Data and Databases
4.2: Examples of Data
4.3: Structured Query Language
4.4: Designing a Database
4.5: Sidebar- The Difference between a Database and a Spreadsheet
4.6: Big Data
4.7: Data Warehouse
4.8: Data Mining
4.9: Database Management Systems
4.10: Enterprise Databases
4.11: Knowledge Management
4.12: Sidebar- What is data science?
4.13: Summary
4.14: Study Questions

This page titled 4: Data and Databases is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.1: Introduction to Data and Databases
Introduction
You have already been introduced to the first two components of information systems: hardware and software. However, those two
components by themselves do not make a computer useful. Imagine if you turned on a computer, started typing a document, but could not
save a document. Imagine if you opened your music app, but there was no music to play. Imagine opening a web browser, but there were
no web pages. Without data, hardware and software are not very useful! Data is the third component of an information system.

Data, Information, Knowledge, and Wisdom

Figure 4.1.1 : Data to Wisdom. Image by David T. Bourgeois is licensed under CC BY-SA 2.0
Data is raw bits and pieces of information with no context, for example, your driver's license number or your first name. The information system helps organize this information in a systematic manner so that it is useful to the user. The users can be individuals or businesses. An organized collection of interrelated data is called a database. At the highest level, data is either quantitative or qualitative; which to use depends on the question to be answered and the available resources. Quantitative data is numeric, the result of a measurement, count, or some other mathematical calculation. A quantitative example would be how many 5th graders attended music camp this summer. Qualitative data consists of words, descriptions, and narratives. A qualitative example would be a camper wearing a red tee-shirt. A number can be considered qualitative as well: if I tell you my favorite number is 5, that is qualitative data because it is descriptive, not the result of a measurement or mathematical calculation.
When using qualitative data and quantitative data, we need to understand the context of its use. There are advantages and disadvantages to
each. This table encapsulates the advantages and disadvantages when gathering data.
Qualitative Data
Advantages:
Can give a nuanced understanding of the perspectives and needs of program participants
Can help support or explain results indicated in quantitative analysis
Source of detailed or "rich" information that can be used to identify patterns of behavior
Disadvantages:
May lend itself to working with smaller populations; may not be representative of larger demographics
Data analysis can be time-consuming
Analysis can be subjective; there is potential for evaluator bias in collection and analysis

Quantitative Data
Advantages:
Clear and specific
Accurate and reliable if properly analyzed
Can be easily communicated as graphs and charts
Many large datasets already exist that can be analyzed
Disadvantages:
Data collection methods provide respondents with a limited number of response options
Can require complex sampling procedures
May not accurately describe a complex situation
Some expertise with statistical analysis is required

By itself, data is a collection of components waiting to be analyzed. To be useful, it needs to be given context. Users and designers create meaning as they collect, reference, and organize the data. Information typically involves manipulating raw data to obtain an indication of magnitude, trends, and patterns in the data for a purpose. For example, if I told you that "15, 23, 14, and 85" are the numbers of students that had registered for an upcoming camp, that would be data. By adding the context – that the numbers represent the count of students registering for specific classes – I have turned the data into information. Information is data that has been analyzed, processed, and structured so that it is useful.
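A brief sketch of this step from data to information, with class names and counts invented for illustration: the raw numbers alone are data, while pairing them with the classes they describe and summarizing them yields information.

```python
# Raw data: numbers with no context.
raw_counts = [15, 23, 14, 85]

# Adding context turns the data into information.
registrations = {
    "Art": 15,
    "Music": 23,
    "Robotics": 14,
    "Soccer": 85,
}

total = sum(registrations.values())
largest = max(registrations, key=registrations.get)

print(f"Total campers registered: {total}")   # 137
print(f"Most popular class: {largest}")       # Soccer
```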

Once we collect and understand the data, we put it into context, aggregate it, and analyze it. We then have information, and we can use it to make decisions for individuals and for our organization. We can say that this consumption of information produces knowledge. Knowledge can be viewed as information that facilitates action; it can be used to make decisions, set policies, and even spark innovation.
The final step up the information ladder is the step from knowledge (knowing a lot about a topic) to wisdom.
Wisdom is experience coupled with understanding and insight. We can say that someone has wisdom when combining their knowledge
and experience to produce a deeper understanding of a topic. It often takes many years to develop wisdom on a particular topic and
requires patience and expertise.

Figure 4.1.2 : Data Shown on Monitors. Image by Gerd Altmann from Pixabay is licensed under CC-BY-SA 2.0

This page titled 4.1: Introduction to Data and Databases is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.2: Examples of Data
Data can be anything. Some examples of data are weights, prices, costs, numbers of items sold, names, and places. Almost all software programs require data to do something useful. Data can be as straightforward as the name of a place or a person, or a number. For example, when editing a document in a word processor such as Microsoft Word, the document you are working on is the data. The word-processing software can manipulate the data: create a new document, duplicate a document, or modify a document. Today we also have a newer type of data called biometrics: physical or behavioral human characteristics that can digitally identify a person. Examples include facial recognition used for passports, fingerprint authentication used to unlock smartphones, and iris recognition, which uses high-resolution images of the iris. This data is stored for future identification. Many governments and high-security companies use iris recognition because it is considered highly accurate when identifying individuals.

Databases
Many information systems aim to transform data into information to generate knowledge that can be used for decision-making. To
do this, the system must take or read the data, then put the data into context, and provide tools for aggregation and analysis. A
database is designed for just such a purpose.
A database is an organized, meaningful collection of related information. It is an organized collection because, in a database, all data is interrelated and associated with other data. All information in a database should be related; separate databases should be created to manage unrelated information. For example, a database that contains information about employees' payroll should not also hold information about the company's stock prices. Digital databases range from simple tables created in a tool such as MS Excel to the more complicated databases people use every day, from checking your balance at the bank to accessing medical records to shopping online. Databases help us eliminate redundant information and give us more effective ways to search for and access data. Before computers, a database might have been a filing cabinet. For this text, we will only consider digital databases.

Figure 4.2.1 : Relational Database. Image by mcmurry julie from Pixabay is licensed under CC-BY-SA 2.0

Relational Databases
Databases can be organized in many different ways and thus take many forms. A DBMS (Database Management System) is software that facilitates the organization and manipulation of data; it functions as an interface between the database and the end-user and is designed to store, define, retrieve, and manage the data in the database. The most common form of database today is the relational database. Examples of relational database systems are Oracle, MySQL, Microsoft SQL Server, and PostgreSQL. A relational database stores data in rows and columns organized into one or more tables of related information. Each table has a set of fields, which define the nature of the data stored in the table. A record is one instance of a set of fields in a table. To visualize this, think of an Excel spreadsheet: the records are the rows of the table, and the fields are the columns. In the example below, we have a table of student information, with each row representing a student and each column representing one piece of information about the student. The relational database model does not scale well; the term scale here refers to a larger and larger database being distributed over a larger number of computers connected via a network. Some companies are looking to provide large-scale database solutions by moving away from the relational model to other, more flexible models. For example, Google now offers the App Engine Datastore, which is based on NoSQL. Developers can use the App Engine Datastore to develop applications that access data from anywhere in the world. Amazon.com offers several database services for enterprise use, including Amazon RDS, a relational database service, and Amazon DynamoDB, a NoSQL enterprise solution.
Relational Database Example
Figure 4.2.2 : Relational database table adapted from David Bourgeois, Ph.D. is licensed under CC BY 4.0

First Name | Last Name | Major | Birthdate
Ann Marie | Strong | Pre-Law | 2/27/1997
Evan | Right | Business | 12/4/1996
Michelle | Smith | Math | 6/27/1995

(Each column is a field; each row is a record.)
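For readers curious how such a table might be created and filled programmatically, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names simply mirror the example above and are not prescribed by any particular system.

```python
import sqlite3

# Create a small relational database in memory and define a Students table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Students (
        first_name TEXT,
        last_name  TEXT,
        major      TEXT,
        birthdate  TEXT
    )
""")

# Each tuple is one record (row); each position corresponds to a field (column).
students = [
    ("Ann Marie", "Strong", "Pre-Law", "2/27/1997"),
    ("Evan", "Right", "Business", "12/4/1996"),
    ("Michelle", "Smith", "Math", "6/27/1995"),
]
conn.executemany("INSERT INTO Students VALUES (?, ?, ?, ?)", students)

# Retrieve every record in the table.
for row in conn.execute("SELECT * FROM Students"):
    print(row)
```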

This page titled 4.2: Examples of Data is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.3: Structured Query Language
Once you have a database designed and loaded with data, how will you do something useful with it? The primary way to work with
a relational database is to use Structured Query Language, SQL (pronounced “sequel,” or stated as S-Q-L). Almost all applications
that work with databases (such as database management systems, discussed below) use SQL to analyze and manipulate relational
data. As its name implies, SQL is a language that can be used to work with a relational database or for streaming processing in a
relational data stream management system. From a simple request for data to a complex update operation, SQL is a mainstay of
programmers and database administrators. To give you a taste of what SQL might look like, here are a couple of examples using
our Student Clubs database.
• The following query will retrieve a list of the first and last names of the club presidents:
SELECT "First Name," "Last Name" FROM "Students" WHERE "Students.ID" =
• The following query will create a list of the number of students in each club, listing the club name and then the number of
members:
SELECT "Clubs.Club Name", COUNT("Memberships.Student ID") FROM "Clubs"
An in-depth description of how SQL works is beyond this introductory text's scope. Still, these examples should give you an idea
of the power of using SQL to manipulate relational data. Many database packages, such as Microsoft Access, allow you to visually
create the query you want to construct and then generate the SQL query for you.
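As a hedged illustration of how an application might run such a query, the sketch below counts members per club against a tiny in-memory database using Python's sqlite3 module; the schema and sample rows are simplified stand-ins for the Student Clubs design discussed in the following sections.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Clubs (club_id INTEGER PRIMARY KEY, club_name TEXT);
    CREATE TABLE Memberships (student_id INTEGER, club_id INTEGER);

    INSERT INTO Clubs VALUES (1, 'Chess Club'), (2, 'Robotics Club');
    INSERT INTO Memberships VALUES (101, 1), (102, 1), (103, 2);
""")

# Count how many students belong to each club.
query = """
    SELECT Clubs.club_name, COUNT(Memberships.student_id)
    FROM Clubs
    LEFT JOIN Memberships ON Clubs.club_id = Memberships.club_id
    GROUP BY Clubs.club_name
"""
for club_name, member_count in conn.execute(query):
    print(club_name, member_count)   # Chess Club 2, Robotics Club 1
```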

Rows and Columns in a Table


In a relational database, all the tables are related by one or more fields so that it is possible to connect all the tables in the database
through the field(s) they have in common. For each table, one of the fields is identified as a primary key. This key is the unique
identifier for each record in the table. To help you understand these terms further, let’s walk through the process of designing the
following database.

Figure 4.3.1 : Data design flow. Image by David Bourgeois, Ph.D. is licensed under CC BY 4.0

This page titled 4.3: Structured Query Language is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.4: Designing a Database
Designing a Database
Suppose a university wants to create a database to track participation in student clubs. After interviewing several people, the design
team learns that the purpose of implementing the system is to give better insight into how the university funds clubs. This will be accomplished by
tracking how many members each club has and how active the clubs are. The team decides that the system must keep track of the
clubs, their members, and their events. Using this information, the design team determines that the following tables need to be
created:
Clubs: this will track the club name, the club president, and a short description of the club.
Students: student name, e-mail, and year of birth.
Memberships: this table will correlate students with clubs, allowing us to have any given student join multiple clubs.
Events: this table will track when the clubs meet and how many students showed up.
Now that the design team has determined which tables to create, they need to define the specific information that each table will
hold. This requires identifying the fields that will be in each table. For example, Club Name would be one of the fields in the Clubs
table. First Name and Last Name would be fields in the Students table. Finally, since this will be a relational database, every table
should have a field in common with at least one other table (in other words: they should have a relationship with each other).
To properly create this relationship, a primary key must be selected for each table. This key is a unique identifier for each record in
the table. For example, in the Students table, it might be possible to use students’ first names to identify them uniquely. However, it
is more than likely that some students will share the same first name (like Mike, Stefanie, or Chris), so a different field should be
selected. A student’s email address might be a good choice for a primary key since e-mail addresses are unique. However, a
primary key cannot change, so this would mean that if students changed their email addresses, we would have to remove them from
the database and then re-insert them – not an attractive proposition. Our solution is to create a value for each student — a user ID
— that will act as a primary key. We will also do this for each of the student clubs. This solution is quite common and is the reason
you have so many user IDs!
You can see the final database design in the figure below:

Figure 4.4.1 : Data design flow. Image by David Bourgeois, Ph.D. is licensed under CC BY 4.0
With this design, not only do we have a way to organize all of the information we need to meet the requirements, but we have also
successfully related all the tables together. Here’s what the database tables might look like with some sample data. Note that the
Memberships table has the sole purpose of allowing us to relate multiple students to multiple clubs.

Figure 4.4.2 : Table: Clubs. Image by David Bourgeois, Ph.D. is licensed under CC BY 4.0

Figure 4.4.3 : Table: Students. Image by David Bourgeois, Ph.D. is licensed under CC BY 4.0

Figure 4.4.4 : Table: Memberships. Image: by David Bourgeois, Ph.D. is licensed under CC BY 4.0
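To make the design concrete, here is a minimal sketch of the same schema expressed as table definitions with primary and foreign keys, again using Python's built-in sqlite3 module; the exact column names are illustrative choices, not requirements.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Each table gets a primary key; Memberships relates students to clubs.
    CREATE TABLE Students (
        student_id INTEGER PRIMARY KEY,
        first_name TEXT,
        last_name  TEXT,
        email      TEXT,
        birth_year INTEGER
    );

    CREATE TABLE Clubs (
        club_id      INTEGER PRIMARY KEY,
        club_name    TEXT,
        president_id INTEGER REFERENCES Students(student_id),
        description  TEXT
    );

    CREATE TABLE Memberships (
        student_id INTEGER REFERENCES Students(student_id),
        club_id    INTEGER REFERENCES Clubs(club_id)
    );
""")
print("Schema created")
```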

Normalization
When designing a database, one important concept to understand is normalization. In simple terms, to normalize a database means
to design it in a way that:
Reduces the redundancy of data between tables and makes mapping easier
Takes out inconsistent data.
Information is stored in one place only.
Gives the table as much flexibility as possible.
In the Student Clubs database design, the design team worked to achieve these objectives. For example, to track memberships, a
simple solution might have been to create a Members field in the Clubs table and then list all of the members' names. However, this
design would mean that if a student joined two clubs, then his or her information would have to be entered a second time. Instead,
the designers solved this problem by using two tables: Students and Memberships.
In this design, when a student joins their first club, we must add the student to the Students table, where their first name, last name,
e-mail address, and birth year are entered. This addition to the Students table will generate a student ID. Now we will add a new
entry to denote that the student is a specific club member. This is accomplished by adding a record with the student ID and the club
ID in the Memberships table. If this student joins a second club, we do not have to duplicate the student’s name, e-mail, and birth
year; instead, we only need to make another entry in the Memberships table of the second club’s ID and the student’s ID.
The Student Clubs database design also makes it simple to change the design without major modifications to the existing structure.
For example, if the design team was asked to add functionality to the system to track faculty advisors to the clubs, we could easily
accomplish this by adding a Faculty Advisors table (similar to the Students table) and then adding a new field to the Clubs table to
hold the Faculty Advisor ID.

Data Types
When defining the fields in a database table, we must give each field a data type. For example, the field Birth Year is a year, so it
will be a number, while First Name will be text. Most modern databases allow for several different data types to be stored. Some of
the more common data types are listed here:

Text: for storing non-numeric data that is brief, generally under 256 characters. The database designer can identify the
maximum length of the text.
Number: for storing numbers. There are usually a few different number types that can be selected, depending on how large the largest number will be.
Yes/No: a special form of the number data type that is (usually) one byte long, with a 0 for “No” or “False” and a 1 for “Yes” or
“True.”
Date/Time: a special form of the number data type that can be interpreted as a date or a time.
Currency: a special form of the number data type that formats all values with a currency indicator and two decimal places.
Paragraph Text: this data type allows for text longer than 256 characters.
Object: this data type allows for data storage that cannot be entered via keyboards, such as an image or a music file.
Properly defining a field's data type is important because it improves the integrity of the data and determines how it is stored. The data type also tells the database what functions can be performed with the data. For example, if we wish to perform mathematical functions with one of the fields, we must tell the database that the field is a number data type. So if we have a field storing birth year, we can subtract the number stored in that field from the current year to get age.
Allocation of storage space for the defined data must also be identified. For example, if the First Name field is defined as a text(50)
data type, fifty characters are allocated for each first name we want to store. However, even if the first name is only five characters
long, fifty characters (bytes) will be allocated. While this may not seem like a big deal, if our table ends up holding 50,000 names,
we allocate 50 * 50,000 = 2,500,000 bytes for storage of these values. It may be prudent to reduce the field's size, so we do not
waste storage space.
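A quick check of that arithmetic, with the ten-character average in the second part added purely as an illustrative assumption:

```python
# Storage allocated by a fixed-width text(50) field, regardless of how
# much of it each value actually uses.
field_size_bytes = 50
record_count = 50_000

allocated = field_size_bytes * record_count
print(f"{allocated:,} bytes allocated")   # 2,500,000 bytes

# Assumption for illustration: if first names average 10 characters,
# a smaller field would save a substantial amount of space.
smaller_field = 10 * record_count
print(f"{allocated - smaller_field:,} bytes potentially saved")  # 2,000,000
```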

This page titled 4.4: Designing a Database is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.5: Sidebar- The Difference between a Database and a Spreadsheet
When students are introduced to the concept of databases, they often decide that a database is similar to a spreadsheet. There are some similarities, but there are some big differences that we will review. You might even say that a spreadsheet hopes to grow up to be a database one day.
Let's start with the spreadsheet. It is easy to create, edit and format. It is simple to use for beginners. It is made up of columns and
rows and stores data in an organized fashion similar to a database table. The two leading spreadsheet applications are Google
Sheets and Microsoft Excel. One of the very convenient things about spreadsheets is that they can easily be shared and edited by multiple users.
For simple uses, a spreadsheet can substitute for a database quite well. If a simple listing of rows and columns (a single table) is all
that is needed, then creating a database is probably overkill. In our Student Clubs example, if we only needed to track a listing of
clubs, the number of members, and the president's contact information, we could get away with a single spreadsheet. However, the
need to include a listing of events and members' names would be problematic if tracked with a spreadsheet.
When several types of data must be mixed, or when the relationships between these types of data are complex, then a spreadsheet is
not the best solution. A database allows data from several entities (such as students, clubs, memberships, and events) to be related
together into one whole. While a spreadsheet does allow you to define what kinds of values can be entered into its cells, a database
provides more intuitive and powerful ways to define the types of data that go into each field, reducing possible errors and allowing
for easier analysis. Though not good for replacing databases, spreadsheets can be ideal tools for analyzing the data stored in a
database. A spreadsheet package can be connected to a specific table or query in a database and used to create charts or perform
analysis on that data.
A database looks similar to a spreadsheet in that it uses tables made up of columns and rows, and it is a structured collection of raw material stored on the computer. A spreadsheet is easily edited cell by cell; a database is not, because each field (column) is preconfigured with a defined data type. A relational database can also create relationships between records and tables. Both spreadsheets and databases can be edited by multiple authors, but a database keeps a log of changes as they are made, which a spreadsheet typically does not. A spreadsheet is terrific for small projects, but a database becomes more useful as a project grows.

Figure 4.5.1 : Database computers. Image by Gerd Altmann from Pixabay is licensed under CC BY-SA 2.0

Streaming
Streaming is a new easy way to view on-demand audio or video from a remote server. Companies offer audio and video files from
their server that can be accessed remotely by the user. The data is transmitted from their server directly and continuously to your
device. Streaming can be accessed by any device that connects to the internet. There is no need for large memory or having to wait
for a large file to download. Streaming technology is becoming very popular because of its convenience and accessibility. Examples of streaming services include Netflix, iTunes, and YouTube.

Other Types of Databases


The relational database model is the most used today. However, many other database models exist that provide different strengths
than the relational model. In the 1960s and 1970s, the hierarchical database model connected data in a hierarchy, allowing for a

parent/child relationship between data. The document-centric model allowed for more unstructured data storage by placing data
into “documents” that could then be manipulated.
A newer concept is NoSQL (from the phrase "not only SQL"), which arose from the need to support large-scale databases spread over
several servers or even across the world. For a relational database to work properly, only one person must be able to manipulate a
piece of data at a time, a concept known as record-locking. But with today’s large-scale databases (think Google and Amazon), this
is not possible. A NoSQL database can work with data more loosely, allowing for a more unstructured environment,
communicating changes to the data over time to all the servers that are part of the database. Many companies collect data for all
sorts of reasons, from how many times you visit a site to what you are viewing at the site.

This page titled 4.5: Sidebar- The Difference between a Database and a Spreadsheet is shared under a CC BY 3.0 license and was authored,
remixed, and/or curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources
Initiative (OERI)) .

4.6: Big Data
Big Data refers to capturing large complex data sets that conventional database tools do not have the processing power to analyze.
Storing and analyzing that much data is beyond the power of traditional database management tools. Understanding the best tools
and techniques to manage and analyze these large data sets is a problem that governments and businesses alike are trying to solve.
Big data comes from many different sources, such as text, images, audio, and video. Businesses analyze this data using techniques referred to as predictive analytics or user behavior analytics. Companies such as Walmart and Amazon collect big data to see what their customers are searching for. Think of the number of customers and products these two powerhouses have and the amount of data they generate.

This page titled 4.6: Big Data is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal Desai-
Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.7: Data Warehouse
As organizations have begun to utilize databases as the centerpiece of their operations, the need to fully understand and leverage
the data they are collecting has become more and more apparent. However, directly analyzing the data needed for day-to-day
operations is not a good idea; we do not want to tax the company's operations more than we need to. Further, organizations also
want to analyze data in a historical sense: How does the data we have today compare with the same data set this time last month or
last year? From these needs arose the concept of the data warehouse.
The data warehouse concept is simple: extract data from one or more of the organization’s databases and load it into the data
warehouse (which is itself another database) for storage and analysis. However, the execution of this concept is not that simple. A
data warehouse should be designed so that it meets the following criteria:
It uses non-operational data. This means that the data warehouse uses a copy of data from the active databases that the company
uses in its day-to-day operations, so the data warehouse must pull data from the existing databases on a regular, scheduled basis.
The data is time-variant. This means that whenever data is loaded into the data warehouse, it receives a timestamp, which
allows for comparisons between different time periods.
The data is standardized. Because the data in a data warehouse usually comes from several different sources, it is possible that
the data does not use the same definitions or units. For example, our Events table in our Student Clubs database lists the event
dates using the mm/dd/yyyy format (e.g., 01/10/2013). A table in another database might use the format yy/mm/dd (e.g.,
13/01/10) for dates. For the data warehouse to match up the dates, a standard date format would have to be agreed upon, and all data loaded into the data warehouse would have to be converted to use this standard format; a small sketch of such a conversion follows this list. This process is called extraction-transformation-load (ETL).
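Below is a minimal sketch of that transformation step, assuming two hypothetical source systems that record dates in the two formats mentioned above:

from datetime import datetime

source_a = ["01/10/2013", "02/14/2013"]   # mm/dd/yyyy dates from one system
source_b = ["13/01/10", "13/02/14"]       # yy/mm/dd dates from another system

def standardize(dates, fmt):
    # Transform step of ETL: convert every date to one agreed-upon ISO format.
    return [datetime.strptime(d, fmt).strftime("%Y-%m-%d") for d in dates]

warehouse_dates = standardize(source_a, "%m/%d/%Y") + standardize(source_b, "%y/%m/%d")
print(warehouse_dates)   # ['2013-01-10', '2013-02-14', '2013-01-10', '2013-02-14']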
There are two primary schools of thought when designing a data warehouse: bottom-up and top-down. The bottom-up approach
starts by creating small data warehouses, called data marts, to solve specific business problems. As these data marts are created,
they can be combined into a larger data warehouse. The top-down approach suggests that we should start by creating an enterprise-
wide data warehouse and then, as specific business needs are identified, create smaller data marts from the data warehouse.

Figure 4.7.1 : Data warehouse process (top down). Image by Soha jamil is licensed under CC BY-SA 4.0

Benefits of Data Warehouses


Organizations find data warehouses quite beneficial for many reasons:
Ability to integrate data from multiple systems formatted with different software and compile it to gain deeper insight.
The process of developing a data warehouse forces an organization to better understand the data it is currently collecting
and, equally important, what data is not being collected.
A data warehouse provides a centralized view of all data being collected across the enterprise and provides a means for
determining inconsistent data.

Once all data is identified as consistent, an organization can generate one version of the truth. This is important when the
company wants to report consistent statistics about itself, such as revenue or number of employees.
By having a data warehouse, snapshots of data can be taken over time. This creates a historical record of data, which allows for
an analysis of trends.
A data warehouse provides tools to combine data, which can provide new information and analysis.

This page titled 4.7: Data Warehouse is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.8: Data Mining
Data mining is the process of sorting through big data (often measured in terabytes). In the past, the problem was a lack of data to analyze; today, the challenge is an overabundance of data that must be reviewed, sometimes called data overload. This becomes an issue because the user needs to evaluate which information is useful and which is not. Many businesses mine data to gain detailed insight into their customers and products and to optimize business decisions. The analysis is executed with sophisticated programs that can combine multiple databases. The results are complex enough that companies must find a way to store the data, which is where data warehouses come in: the data warehouse is where the information used and produced by data mining is stored and processed. The price for even a simple warehouse can start at $10 million.
Companies like Google, Netflix, Amazon, and Facebook are big users of data mining. They seek to find out who their consumer is
and how best to keep them and sell them more products. They also review their products. The means used are reviewing data and
finding trends, patterns, and associations to make decisions. Generally, data mining is accomplished through automated means
against extensive data sets, such as a data warehouse.
Examples of data mining include:
An analysis of sales from a large grocery chain might determine that milk is purchased more frequently the day after it rains in
cities with a population of less than 50,000.
A bank may find that loan applicants whose bank accounts show particular deposit and withdrawal patterns are not good credit
risks.
A baseball team may find that collegiate baseball players with specific statistics in hitting, pitching, and fielding go on to
become more successful major league players.
In some cases, a data-mining project is begun with a hypothetical result in mind. For example, a grocery chain may already have
some idea that the buying patterns change after it rains and want to get a deeper understanding of exactly what is happening. In
other cases, there are no presuppositions, and a data-mining program is run against large data sets to find patterns and associations.
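As a toy illustration of the grocery example (the values and column names are invented), a data-mining pass over a sales table might look for the rain/milk pattern like this:

import pandas as pd

# Hypothetical daily sales records from a grocery chain.
sales = pd.DataFrame({
    "city_population": [30000, 30000, 45000, 80000, 80000],
    "rained_yesterday": [True, False, True, True, False],
    "milk_units_sold":  [520, 310, 480, 900, 880],
})

# Does milk sell better the day after it rains in cities under 50,000 people?
small_cities = sales[sales["city_population"] < 50000]
print(small_cities.groupby("rained_yesterday")["milk_units_sold"].mean())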

This page titled 4.8: Data Mining is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.9: Database Management Systems
A database looks like one or more files. For the data in the database to be read, changed, added, or removed, a software program
must access it. The software creates a database by building tables, forms, reports, and other important variables. Many software
applications have this ability: iTunes can read its database to give you a listing of its songs (and play the songs); your mobile-phone
software can interact with your list of contacts. Companies of all sizes use this kind of software to organize the data they have collected so it can serve multiple purposes, such as marketing, customer service, and sales. Database management systems help businesses collect complex data and customize it for their own use. When selecting a Database Management System (DBMS), a company needs to establish goals and know what it wants the software to do: What software can be used to create a database, change a database’s structure, or analyze its data? For example, Apache OpenOffice.org Base can create, modify, and analyze databases in open-database (ODB) format. Microsoft’s Access DBMS is used to work with databases in its own Microsoft Access Database format. Both Access and Base have the ability to read and write other database formats as well.
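The same core DBMS tasks can also be carried out programmatically. The following is a minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")        # throwaway in-memory database
cur = conn.cursor()

# Create a table, then change the database's structure by adding a column.
cur.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")
cur.execute("ALTER TABLE contacts ADD COLUMN email TEXT")

# Add data and run a simple analysis query.
cur.execute("INSERT INTO contacts VALUES ('Ana', '555-0100', 'ana@example.com')")
count = cur.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
print("Number of contacts:", count)
conn.close()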

Figure 4.9.1 : Open Office database management system. Image by David Bourgeois, Ph.D. is licensed under CC BY 4.0
Microsoft Access and Open Office Base are examples of personal database-management systems. These systems are primarily used
to develop and analyze single-user databases. These databases are not meant to be shared across a network or the Internet but are
instead installed on a particular device and work with a single user at a time.

This page titled 4.9: Database Management Systems is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong
T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.10: Enterprise Databases
Small and large organizations alike use enterprise databases to manage large, complex collections of data. An enterprise database is robust enough to handle queries from multiple users simultaneously and can support anywhere from 100 to 10,000 users at a time (Technopedia, 2020). Computers have become networked and are now joined worldwide via the Internet, and a class of
databases has emerged that can be accessed by two, ten, or even a million people. These databases are sometimes installed on a
single computer to be accessed by a group of people at a single location or a small company. They can also be installed over several
servers worldwide, meant to be accessed by millions in large companies. These relational enterprise database packages are built
and supported by companies such as Oracle, Microsoft, and IBM. The open-source MySQL is also an enterprise database. Open-
source databases are free and can be shared, storing vital information in software that the organization can control. An open-source
database allows users to create a system based on their unique requirements and business needs. The source code can be
customized to match any user preference. Open-source databases address the need to analyze data from a growing number of new
applications at a lower cost. The deluge of social media and the Internet of Things (IoT) has ushered an age of massive data that
needs to be collected and analyzed. The data only has value if an enterprise can analyze it to find useful patterns or real-time
insights. The data contains vast amounts of information that can overload a traditional database. The flexibility and cost-
effectiveness of open source database software have revolutionized database management systems. (Omnisci, 2020).

 Sidebar: What Is Metadata?


The term metadata can be understood as “data about data.” For example, when looking at one of Year of Birth's values in the
Students table, the data itself may be "1992". The metadata about that value would be the field name Year of Birth, the last
updated time, and the data type (integer). Another example of metadata could be for an MP3 music file, like the one shown in
the image below; information such as the song's length, the artist, the album, the file size, and even the album cover art is
classified as metadata. When a database is being designed, a “data dictionary” is created to hold the metadata, defining its
fields and structure.
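As a small illustration, a DBMS can report this kind of metadata on request. The sketch below uses SQLite's table_info pragma against a hypothetical Students table to list each field's name and data type:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Students (student_id INTEGER, name TEXT, year_of_birth INTEGER)")

# Metadata, i.e., "data about data": each row describes one field of the table.
for column in conn.execute("PRAGMA table_info(Students)"):
    print(column)   # (position, field name, data type, not-null flag, default, primary-key flag)
conn.close()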

Data Governance
Data governance is the process of managing the availability, integrity, and usability of data in enterprise systems. Proper data governance ensures that data is consistent, trustworthy, and secure. We are in a time when organizations must pay close attention to privacy regulations and increasingly rely on data analytics to optimize decision making and operations. Data governance applies at both the micro and macro levels. At the micro level, the focus is on the individual organization, ensuring high data quality throughout the data lifecycle to achieve business objectives. The macro level refers to cross-border data flows between countries, which is called international data governance.

References
Omnisci (2020). Definition of an Open Source Database. Retrieved September 1, 2020, from https://www.omnisci.com/technical-glossary/open-source-database
Technopedia (2020). Definition of Enterprise Database. Retrieved September 1, 2020, from https://www.techopedia.com/definition/31683

This page titled 4.10: Enterprise Databases is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.11: Knowledge Management
We end the chapter with a discussion on the concept of knowledge management (KM). All companies accumulate knowledge over
the course of their existence. Some of this knowledge is written down or saved, but not in an organized fashion. Much of this
knowledge is not written down; instead, it is stored inside its employees' heads. Knowledge management is the process of
formalizing the capture, indexing, and storing of the company’s knowledge to benefit from the experiences and insights that the
company has captured during its existence.

Privacy Concerns

Figure 4.11.1 : Cybersecurity. Image by Pete Linforth from Pixabay is licensed under CC BY-SA 2.0
The increasing power of data mining has caused concerns for many, especially in the area of privacy. It is becoming easier in
today’s digital world than ever to take data from disparate sources and combine them to do new forms of analysis. In fact, a whole
industry has sprung up around this technology: data brokers. These firms combine publicly accessible data with information
obtained from the government and other sources to create vast warehouses of data about people and companies that they can then
sell. This subject will be covered in detail in chapter 12 – the chapter on the ethical concerns of information systems.

This page titled 4.11: Knowledge Management is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.12: Sidebar- What is data science?
Sidebar: What is data science?
Data science takes structured and unstructured data and uses scientific methods, processes, algorithms, and systems to extract knowledge and insight. It begins by procuring data from many sources such as web servers, logs, databases, APIs (application program interfaces), and online repositories. Once the data has been acquired, it must be cleaned and moved through a pipeline: relevant and usable data is sorted and organized, which is the transformation process. Data modeling comes next; the goal is to create the model that best suits the company's needs for the data, using metrics, algorithms, and analytics, and eventually to progress to AI, deep learning, or machine learning. In short, data science solves company problems using data.
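A tiny sketch of the cleaning and transformation steps, using invented column names and values, might look like this:

import pandas as pd

# Hypothetical raw records acquired from a web server log or an API.
raw = pd.DataFrame({
    "user":  ["al", "al", None, "bo"],
    "spend": ["10.50", "7", "3.25", "bad_value"],
})

# Clean: drop incomplete rows and convert text amounts to numbers.
clean = raw.dropna(subset=["user"]).copy()
clean["spend"] = pd.to_numeric(clean["spend"], errors="coerce")
clean = clean.dropna(subset=["spend"])

# Transform: aggregate into a shape that is ready for modeling or analytics.
print(clean.groupby("user")["spend"].sum())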
Structured Data - Is data that is found in a fixed field within a record or file. It includes data contained in relational databases
and spreadsheets. Such as:
Date
Time
Census Data
Facebook “Likes”
Unstructured Data - Is information that is not organized and does not have a pre-defined model. Such as:
Body of emails
Tweets
Facebook Status
Video Transcripts

What is data analytics?


Data analytics takes raw data gathered through data mining and analyzes it to uncover relationships and patterns that provide insight. Companies use these analytics to improve problem-solving and assist in decision-making. The information helps them understand who their consumers are and how to market their company or product, all of which helps create efficiency and streamline operations. Because data is collected continuously, the analysis can be adjusted as new conditions arise. Today's data analytics are deeper, more abundant, and retrieved faster than in years past. The information is more accurate and detailed, which accelerates successful problem-solving.

Figure 4.12.1 : Analytic information. Image by xresch from Pixabay is licensed CC BY-SA 2.0

Business Intelligence and Business Analytics


With tools such as data warehousing and data mining at their disposal, businesses are learning how to use the information they collect to their advantage. The term business intelligence describes how organizations take the data they are collecting and analyze it to obtain a competitive advantage. Besides using data from their internal databases, firms often purchase information from data brokers to get a big-picture understanding of their industries. Business analytics is the term used to describe the use of internal company data to improve business processes and practices.

This page titled 4.12: Sidebar- What is data science? is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong
T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.13: Summary
Summary
In this chapter, we learned about the role that data and databases play in the context of information systems. Data is made up of
small facts and information without context. If you give data context, then you have information. Knowledge is gained when
information is consumed and used for decision-making. A database is an organized collection of related information. Relational
databases are the most widely used type of database, where data is structured into tables, and all tables must be related to each other
through unique identifiers. A database management system (DBMS) is a software application used to create and manage databases; it can take the form of a personal DBMS, used by one person or small business, or an enterprise DBMS that can be used by multiple users. A data warehouse is a special form of database that takes data from other databases in an enterprise and organizes it for
analysis. Data mining is the process of looking for patterns and relationships in large data sets. Many businesses use databases, big
data, data warehouses, and data-mining techniques to produce business intelligence and gain a competitive advantage.

This page titled 4.13: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

4.14: Study Questions
Study Questions
1. What is the difference between data, information, and knowledge?
2. Explain in your own words the difference between hardware and software components of information systems.
3. What is the difference between quantitative data and qualitative data? In what situations could the number 63 be considered
qualitative data?
4. What are the characteristics of a relational database?
5. When would using a personal DBMS make sense?
6. What is the difference between a spreadsheet and a database? List three differences between them.
7. Describe what the term normalization means.
8. What is Big Data?
9. Name a database you interact with frequently. What would some of the field names be?
10. Describe what open-source data is and what its benefits are.
11. Name three advantages of using a data warehouse.
12. What is data mining?

Exercises
1. Review the design of the Student Clubs database earlier in this chapter. Reviewing the lists of data types given, what data types would you assign to each of the fields in each of the tables? What lengths would you assign to the text fields?
2. Review structured and unstructured data and list five reasons to use each.
3. Using Microsoft Access, download the database file of comprehensive baseball statistics from the website SeanLahman.com. (If you don’t have Microsoft Access, you can download an abridged version of the file here that is compatible with Apache Open Office). Review the structure of the tables included in the database. Come up with three different data-mining experiments you would like to try, and explain which fields in which tables would have to be analyzed.
4. Do some original research and find two examples of data mining. Summarize each example and then write about what the two examples have in common.
5. Conduct some independent research on the process of business intelligence. Using at least two scholarly or practitioner sources, write a two-page paper giving examples of how business intelligence is being used.
6. Conduct some independent research on the latest technologies being used for knowledge management. Using at least two scholarly or practitioner sources, write a two-page paper giving examples of software applications or new technologies being used in this field.

This page titled 4.14: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

5: Networking and Communication


 Learning Objectives

Upon successful completion of this chapter, you will be able to:


Understand how multiple networks are used in everyday life.
Define how topologies and devices are connected in a small to medium-sized business network.
Understand the basic characteristics of a network that supports communication in a small to medium-sized business.
Describe trends in networking that will affect the use of networks in small to medium-sized businesses.

Today’s computing and smart devices are expected to be always connected to support the way we learn, communicate, do business, work, and play, in any place, on any device, and at any time. In this chapter, we review the history of networking, how the Internet works, and the use of multiple networks in organizations today.
5.1: Introduction to Networking and Communication
5.2: A Brief History of the Internet
5.3: Networking Today
5.4: How has the Human Network Influenced you?
5.5: Providing Resources in a Network
5.6: LANs, WANs, and the Internet
5.7: Network Representations
5.8: The Internet, Intranets, and Extranets
5.9: Internet Connections
5.10: The Network as a Platform Converged Networks
5.11: Reliable Network
5.12: The Changing Network Environment Network Trends
5.13: Technology Trends in the Home
5.14: Network Security
5.15: Summary
5.16: Study Questions

This page titled 5: Networking and Communication is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong
T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.1: Introduction to Networking and Communication
We are at a critical turning point, with many innovations expanding and engaging our capacity to communicate. The globalization of the Internet has happened faster than anyone envisioned. The ways social, commercial, political, and personal interactions occur are changing rapidly to keep up with the evolution of this global network. Innovators use the Internet as a starting point for their efforts, creating new products and services specifically designed to take advantage of network capabilities. As developers push the limits of what is possible, the capabilities of the interconnected systems that form the Internet play an expanding role in the success of these projects.
This chapter presents a brief history of the Internet and the information systems platform on which our social and commercial connections increasingly depend. The material lays the foundation for exploring the services, technologies, and issues encountered by network professionals as they design, build, and maintain the modern network.

This page titled 5.1: Introduction to Networking and Communication is shared under a CC BY 3.0 license and was authored, remixed, and/or
curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative
(OERI)) .

5.2: A Brief History of the Internet
In the Beginning: ARPANET
The story of the Internet and networking can be traced back to the late 1950s. The US was in the Cold War's depths with the USSR,
and each nation closely watched the other to determine which would gain a military or intelligence advantage. In 1957, the Soviets
surprised the US with the launch of Sputnik, propelling us into the space age. In response to Sputnik, the US Government created
the Advanced Research Projects Agency (ARPA), whose initial role was to ensure that the US was not surprised again. From
ARPA, now called DARPA (Defense Advanced Research Projects Agency), the Internet first sprang. ARPA was the center of
computing research in the 1960s, but there was just one problem: many computers could not talk to each other. In 1968, ARPA sent
out a request for a communication technology proposal that would allow different computers located around the country to be
integrated into one network. Twelve companies responded to the request, and a company named Bolt, Beranek, and Newman
(BBN) won the contract and developed the first protocol for the network (Roberts, 1978). They began work right away and
completed the job just one year later: in September 1969, the ARPANET was turned on. The first four nodes were at UCLA,
Stanford, MIT, and the University of Utah.

The Internet and the World Wide Web


Over the next decade, the ARPANET grew and gained popularity. During this time, other networks also came into existence.
Different organizations were connected to different networks. This led to a problem: the networks could not talk to each other. Each
network used its own proprietary language or protocol (see sidebar for the definition of protocol) to send information back and
forth. This problem was solved using the transmission control protocol/Internet protocol (TCP/IP). TCP/IP was designed to allow
networks running on different protocols to have an intermediary protocol that would allow them to communicate. So as long as a network supported TCP/IP, its users could communicate with all other networks running TCP/IP. TCP/IP quickly became the
standard protocol and allowed networks to communicate with each other. We first got the term Internet from this breakthrough,
which means “an interconnected network of networks.”
As we moved into the 1980s, computers were added to the Internet at an increasing rate. These computers were primarily from
government, academic, and research organizations. Much to the engineers' surprise, the early popularity of the Internet was driven
by the use of electronic mail (see sidebar below). Using the Internet in these early days was not easy. To access information on
another server, you had to know how to type in the commands necessary to access it and know the name of that device. That all
changed in 1990 when Tim Berners-Lee introduced his World Wide Web project, which provided an easy way to navigate the
Internet through the use of linked text (hypertext). The World Wide Web gained even more steam with the release of the Mosaic
browser in 1993, which allowed graphics and text to be combined to present information and navigate the Internet. The Mosaic
browser took off in popularity and was soon superseded by Netscape Navigator, the first commercial web browser, in 1994. The
chart below shows the growth in internet users globally.

Figure 5.2.1 : Graph of "Internet users per 100 inhabitants 1997 to 2017", years on the x-axis, number of users on the y-axis,
according to the International Telecommunication Union (ITU). Image by Jeff Ogden (W163) and Jim Scarborough (Ke4roh) is
licensed CC BY-SA
According to the International Telecommunication Union (ITU, 2020), by the end of 2019 over 53.6% of the world's population, or 4.1 billion people, were using the internet.
The Internet has evolved from Web 1.0 to 2.0 (discussed in Chapter 1) to the many popular social media websites today.

Sidebar: “Killer” Apps for the Internet


When the personal computer was created, it was a great little toy for technology hobbyists and armchair programmers. As soon as
the spreadsheet was invented, businesses took notice, and the rest is history. The spreadsheet was the killer app for the personal
computer: people bought PCs to run spreadsheets.
The Internet was originally designed as a way for scientists and researchers to share information and computing power among
themselves. However, as soon as electronic mail was invented, it began driving demand for the Internet.
We are seeing this again today with social networks such as Facebook and Instagram. Many who weren’t convinced to have an online
presence now feel left out without a social media account.
These killer apps and widespread adoption of the internet have driven explosive growth for information systems globally.

 Sidebar: The Internet and the World Wide Web Are Not the Same Things

Many times, the terms “Internet” and “World Wide Web,” or even just “the web,” are used interchangeably. However, they are
not the same thing at all!
The Internet is an interconnected network of networks. Many services run across the Internet: electronic mail, voice and video,
file transfers, and, yes, the World Wide Web. The World Wide Web is simply one piece of the Internet. It is made up of web
servers with HTML pages being viewed on devices with web browsers.

References
ITU estimate of global population using the internet. Retrieved September 6, 2020, from https://www.itu.int/en/ITU-
D/Statistics/Pages/stat/default.aspx
Roberts, Lawrence G., The Evolution of Packet Switching, (1978, November). Retrieved on September 6, 2020, from
www.ismlab.usf.edu/dcom/Ch10_Roberts_EvolutionPacketSwitching_IEEE_1978.pdf

This page titled 5.2: A Brief History of the Internet is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong
T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.3: Networking Today
Networks in Our Daily Lives
Among all of the essentials for human existence, the need to interact with others ranks just below our need to sustain life. Communication is nearly as important to us as our reliance on air, water, food, and shelter.
Today, networks enable people to connect from anywhere. Individuals can communicate and collaborate with others instantly. New ideas and discoveries are shared with the world in seconds. People can connect and play with others across oceans and continents, wherever they happen to be.

Figure 5.3.1 : Global Networking. Image by Gerd Altmann from Pixabay is licensed CC BY 2.0

Technology Then and Now


Imagine a world without the Internet. No more Google, YouTube, texting, Facebook, Wikipedia, online gaming, Netflix, iTunes, or easy access to current information. No more social media, avoiding lines by shopping online, or quickly looking up phone numbers and directions to different places at the snap of a finger. How different would our lives be without all of this? That was the world we lived in only 15 to 20 years ago, as discussed in Chapter 1. Over the years, information systems have gradually expanded and been repurposed to improve the quality of life for people everywhere.

No Boundaries
Advances in networking technologies are perhaps the most significant changes in the world today. They help create a world in which national borders, geographic distances, and physical limitations become less relevant and present ever-diminishing obstacles.

Figure 5.3.2 : Registered trademark of Cisco Systems, Inc.


Cisco Systems Inc. refers to this as the human network. The human network focuses on the impact of the Internet and networks on individuals and organizations.

This page titled 5.3: Networking Today is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.4: How has the Human Network Influenced you?
Networks Support the Way We Learn
Networks have changed how we learn. Access to high-quality instruction is no longer confined to students living near the place where that instruction is delivered.
Online distance learning has removed geographic barriers and improved opportunities for students. Robust and reliable networks support and enhance student learning experiences. They deliver learning material in a wide range of formats, including interactive activities, assessments, and feedback.

Networks Support the Way We Communicate


The globalization of the Internet has introduced new forms of communication that empower people to create information that a worldwide audience can access.

Figure 5.4.1 : Silver iMac near iPhone on brown wooden table. Image by Domenico Loia on Unsplash is licensed CC BY SA 2.0
A few types of communication include:
Messaging: Texting enables instant, real-time communication between two or more people. WhatsApp and Skype are examples of messaging tools that have gained huge popularity.
Social media: Social media consists of interactive websites where individuals and communities create and share user-generated content with friends, family, peers, and the world. Facebook, Twitter, and LinkedIn are among the biggest social media platforms at this time.
Collaboration tools: Without the limitations of location or time zone, collaboration tools allow people to communicate with one another, often over real-time interactive video. The broad distribution of information systems means that people in remote areas can contribute on an equal basis with people in the heart of densely populated places. An example would be online gaming, where several players are connected to the same server.
Blogs: Blogs is a shortened form of "weblogs." In contrast to commercial websites, blogs give anyone a way to share their thoughts with a worldwide audience without technical knowledge of web design.
Wikis: Wikis are web pages that groups of people can edit and view together. Whereas a blog is usually written by one individual, much like a personal journal, a wiki collects contributions from many people and may therefore be subject to broader review and editing. Many organizations use wikis as their internal collaboration tool.
Podcasting: Podcasting allows people to deliver their audio recordings to a wide audience. The audio file is placed on a website (or blog or wiki) where others can download it and play the recording on their computers, laptops, and other mobile devices.
Peer-to-Peer (P2P) File Sharing: Peer-to-peer file sharing allows people to share files with one another without storing them on and downloading them from a central server. A user joins the P2P network simply by installing the P2P software. Not everyone has embraced P2P file sharing, because many people are concerned about violating the laws governing copyrighted materials.
Napster, released in 1999, was the first generation of P2P systems. Some well-known P2P systems are Xunlei, BitTorrent, and Gnutella.

Networks Support the Way We Work
In the business world, networks were first used by organizations internally to manage financial data, customer data, and employee payroll systems. These business networks evolved to enable the transmission of many different types of information services, including email, video, messaging, and telephony.
Networks are also increasingly used to train employees and improve their effectiveness and efficiency. Online learning opportunities can reduce time-consuming and costly travel while still ensuring that all employees are adequately trained to perform their jobs in a safe and productive way.

Networks Support the Way We Play


The Internet is used for traditional forms of entertainment. We listen to music, watch movies, read whole books, and download material for offline access later. Live sporting events and concerts can be experienced as they happen, or recorded and viewed on demand.
Networks also enable the creation of new forms of entertainment, such as online games. Online multiplayer games have become very popular because they allow friends to play together virtually when they can’t meet in person.
Even offline activities are enhanced by network collaboration services. Around the world, people with the same interests can interact with each other quickly. We share common experiences and hobbies well beyond our local neighborhood, city, or region. Sports fans share opinions and facts about their favorite teams. Collectors display prized collections and get expert feedback about them.
Whatever type of entertainment we enjoy, networks are improving our experience. How do you play on the Internet?

This page titled 5.4: How has the Human Network Influenced you? is shared under a CC BY 3.0 license and was authored, remixed, and/or
curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative
(OERI)) .

5.5: Providing Resources in a Network
Networks of Many Sizes
Networks come in all sizes. They range from simple networks consisting of two PCs to networks connecting many devices. Simple networks installed in homes enable the sharing of resources, such as printers, documents, pictures, and music, between a few local computers.
Worldwide internet users expect always to stay connected to the internet. They expect their connected devices to do the following:
Stay connected to the internet to complete their work.
Have the ability to send and receive data fast.
Have the ability to send small and large quantities of data globally via any device connected to the internet.
Home office and small office networks are often set up by people who work from home or from remote offices and need to connect to a corporate network or other centralized resources. Moreover, many self-employed entrepreneurs use home office and small office networks to advertise and sell products, order supplies, and communicate with customers.
The Internet is the largest network in existence. Indeed, the term Internet means a network of networks. The internet is the worldwide network that connects millions of computers around the globe; via the internet, a computer can connect to another computer in a different country.

Clients and Servers


All computers connected to a network are called hosts. Hosts are also called end devices.
Servers are computers with software that enables them to provide information, such as email or web pages, to other network devices called clients. Each service requires separate server software. For instance, a server requires web server software to provide web services to the network. A computer with server software can provide services simultaneously to one or many clients. Furthermore, a single computer can run multiple types of server software; in a home or small business, it may be necessary for one computer to act as a file server, a web server, and an email server.
Clients are computers with software installed that enables them to request and display the information obtained from the server. An example of client software is a web browser, such as Chrome or Firefox. A single computer can also run multiple types of client software. For example, a user can check email and view a web page while instant messaging and listening to Internet radio.
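A minimal sketch of the client/server idea using only Python's standard library (the address, port, and messages are arbitrary): one process runs server software that answers a single request, while client software sends a request and displays the reply.

import socket
import threading
import time

def run_server():
    # Server software: listen for one request and send data back to the client.
    with socket.create_server(("127.0.0.1", 8080)) as srv:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(("You asked for: " + request).encode())

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.5)   # give the server a moment to start listening

# Client software: request data from the server and display the answer.
with socket.create_connection(("127.0.0.1", 8080)) as client:
    client.sendall(b"homepage")
    print(client.recv(1024).decode())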

Peer-to-Peer
Client and server software ordinarily run on separate computers, but it is also possible for one computer to carry out both roles at the same time. In small businesses and homes, many hosts function as both servers and clients on the network. This type of network is known as a peer-to-peer network. An example would be several users connected to the same printer from their individual devices.

This page titled 5.5: Providing Resources in a Network is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.6: LANs, WANs, and the Internet
Overview of Network Components
The link between the sender and the receiver can be as simple as a single cable connecting the two devices or as sophisticated as a set of switches and routers between them.

Figure 5.6.1 : Lan-wan Networks. Image by Stuart Gray is licensed CC BY-SA


The network framework contains three classes of network segments:
Devices
Media
Services
Devices and media are the physical components, or hardware, of the network. Hardware is typically the visible part of the network platform, such as a PC, a switch, a wireless access point, or the cabling used to connect the devices.
Services include many of the common network applications people use every day, such as email hosting services and web hosting services. Processes provide the functionality that directs and moves messages through the network. Processes are less obvious to us but are essential to the operation of networks.

End Devices
An end device is either the source or destination of a message transmitted over the network. Each end device is identified by an IP
address and a physical address. Both addresses are needed to communicate over a network. An IP address is a unique logical address assigned to every device within a network. If a device moves from one network to another, its IP address has to change.
Physical addresses, also known as MAC (Media Access Control) addresses, are unique addresses assigned by the device
manufacturers. These addresses are permanently burned into the hardware.

Intermediary Network Devices


Some devices act as intermediaries between end devices. These intermediary devices provide connectivity and ensure that information flows across the network.
Routers use the destination end device's address, together with information about the network interconnections, to determine the path that messages should take through the network.

Network Media
Data is transported across a network on a medium called network media. The medium provides the channel over which the message travels from source to destination.
Modern networks primarily use three types of media to interconnect devices and provide the pathway over which information can be transmitted.

These media are:
Metallic wires within cables (copper) - information is encoded into electrical impulses.
Glass or plastic fibers (fiber optic cable) - information is encoded as pulses of light.
Wireless transmission - information is encoded using frequencies from the electromagnetic spectrum.
Different types of network media have different features and benefits. Not all network media have the same characteristics, nor are they all appropriate for the same purpose.

Figure 5.6.2 : Network Cables. Image by blickpixel from Pixabay is licensed CC BY SA

Figure 5.6.3 : Fiber Optic Cable. Image by blickpixel from Pixabay is licensed CC BY SA

Bluetooth
While Bluetooth is not generally used to connect a device to the Internet, it is an important wireless technology that has enabled
many functionalities that are used every day. When created in 1994 by Ericsson, it was intended to replace wired connections
between devices. Today, it is the standard method for connecting nearby devices wirelessly. Bluetooth has a range of approximately
300 feet and consumes very little power, making it an excellent choice for various purposes.

Figure 5.6.4 : Bluetooth combo wordmark 2011. Image by House is licensed under Public Domain
Some applications of Bluetooth include: connecting a printer to a personal computer, connecting a mobile phone and headset,
connecting a wireless keyboard and mouse to a computer, and connecting a remote for a presentation made on a personal computer.

This page titled 5.6: LANs, WANs, and the Internet is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong
T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.7: Network Representations
To draw a diagram of a network, network professionals use symbols to represent the different devices and connections that make up a network.
A diagram provides an easy way to see how devices in a large network are connected. This kind of "picture" of a network is known as a topology diagram. The ability to recognize the logical representations of the physical networking components is essential for visualizing the organization and operation of a network.
In addition to these representations, specialized terminology is used when discussing how these devices and media connect to one another. Important terms to remember are:
Network Interface Card: A NIC, or LAN adapter, provides the physical connection to the network for the PC or other end device. The media that connect the PC to the networking device plug directly into the NIC.
Physical Port: A connector or outlet on a networking device where the media is connected to an end device or another networking device.
Interface: Specialized ports on a networking device that connect to individual networks. Because routers and switches are used to interconnect networks, their ports are referred to as network interfaces.

Topology Diagrams
Understanding topology diagrams is required for anybody working with a network. They provide a visual map of how the network is connected.
There are two kinds of topology diagrams:
Physical topology and logical topology diagrams. Physical topology diagrams identify the physical location of intermediary devices and cable installations.
Logical topology diagrams identify devices, addressing schemes, and ports.
Physical topology is fairly self-explanatory: it shows how devices are physically interconnected with cables and wires. Logical topology shows how the connected devices appear to the user.

Types of Networks
Network infrastructures can vary greatly in terms of:
Size of the area covered
Number of users connected
Number and types of services available
Area of responsibility
The two most common types of network infrastructures are:
Local Area Network (LAN): A network infrastructure that provides access to users and end devices in a small geographical area, typically an enterprise, small business, or home network owned and managed by an individual or an IT department.
Wide Area Network (WAN): A network infrastructure that provides access to other networks over a wide geographical area, typically owned and managed by a telecommunications service provider.
Other kinds of networks include:
Metropolitan Area Network (MAN): A network infrastructure that spans a physical area larger than a LAN but smaller than a WAN (e.g., a city). MANs are typically operated by a single entity, such as a large organization.
Wireless LAN (WLAN): Similar to a LAN but wirelessly interconnects users and endpoints in a small geographical area.
Storage Area Network (SAN): A network infrastructure designed to support file servers and provide data storage, retrieval, and replication.

Local Area Networks


LANs are a network infrastructure that spans a small geographical area. Specific features of LANs include:

LANs interconnect end devices in a limited area, such as a home, school, office building, or campus.
A LAN is usually administered by a single organization or individual. The administrative control that governs the security and access control policies is enforced at the network level.
LANs provide high-speed bandwidth to internal end devices and intermediary devices.

Figure 5.7.1 : Local Area Network. Image by T.seppelt, derivative work from File:Ethernet.png, including content of the Open Clip
Art Library, by © 2007 Nuno Pinheiro & David Vignoni & David Miller & Johann Ollivier Lapeyre & Kenneth Wimer & Riccardo
Iaconelli / KDE / LGPL 3, User:George Shuklin and the Tango Project! is licensed CC BY-SA

Wide Area Networks


WANs are a network infrastructure that spans a wide geographical area. WANs are typically managed by service providers (SPs) or Internet Service Providers (ISPs).
Specific features of WANs include:
WANs interconnect LANs over wide geographical areas, such as between cities, states, provinces, countries, or continents.
WANs are usually administered by multiple service providers.
WANs typically provide slower-speed links between LANs.

Figure 5.7.2 : LAN WAN scheme. Image by Gateway_firewall.svg: Harald Mühlböck derivative work: Ggia is licensed CC BY-SA

This page titled 5.7: Network Representations is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.8: The Internet, Intranets, and Extranets
The Internet
The Internet is a worldwide collection of interconnected networks (internetworks, or internet for short).
Individual LANs are connected to one another through WAN connections, and WANs are then connected to one another. These WAN connections represent the many ways we link networks: WANs can connect through copper wires, fiber optic cables, and wireless transmissions.
No individual or group owns the Internet. Ensuring effective communication across this diverse infrastructure requires the application of consistent and commonly recognized technologies and standards, as well as the cooperation of many network administration agencies. Several organizations have been formed to maintain the structure and standardization of Internet protocols and processes. These organizations include the Internet Engineering Task Force (IETF), the Internet Corporation for Assigned Names and Numbers (ICANN), and the Internet Architecture Board (IAB), in addition to many others.
Have you ever wondered how your smartphone can function the way it does? Have you ever wondered how you can search for information on the web and find it within milliseconds? The world’s largest implementation of client/server computing and internetworking is the Internet, which is also the most extensive public means of communication. The internet began in the 20th century as a network for the U.S. Department of Defense to globally connect university professors and scientists. Most small businesses and homes access the internet by subscribing to an internet service provider (ISP), a commercial organization with a permanent connection to the internet that sells temporary connections to retail subscribers; examples include AT&T, NetZero, and T-Mobile. A DSL (digital subscriber line) operates over existing telephone lines to carry data, voice, and video. The foundation of the internet is the TCP/IP networking protocol suite. When two users on the internet exchange messages, each message is decomposed into packets using the TCP/IP protocols.
Have you ever wondered what happens when you type a URL into the browser and press enter? First, the browser checks its cache for a DNS record to find the website's corresponding IP address. If the address is not in the cache, the ISP's (Internet Service Provider's) DNS server starts a DNS query to find the IP address of the server that hosts the website. The browser then opens a TCP connection with the server and sends an HTTP request to the web server. The server handles the request and sends an HTTP response back. Finally, the browser renders the HTML content. For example, www.Wikipedia.org has an IP address; the site can be reached either through its domain name or by typing that IP address directly into a browser, because the DNS keeps a mapping of domain names to their IP addresses.
The DNS (Domain Name System) translates domain names into IP addresses. The domain name is the human-readable name of a site, and the corresponding IPv4 address is a unique 32-bit number associated with that name. To access a computer on the internet, a user only needs to specify the domain name.
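A small sketch of this sequence using Python's standard library: a DNS lookup first translates the domain name into an IP address, then an HTTP request is sent and part of the response is displayed (example.com is used here only as a well-known test domain).

import socket
from urllib.request import urlopen

# Step 1: DNS lookup - translate the human-readable domain name into an IP address.
ip_address = socket.gethostbyname("example.com")
print("example.com resolves to", ip_address)

# Steps 2-5: open a TCP connection, send an HTTP request, receive the HTTP response,
# and read the returned HTML for the browser to render.
with urlopen("http://example.com/") as response:
    print("HTTP status:", response.status)
    print(response.read(200).decode("utf-8", errors="replace"))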

Intranets and Extranets


There are two different terms which are like the term Internet: Intranets and Extranets.
Intranet is a term frequently used to describe a private connection of LANs and WANs that belongs to an organization. It is intended to be accessible only by authorized individuals, such as the organization's employees or other approved members.
An extranet is a term used when an organization wants to provide secure and safe access to people who work for another organization but require access to the organization's data. Examples of extranets include:
A company providing access to outside suppliers and contractors.
A hospital providing a booking system to doctors so they can make appointments for their patients.
A local office of education providing budget and staff information to the schools in its district.

This page titled 5.8: The Internet, Intranets, and Extranets is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.9: Internet Connections
Internet Access Technologies
There is a wide range of ways to connect users and organizations to the Internet.
Home users (including telecommuters) and small offices typically require a connection to an Internet Service Provider (ISP) to access the Internet. Connection options vary greatly among ISPs and geographical locations; common choices include broadband cable, broadband digital subscriber line (DSL), wireless WANs, and mobile services.
Organizations typically require access to other corporate sites as well as the Internet. Fast connections are needed to support business services, including IP telephony, video conferencing, and data center storage.
Business-class interconnections are usually provided by service providers (SPs). Popular business-class services include business DSL, leased lines, and Metro Ethernet.

Home and Small Office Internet Connections


Common connection choices for small office and home office users include:
Cable: Typically offered by cable television service providers, the Internet data signal is carried on the same cable that delivers
cable TV. It provides a high-bandwidth, always-on connection to the Internet.
DSL: Digital Subscriber Line provides a high-bandwidth, always-on connection to the Internet. DSL runs over a
telephone line. In general, small office and home office users connect using Asymmetric DSL (ADSL), which
means that the download speed is faster than the upload speed.
Cellular: Cellular Internet access uses a cell phone network to connect. Wherever you can get a cellular signal, you can
get cellular Internet access. Performance is limited by the capabilities of the phone and of the cell tower to which it is connected. 4G,
the fourth generation of broadband cellular network technology, is what most people are familiar with because it is on smartphones.
5G is emerging and is expected to succeed 4G, transmitting far more data at much faster speeds, by some estimates up to 100
times faster than 4G.
Satellite: Internet access through satellite is a real benefit in areas that would otherwise have no
Internet connectivity at all. Satellite dishes require a clear line of sight to the satellite.
Dial-up telephone: An inexpensive option that uses any telephone line and a modem. The low bandwidth
provided by a dial-up modem connection is usually not sufficient for large data transfers. However, it is still a useful
choice where other options are not available, such as in rural areas or remote locations where the phone line is the only means of
communication.
Fiber optic connections are increasingly available to homes and small businesses. Fiber enables an ISP to provide higher
bandwidth speeds and support more services, such as Internet, phone, and TV.

Business Internet Connections


Corporate connection options differ from home user options. Businesses may require higher bandwidth,
dedicated bandwidth, and managed services. Business connection options include:
Dedicated Leased Line: Leased lines are reserved circuits within the service provider's network that connect
geographically separated offices for private voice and/or data networking. The circuits are typically rented at a
monthly or yearly rate, and they can be expensive.
Ethernet WAN: Ethernet WANs extend LAN access technology into the WAN. Ethernet is a LAN technology you will learn about in a
later section; its benefits are now being extended into the WAN.
DSL: Business DSL is available in various formats. A popular choice is Symmetric Digital Subscriber Line (SDSL),
which is similar to the consumer version of DSL but provides uploads and downloads at the same speed.
Satellite: As with small office and home office users, satellite service can provide a connection when a wired solution is not
available.
The choice of connection varies depending on geographical location and service provider availability.

Figure 5.9.1 : Devices connection. Image by BroadVoice is licensed CC BY 1.0

 Sidebar: An Internet Vocabulary Lesson

Networking communication is full of some very technical concepts based on some simple principles. Learn the terms below,
and you will be able to hold your own in a conversation about the Internet.
Packet: The fundamental unit of data transmitted over the Internet. When a device intends to send a message to another
device (for example, your PC sends a request to YouTube to open a video), it breaks the message down into smaller pieces,
called packets. Each packet has the sender’s address, the destination address, a sequence number, and a piece of the overall
message to be sent.
Hub: A simple network device that connects other devices to the network and sends packets to all the devices connected to it.
Bridge: A network device that connects two networks and only allows packets through that are needed.
Switch: A network device that connects multiple devices and filters packets based on their destination within the connected
devices.
Router: A device that receives and analyzes packets and then routes them towards their destination. In some cases, a router
will send a packet to another router; in other cases, it will send it directly to its destination.
IP Address: Every device that communicates on the Internet, whether it be a personal computer, a tablet, a smartphone, or
anything else, is assigned a unique identifying number called an IP (Internet Protocol) address. Historically, the IP-address
standard used has been IPv4 (version 4), which has the format of four numbers between 0 and 255 separated by a period.
For example, the domain Saylor.org has an IP address of 107.23.196.166. The IPv4 standard has a limit of 4,294,967,296
possible addresses. As the use of the Internet has proliferated, the number of IP addresses needed has grown to the point
where IPv4 addresses will be exhausted. This has led to the new IPv6 standard, which is currently being phased in. The
IPv6 standard is formatted as eight groups of four hexadecimal digits, such as 2001:0db8:85a3:0042:1000:8a2e:0370:7334.
The IPv6 standard has a limit of 3.4×10^38 possible addresses (a short sketch at the end of this sidebar illustrates both
address formats). For more detail about the new IPv6 standard, see this
Wikipedia article.
Domain name: If you had to try to remember the IP address of every web server you wanted to access, the Internet would
not be nearly as easy to use. A domain name is a human-friendly name for a device on the Internet. These names generally
consist of a descriptive text followed by the top-level domain (TLD). For example, Wikipedia's domain name is
Wikipedia.org; Wikipedia describes the organization, and .org is the top-level domain. In this case, the .org TLD is
designed for nonprofit organizations. Other well-known TLDs include .com , .net , and .gov . For a complete list and
description of domain names, see this Wikipedia article.
DNS: DNS stands for “domain name system,” which acts as the directory on the Internet. A DNS server is queried when a
request to access a device with a domain name is given. It returns the IP address of the device requested, allowing for
proper routing.
Packet-switching: When a packet is sent from one device out over the Internet, it does not follow a straight path to its
destination. Instead, it is passed from one router to another across the Internet until it reaches its destination. In fact,
sometimes, two packets from the same message will take different routes! Sometimes, packets will arrive at their
destination out of order. When this happens, the receiving device restores them to their proper order. For more details on
packet switching, see this interactive web page.
Protocol: In computer networking, a protocol is the set of rules that allow two (or more) devices to exchange information
back and forth across the network.
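To make the IP address and DNS terms above concrete, here is a minimal sketch using Python's built-in ipaddress module. The addresses shown are the examples from this sidebar; any other valid address would work the same way.

```python
import ipaddress

# An IPv4 address: four numbers between 0 and 255 separated by periods.
v4 = ipaddress.ip_address("107.23.196.166")
print(v4.version)        # 4
print(int(v4))           # the same address expressed as a single 32-bit number

# An IPv6 address: eight groups of four hexadecimal digits.
v6 = ipaddress.ip_address("2001:0db8:85a3:0042:1000:8a2e:0370:7334")
print(v6.version)        # 6
print(v6.compressed)     # shorthand form with leading zeros removed

# The size of each address space.
print(2 ** 32)           # roughly 4.3 billion possible IPv4 addresses
print(2 ** 128)          # roughly 3.4 x 10^38 possible IPv6 addresses
```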

This page titled 5.9: Internet Connections is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.10: The Network as a Platform Converged Networks
Traditional Separate Networks
Consider a school that was built thirty years ago. Back then, classrooms were cabled separately for the data network, the telephone
network, and the video network for televisions, and these separate networks could not communicate with one another.
Each network used different technologies to carry the communication signal, and each network had its own set of rules
and standards to ensure successful communication.

The Converging Network


Today, the separate data, telephone, and video networks are converging. In contrast to traditional dedicated networks, converged
networks are capable of delivering data, voice, and video between many different types of devices over the same network infrastructure.
This network infrastructure uses the same set of rules, agreements, and implementation standards.

This page titled 5.10: The Network as a Platform Converged Networks is shared under a CC BY 3.0 license and was authored, remixed, and/or
curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative
(OERI)) .

5.11: Reliable Network
Network Architecture
Networks must support a wide range of applications and services, and they must operate over the many different types of cables and
devices that make up the physical infrastructure. In this context, the term network architecture refers to the technologies that support
the infrastructure and to the programmed services and rules, or protocols, that move data across the network.
As networks evolve, there are four basic characteristics that the underlying architectures need to deliver in order to
meet user expectations:
Fault Tolerance
Scalability
Quality of Service (QoS)
Security

Fault Tolerance
The expectation is that the Internet is always available to the millions of users who rely on it. This requires a network
architecture that is built to tolerate faults. A fault-tolerant network limits the impact of a failure, so that the fewest
number of devices are affected. It is also built in a way that allows quick recovery when such a failure
occurs. These networks depend on multiple paths between the source and destination of a message. If one path fails, the messages can be
instantly sent over a different link. Having multiple paths to a destination is known as redundancy.
One way reliable networks provide redundancy is by implementing a packet-switched network. Packet switching splits traffic into
packets that are routed over a shared network. A single message, such as an email or a video stream, is broken into multiple
message blocks, called packets. Each packet carries the addressing information that identifies the source and destination of the message. The
routers within the network switch the packets based on the condition of the network at that moment. This means that all the packets in a
single message could take very different paths to the destination, as the short sketch below illustrates.
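The idea of breaking a message into addressed, numbered packets can be sketched in a few lines of code. This is only an illustration of the concept, not an implementation of any real protocol; the addresses and packet size are made up for the example.

```python
# A minimal illustration of packet switching: split a message into packets,
# each carrying source, destination, and a sequence number.
MESSAGE = "This single email message is broken into several smaller packets."
PACKET_SIZE = 16   # bytes of payload per packet (made-up value for the example)

packets = []
for seq, start in enumerate(range(0, len(MESSAGE), PACKET_SIZE)):
    packets.append({
        "source": "192.0.2.10",        # example source address
        "destination": "198.51.100.7", # example destination address
        "sequence": seq,
        "payload": MESSAGE[start:start + PACKET_SIZE],
    })

# Packets may arrive out of order; the receiver uses the sequence numbers
# to reassemble the original message.
received = sorted(packets, key=lambda p: p["sequence"])
reassembled = "".join(p["payload"] for p in received)
assert reassembled == MESSAGE
print(f"{len(packets)} packets reassembled into the original message")
```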

Scalability
A scalable network can expand quickly to support new users and applications without degrading the performance of the service being
delivered to existing users.
A new network can easily be added to an existing network. Furthermore, networks are scalable because designers follow
accepted protocols and standards. This allows software and hardware vendors to improve products and services without
having to design a new set of rules for operating within the network.

Quality of Service
Quality of Service (QoS) is also an ever-increasing requirement of networks today. New applications available to
users over internetworks, such as voice and live video transmissions, raise expectations for the quality of the delivered
services. Have you ever tried to watch a video with constant breaks and pauses? As data, voice, and video content
continue to converge onto the same network, QoS becomes a primary mechanism for managing congestion and ensuring
reliable delivery of content to all users.
Congestion occurs when the demand for bandwidth exceeds the amount that is available. Network bandwidth is measured in
the number of bits that can be transmitted in a single second, or bits per second (bps). When simultaneous communications are attempted
across the network, the demand for network bandwidth can exceed its availability, creating network congestion; the small worked
example below shows how quickly this can happen.
When the volume of traffic is greater than what can be transported across the network, devices queue, or hold, the packets in memory
until resources become available to transmit them.
With a QoS policy in place, the router can manage the flow of data and voice traffic, giving priority to voice communications if
the network experiences congestion.
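A small worked example helps make bandwidth and congestion concrete. The link capacity, per-stream rate, and number of streams below are illustrative values only.

```python
# Bandwidth is measured in bits per second (bps).
link_capacity_bps = 100_000_000        # a 100 Mbps link (example value)

# Suppose several simultaneous video streams each need 5 Mbps.
stream_rate_bps = 5_000_000
number_of_streams = 25

demand_bps = stream_rate_bps * number_of_streams
print(f"Demand: {demand_bps / 1_000_000:.0f} Mbps on a "
      f"{link_capacity_bps / 1_000_000:.0f} Mbps link")

if demand_bps > link_capacity_bps:
    # Demand exceeds capacity: the network is congested, and devices must
    # queue packets (or a QoS policy must give some traffic priority).
    print("Congestion: packets will be queued or dropped")
else:
    print("No congestion: the link can carry all streams")
```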

Security
The network infrastructure, network services, and the data contained on network-attached devices are crucial personal and business
assets.
Two kinds of network security concerns must be addressed: network infrastructure security and information security.
Securing a network infrastructure includes physically securing the devices that provide network connectivity and preventing
unauthorized access to the management software that resides on them.
Information security protects the data contained within the packets being transmitted over the network and the data stored on
network-attached devices. To accomplish the goals of network security, there are three primary requirements:
Confidentiality: Data confidentiality means that only the intended and authorized recipients can access and read the data.
Integrity: Data integrity assures that the data has not been altered in transmission, from origin to destination.
Availability: Data availability means assurance of timely and reliable access to data services for authorized users.

This page titled 5.11: Reliable Network is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.12: The Changing Network Environment Network Trends
New Trends
As new technologies and end-user devices come to market, businesses and consumers must continue to adjust to this ever-
changing environment. The role of the network is changing to enable the connections between people, devices, and information. There
are several new networking trends that will affect organizations and consumers. Some of the top trends include:
Bring Your Own Device (BYOD)
Video communications
Online collaboration
Cloud computing

Bring Your Own Device


The concept of any device, to any content, in any way, is a major global trend that requires significant changes to the way
devices are used. This trend is known as Bring Your Own Device (BYOD).
BYOD is about end users having the freedom to use personal tools to access information and communicate across a business or
campus network. With the growth of consumer devices and the related drop in cost, employees and students can be expected
to have some of the most advanced computing and networking tools for personal use. These personal tools include
laptops, e-readers, tablets, and smartphones. They can be devices purchased by the company or school, purchased by the
individual, or both.
BYOD means any device, with any ownership, used anywhere. For instance, in the past, a student who needed to access
the campus network or the Internet had to use one of the school's computers. These devices were typically limited and seen
as tools only for work done in the classroom or in the library. Extended connectivity through mobile and remote access to the
campus network gives students tremendous flexibility and opens more learning opportunities for the student.

Online Collaboration
People want to connect to the network not only for access to data applications but also to collaborate with one another.
Collaboration is defined as "the act of working with another or others on a joint project." Collaboration tools give
employees, students, instructors, customers, and partners a way to instantly connect, interact, and achieve their
objectives.
For businesses, collaboration is a critical and strategic priority that organizations are using to remain competitive. Collaboration is
also a priority in education. Students need to collaborate to assist each other in learning, to develop the team skills used in the
workplace, and to work together on team-based projects.

Video Communication
Another trend in networking that is critical to communication and collaboration is video. Video is being used for
communication, collaboration, and entertainment. Video calls can be made to and from anywhere with an Internet connection.

Figure 5.12.1 : A video call showing a group of people on the screen. Image by photo by Chris Montgomery on Unsplash is
licensed under CC BY SA 2.0
Video conferencing is a powerful tool for communicating with others at a distance, both locally and globally. Video is becoming
a basic requirement for effective collaboration as organizations extend across geographic and cultural boundaries.

Cloud Computing
Cloud computing is another global trend changing the way we access and store data. Cloud computing allows us to store
personal files, and even back up an entire hard disk drive, on servers over the Internet. Applications such as word processing
and photo editing can be accessed using the cloud.
For businesses, cloud computing extends IT's capabilities without requiring investment in new infrastructure, training
new personnel, or licensing new software. These services are available on demand and delivered economically to any device in the
world without compromising security or capacity.
There are four primary types of clouds: public clouds, private clouds, hybrid clouds, and custom clouds.
Cloud computing is possible because of data centers. A data center is a facility used to house computer systems and associated
components. A data center can occupy one room of a building, one or more floors, or an entire building. Data centers are generally
very expensive to build and maintain. For this reason, only large organizations use privately built data centers to house their
data and provide services to users. Smaller organizations that cannot afford to maintain their own private data center can reduce the
overall cost of ownership by leasing server and storage services from a larger data center organization in the cloud.

This page titled 5.12: The Changing Network Environment Network Trends is shared under a CC BY 3.0 license and was authored, remixed,
and/or curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative
(OERI)) .

5.13: Technology Trends in the Home
Networking trends are not just influencing the way we work or study; they are also changing nearly every aspect of the home.
The newest home trend is smart home technology: technology that is integrated into everyday appliances,
allowing them to interconnect with other devices and making them more "smart" or automated. For instance, imagine
being able to prepare a dish and place it in the oven for cooking before leaving the house for the day. Imagine if the oven
were "aware" of the dish it was cooking and was connected to your "calendar of events" so that it could determine what time you will be
eating and adjust start times and cooking duration accordingly. It could even adjust cooking times and temperatures based
on schedule changes. Additionally, a smartphone or tablet connection allows the user to connect to the oven directly to
make any desired adjustments. When the dish is "available," the oven sends an alert message to a specified end-user device that the
dish is done and warming.
This scenario is not far off. In fact, smart home technology is being developed for all rooms within a house. It will become more of a
reality as home networking and high-speed Internet technology become more widespread. New home
networking technologies are being developed daily to meet these kinds of growing technology needs.

Powerline Networking
Powerline networking is an emerging trend for home networking that uses existing electrical wiring to connect devices.
The concept of "no new wires" means the ability to connect a device to the network wherever there is an electrical outlet. This saves
the cost of installing data cables without adding any cost to the electrical bill. Using the same wiring that delivers electricity,
powerline networking sends information by transmitting data on certain frequencies.
Using a standard powerline adapter, devices can connect to the LAN wherever there is an electrical outlet. Powerline
networking is especially useful when wireless access points cannot be used or cannot reach all the devices in the home. Powerline
networking is not intended to be a substitute for dedicated cabling in data networks, but it is an alternative when data network cables or
wireless communications are not a viable option.

Wireless Broadband
Connecting to the Internet is essential in smart home technology. DSL and cable are common technologies used to connect homes
and small businesses to the Internet. However, wireless may be another option in many areas.
Another wireless solution for the home and small businesses is wireless broadband. This uses the same cellular technology used to
access the Internet with a smartphone or tablet. An antenna is installed outside the house, providing either wireless or wired
connectivity for devices in the home. In many areas, home wireless broadband is competing directly with DSL and cable
services.

Wireless Internet Service Provider (WISP)


A Wireless Internet Service Provider (WISP) is an ISP that connects subscribers to a designated access point or hot spot using
wireless technologies similar to those found in home wireless local area networks (WLANs). WISPs are more commonly found in
rural environments where DSL or cable services are not available.
Although a separate transmission tower may be installed for the antenna, the antenna is usually attached to an existing elevated
structure, such as a water tower or a radio tower. A small dish or antenna is installed on the subscriber's roof within range of the WISP
transmitter. The subscriber's access unit is connected to the wired network inside the home. From the home user's point
of view, the setup is not much different from DSL or cable service. The main difference is that the connection from the home to
the ISP is wireless instead of a physical cable.

Sidebar: Why Doesn’t My Cell Phone Work When I Travel Abroad?


As mobile phone technologies have evolved, providers in different countries have chosen different communication standards for
their mobile phone networks. In the US, two competing standards exist: GSM (used by AT&T and T-Mobile) and
CDMA (used by the other major carriers). Each standard has its pros and cons, but the bottom line is that phones using one
standard cannot easily switch to the other.

In the US, this is not a big deal because mobile networks exist to support both standards. But when you travel to other countries,
you will find that most of them use GSM networks, with the one big exception being Japan, which has standardized on CDMA. It
is possible for a mobile phone using one type of network to switch to the other type of network by switching out the SIM card,
which controls your access to the mobile network. However, this will not work in all cases. If you are traveling abroad, it is always
best to consult with your mobile provider to determine the best way to access a mobile network.

This page titled 5.13: Technology Trends in the Home is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.14: Network Security
Security Threats
Network security is an integral part of computer networking today, regardless of whether the network is limited to a home
environment with a single connection to the Internet or is as large as a corporation with many users. The network security that is
implemented must take into account the environment as well as the network's devices and requirements. It must be able to keep data
secure while still allowing for the quality of service expected of the network.
Securing a network involves technologies, protocols, devices, tools, and techniques that keep data secure and mitigate
threat vectors. Threat vectors may be external or internal. Many external network security threats today are spread over the
Internet.
The most common external threats to networks include:
Viruses, worms, and Trojan horses: malicious software and arbitrary code running on a user device
Spyware and adware: software installed on a user device that covertly gathers data about the user
Zero-day attacks, also called zero-hour attacks: an attack that occurs on the first day that a vulnerability becomes known
Hacker attacks: an attack by a knowledgeable person against user devices or network resources
Denial of service attacks: attacks designed to slow or crash applications and processes on a network device
Data interception and theft: an attack to capture private information from an organization's network
Identity theft: an attack to steal the login credentials of a user in order to access private data
It is equally important to consider internal threats. Many studies show that the most common
data breaches happen because of the network's internal users. This can be attributed to lost or stolen devices, accidental misuse by
employees, and, in the business environment, even malicious employees. With evolving BYOD strategies, corporate data
is much more vulnerable. Therefore, it is important to address both external and internal security threats when developing a
security policy.

Security Solutions
No single solution can protect the network from the variety of threats that exist. For this reason, security should be implemented in
multiple layers, using more than one security solution. If one security component fails to identify and protect the network,
others still stand.
A home network security implementation is usually rather basic. It is generally implemented on the connecting end devices and at
the point of connection to the Internet, and it can even rely on contracted services from the ISP.
In contrast, the network security implementation for a corporate network usually consists of many components
built into the network to monitor and filter traffic.
Ideally, all components work together, which minimizes maintenance and improves overall security.
Network security components for a home or small office network should include, at a minimum, the following:
Antivirus and antispyware: These are used to protect end devices from becoming infected with malicious software.
Firewall filtering: This is used to block unauthorized access to the network. It may include a host-based firewall
system implemented to prevent unauthorized access to the end device, or a basic filtering service on the home router
to prevent unauthorized access from the outside world into the network.
Larger networks and corporate networks often have additional security requirements:
Dedicated firewall systems: These are used to provide more advanced firewall capabilities that can filter large amounts of traffic with greater
granularity.
Access control lists (ACL): These are used to further filter access and traffic forwarding.
Intrusion prevention systems (IPS): These are used to identify fast-spreading threats, such as zero-day or zero-
hour attacks.
Virtual Private Networks (VPN): These are used to provide secure access to remote workers.
Network security requirements must take into account the network environment as well as the various applications and computing requirements.
Both home environments and businesses must be able to secure their data while still allowing for the quality of service that is
expected of each technology. Additionally, the security solution implemented must be adaptable to the growing and
changing trends of the network.
The study of network security threats and mitigation techniques starts with a clear understanding of the underlying switching and
routing infrastructure used to organize network services.

This page titled 5.14: Network Security is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.15: Summary
Summary
Networks and the Internet have changed the way we communicate, learn, work, and even play.
Networks come in all sizes. They can range from simple networks consisting of two PCs to networks connecting millions of
devices.
The Internet is the largest network in existence. In fact, the term Internet means a "network of networks."
The Internet provides the services that enable us to connect and communicate with our families, friends, work, and
interests.
The network infrastructure is the platform that supports the network. It provides the stable and reliable channel over which
communication can occur. It consists of network components, including end devices, intermediate devices, and network media.
Networks must be reliable. This means the network must be fault tolerant and scalable, provide quality of service, and ensure the
security of the network's data and resources. Network security is an integral part of computer networking, regardless of whether the network is limited to a
home environment with a single connection to the Internet or is as large as an enterprise with many users. No single solution
can protect the network from the variety of threats that exist. For this reason, security should be implemented in multiple layers,
using more than one security solution.
The network infrastructure can vary greatly in terms of size, number of users, and the types of services supported. The
network infrastructure must grow and adjust to the way the network is used. The routing and switching platform is the
foundation of any network infrastructure.

This page titled 5.15: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

5.16: Study Questions
Study Questions
1. Identify the first four locations hooked up to the ARPANET
2. Describe the difference between the Internet and the World Wide Web
3. List three of your favorite Web 2.0 apps or websites
4. Identify the killer app for the Internet
5. List a few home internet connections
6. List a few business internet connections
7. Describe the difference between a LAN and a WAN
8. Describe the difference between an intranet and an extranet
9. Explain what a network topology is
10. Explain what powerline networking is

Exercises
1. Give an example of each of the following terms:
Wireless LAN (WLAN)
Wide-area network (WAN)
Intranet
Local-area network (LAN)
Extranet
2. Give an example for each of the following:
Fault tolerance
Scalability
Quality of service (QoS)
Security
3. Create a Google account at google.com, create a new document using Google Docs, share the document with others, and explore
document sharing via your Google account.
4. Find the IP address of your computer. Explain the steps you took to find it.
5. Identify your or your school’s Internet service provider.
6. Pretend that you are planning a trip to three foreign countries in the next month. Consult your wireless carrier to determine if
your mobile phone would work properly in those countries. Identify if there are costs and other alternatives to have your phone
work properly.

This page titled 5.16: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

6: Information Systems Security


 Learning Objectives

Upon completion of this chapter, you will be able to:


Identify the information security triad
Explain the motivations of the threat actors
Define the potential impact of network security attacks
Describe the functions of a Security Operations Center (SOC)
Explain security policies

We discuss the information security triad of confidentiality, integrity, and availability. We will review different types of threats and
associated costs for individuals, organizations, and nations. We will discuss different security tools and technologies, how security
operation centers can secure organizations’ resources and assets, and a primer on personal information security.
6.1: Introduction
6.2: The Information Security Triad- Confidentiality, Integrity, Availability (CIA)
6.3: Tools for Information Security
6.4: Threat Impact
6.5: Fighters in the War Against Cybercrime- The Modern Security Operations Center
6.6: Security vs. Availability
6.7: Summary
6.8: Study Questions

This page titled 6: Information Systems Security is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

6.1: Introduction
As computers and other digital devices have become essential to business and commerce, they have also increasingly become a
target for attacks. For a company or an individual to use a computing device with confidence, they must first be assured that the
device is not compromised in any way and that all communications will be secure. This chapter reviews the fundamental concepts
of information systems security and discusses some of the measures that can be taken to mitigate security threats. The chapter
begins with an overview focusing on how organizations can stay secure. Several different measures that a company can take to
improve security will be discussed. Finally, you will review a list of security precautions that individuals can take to secure their
personal computing environment.

This page titled 6.1: Introduction is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

6.2: The Information Security Triad- Confidentiality, Integrity, Availability (CIA)
The Information Security Triad: Confidentiality, Integrity, Availability (CIA)

Figure 6.2.1 : The Information Security triad: CIA. Image by John M. Kennedy T., is licensed under CC BY-SA

Confidentiality
Protecting information means you want to restrict access to those who are allowed to see it. This is sometimes referred to as NTK,
Need to Know, and everyone else should be disallowed from learning anything about its contents. This is the essence of
confidentiality. For example, federal law requires that universities restrict access to private student information. Access to grade
records should be limited to those who have authorized access.

Integrity
Integrity is the assurance that the information being accessed has not been altered and truly represents what is intended. Just as
people with integrity mean what they say and can be trusted to represent the truth consistently, information integrity means
information truly represents its intended meaning. Information can lose its integrity through malicious intent, such as when
someone who is not authorized makes a change to misrepresent something intentionally. An example of this would be when a
hacker is hired to go into the university’s system and change a student’s grade.
Integrity can also be lost unintentionally, such as when a computer power surge corrupts a file or someone authorized to make a
change accidentally deletes a file or enters incorrect information.

Availability
Information availability is the third part of the CIA triad. Availability means information can be accessed and modified by anyone
authorized to do so in an appropriate time frame. Depending on the type of information, an appropriate timeframe can mean
different things. For example, a stock trader needs information to be available immediately, while a salesperson may be happy to
get sales numbers for the day in a report the next morning. Online retailers require their servers to be available twenty-four hours a
day, seven days a week. Other companies may not suffer if their web servers are down for a few minutes once in a while.
You'll learn about the who, what, and why of cyberattacks in this chapter. Different people commit cybercrime for different purposes.
Security Operations Centers are designed to fight cybercrime. Jobs in a Security Operations Center (SOC) can be obtained by
earning certifications, seeking formal education, and using employment services to gain internship experience and job
opportunities.

The Danger
In chapter 5, we discussed various security threats and possible solutions. Here are a few scenarios to illustrate how hackers trick
users.
Hijacked People
Melanie stopped at her favorite coffee shop to grab her drink for the afternoon. She placed her order, paid the clerk, and waited while
the baristas worked furiously to get through the backlog of orders. Melanie took her phone out, opened the wireless client, and connected to
what she thought was the coffee shop's free wireless network.

Sitting in the corner of the store, however, a hacker had just set up a rogue free wireless hotspot posing as the coffee shop's wireless
network. When Melanie logged on to her bank's website, the hacker hijacked her session and accessed her bank accounts.
Hijacked Companies
Jeremy, an employee in the finance department of a large, publicly held corporation, receives an email from his CEO with an attached
file in Adobe's PDF format. The PDF concerns the organization's third-quarter earnings. Jeremy does not recall his
department producing the PDF. His curiosity is piqued, and he opens the attachment.
The same scenario plays out across the company as thousands of other employees are successfully enticed to click on the attachment.
As the PDF opens, ransomware is installed on the workers' computers, including Jeremy's, and the process of collecting and
encrypting corporate data begins. The attackers' goal is financial gain: they hold the company's data for ransom until they are paid.
As in Jeremy's case, the consequences of opening an attachment in a spam email or from an unfamiliar address can be
disastrous.
Targeted Nations
Some of today's malware is so sophisticated and expensive to create that security experts believe that it could be created only by a
nation-state or group of nations. This malware can be designed to attack vulnerable infrastructures, such as the water network or
electric grid.
This was the aim of the Stuxnet worm, which spread via infected USB drives. The documentary film Zero Days tells the story of this
malicious computer worm. Stuxnet was developed to penetrate the Programmable Logic Controllers (PLCs) used at
nuclear installations. The worm was transmitted into the PLCs from infected USB drives and ultimately damaged
centrifuges at these nuclear installations.
Threat Actors
Threat actors include amateurs, hacktivists, organized crime groups, state-funded groups, and terrorist organizations. Threat actors
are individuals or a group of individuals conducting cyber-attacks on another person or organization. Cyberattacks are intentional,
malicious acts intended to harm another individual or organization. The major motivations behind cyberattacks are money, politics,
competition, and hatred.
Known as script kiddies, amateurs have little or no skill. They often launch attacks using existing tools or instructions found on the
Internet. Some are only curious, while others seek to show off their abilities by causing damage. Even though they use simple methods,
the outcomes can often be catastrophic.
Hacktivists
A hacktivist can act independently or as a member of an organized group. Hacktivists are hackers who protest against a variety of social
and political ideas. Hacktivists publicly demonstrate against organizations or governments by publishing articles and images, leaking
classified information, and crippling web infrastructure through distributed denial of service (DDoS) attacks with illegitimate traffic. A
denial of service (DoS) attack is one of the most powerful cyberattacks in which the attacker bombards the target with traffic
requests that overwhelm the target server in an attempt to crash it. A distributed denial of service (DDoS) attack is a more
sophisticated version of DoS in which a set of distributed computer systems attacks the target.
Financial Gain
Financial gain motivates much of the hacking activity that constantly threatens our security. Cybercriminals are people who
use technology for their own malicious purposes, such as stealing personal information to make a profit. These cybercriminals
want access to our bank accounts, personal data, and anything else they can use to generate cash flow.
Trade Secrets and Global Politics
In the past few years, there have been several reports of nation-states hacking other nations or otherwise interfering with their internal
politics. Nation-states are also interested in using cyberspace for industrial espionage. The theft of intellectual property can give a country a
considerable advantage in international trade.
Defending against the consequences of state-sponsored cyberespionage and cyber warfare will continue to be a priority for
cybersecurity professionals.

How Secure is the Internet of Things
The Internet of Things (IoT) is rapidly expanding all around us. The Internet of Things is a network of physical objects that collect
and share data over the internet. We are just beginning to enjoy the rewards of the IoT, and new ways of using
connected things are constantly being created. The IoT helps people connect objects so they can improve their quality of life. Smart security systems, smart kitchen
appliances, smartwatches, and smart heating systems are a few examples of the IoT products available today.
For starters, many people now use connected wearable devices to monitor their fitness activities. How many devices do you
currently own that connect to the Internet or to your home network?
How safe are those devices? For instance, who wrote the software to support the embedded hardware (also known as firmware)? Did the
programmer pay attention to security flaws? Is your home thermostat connected to the internet? What about your digital
video recorder (DVR)? When security flaws are discovered, can the firmware in the device be patched to fix the vulnerability? Many
devices on the Internet will never be updated with new firmware, and some older devices were not even designed to be updated with patches.
These two conditions expose the users of such devices to threats and security risks.

Reference
World War 3 Zero Days (Official Movie Site) - Own It on DVD or Digital HD. Retrieved September 6, 2020, from
www.zerodaysfilm.com

This page titled 6.2: The Information Security Triad- Confidentiality, Integrity, Availability (CIA) is shared under a CC BY 3.0 license and was
authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational
Resources Initiative (OERI)) .

6.3: Tools for Information Security
To ensure the confidentiality, integrity, and availability of information, organizations can choose from various tools. Each of these
tools can be utilized as a part of an overall information-security policy, which will be discussed in the next section.

Authentication
The most common way to identify people is through physical appearance, but how do we identify someone sitting behind a
computer screen or at the ATM? Tools for authentication are used to ensure that the person accessing the information is, indeed,
who they present themselves to be.
Authentication can be accomplished by identifying someone through one or more of three factors: something they know, something
they have, or something they are. For example, the most common form of authentication today is the user ID and password. In this
case, the authentication is done by confirming something that the user knows (their ID and password). But this form of
authentication is easy to compromise (see sidebar), and stronger forms of authentication are sometimes needed. Identifying
someone only by something they have, such as a key or a card, can also be problematic. When that identifying token is lost or
stolen, the identity can be easily stolen. The final factor, something you are, is much harder to compromise. This factor identifies a
user through physical characteristics, such as an eye-scan or fingerprint. Identifying someone through their physical characteristics
is called biometrics.
A more secure way to authenticate a user is to do multi-factor authentication. Combining two or more of the factors listed above
makes it much more difficult for someone to misrepresent themselves. An example of this would be the use of an RSA SecurID
token. The RSA device is something you have and will generate a new access code every sixty seconds. To log in to an information
resource using the RSA device, you combine something you know, a four-digit PIN, with the device's code. The only way to
properly authenticate is by both knowing the code and having the RSA device.

Figure 6.3.1 : An RSA SecurID SID800 token with USB connector. Image by Alexander Klink is licensed CC BY
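The idea of a token that generates a fresh code on a fixed schedule can be sketched with a time-based one-time password, the same general approach standardized in RFC 6238. This is a simplified illustration, not the proprietary algorithm used by RSA SecurID; the shared secret and the sixty-second interval are example values.

```python
import hashlib
import hmac
import struct
import time

def one_time_code(secret: bytes, interval: int = 60, digits: int = 6) -> str:
    """Generate a time-based one-time code (simplified RFC 6238 approach)."""
    counter = int(time.time()) // interval          # changes every `interval` seconds
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Both the token device and the server hold the same secret, so both can
# compute the same code for the current time window.
shared_secret = b"example-shared-secret"            # example value only
print(one_time_code(shared_secret))
```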

Access Control
Once a user has been authenticated, the next step is to ensure that they can access the appropriate information resources. This is
done through the use of access control. Access control determines which users are authorized to read, modify, add, and/or delete
information. Several different access control models exist. Here we will discuss two: the access control list (ACL) and role-based
access control (RBAC).
For each information resource that an organization wishes to manage, a list of users who have the ability to take specific actions
can be created. This is an access control list or ACL. For each user, specific capabilities are assigned, such as reading, writing,
deleting, or adding. Only users with those capabilities are allowed to perform those functions. If a user is not on the list, they have
no ability even to know that the information resource exists.
ACLs are simple to understand and maintain. However, they have several drawbacks. The primary drawback is that each
information resource is managed separately. If a security administrator wanted to add or remove a user to a large set of information
resources, it would not be easy. And as the number of users and resources increases, ACLs become harder to maintain. This has led
to an improved method of access control, called role-based access control, or RBAC. With RBAC, instead of giving specific users
access rights to an information resource, users are assigned to roles, and then those roles are assigned access. This allows the
administrators to manage users and roles separately, simplifying administration and, by extension, improving security.

Figure 6.3.2 : Comparison of ACL and RBAC. Image by David Bourgeois is licensed CC BY 4.0
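The difference between the two models can also be sketched with simple data structures. This is a conceptual illustration only; the user names, roles, and resources are invented for the example, and real systems store this information in directories or databases.

```python
# Access control list (ACL): permissions are attached to each resource per user.
acl = {
    "payroll_db":  {"alice": {"read", "write"}, "bob": {"read"}},
    "hr_records":  {"alice": {"read"}},
}

# Role-based access control (RBAC): users are assigned roles,
# and permissions are attached to roles instead of individual users.
user_roles = {"alice": "payroll_manager", "bob": "auditor"}
role_permissions = {
    "payroll_manager": {"payroll_db": {"read", "write"}, "hr_records": {"read"}},
    "auditor":         {"payroll_db": {"read"}},
}

def can_access_acl(user, resource, action):
    return action in acl.get(resource, {}).get(user, set())

def can_access_rbac(user, resource, action):
    role = user_roles.get(user)
    return action in role_permissions.get(role, {}).get(resource, set())

print(can_access_acl("bob", "payroll_db", "write"))   # False
print(can_access_rbac("alice", "hr_records", "read")) # True
```

With RBAC, adding a new employee to the payroll team only requires assigning the role, rather than editing every resource's list.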

Encryption
An organization often needs to transmit information over the Internet or transfer it on external media such as a USB. In these cases,
even with proper authentication and access control, an unauthorized person can access the data. Encryption is a process of encoding
data upon its transmission or storage so that only authorized individuals can read it. This encoding is accomplished by a computer
program, which encodes the plain text that needs to be transmitted; then, the recipient receives the ciphertext and decodes it
(decryption). For this to work, the sender and receiver need to agree on the method of encoding so that both parties can
communicate properly. Both parties share the encryption key, enabling them to encode and decode each other’s messages. This is
called symmetric key encryption. This type of encryption is problematic because the key is available in two different places.

Figure 6.3.3 : Symmetric/private key encryption. Image by Phayzfaustyn is licensed CC0 1.0
An alternative to symmetric key encryption is public-key encryption. In public-key encryption, two keys are used: a public key and
a private key. To send an encrypted message, you obtain the public key, encode the message, and send it. The recipient then uses
the private key to decode it. The public key can be given to anyone who wishes to send the recipient a message. Each user needs
one private key and one public key to secure messages. The private key is necessary to decrypt something sent with the public key.

Figure 6.3.4 : Public key encryption. Image by David Bourgeoi Ph.D. is licensed CC BY 4.0
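Both approaches can be sketched briefly in code. The snippet below assumes the third-party cryptography package is installed (pip install cryptography); the messages are example values, and real systems must also manage and protect the keys themselves.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Symmetric key encryption: one shared key both encrypts and decrypts.
shared_key = Fernet.generate_key()
f = Fernet(shared_key)
ciphertext = f.encrypt(b"Quarterly sales report")       # example message
print(f.decrypt(ciphertext))                             # b'Quarterly sales report'

# Public-key encryption: encrypt with the recipient's public key,
# decrypt with the matching private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"Meet at noon", oaep)   # example message
print(private_key.decrypt(ciphertext, oaep))             # b'Meet at noon'
```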

Sidebar: Password Security
The security of a password depends on its strength in guarding against brute-force guessing. Strong passwords reduce overall security
breaches because they are harder for criminals to guess.
Password policies and technologies have evolved to combat security threats, from short to long passwords, from single-factor
authentication to multi-factor authentications. Most companies now have specific requirements for users to create passwords and
how they are authenticated.
Below are some of the more common policies that organizations should put in place.
Require complex passwords that make it hard to guess. For example, a good password policy requires the use of a minimum
of eight characters and at least one upper-case letter, one special character, and one number (a short sketch after this list
shows how such a rule can be checked).
Change passwords regularly. Users must change their passwords regularly. Users should change their passwords every sixty
to ninety days, ensuring that any passwords that might have been stolen or guessed will not be used against the company.
Train employees not to give away passwords. One of the primary methods used to steal passwords is to figure them out by
asking the users or administrators. Pretexting occurs when an attacker calls a helpdesk or security administrator and pretends to
be a particular authorized user having trouble logging in. Then, by providing some personal information about the authorized
user, the attacker convinces the security person to reset the password and tell him what it is. Another way that employees may
be tricked into giving away passwords is through email phishing.
Train employees not to click on a link. Phishing occurs when a user receives an email that looks as if it is from a trusted
source, such as their bank or their employer. In the email, the user is asked to click a link and log in to a website that mimics the
genuine website and enter their ID and password, which the attacker then captures.
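A password-complexity rule like the one described in the first bullet above can be checked programmatically. This is a minimal sketch of such a check, not a complete password policy; real systems also check against lists of breached passwords and enforce rotation and lockout rules.

```python
import re

def meets_policy(password: str) -> bool:
    """Check a password against an example complexity policy:
    at least 8 characters, with an upper-case letter, a number,
    and a special character."""
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_policy("sunshine"))      # False: no upper case, digit, or symbol
print(meets_policy("Sunsh1ne!"))     # True: satisfies every rule
```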

Backups
Another essential tool for information security is a comprehensive backup plan for the entire organization. Not only should the data
on the corporate servers be backed up, but individual computers used throughout the organization should also be backed up. A
good backup plan should consist of several components.
A full understanding of the organizational information resources. What information does the organization actually have?
Where is it stored? Some data may be stored on the organization’s servers, other data on users’ hard drives, some in the cloud,
and some on third-party sites. An organization should make a full inventory of all of the information that needs to be backed up
and determine the best way to back it up.
Regular backups of all data. The frequency of backups should be based on how important the data is to the company,
combined with the company's ability to replace any data that is lost. Critical data should be backed up daily, while less critical
data could be backed up weekly.
Offsite storage of backup data sets. If all of the backup data is being stored in the same facility as the original copies of the
data, then a single event, such as an earthquake, fire, or tornado, would take out both the original data and the backup! It is
essential that part of the backup plan is to store the data in an offsite location.
Test of data restoration. Regularly, the backups should be put to the test by having some of the data restored. This will ensure
that the process is working and will give the organization confidence in the backup plan.
Besides these considerations, organizations should also examine their operations to determine what effect downtime would have on
their business. If their information technology were to be unavailable for any sustained period of time, how would it impact the
business?
Additional concepts related to backup include the following:
Uninterruptible Power Supply (UPS). A UPS is a device that provides battery backup to critical components of the system, allowing
them to stay online longer and/or allowing the IT staff to shut them down using proper procedures to prevent the data loss that
might occur from a power failure.
Alternate or “hot” sites. Some organizations choose to have an alternate site where their critical data replica is always kept up
to date. When the primary site goes down, the alternate site is immediately brought online to experience little or no downtime.
As information has become a strategic asset, a whole industry has sprung up around the technologies necessary for implementing a
proper backup strategy. A company can contract with a service provider to back up all of their data or purchase large amounts of
online storage space and do it themselves. Most large businesses now use technologies such as storage area networks and archival
systems.
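A very small piece of a backup plan, taking a dated copy of a folder, can be sketched with the standard library. This is only an illustration; the paths are placeholders, and a real plan also covers scheduling, offsite copies, and restore testing as described above.

```python
import shutil
import time
from pathlib import Path

def backup_folder(source: str, backup_root: str) -> Path:
    """Create a dated zip archive of `source` under `backup_root`."""
    stamp = time.strftime("%Y-%m-%d")
    archive_base = Path(backup_root) / f"backup-{stamp}"
    archive = shutil.make_archive(str(archive_base), "zip", root_dir=source)
    return Path(archive)

# Placeholder paths for the example; point these at real folders to use it.
# print(backup_folder("/home/user/documents", "/mnt/offsite-backups"))
```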

Firewalls
Another method that an organization should use to increase security on its network is a firewall. A firewall can exist as hardware or
software (or both). A hardware firewall is a device connected to the network and filters the packets based on a set of rules. A
software firewall runs on the operating system and intercepts packets as they arrive at a computer. A firewall protects all company
servers and computers by stopping packets from outside the organization's network that do not meet a strict set of criteria. A
firewall may also be configured to restrict the flow of packets leaving the organization. This may be done to eliminate the
possibility of employees watching YouTube videos or using Facebook from a company computer.
Some organizations may choose to implement multiple firewalls as part of their network security configuration, creating one or
more sections of their partially secured network. This segment of the network is referred to as a DMZ, borrowing the term
demilitarized zone from the military. It is where an organization may place resources that need broader access but still need to be
secured.

Figure 6.3.5 : Network configuration with firewalls, IDS, and a DMZ. Image by David Bourgeois is licensed CC BY 4.0
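The rule-based filtering a firewall performs can be illustrated conceptually. This sketch is not a real firewall; the rules, protocols, and port numbers are invented for the example, and production firewalls operate on actual network traffic rather than Python dictionaries.

```python
# Each rule says what to do with packets matching a protocol and destination port.
# The default, as in most firewalls, is to block anything not explicitly allowed.
ALLOW_RULES = [
    {"protocol": "tcp", "dest_port": 443, "action": "allow"},   # HTTPS to web servers
    {"protocol": "tcp", "dest_port": 25,  "action": "allow"},   # mail server traffic
]

def filter_packet(packet: dict) -> str:
    for rule in ALLOW_RULES:
        if (packet["protocol"] == rule["protocol"]
                and packet["dest_port"] == rule["dest_port"]):
            return rule["action"]
    return "deny"   # default: drop packets that match no rule

print(filter_packet({"protocol": "tcp", "dest_port": 443}))  # allow
print(filter_packet({"protocol": "tcp", "dest_port": 23}))   # deny (e.g., Telnet)
```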

Intrusion Detection Systems


Another device that can be placed on the network for security purposes is an intrusion detection system or IDS. An IDS does not
add any additional security; instead, it provides the functionality to identify if the network is being attacked. An IDS can be
configured to watch for specific types of activities and then alert security personnel if that activity occurs. An IDS also can log
various types of traffic on the network for analysis later. An IDS is an essential part of any good security setup.
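As a simplified illustration of what an IDS does, the Python sketch below scans authentication log lines (invented for this example) and raises an alert when one source address produces repeated failed logins. Production systems inspect live network traffic with far more sophisticated signatures, but the detect-and-alert pattern is the same.

from collections import Counter

FAILED_LOGIN_THRESHOLD = 3   # alert after this many failures from one address

# Hypothetical log lines; a real IDS would watch live traffic or log streams.
log_lines = [
    "FAILED login user=admin src=198.51.100.7",
    "FAILED login user=admin src=198.51.100.7",
    "OK login user=tpham src=10.0.1.15",
    "FAILED login user=root src=198.51.100.7",
]

failures = Counter()
for line in log_lines:
    if line.startswith("FAILED"):
        src = line.split("src=")[1]
        failures[src] += 1
        if failures[src] >= FAILED_LOGIN_THRESHOLD:
            # Note that the IDS does not block anything; it alerts security staff.
            print(f"ALERT: possible brute-force attack from {src}")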
Sidebar: Virtual Private Networks
Using firewalls and other security technologies, organizations can effectively protect many of their information resources by
making them invisible to the outside world. But what if an employee working from home requires access to some of these
resources? What if a consultant is hired to work on the internal corporate network from a remote location? In these cases, a virtual
private network (VPN) is called for.
A VPN allows a user outside of a corporate network to detour around the firewall and access the internal network from the outside.
A combination of software and security measures lets an organization allow limited access to its networks while at the same time
ensuring overall security.

Physical Security
An organization can implement the best authentication scheme in the world, develop the best access control, and install firewalls
and intrusion prevention. Still, its security cannot be complete without the implementation of physical security. Physical security is the
protection of the actual hardware and networking components that store and transmit information resources. To implement physical
security, an organization must identify all of the vulnerable resources and ensure that these resources cannot be physically tampered
with or stolen. These measures include the following.
Locked doors: It may seem obvious, but all the security in the world is useless if an intruder can walk in and physically remove
a computing device. High-value information assets should be secured in a location with limited access.
Physical intrusion detection: High-value information assets should be monitored through the use of security cameras and other
means to detect unauthorized access to the physical locations where they exist.
Secured equipment: Devices should be locked down to prevent them from being stolen. One employee’s hard drive could
contain all of your customer information, so it must be secured.
Environmental monitoring: An organization’s servers and other high-value equipment should always be kept in a room monitored
for temperature, humidity, and airflow. The risk of server failure rises when these factors go out of a specified range.
Employee training: One of the most common ways thieves steal corporate information is to steal employee laptops while
employees are traveling. Employees should be trained to secure their equipment whenever they are away from the office.

Security Policies
Besides the technical controls listed above, organizations also need to implement security policies as a form of administrative
control. In fact, these policies should really be a starting point in developing an overall security plan. A good information-security
policy lays out the guidelines for employee use of the information resources of the company. It provides the company recourse in
the case that an employee violates a policy.
A security policy should be guided by the information security triad discussed above. It should lay out guidelines and processes for
employees to follow when accessing resources so that confidentiality, integrity, and availability are maintained. Policies require
compliance and need to be enforceable; failure to comply with a policy should result in disciplinary action. SANS Institute’s
Information Security Policy page (2020) lists templates for many different types of security policies, including, for example, a
policy governing how remote access should be managed.
A security policy should also address any governmental or industry regulations that apply to the organization. For example, if the
organization is a university, it must be aware of the Family Educational Rights and Privacy Act (FERPA), which restricts who has
access to student information. Health care organizations are obligated to follow several regulations, such as the Health Insurance
Portability and Accountability Act (HIPAA).
Sidebar: Mobile Security

As mobile devices such as smartphones and tablets proliferate, organizations must be ready to address the unique security concerns
that these devices introduce. One of the first questions an organization must consider is whether to allow mobile devices in the
workplace.
Many employees already have these devices, so the question becomes: Should we allow employees to bring their own devices and
use them as part of their employment activities? Or should we provide the devices to our employees? Creating a BYOD (“Bring
Your Own Device”) policy allows employees to integrate themselves more fully into their job and can bring higher employee
satisfaction and productivity. It may be virtually impossible to prevent employees from having their own smartphones or iPads in
the workplace in many cases. If the organization provides the devices to its employees, it gains more control over the use of the
devices, but it also exposes itself to the possibility of an administrative (and costly) mess.
Mobile devices can pose many unique security challenges to an organization. Probably one of the biggest concerns is the theft of
intellectual property. It would be a straightforward process for an employee with malicious intent to connect a mobile device either
to a computer via the USB port or wirelessly to the corporate network and download confidential data. It would also be easy to take
a high-quality picture secretly using a built-in camera.
When an employee has permission to access and save company data on their device, a different security threat emerges: that device
now becomes a target for thieves. Theft of mobile devices (in this case, including laptops) is one of the primary methods that data
thieves use.
So, what can be done to secure mobile devices? It starts with a good policy regarding their use. Specific guidelines should cover
password requirements, remote access, camera usage, and voice recording, among other topics.
Besides policies, there are several different tools that an organization can use to mitigate some of these risks. For example, if a
device is stolen or lost, geolocation software can help the organization find it. In some cases, it may even make sense to install
remote data-removal software, which will remove data from a device if it becomes a security risk.

Usability
When looking to secure information resources, organizations must balance the need for security with users’ need to access and use
these resources effectively. If a system’s security measures make it difficult to use, then users will find ways around the security,
which may make the system more vulnerable than it would have been without the security measures! Take, for example, password
policies. If the organization requires an extremely long password with several special characters, an employee may resort to writing
it down and putting it in a drawer since it will be impossible to memorize.
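Password policies are one place where a security rule ends up encoded directly in software. The sketch below checks a hypothetical policy (twelve or more characters with upper case, lower case, digits, and symbols); the specific rules are only an example, and they illustrate how quickly stricter requirements collide with what users can realistically memorize.

import re

def meets_policy(password):
    """Hypothetical policy: 12+ characters with upper, lower, digit, and symbol."""
    checks = [
        len(password) >= 12,
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    return all(checks)

print(meets_policy("summer2024"))        # False - too short and no symbol
print(meets_policy("Blue!Horse9Radio"))  # True  - long, passphrase-style password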

Reference:
SANS Institute. (2020). Security Policy Templates. Retrieved September 6, 2020, from www.sans.org/information-security-policy/

This page titled 6.3: Tools for Information Security is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong
T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

6.4: Threat Impact
Chapter 5 discussed the different security threats and solutions. However, users need to safeguard their personal information as
well.

Personally identifiable information (PII)


According to the FBI's Internet Crime Complaint Center (IC3), $13.3 billion in total losses was reported from 2016 to 2020
(IC3, 2020). Examples of crime types include phishing, personal data breaches, identity theft, and credit card fraud. Victims range
in age from 20 to 60 years old. For a detailed report, see the 2020 Internet Crime Report. The true figure may be even higher, since
many victims do not report incidents for a variety of reasons.
Personally identifiable information (PII) is any information that can be used to positively identify a person. Examples of PII
include:
Name
Social Security number
Birthday
Credit card information
Bank account numbers
Government ID
Address (street, email, telephone numbers)
One of cybercriminals' most lucrative targets is acquiring lists of PII that can then be sold on the dark web. The dark web can only
be accessed through special software, and cybercriminals use it to shield their activities. Stolen PII can be used to open fraudulent
accounts, such as short-term loans and credit cards.
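Because stolen PII is so valuable, many organizations scan outgoing messages and stored documents for data that looks like PII. The sketch below uses simple regular expressions to redact U.S. Social Security number and payment-card patterns from a sample string; real data-loss-prevention tools use far more reliable detection, so treat this only as an illustration of the idea.

import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def redact_pii(text):
    """Replace anything matching a PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

message = "Customer 123-45-6789 paid with card 4111 1111 1111 1111."
print(redact_pii(message))
# Customer [REDACTED SSN] paid with card [REDACTED CARD].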
Protected Health Information (PHI) is a subset of PII. The medical community produces and manages electronic medical records
(EMRs) that contain PHI. In the U.S., the Health Insurance Portability and Accountability Act (HIPAA) governs the handling of
PHI. In the European Union, a similar role is played by the General Data Protection Regulation (GDPR).

Lost Competitive Advantage


In cyberspace, companies are constantly concerned about corporate hacking. Another major concern is the loss of trust that occurs
when a firm cannot protect its customers' personal data. The resulting loss of competitive advantage may stem from this loss of
confidence rather than from the theft of trade secrets by another firm or country.

Reference:
2020 IC3 Report. Retrieved April 6, 2021, from https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pdf

This page titled 6.4: Threat Impact is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

6.5: Fighters in the War Against Cybercrime- The Modern Security Operations
Center
Besides the tools and practices discussed earlier to protect ourselves, companies also have increased their investment to fight
against cybercrime. One such investment is a dedicated center called Security Operations Center to safeguard companies from
internal and external threats.

Elements of a SOC
Defending against today's threats requires a formalized, structured, and disciplined approach carried out by Security Operations
Center (SOC) professionals who work closely with other groups such as IT or networking staff. SOCs offer a wide variety of
services tailored to customer needs, from monitoring and compliance to comprehensive threat detection and hosted protection.
A SOC may be wholly in-house, owned and run by a company, or elements of a SOC may be contracted to security providers such
as Cisco Systems Inc.'s Managed Security Services. The key elements of a SOC are people, processes, and technology.
Artificial Intelligence (AI) and machine learning are also powerful allies in this fight; they strengthen defenses such as multi-factor
authentication, malware scanning, and spam and phishing filtering.

Process in the SOC


SOC professionals monitor suspicious activity and follow a set of rules to verify whether it represents a true security incident
before escalating it to the appropriate severity level so that the right security experts can take action.
The SOC has four principal functions:
Use network data to verify security warnings
Evaluate incidents that have been verified and determine how to proceed
Deploy specialists to evaluate risks at the highest possible level
Provide timely communication from SOC management to the company or clients

Technologies deployed in the SOC include:


Event collection, correlation, and analysis
Security monitoring
Security control
Log management
Vulnerability assessment
Vulnerability tracking
Threat intelligence

Enterprise and Managed Security


Organizations with medium and large networks benefit from implementing an enterprise-level SOC. The SOC can be a complete
in-house solution, yet many larger organizations outsource at least part of their SOC operations to a security solution provider such
as Cisco Systems Inc.

This page titled 6.5: Fighters in the War Against Cybercrime- The Modern Security Operations Center is shared under a CC BY 3.0 license and
was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open
Educational Resources Initiative (OERI)) .

6.6: Security vs. Availability
Most business networks must remain up and running at all times, and security staff recognize that network availability must be
maintained for the company to achieve its goals.
Every company or industry has some tolerance for network downtime. Usually, this tolerance is based on comparing the cost of
downtime with the cost of insuring against it.
For example, using a router as a single point of failure might be tolerable in a small retail business with only one location.
However, if a large portion of that company's sales comes from online shoppers, the owner may want a degree of redundancy to
ensure that a connection is always available.
Desired uptime is often expressed as the number of minutes of downtime per year. For example, an uptime of “five nines” means
the network is up 99.999 percent of the time, or down for no more than 5.256 minutes a year. “Four nines” would allow 52.56
minutes of downtime per year.

Availability %                    Downtime per year
99.8%                             17.52 hours
99.9% (“three nines”)             8.76 hours
99.99% (“four nines”)             52.56 minutes
99.999% (“five nines”)            5.256 minutes
99.9999% (“six nines”)            31.5 seconds
99.99999% (“seven nines”)         3.15 seconds
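The downtime figures in the table follow directly from the number of minutes in a year (365 × 24 × 60 = 525,600). A quick sketch of the arithmetic in Python:

MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes

def downtime_minutes(availability_percent):
    """Minutes of allowed downtime per year for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for nines in (99.9, 99.99, 99.999):
    print(f"{nines}% uptime allows {downtime_minutes(nines):.2f} minutes of downtime per year")
# 99.9%   -> 525.60 minutes (about 8.76 hours)
# 99.99%  -> 52.56 minutes
# 99.999% -> 5.26 minutes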

But security cannot be so strict that it interferes with employee needs or business functions. There is often a tradeoff between
strong security and allowing the company to work efficiently.

This page titled 6.6: Security vs. Availability is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

6.7: Summary
Summary
People, businesses, and even nations can all fall victim to cyberattacks. There are different types of attackers, including amateurs
attacking for fun and prestige, hacktivists hacking for a political cause, and professional hackers attacking for profit. In addition,
some nations attack other nations to gain an economic advantage through intellectual property theft or to harm or destroy another
country's assets. The vulnerable networks include PC and server business networks and the thousands of devices on the Internet of
Things.
Fighting cyberattacks requires people, processes, and technology that follow best practices and good security policies. There are
tools that users can employ to protect personally identifiable information. There are policies that companies can require of their
customers and employees to protect their resources. Companies can also invest in dedicated Security Operations Centers (SOCs)
for cybercrime prevention, identification, and response.

This page titled 6.7: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

6.8: Study Questions
Study Questions
1. Briefly define the three components of the information security triad
2. Explain what authentication means
3. Give two examples of a complex password
4. Give three examples of threat actors
5. Name two motivations of hacktivists to commit cybercrime
6. List five ways to defend against cyber attacks
7. List three examples of PII
8. Briefly explain the role of SOC
9. Explain the purpose of security policies
10. Explain how information availability relates to a successful organization

Exercises
1. Research and analyze cybersecurity incidents to come up with scenarios of how organizations can prevent an attack.
2. Discuss some IoT (Internet of Things) application vulnerabilities with non-techie and techie technology users, then compare
and contrast their different perspectives and reactions to IoT vulnerabilities.
3. Describe one multi-factor authentication method that you have experienced and discuss the pros and cons of using multi-factor
authentication.
4. Identify the password policy at your place of employment or study. Assess if it is a good policy or not. Explain.
5. Take inventory of possible security threats that your home devices may be exposed to. List them and discuss their potential
effects and what you plan to do about them.
6. Recall when you last backed up your data. Discuss the method you used. Define a backup policy for your home devices.
7. Research the career of a SOC professional. Report what certifications or training are required to become a SOC professional,
what the demand is for this career, and the typical salary range.

This page titled 6.8: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

SECTION OVERVIEW

2: Information Systems for Strategic Advantage


7: Leveraging Information Technology (IT) for Competitive Advantage
7.1: Introduction
7.2: The Productivity Paradox
7.3: Competitive Advantage
7.4: Using Information Systems for Competitive Advantage
7.5: Investing in IT for Competitive Advantage
7.6: Summary
7.7: Study Questions

8: Business Processes
8.1: Introduction
8.2: What Is a Business Process?
8.3: Summary
8.4: Study Questions

9: The People in Information System


9.1: Introduction
9.2: The Creators of Information Systems
9.3: Information-Systems Operations and Administration
9.4: Managing Information Systems
9.5: Emerging Roles
9.6: Career Path in Information Systems
9.7: Information-Systems Users – Types of Users
9.8: Summary
9.9: Study Questions

10: Information Systems Development


10.1: Introduction
10.2: Systems Development Life Cycle (SDLC) Model
10.3: Software Development
10.4: Implementation Methodologies
10.5: Summary
10.6: Study Questions
10.7: Summary

This page titled 2: Information Systems for Strategic Advantage is shared under a CC BY 3.0 license and was authored, remixed, and/or curated
by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

7: Leveraging Information Technology (IT) for Competitive Advantage


Learning Objectives

Upon successful completion of this chapter, you will be able to:


Describe Porter’s competitive forces model and how information technology impacts competitive advantage.
Describe Porter’s value chain model and its relationship to IT.
Describe information systems that can provide businesses with a competitive advantage.
Describe the collaborative systems that workers can use to contribute to their organization.
Distinguish between a structured and an unstructured decision and its connection to IT.
Discuss the challenges associated with a sustainable competitive advantage.

This chapter examines the impact that information systems have on organizations, how they can use IT to develop and sustain
competitive advantages and improve operational effectiveness in their value chain and decision-making processes. We will discuss
seminal works by Brynjolfsson, Carr, and Porter related to IT and competitive advantage.
7.1: Introduction
7.2: The Productivity Paradox
7.3: Competitive Advantage
7.4: Using Information Systems for Competitive Advantage
7.5: Investing in IT for Competitive Advantage
7.6: Summary
7.7: Study Questions

This page titled 7: Leveraging Information Technology (IT) for Competitive Advantage is shared under a CC BY 3.0 license and was authored,
remixed, and/or curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources
Initiative (OERI)) .

7.1: Introduction
For over fifty years, since the microprocessor's invention, computing technology has been a part of the business. From UPC
scanners and computer registers at your local neighborhood store to huge inventory databases used by companies like Amazon,
information technology has become the backbone of commerce. Organizations have spent trillions of dollars on information
technologies. But has all this investment in IT made a difference? Do computers increase productivity? Are companies that invest
in IT more competitive? This chapter will look at the value IT can bring to an organization and try to answer these questions. We
will begin by highlighting two important works from the past two decades.

This page titled 7.1: Introduction is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

7.2: The Productivity Paradox
In 1991, Erik Brynjolfsson wrote an article, published in the Communications of the ACM, entitled “The Productivity Paradox of
Information Technology: Review and Assessment” By reviewing studies about the impact of IT investment on productivity,
Brynjolfsson was able to conclude that the addition of information technology to business had not improved productivity at all –
the “productivity paradox.” He concluded that this paradox resulted from our inability to unequivocally document any contribution
after so much effort due to the lack of quantitative measures.
In 1998, Brynjolfsson and Lorin Hitt published a follow-up paper entitled “ Beyond the Productivity Paradox. ” In this paper, the
authors utilized new data that had been collected and found that IT did, indeed, provide a positive result for businesses. Further,
they found that sometimes the true advantages in using technology were not directly relatable to higher productivity but to “softer”
measures, such as the impact on organizational structure. They also found that the impact of information technology can vary
widely between companies.

IT Doesn’t Matter
Just as a consensus was forming about IT's value, the Internet stock market bubble burst; two years later, in 2003, Harvard
professor Nicholas Carr wrote his article “IT Doesn’t Matter” in the Harvard Business Review. In this article, Carr asserts that as
information technology has become more ubiquitous, it has also become less of a differentiator. In other words: because
information technology is so readily available and the software used so easily copied, businesses cannot hope to implement these
tools to provide any competitive advantage. IT is essentially a commodity, and it should be managed like one: low cost, low risk. IT
management should see themselves as a utility within the company and work to keep costs down. For IT, providing the best service
with minimal downtime is the goal. As you can imagine, this article caused quite an uproar, especially from IT companies. Many
articles were written in defense of IT; many others in support of Carr.
The best thing to come out of the article and the subsequent book was that it opened up discussion on IT's place in a business
strategy and exactly what role IT could play in competitive advantage. It is that question that we want to address in the rest of this
chapter.

References
Brynjolfsson, E. and Hitt, L. (1998). Beyond the Productivity Paradox. Communications of the ACM. Retrieved August 16, 2020,
from https://doi.org/10.1145/280324.280332
Brynjolfsson, E. (1992). The Productivity Paradox of Information Technology: Review and Assessment. Center for Coordination
Science MIT Sloan School of Management Cambridge, MA. Retrieved from August 16, 2020, from
http://ccs.mit.edu/papers/CCSWP130/ccswp130.html
Carr, Nicholas G (2003) IT Doesn’t Matter. Retrieved August 20 from https://hbr.org/2003/05/it-doesnt-matter

This page titled 7.2: The Productivity Paradox is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

7.3: Competitive Advantage
What do Walmart, Apple, and McDonald’s have in common?

Figure 7.3.1 : Image Competitive landscape by PaulaD.MezaD is licensed CC BY-SA 4.0


All three businesses have a Competitive advantage. What does it mean when a company has a competitive advantage? What are the
factors that play into it? According to Michael Porter in his book “Competitive Advantage: Creating and Sustaining Superior
Performance,” a company is said to have a competitive advantage over its rivals when it can sustain profits that exceed the
industry's average. Porter identified two basic types of competitive advantage:
Cost advantage: When the firm can deliver the same benefits as competitors but at a lower cost. McDonald's and Walmart both
utilize economies of scale to maintain their cost advantage.
Differentiation advantage: When a firm can deliver benefits that exceed those of competing products. Apple’s innovative
products that complement each other and share the same operating system offer a unique product that gives consumers a sense
of exclusivity, and their trade-in programs build consumer loyalty.
The question, then, is: how can information technology be a factor in achieving competitive advantage? We will explore this
question using:
Two analysis tools from Porter’s book “Competitive Advantage: Creating and Sustaining Superior Performance”:
The value chain
The Five Forces model
Porter’s analysis in his 2001 article “Strategy and the Internet.”

The Value Chain


In his book, Porter analyzes the basis of competitive advantage and describes how a company can achieve it using the value chain
as a framework. A value chain is a step-by-step business model transforming a product or service from an idea (i.e., materials) to
reality ( i.e., products or services). Value chains help increase a business’s efficiency so the business can deliver the most value(i.e.,
profit) for the least possible cost. Each step (or activity) in the value chain contributes to a product or service's overall value. While
the value chain may not be a perfect model for every type of company, it does provide a way to analyze just how a company is
producing value.

Figure 7.3.2 : Porter’s Value Chain. Image by David Bourgeois is licensed CC BY 4.0
The value chain is made up of two sets of activities: primary activities and support activities. We will briefly examine these
activities and discuss how information technology can create value by contributing to cost advantage or differentiation advantage,
or both.
The primary activities are the functions that directly impact the creation of a product or service, its sales, and after-sales service.
The goal of the primary activities is to add more value than they cost. The primary activities are:
Inbound logistics: Purchasing, Receiving, and storing raw materials. Information technology can make these processes more
efficient, such as with supply-chain management systems, which allow the suppliers to manage their own inventory. Starbucks
has company-appointed coffee buyers that select the finest quality coffee beans from producers in Latin America, Africa, and
Asia.
Operations: Any part of a business involved in converting the raw materials into the final products or services is part of
operations. From manufacturing to business process management (covered in chapter 8), information technology can provide
more efficient processes and increase innovation through information flows.
Outbound logistics: These functions include order processing and warehousing required to get the product out to the customer.
As with inbound logistics, IT can improve processes, such as allowing for real-time inventory checks. IT can also be a delivery
mechanism itself.
Marketing/Sales: The functions that will entice buyers to purchase the products (advertising, salesforce) are part of sales and
marketing. Information technology is used in almost all aspects of this activity. From online advertising to online surveys, IT
can innovate product design and reach customers like never before. The company website can be a sales channel itself.
Service: The functions a business performs after the product has been purchased, such as installation, customer support,
complaint resolution, and repair to maintain and enhance its value, are part of the service activity. Service can be enhanced via
technology as well, including support services through websites and knowledge bases.
The support activities are the functions in an organization that support and cut across all primary activities. The support activities
are:
Firm infrastructure: Organizational functions such as finance, accounting, ERP Systems (covered in chapter 9), and quality
control, all of which depend on information technology.

Technology development: Technological advances and innovations support the primary activities. These advances are then
integrated across the company to add value in different departments. Information technology would fall specifically under this
activity.
Procurement: Acquiring the raw materials used in the creation of products and services is called procurement. Business-to-
business e-commerce can be used to improve the acquisition of materials.
A value chain is a powerful tool for analyzing and breaking down a company into the relevant activities that result in higher prices
or lower costs. By understanding how these activities are connected and how they relate to the company’s strategic objectives,
companies can identify their core competencies and gain insight into how information technology can be used to achieve a
competitive advantage.
Look at this example of the Starbucks value chain model analysis that includes a short video by Prableen Bajpai: Analyzing
Starbucks Value Chain Model.

Porter’s Five Forces
Porter recognized that other factors could impact a company’s profit in addition to competition from its rivals. He developed the
“five forces” model as a framework for analyzing the competition in an industry and its strengths and weaknesses. The model
consists of five elements, each of which plays a role in determining an industry's average profitability.

Figure 7.3.3 : Porter’s Five Forces. Image by Grahams Child is licensed CC BY-SA 3.0
In 2001, Porter wrote an article entitled “Strategy and the Internet,” in which he takes this model and looks at how the Internet
(and IT) impacts an industry's profitability. Although the model's details differ from one industry to another, its general structure of the
five forces is universal. Let’s have a look at how the internet plays a role in Porter’s five forces model:
Threat of New Entrants: The easier it is to enter an industry, the tougher it will be to profit in that industry. The Internet has an
overall effect of making it easier to enter industries. Traditional barriers such as the need for a physical store and sales force to
sell goods and services are drastically reduced. Dot-coms multiplied for that very reason: All a competitor has to do is set up a
website. The geographical reach of the internet enables distant competitors to compete more directly with a local firm. For
example, a manufacturer in Northern California may now have to compete against a manufacturer in the Southern United
States, where wages are lower.
Threat of Substitute Products: How easily can a product or service be replaced with something else? The more types of
products or services that can meet a particular need, the less profitable an industry will be. For example, the advent of
the mobile phone has replaced the need for pagers. The Internet has made people more aware of substitute products, driving
down industry profits in those industries being substituted. Any industry in which digitized information can replace material
goods such as books, music, software is at particular risk ( Think, for example, Amazon’s Kindle and Spotify).
Bargaining Power of Suppliers: When a sole supplier exists, the company is at the mercy of that supplier. For example, if only
one company makes the controller chip for a car engine, that company can control the price, at least to some extent. The Internet
has given companies access to more suppliers, allowing them to find alternatives and compare prices more easily, which drives
down prices. On the other hand, suppliers now also have the ability to sell directly to customers. As companies use IT to integrate
their supply chain, participating suppliers will prosper by locking in customers and increasing switching costs.
Bargaining Power of Customers: A company that is the sole provider of a unique product has the ability to control pricing.
But the Internet has given customers access to information about products and more options (small and big business) to choose
from.
Rivalry Among Existing Competitors: The more competitors there are in an industry, the bigger a factor price becomes. The
visibility of internet applications on the Web makes proprietary systems more difficult to keep secret, and it is straightforward to
copy technology, so the advantage gained from an innovation may not last long. For example, the Sony Reader was released in
2006, followed by the Amazon Kindle in 2007, and just two years later by the Barnes and Noble Nook, which was the best-selling
unit in the US before the iPad (with its built-in reading app iBooks) hit the market in 2010. (Wikipedia: E-Reader, 2020)
According to this model, the company's average profitability depends on the five forces' collective strength. If the five forces are
intense, for example, in the airline industry, almost no company makes a huge profit. If the forces are mild, for example, the soft
drink industry, there is room for higher profits. The Internet provides better opportunities for companies to establish strategic

advantage by boosting efficiency in various ways, as we will see in the next section. However, the internet also tends to dampen
suppliers' bargaining power and increase the threat of substitute products by making it easier for buyers and sellers to do business.
Thus, the Internet (and, by extension, information technology in general) has the overall impact of increasing competition and
lowering profitability. This is the great paradox of the internet.
While the Internet has certainly produced many big winners, the overall winners have been the consumers, who have been given an
ever-increasing market of products and services and lower prices.

References
Bajpai, P (2020). Analyzing Starbucks Value Chain Model. Retrieved August 16, 2020, from
https://www.investopedia.com/articles/investing/103114/starbucks-example-value-chain-model.asp
Porter, M. (2001). Strategy and the Internet. Harvard Business Review. Retrieved August 20, 2020, from
http://hbswk.hbs.edu/item/2165.html

This page titled 7.3: Competitive Advantage is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

7.4: Using Information Systems for Competitive Advantage
Information systems support or shape a business unit’s organizational strategy to provide a competitive advantage. Any
information system - Business Process Management (BPM), Electronic Data Interchange (EDI), Management Information System
(MIS), Decision Support System (DSS), Transaction Processing System (TPS) - that helps a business deliver a product or service
at a lower cost, deliver a differentiated product or service, focus on a specific market segment, or innovate is a strategic
information system.
Companies typically have several different types of information systems; each type serves a different level of decision-making -
operational (workers), tactical (middle and senior managers), and strategic (executives).

Figure 7.4.1 : A four-level pyramid model of different types of Information Systems based on the different levels of hierarchy in an
organization. Image by By Compo is licensed CC BY-SA 3.0
Let’s look at a few examples.

Electronic Data Interchange (EDI)


Typically, a paper-based exchange of purchase orders and invoices takes a week to process. Using EDI, the process can be
completed within hours. By integrating suppliers and distributors via EDI, a company can improve speed, efficiency, and security,
thus vastly reducing the resources required to manage relevant business information. Cleo, TrueCommerce EDI, Jitterbit, and
GoAnywhere MFT are some of the many EDI software packages that can be used in conjunction with a data integration platform.
EDI can support supply chain management and provides the standard format of information exchange used by many of the
systems discussed below.
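EDI works because both trading partners agree on a rigid, machine-readable message format. The fragment below builds a tiny purchase-order message loosely modeled on the ANSI X12 850 transaction set; the segment values are invented for illustration, and real EDI software (such as the packages named above) handles the full standards, validation, and transmission.

# Build a simplified, X12-style purchase order as plain-text segments.
def build_purchase_order(po_number, buyer, items):
    segments = [
        f"BEG*00*NE*{po_number}",         # beginning of the purchase order
        f"N1*BY*{buyer}",                 # buyer identification
    ]
    for sku, quantity, unit_price in items:
        segments.append(f"PO1**{quantity}*EA*{unit_price}**VP*{sku}")
    segments.append(f"CTT*{len(items)}")  # transaction totals
    return "~".join(segments) + "~"

order = build_purchase_order("PO-1001", "ACME RETAIL",
                             [("WIDGET-7", 50, 2.25), ("GIZMO-3", 10, 14.90)])
print(order)
# BEG*00*NE*PO-1001~N1*BY*ACME RETAIL~PO1**50*EA*2.25**VP*WIDGET-7~...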

Figure 7.4.2 : Comparison of Process with and without EDI. Image by David Bourgeois is licensed CC BY 4.0

Transaction Processing Systems (TPS)


Transaction processing systems (TPS) are computerized information systems developed to process large amounts of data for
routine business transactions such as payroll, order processing, airline reservations, employee records, accounts payable, and
receivable. TPS eliminates the tedium of necessary repetitive transactions that take time and labor and makes them efficient and
accurate, although people must still input data to computerized systems. Transaction processing systems are boundary-spanning
systems that allow the organization to interact with external environments. TPS examples include ATMs, credit card authorizations,
online bill payments, and self-checkout stations at retail stores. IT enables all of this to happen in real-time.

Business Process Management (BPM)


Business process management is the automated integration of process information targeted to streamline operations, reduce costs
and improve customer service (Ken Vollmer, BPMInstitute.org). Unlike EDI, BPM is used both internally and externally, between
applications within a business and between companies. Large financial institutions like Bank of America use BPM to link,
integrate, and automate different applications - credit card, bank account, and loan systems - thus reducing the delivery time for
financial transactions from weeks to minutes.

Management Information Systems (MIS)


A management information system (MIS) comprises the users, hardware, and software that support decision-making. An MIS
collects and stores an organization’s key data and produces the information that managers need for analysis, control, and
decision-making. For example, data from the sales of different products can be used to analyze trends in which products are
performing well and which are not. Managers use this analysis to make semi-structured decisions such as changes to future
inventory orders and manufacturing schedules.
MIS, IS, and IT sound very similar and are often confused. MIS is a type of IS that is more organization-based and focused on
leveraging IT to increase business value (i.e., profit). IT or IT management is the technical management of an IT department,
which can include MIS.

Decision Support Systems (DSS)


A decision support system (DSS) is a computerized information system that supports business or organizational decision-making
activities by sifting through and analyzing a huge amount of data and producing comprehensive information reports. As technology

continues to advance, DSS is not limited to just huge mainframe computers - DSS applications can be loaded on most desktops,
laptops, and even mobile devices. For example, GPS route planning determines the fastest and best route between two points:
analyzing and comparing multiple options and factoring in traffic conditions.
Marketing executives at a furniture company(like Living Spaces) could run DSS models that use sales data and demographic
assumptions to develop forecasts of the types of furniture that would appeal to the fastest-growing population groups.
DSSs can exist at different levels of decision-making within the organization, from executives to senior managers, and help people
make decisions about a wide variety of problems, ranging from highly structured decisions to unstructured decisions.
A structured decision is usually one that is repetitive and routine and is based directly on the inputs. For example, a company
decides whether or not to withdraw funds from an international account depending on the current exchange rate. EDI and TPS
typically handle structured decisions. Structured decisions are good candidates for automation, but we don’t necessarily build
decision-support systems for them.
An unstructured decision has a lot of unknowns and relies on knowledge and/or expertise. An information system can support
these decisions by providing the decision-makers with information-gathering tools and collaborative capabilities. An example
of an unstructured decision might be what type of new product should be created and what market should be targeted.
Decision support systems work best when the decision-maker(s) are making semi-structured decisions. A semi-structured decision
is one in which most of the factors needed for making the decision are known, but human experience and other outside factors may
still play a role. A good example of a semi-structured decision would be diagnosing a medical condition. Farmers using
crop-planning tools to determine the best time to plant, fertilize, and reap is another example.
DSSs can be as simple as a spreadsheet that allows for the input of specific variables and then calculates required outputs such as
inventory management. Another DSS might assist in determining which products a company should develop. Input into the system
could include market research on the product, competitor information, and product development costs. The system would then
analyze these inputs based on the specific rules and concepts programmed into them. Finally, the system would report its results,
with recommendations and/or key indicators to decide.
A DSS can be looked at as a tool for competitive advantage in that it can give an organization a mechanism to make wise decisions
about products and innovations.
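A spreadsheet-style DSS of the kind described above can be sketched in a few lines of code. The example below computes an inventory reorder point from daily demand, supplier lead time, and safety stock; the input values are invented, and the simple formula stands in for the richer models a real DSS would offer.

def reorder_point(daily_demand, lead_time_days, safety_stock):
    """Reorder when on-hand inventory falls to this level."""
    return daily_demand * lead_time_days + safety_stock

# Decision inputs a manager might adjust in a spreadsheet model.
demand = 40       # units sold per day
lead_time = 6     # days for the supplier to deliver
safety = 60       # buffer against demand spikes

print(f"Reorder point: {reorder_point(demand, lead_time, safety)} units")
# Reorder point: 300 units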

Collaborative Systems
As organizations began to implement networking technologies, information systems emerged that allowed employees to collaborate
differently. Tools such as document sharing and video conferencing allowed users to brainstorm ideas together and collaborate
without the necessity of physical, face-to-face meetings.
Broadly speaking, any software that allows multiple users to interact on a document or topic could be considered collaborative.
Electronic mail, a shared Word document, social networks, and discussion boards would fall into this broad definition. However,
many software tools have been created that are designed specifically for collaborative purposes. These tools offer a broad spectrum
of collaborative functions. They can exist as stand-alone systems or integrated with any of the information systems above. Here is
just a short list of some collaborative tools available for businesses today:
Cloud Services refer to a wide variety of services delivered on-demand to companies and customers over the internet without the
need for internal infrastructure or hardware.
IBM Lotus Notes
One of the first true “groupware” collaboration tools.
Provides a full suite of collaboration software, including integrated e-mail.
Largely obsolete with the advent of newer, easier-to-use technologies like Google Drive and Microsoft SharePoint.

GitHub
Code hosting platform for collaboration amongst programmers/developers of computer software.
Used primarily for version control – to track changes in source code during software development.

Microsoft SharePoint
Web-based document management and collaboration tool.
Integrates with Office 365, which educators, students, and office workers are familiar with.
SharePoint was covered in more detail in Chapter 5.

G Suite
Formerly known as Google Apps for Work.
Software as a Service (SaaS) product that groups all cloud-based productivity and collaboration tools developed by Google.
The innovative interface allows real-time document editing and sharing.
Allows collaboration with other products, like Office 365.
Another SaaS product that you may be familiar with is Dropbox.

Online video conferencing services allow two or more people in different geographical locations to meet and collaborate.

Zoom
Most popular online video conferencing and meeting platform due to its user-friendly interface.
Great for small and large businesses, as it can support up to 100 participants in online meetings.
Wide variety of options such as screen share, whiteboard, live chat and messaging, recording, and breakout rooms.
Collaboration and interaction from a variety of devices (computers, tablets, smartphones, etc.).
Google Chrome and Linux OS support.

Cisco Webex
Business communications platform that combines video and audio.
Allows participants to interact with each other’s computer desktops.
Top-of-the-line security features, making it excellent for businesses with legitimate security concerns.

Skype for Business
Microsoft’s online meeting platform.
Can support up to 250 participants for online meetings.
Combines instant messaging, video conferencing, calling, and document collaboration in a single integrated app.
The Skype that you use at home is good for small businesses and can support up to 50 participants.

With the explosion of the worldwide web, the distinction between these different systems has become fuzzy. Information systems
are available to automate practically any aspect of a business - from managing inventory to sales and customer service.
“Information Technology (IT)” is now the category used to designate any software-hardware-communications structure that today
works as a virtual nervous system of society at all levels.

This page titled 7.4: Using Information Systems for Competitive Advantage is shared under a CC BY 3.0 license and was authored, remixed,
and/or curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative
(OERI)) .

7.5: Investing in IT for Competitive Advantage
In 2008, Brynjolfsson and McAfee published a study in the Harvard Business Review on IT's role in competitive advantage, titled
“Investing in the IT That Makes a Competitive Difference.” Their study confirmed that IT could play a role in competitive
advantage if deployed wisely. In their study, they draw three conclusions:
First, the data show that IT has sharpened differences among companies instead of reducing them. This reflects that while
companies have always varied widely in their ability to select, adapt, and exploit innovations, technology has accelerated and
amplified these differences.
Second, good management matters: Highly qualified vendors, consultants, and IT departments might be necessary for the
successful implementation of enterprise technologies themselves, but the real value comes from the process innovations that can
now be delivered on those platforms. Fostering the right innovations and propagating them widely are executive responsibilities
that can’t be delegated.
Finally, the competitive shakeup brought on by IT is not nearly complete, even in the IT-intensive US economy. We expect to
see these altered competitive dynamics in other countries, as well, as their IT investments grow.

Artificial Intelligence (AI)


Let's watch this short video by The Royal Society, “What is Artificial Intelligence?”, which explains what AI is and its role and
impact in society.


Figure 7.5.1 : Technology with AI at its heart has the power to change the world, but what exactly is Artificial Intelligence? (The
Royal Society; The Royal Society via https://youtu.be/nASDYRkbQIY)
In the tech-driven and ever-changing business landscape, successfully leveraging and implementing IT has become key to
maintaining competitive advantage and growth. One such solution is artificial intelligence (AI). AI (or machine intelligence) is
intelligence demonstrated by machines - the ability of machines to operate like a human brain - to learn patterns, provide insights, and
even predict future occurrences based on inputted data/information. For example, AI can give companies a competitive edge in
marketing by providing insights into how to market, who to market to, when, and how to market. AI offers insights that are
objective and data-driven. Amazon uses AI to follow users’ behavior on its website - what types of products they buy, how long
they spend on a product page, etc. The AI system quickly learns to generate recommendations tailored to each user's taste and
preference based on their activity. Another advantage of AI is in cybersecurity and fraud protection. AI technologies can use user
behavior data to identify and flag any activity that is out of the ordinary for any user (such as credit card use outside your home
state). AI systems are very versatile in that they can handle all three types of decisions - structured, semi-structured, and
unstructured.
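As a toy illustration of the fraud-detection idea, the sketch below flags credit card transactions whose amounts are far above a customer's typical spending. Real systems train machine-learning models over many behavioral features; the threshold rule and transaction history here are invented stand-ins for that idea.

from statistics import mean, stdev

# Hypothetical transaction history for one customer (amounts in dollars).
history = [23.50, 41.00, 18.75, 36.20, 29.99, 44.10, 25.30]

def is_suspicious(amount, past_amounts, z_cutoff=3.0):
    """Flag amounts more than z_cutoff standard deviations above the mean."""
    mu, sigma = mean(past_amounts), stdev(past_amounts)
    return amount > mu + z_cutoff * sigma

print(is_suspicious(38.00, history))    # False - in line with normal spending
print(is_suspicious(950.00, history))   # True  - flagged for review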

Global Competition
Many companies today are operating in a global environment. In addition to multinational corporations, many companies now
export or import and face competition from products created in countries where labor and other costs are low or where natural
resources are abundant. Electronic commerce facilitates global trading by enabling even small companies to buy from or sell to
businesses in other countries. Amazon, Netflix, Apple, Samsung, LG, and many more have customers and suppliers worldwide.

References
McAfee, A. and Brynjolfsson, E. (2008). Investing in the IT That Makes a Competitive Difference. Harvard Business Review.
Retrieved August 16, 2020, from https://hbr.org/2008/07/investing-in-the-it-that-makes-a-competitive-difference
The Royal Society. (2018). What is Artificial Intelligence? YouTube. [video file: 2:31 minutes] Closed Captioned.

This page titled 7.5: Investing in IT for Competitive Advantage is shared under a CC BY 3.0 license and was authored, remixed, and/or curated
by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

7.6: Summary
Summary
Information systems can and have been used strategically for competitive advantage by many US companies, including Walmart,
Amazon, Netflix, and Apple. Acquiring a competitive advantage is hard, and sustaining it can be just as difficult because of
technology's innovative nature. Organizations that want to gain a market edge must understand how they want to differentiate
themselves and then use all the elements of information systems (hardware, software, data, people, and process) to accomplish that
differentiation.
IT is not a panacea; just purchasing and installing the latest technology will not, by itself, make a company more successful.
Instead, the combination of the right technologies, employee training, infrastructure, and good management, together, will give a
company the best chance of a positive result.

This page titled 7.6: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

7.7: Study Questions
Study Questions
1. List the five forces in Porter’s Competitive forces model.
2. What does it mean for a business to have a competitive advantage?
3. What are the primary activities and support activities of the value chain?
4. What has been the overall impact of the Internet on industry profitability? Who has been the true winner?
5. List two examples of how Amazon.com used Porter’s five forces model to gain a competitive advantage.
6. Give an example of how the internet impacted Barnes and Noble's online (bn.com) profitability.
7. List and compare the different information systems. How are they the same? How do they differ?
8. Give an example of a semi-structured decision and explain what inputs would be necessary to assist in making the decision.
9. What does a collaborative information system do?
10. How can IT play a role in competitive advantage, according to the 2008 article by Brynjolfsson and McAfee?

Exercises
1. Discuss the idea that an information system by itself can rarely provide a sustainable competitive advantage.
2. Review the Zoom website. What features of Zoom would contribute to good collaboration? What makes Zoom a better
collaboration tool than something like Skype or Google Hangouts?
3. Think of a semi-structured decision that you make in your daily life and build your own DSS using a spreadsheet to help you
make that decision.
4. Give an example of AI that you see used in your daily life. Describe one way it can be improved or combined with another
information system to gain an advantage.

This page titled 7.7: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

8: Business Processes
Learning Objectives

Upon successful completion of this chapter, you will be able to:


Define the term business process
Identify different systems needed to support business processes in an organization
Explain the value of an enterprise resource planning(ERP) system
Explain how business process management and business process engineering work; and
Understand how information technology combined with business processes can bring an organization competitive
advantage.

Business processes are the essence of what a business does, and information systems play an important role in making them work.
This chapter will discuss business process management, business process reengineering, and ERP systems.
8.1: Introduction
8.2: What Is a Business Process?
8.3: Summary
8.4: Study Questions

This page titled 8: Business Processes is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

8.1: Introduction
In the last seven chapters, we have gone through the first four components of an information system (IS). In this chapter, we will
discuss the fifth component of information systems: process. People build information systems to solve the problems that people
face. Have you wondered how organizations use IS to run their operations and to help their people communicate and collaborate?
That is the role of business processes in an organization. This chapter will answer those questions and describe how business
processes can be used for strategic advantage.

This page titled 8.1: Introduction is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

8.2: What Is a Business Process?
What Is a Business Process?
We have all heard the term process before, but what exactly does it mean? A business process is a series of related tasks that are
completed in a stated sequence to accomplish a business goal. This set of ordered tasks can be simple or complicated; either way,
the steps involved in completing these tasks can be documented or illustrated in a flow chart. If you have worked in a business setting,
you have participated in a business process. Anything from a simple process for making a sandwich at Subway to building a space
shuttle utilizes one or more business processes.
Processes are something that businesses go through every day to accomplish their mission. The better their processes, the more
effective the business. Some businesses see their processes as a strategy for achieving competitive advantage. A process that
uniquely achieves its goal can set a company apart. A process that eliminates costs can allow a company to lower its prices (or
retain more profit).

Documenting a Process
Every day, we will conduct many processes without even thinking about them: getting ready for work, using an ATM, reading our
email, etc. But as processes grow more complex, they need to be documented.
For businesses, it is essential to do this because it allows them to ensure control over how activities are undertaken in their
organization. It also allows for standardization: McDonald’s has the same process for building a Big Mac in its restaurants.
The simplest way to document a process is to create a list. The list shows each step in the process; each step can be checked off
upon completion. For example, a simple process, such as creating an account on Amazon, might look like a checklist such as:
Go to www.amazon.com.
Click on “Hello Sign in Account” on the top right of the screen
Select “start here” after the question “new customers?”
Select “Create your Amazon account.”
Enter your name, email, password
Select “Create Your Amazon account.”
Check your email to verify your new Amazon account
For processes that are not so straightforward, documenting the process as a checklist may not be sufficient. Some processes may
need to be documented as paths to be followed depending on certain conditions being met. For example, here is the process for
determining if an article for a term needs to be added to Wikipedia:
Search Wikipedia to determine if the term already exists.
If the term is found, then an article is already written, so you must think of another term. Repeat step 1.
If the term is not found, then look to see if there is a related term.
If there is a related term, then create a redirect.
If there is not a related term, then create a new article.
This procedure is relatively simple, but because it has some decision points, it is more difficult to track with a simple list. In these
cases, it may make more sense to use a diagram to document the process, illustrating both the steps above and the decision
points:

Figure 8.2.1 : Process diagram for determining if a new term should be added to Wikipedia. Image by David Bourgeois, Ph.D. is
licensed Public Domain
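Because this process has decision points, the same logic can also be expressed directly in code. The short Python sketch below is purely illustrative; the helper functions and the sample data are assumptions made for the example and are not part of any real Wikipedia tool.

```python
# Hypothetical sketch of the Wikipedia decision process described above.
# The helper functions and sample data are placeholders, not a real Wikipedia API.

def article_exists(term, wiki):
    """Step 1: search to see whether the term already has an article."""
    return term in wiki

def find_related_term(term, wiki):
    """Step 3: look for a related term that already has an article."""
    related = {"car": "automobile", "film": "movie"}   # illustrative data only
    match = related.get(term)
    return match if match and match in wiki else None

def decide_action(term, wiki):
    """Walk the decision points: think of another term, create a redirect, or create a new article."""
    if article_exists(term, wiki):
        return "already written - think of another term (repeat step 1)"
    related = find_related_term(term, wiki)
    if related:
        return f"create a redirect to '{related}'"
    return "create a new article"

if __name__ == "__main__":
    wiki = {"automobile", "movie", "python"}
    for term in ["python", "car", "quantum basket weaving"]:
        print(term, "->", decide_action(term, wiki))
```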
Documenting Business Processes
To standardize a process, organizations need to document their processes and continuously keep track of them to ensure accuracy.
As processes change and improve, it is important to know which version of a process is the most recent. It is also important to manage the
process documentation so that it can be easily updated and changes can be tracked.
Managing process documentation is made easier by software tools such as document management, project management, or Business
Process Modeling (BPM) software (discussed later in this chapter). Examples include Microsoft Project and IBM’s Business Process
Manager. These tools support standardized notations and common capabilities such as:
Versions and timestamps: BPM will keep multiple versions of documents. The most recent version of a document is easy to
identify and will be served up by default.
Approvals and workflows: When a process needs to be changed, the system will manage both access to the documents for
editing and the document's routing for approvals.
Communication: When a process changes, those who implement the process need to be aware of the changes. The system will
notify the appropriate people when a change to a document is approved.
Process modeling techniques: Standard graphical representations, such as a flow chart, Gantt chart, PERT diagram, or Unified
Modeling Language (UML) diagram, can be used; we will touch on these in Chapter 10.
Of course, these systems are not only used for managing business process documentation, and they have continued to evolve. Many
other types of documents are managed in these systems, such as legal documents or design documents.
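To make these capabilities concrete, the following is a minimal, hypothetical Python sketch of how a document-management tool might track versions, timestamps, approvals, and notifications for a process document. The class and field names are illustrative assumptions and do not represent the interface of any particular BPM product.

```python
# Minimal, illustrative sketch of version tracking and an approval workflow
# for a process document. Not based on any specific BPM product's API.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Version:
    number: int
    text: str
    author: str
    timestamp: datetime
    approved: bool = False

@dataclass
class ProcessDocument:
    name: str
    versions: list = field(default_factory=list)
    watchers: list = field(default_factory=list)   # people notified when a change is approved

    def edit(self, text, author):
        """Create a new draft version with a timestamp."""
        v = Version(len(self.versions) + 1, text, author, datetime.now())
        self.versions.append(v)
        return v

    def approve(self, number):
        """Approve a version and notify everyone who implements the process."""
        for v in self.versions:
            if v.number == number:
                v.approved = True
                for person in self.watchers:
                    print(f"Notify {person}: '{self.name}' version {number} approved")

    def current(self):
        """Serve the most recent approved version by default."""
        approved = [v for v in self.versions if v.approved]
        return approved[-1] if approved else None

doc = ProcessDocument("Returns process", watchers=["store managers"])
doc.edit("Accept all returns, no questions asked.", "task force")
doc.approve(1)
print(doc.current().text)
```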

Enterprise Resource Planning (ERP) Systems


An ERP system is a software application with a centralized database that can be used to run an entire company.

Figure 8.2.2 : Enterprise systems modules. Image by Shing Hin Yeung, is licensed under CC by-SA 3.0
Let’s look at an ERP and its associated modules as illustrated in Figure 8.2.2.
It is a software application: The system is a software application, which means that it has been developed with specific logic
and rules. It has to be installed and configured to work specifically for an individual organization.

It has a centralized database: The inner circle of Figure 8.2.2 indicates that all data in an ERP system is stored in a single, central
database. This centralization is key to the success of an ERP - data entered in one part of the company can be immediately
available to other parts of the company. Examples of the types of data are shown: business intelligence, eCommerce, asset
management, among others.
It can be used to run an entire company: An ERP can be used to manage an entire organization’s operations, as shown in the
outermost circle of Figure 8.2.2. Each function is supported by a specific ERP module; reading clockwise from the top, these are
Procurement, Production, Distribution, Accounting, Human Resources, Corporate Performance and Governance, Customer Services,
and Sales. Companies can purchase some or all of the available modules, representing different organizational functions such as
finance, manufacturing, and sales, to support their continued growth.
When an ERP vendor designs a module, it has to implement the associated business processes' rules. A selling point of an ERP
system is that it has best practices built right into it. In other words, when an organization implements an ERP, it also gets improved
best practices as part of the deal.
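The idea that data entered by one module is immediately available to every other module can be illustrated with a toy example. The sketch below uses an in-memory SQLite database to stand in for the ERP's central database; the module functions and the table are invented for illustration and do not correspond to any real ERP product.

```python
# Toy illustration of ERP modules sharing one central database.
# Table, module, and function names are invented; no real ERP product is implied.
import sqlite3

central_db = sqlite3.connect(":memory:")   # stands in for the shared ERP database
central_db.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")

def sales_module_take_order(order_id, customer, amount):
    """The Sales module records a new order in the central database."""
    central_db.execute("INSERT INTO orders VALUES (?, ?, ?)", (order_id, customer, amount))
    central_db.commit()

def accounting_module_total_revenue():
    """The Accounting module reads the same data immediately, with no hand-off step."""
    (total,) = central_db.execute("SELECT COALESCE(SUM(amount), 0) FROM orders").fetchone()
    return total

sales_module_take_order(1, "Acme Corp", 1200.00)
print("Revenue visible to Accounting:", accounting_module_total_revenue())
```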
For many organizations, implementing an ERP system is an excellent opportunity to improve their business practices and upgrade
their software simultaneously. But for others, an ERP brings them a challenge: Is the process embedded in the ERP really better
than the process they are currently utilizing? If they implement this ERP, and it happens to be the same one that all of their
competitors have, will they become more like them, making it much more difficult to differentiate themselves?
This has been one of the criticisms of ERP systems: they commoditize business processes, driving all businesses to use the same
processes, thereby losing their uniqueness. The good news is that ERP systems also have the capability to be configured with
custom processes. For organizations that want to continue using their own processes or even design new ones, ERP systems offer
ways to support this through customizations.
But there is a drawback to customizing an ERP system: organizations have to maintain the changes themselves. Whenever an
update to the ERP system comes out, any organization that has created a custom process will be required to add that change to their
ERP. This will require someone to maintain a listing of these changes and retest the system every time an upgrade is made.
Organizations will have to wrestle with this decision: When should they go ahead and accept the best-practice processes built into
the ERP system, and when should they spend the resources to develop their own processes? It makes the most sense only to
customize those processes that are critical to the competitive advantage of the company.
Some of the best-known ERP vendors are SAP, Microsoft, and Oracle.

Registered trademark of SAP


Adopting an ERP is about adopting a standard business process across the entire company. The benefits are many, but so are the
risks. Organizations can spend millions of dollars and take several years to fully implement an ERP. Hence, adopting an ERP is a
strategic decision about how a company wants to run its organization, based on a set of business rules and processes, in order to
deliver competitive advantages.

Business Process Management (BPM)


Organizations that are serious about improving their business processes will also create structures to manage those processes. BPM
can be thought of as an intentional effort to plan, document, implement, and distribute an organization’s business processes with
information technology support.
BPM is more than just automating some simple steps. While automation can make a business more efficient, it cannot provide a
competitive advantage. On the other hand, BPM can be an integral part of creating that advantage, as we saw in Chapter 7.
Not all of an organization’s processes should be managed this way. An organization should look for processes that are essential to
the business's functioning and those that may be used to bring a competitive advantage. The best processes to examine are those
that involve employees from multiple departments, those that require decision-making that cannot be easily automated, and those
that change based on circumstances.
Let’s examine an example. Suppose a large clothing retailer is looking to gain a competitive advantage through superior customer
service. As part of this, they create a task force to develop a state-of-the-art returns policy that allows customers to return any
clothing article, no questions asked. The organization also decides that to protect the competitive advantage that this returns policy
will bring, they will develop their own customization to their ERP system to implement this returns policy. As they prepare to roll

out the system, they invest in training for all of their customer-service employees, showing them how to use the new system and
process returns. Once the updated returns process is implemented, the organization will measure several key indicators about
returns that will allow them to adjust the policy as needed. For example, if they find that many customers are returning their high-
end clothing after wearing them once, they could implement a change to the process that limits – to, say, fourteen days – the time
after the original purchase that an item can be returned. As changes to the returns policy are made, the changes are rolled out via
internal communications, and updates to the system's returns processing are made. In our example, the system would no longer
allow an item to be returned after fourteen days without an approved reason.
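As a rough illustration only (the retailer and its policy in this example are fictional), the fourteen-day rule could be encoded in the customized returns module as a small decision function, so that every store applies exactly the same logic:

```python
# Illustrative sketch of the fictional retailer's customized returns rule.
RETURN_WINDOW_DAYS = 14   # policy parameter; can be changed and rolled out centrally

def returns_decision(days_since_purchase, approved_reason=None):
    """Accept returns within the window; after that, require an approved reason."""
    if days_since_purchase <= RETURN_WINDOW_DAYS:
        return "accept"
    if approved_reason:
        return f"accept (reason: {approved_reason})"
    return "reject - outside the 14-day window and no approved reason"

print(returns_decision(5))                          # accept
print(returns_decision(30))                         # reject
print(returns_decision(30, "defective product"))    # accept with reason
```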
If done properly, business process management will provide several key benefits to an organization, contributing to competitive
advantage. These benefits include:
Empowering employees: When a business process is designed correctly and supported with information technology,
employees will implement it on their own authority. In our returns policy example, an employee would be able to accept returns
made before fourteen days or use the system to make determinations on what returns would be allowed after fourteen days.
Built-in reporting: By building measurement into the programming, the organization can keep up to date on key metrics
regarding its processes. In our example, these metrics can be used to improve the returns process and, ideally, reduce returns.
Enforcing best practices: As an organization implements processes supported by information systems, it can implement the
best practices for that business process class. In our example, the organization may require that all customers returning a
product without a receipt show a legal ID. This requirement can be built into the system so that the return will not be processed
unless a valid ID number is entered.
Enforcing consistency: By creating a process and enforcing it with information technology, it is possible to create consistency
across the organization. In our example, all stores in the retail chain can enforce the same returns policy. And if the returns
policy changes, the change can be instantly enforced across the entire chain.

Business Process Re-engineering (BPR)


As organizations look to manage their processes to gain a competitive advantage, they also need to understand that their existing
ways of doing things may not be the most effective or efficient. A process developed in the 1950s will not be better just because it
is now supported by technology.
In his 1990 article “Reengineering Work: Don’t Automate, Obliterate,” Michael Hammer argued that simply automating a bad
process does not make it better. Instead, companies should “blow up” their existing processes and develop new processes that
take advantage of new technologies and concepts. Rather than automating outdated processes that do not add value, companies
should use modern IT to radically re-engineer their processes and achieve significant performance improvements.
Business process reengineering is not just taking an existing process and automating it. BPR means fully understanding the goals of a
process and then dramatically redesigning it from the ground up to achieve major improvements in productivity and quality. But this is
easier said than done. Most of us think about making small, local improvements to a process; complete redesign requires thinking
on a larger scale.
Hammer provides some guidelines for how to go about doing business process reengineering. You can read an excerpt from the
July-August 1990 HBR issue (accessible with a free account at HBR, at the time of this writing). A summary of the guidelines is
below:
Organize around outcomes, not tasks. This means designing the process so that, if possible, one person performs all the steps.
Instead of repeating one step over and over in the process, the person stays involved in the process from start to finish. An
example is Mutual Benefit Life’s use of one person (a case manager) to perform all tasks required for a completed insurance
application, from paperwork, medical checks, and risk checks to policy pricing.
Have those who use the outcomes of the process perform the process. Using information technology, many simple tasks are
now automated to empower the person who needs the process's outcome to perform it. Hammer's example is purchasing:
instead of having every department in the company use a purchasing department to order supplies, have the supplies ordered
directly by those who need the supplies using an information system.
Subsume information-processing work into the real work that produces the information. When one part of the company
creates information (like sales information or payment information), it should be processed by that department. There is no need
for one part of the company to process information created in another part of the company. An example of this is Ford's
redesigned accounts payable process where receiving processes the information about goods received rather than sending it to
accounts payable.

Treat geographically dispersed resources as though they were centralized. With the communications technologies in place
today, it becomes easier than ever to not worry about physical location. A multinational organization does not need separate
support departments (such as IT, purchasing, etc.) for each location.
Link parallel activities instead of integrating their results. Departments that work in parallel should share data and
communicate with each other during their activities instead of waiting until each group is done and then comparing notes.
Put the decision points where the work is performed, and build controls into the process. The people who do the work
should have decision-making authority, and the process itself should have built-in controls using information technology. The
workers become self-managing and self-controlling, and the manager’s role changes to supporter and facilitator.
Capture information once at the source. Requiring information to be entered more than once causes delays and errors. With
information technology, an organization can capture it once and then make it available whenever needed.
These principles may seem like common sense today, but in 1990 they took the business world by storm. Ford's and Mutual Benefit
Life’s successful attempts at reengineering a core business process have become textbook examples of business process
reengineering.
Organizations can improve their business processes by orders of magnitude without adding new employees, simply by changing
how they do things (see sidebar). For examples of how modern businesses of this century undergo process reengineering to gain a
competitive advantage, read this blog by Carly Burdova on minit.
Unfortunately, business process reengineering got a bad name in many organizations. This was because it was used as an excuse for
cost-cutting that really had nothing to do with BPR. For example, many companies used it as an excuse for laying off part of their
workforce. Today, however, many BPR principles have been integrated into businesses and are considered part of good business
process management.

Sidebar: Re-engineering the College Bookstore


The process of purchasing the correct textbooks on time for college classes has always been problematic. And now, with online
bookstores such as Amazon and Chegg competing directly with the college bookstore for students’ purchases, the college
bookstore is under pressure to justify its existence.
But college bookstores have one big advantage over their competitors: they have access to students’ data. In other words, once a
student has registered for classes, the bookstore knows exactly what books that student will need for the upcoming term. To
leverage this advantage and take advantage of new technologies, the bookstore wants to implement a new process that will make
purchasing books through the bookstore advantageous to students. Though they may not compete on price, they can provide other
advantages, such as reducing the time it takes to find the books and guaranteeing that the book is the correct one for the class. To
do this, the bookstore will need to undertake a process redesign.
The process redesign's goal is simple: capture a higher percentage of students as customers of the bookstore. The process before
and after the reengineering is shown in Figure 8.2.3.

Figure 8.2.3 : College bookstore process redesign. Image by David Bourgeois, Ph.D. is licensed CC BY 4.0
The Before process steps are:
1. The students get a booklist from each instructor
2. Go to the bookstore to search for the books on the list
3. If they are available, then students can purchase them

4. If they are not available, then the students will order the missing books
5. The students purchase the missing books
6. Students repeat step 3 for any books they have not yet purchased
After diagramming the existing process and meeting with student focus groups, the bookstore develops a new process. In the newly
redesigned process:
1. The bookstore utilizes information technology to reduce the amount of work the students need to do to get their books. It sends
each student an email with a list of all the books required for their upcoming classes, along with purchase options (new,
used, or rental).
2. By clicking a link in this email, the students can log into the bookstore, confirm their books, and pay for their books online.
3. The bookstore will then deliver the books to the students.
The new re-engineered process delivers the business goal of capturing a larger percentage of students as customers of the bookstore,
using technology to provide a convenient, faster, value-added service to students.
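A minimal sketch of the heart of the redesigned process, matching each student's registered courses to the required books and generating the email, might look like the following. The data structures and function names are purely illustrative assumptions about what the bookstore's information system could do.

```python
# Illustrative sketch of the redesigned bookstore process: generate a
# personalized booklist email from registration data. All data is made up.
registrations = {"jsmith": ["BUS 101", "CIS 110"]}
booklist = {
    "BUS 101": [("Intro to Business", ["new", "used", "rental"])],
    "CIS 110": [("Information Systems for Business", ["new", "rental"])],
}

def booklist_email(student):
    """Build the email body listing every required book and its purchase options."""
    lines = [f"Hello {student}, here are the books for your upcoming classes:"]
    for course in registrations.get(student, []):
        for title, options in booklist.get(course, []):
            lines.append(f"- {course}: {title} (options: {', '.join(options)})")
    lines.append("Click the link below to confirm and pay online.")
    return "\n".join(lines)

print(booklist_email("jsmith"))
```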

ISO Certification
Many organizations now claim that they are using best practices when it comes to business processes. To set themselves apart and
prove to their customers (and potential customers) that they are indeed doing this, these organizations seek out an ISO 9000
certification.
ISO refers to the International Organization for Standardization, a global network of national standards bodies

Registered trademark of International Standard Organization. Image by International Organization for Standardization is licensed
CC-by-SA 4.0 International
This body defines quality standards that organizations can implement to show that they are, indeed, managing business processes in
an effective way. The ISO 9000 certification is focused on quality.
To receive ISO certification, an organization must be audited and found to meet specific criteria. In its most simple form, the
auditors perform the following review:
Tell me what you do (describe the business process).
Show me where it says that (reference the process documentation).
Prove that this is what happened (exhibit evidence in documented records).
Over the years, this certification has evolved, and many branches of the certification now exist. The ISO 9000 family addresses
various aspects of quality management. ISO certification is one way for an organization to set itself apart from others in the quality
of its products and services and in its ability to meet customer expectations.

References
Hammer, M. (1990). Reengineering work: Don't automate, obliterate. Harvard Business Review, 68(4), 104–112.

This page titled 8.2: What Is a Business Process? is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

8.3: Summary
Summary
The advent of information technologies has had a huge impact on how organizations design, implement and support business
processes. From document management to project management to ERP systems, information systems are tied into organizational
processes. Using business process management, organizations can empower employees and leverage their processes for
competitive advantage. Using business process reengineering, organizations can vastly improve their effectiveness and the quality
of their products and services. Integrating information technology with business processes is one way information systems can
bring an organization a lasting competitive advantage.

This page titled 8.3: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

8.4: Study Questions
Study Questions
1. What does the term business process mean?
2. What are three examples of business processes (from a job you have had or an organization you have observed)?
3. What is the value of documenting a business process?
4. What is an ERP system? How does an ERP system enforce best practices for an organization?
5. What is one of the criticisms of ERP systems?
6. What is business process reengineering? How is it different from incrementally improving a process?
7. Why did BPR get a bad name?
8. List the guidelines for redesigning a business process.
9. What is business process management? What role does it play in allowing a company to differentiate itself?
10. What does ISO certification signify?

Exercises
1. Think of a business process that you have had to perform in the past. How would you document this process? Would a diagram
make more sense than a checklist? Document the process both as a checklist and as a diagram.
2. Review the return policies at your favorite retailer and then answer this question: What information systems do you think need
to be in place to support their return policy?
3. If you were implementing an ERP system, in which cases would you be more inclined to modify the ERP to match your
business processes? What are the drawbacks of doing this?
4. Which ERP is the best? Do some original research and compare three leading ERP systems to each other. Write a two- to three-
page paper that compares their features.
5. Research a company that chooses to implement an ERP. Write a report to describe it.
6. Research a failed implementation of an ERP. Write a report to describe why.
7. Research and write a report on how a company can obtain an ISO quality management certification.

This page titled 8.4: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

9: The People in Information System


Learning Objectives

Upon successful completion of this chapter, you will be able to:


Describe each of the different roles that people play in the design, development, and use of information systems;
Understand the different career paths available to those who work with information systems;
Explain the importance of where the information-systems function is placed in an organization;
Describe the different types of users of information systems.

This chapter will provide an overview of the different types of people involved in information systems. This includes the people (and
machines) who create information systems, those who operate and administer information systems, those who manage or support
information systems, and those who use information systems, as well as the job outlook for IT.
9.1: Introduction
9.2: The Creators of Information Systems
9.3: Information-Systems Operations and Administration
9.4: Managing Information Systems
9.5: Emerging Roles
9.6: Career Path in Information Systems
9.7: Information-Systems Users – Types of Users
9.8: Summary
9.9: Study Questions

This page titled 9: The People in Information System is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

9.1: Introduction
In this text's opening chapters, we focused on the technology behind information systems: hardware, software, data, and
networking. In the last chapter, we discussed business processes and the key role they can play in a business's success. In this
chapter, we will be discussing the last component of an information system: people.

Figure 9.1.1 : People in Information systems. Image by Karen Arnold - PublicDomainPictures is licensed CC0-PD
People are involved in information systems in just about every way you can think of: people imagine information systems, develop
information systems, support information systems, and, perhaps most importantly, people use information systems.

This page titled 9.1: Introduction is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

9.2: The Creators of Information Systems
The first group of people we are going to look at plays a role in designing, developing, and building information systems. These
people are generally very technical and have a background in programming and mathematics. Just about everyone who works in
creating information systems has a minimum of a bachelor’s degree in computer science or information systems. However, that is
not necessarily a requirement. We will be looking at the process of creating information systems in more detail in chapter 10.

Systems Analyst
The systems analyst's role is unique in that it straddles the divide between identifying business needs and imagining a new or
redesigned computer-based system to fulfill those needs. This individual will work with a person, team, or department with
business requirements and identify the specific details of a system that needs to be built. Generally, this will require the analyst to
understand the business itself and the business processes involved, and to be able to document them well. The analyst will identify the
different stakeholders in the system and work to involve the appropriate individuals.
Once the requirements are determined, the analyst will begin translating these requirements into an information-systems design. A
good analyst will understand what different technological solutions will work and provide several different alternatives to the
requester, based on the company’s budgetary constraints, technology constraints, and culture. Once the solution is selected, the
analyst will create a detailed document describing the new system. This new document will require that the analyst understand how
to speak in systems developers' technical language.
A systems analyst generally is not the one who does the actual development of the information system. The design document
created by the systems analyst provides the detail needed to create the system and is handed off to a programmer (or team of
programmers) to do the actual creation of the system. In some cases, however, a systems analyst may create the system that he or
she designed. This person is sometimes referred to as a programmer-analyst.
In other cases, the system may be assembled from off-the-shelf components by a person called a systems integrator. This is a
specific type of systems analyst that understands how to get different software packages to work with each other.
To become a systems analyst, you should have a background in business and systems design. You also must have strong
communication and interpersonal skills plus an understanding of business standards and new technologies. Many analysts first
worked as programmers and/or had experience in the business before becoming systems analysts. The best systems analysts have
excellent analytical skills and are creative problem solvers.

Computer Programmer (or Software developer)


A computer programmer or software developer is responsible for writing the code that makes up computer software. They write,
test, debug and create documentation for computer programs. In the case of systems development, programmers generally attempt
to fulfill the design specifications given to them by a systems analyst. Many different programming styles exist: a programmer may
work alone for long stretches of time or may work in a team with other programmers. A programmer needs to understand complex
processes and the intricacies of one or more programming languages. They are usually referred to by the programming language
they most often use: Java programmer or Python programmer. Good programmers are very proficient in mathematics and excel at
logical thinking.

Computer Engineer
Computer engineers design the computing devices that we use every day. There are many types of computer engineers who work
on various types of devices and systems. Some of the more prominent engineering jobs are as follows:
Hardware engineer: A hardware engineer designs hardware components, such as microprocessors. A hardware engineer is
often at the cutting edge of computing technology, creating something brand new. Other times, the hardware engineer’s job is to
engineer an existing component to work faster or use less power. Many times, a hardware engineer’s job is to write code to
create a program that will be implemented directly on a computer chip.
Software engineer: Software engineers do not actually design devices; instead, they create new programming languages and
operating systems, working at the lowest hardware levels to develop new kinds of software to run on the hardware.
Systems engineer: A systems engineer takes the components designed by other engineers and makes them all work together.
For example, to build a computer, the motherboard, processor, memory, and hard disk all have to work together. A systems

engineer has experience with many different hardware and software types and knows how to integrate them to create new
functionality.
Network engineer: A network engineer’s job is to understand the networking requirements and then design a communications
system to meet those needs, using the networking hardware and software available.
There are many different types of computer engineers, and often the job descriptions overlap. While many may call themselves
engineers based on a company job title, there is also a professional designation of “professional engineer,” which has specific
requirements behind it. In the US, each state has its own set of requirements for using this title, as do different countries around the
world. Most often, it involves a professional licensing exam.

References
Careers in IT. Retrieved November 13, 2020, from https://www.itcareerfinder.com/it-careers/mobile-application-developer.html

This page titled 9.2: The Creators of Information Systems is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

9.3: Information-Systems Operations and Administration
Another group of information-systems professionals is involved in the day-to-day operations and administration of IT. These
people must keep the systems running and up-to-date so that the rest of the organization can make the most effective use of these
resources.

Computer Operator
A computer operator is a person who keeps large computers running. This person’s job is to oversee the mainframe computers and
data centers in organizations. Some of their duties include keeping the operating systems up to date, ensuring available memory
and disk storage, and overseeing the computer's physical environment. Since mainframe computers have increasingly been replaced
with servers, storage management systems, and other platforms, computer operators’ jobs have grown broader and include working
with these specialized systems.

Database Administrator
A database administrator (DBA) is the person who manages the databases for an organization. This person operates and maintains
databases, including database recovery and backup procedures, used as part of applications or the data warehouse. They are
responsible for securing the data and ensuring that only users who are approved to access the data can do so. The DBA also
consults with systems analysts and programmers on projects requiring access to or creating databases.
Database Architect: Database architects design and create secure databases that meet the needs of an organization. They work
closely with software designers, design analysts, and others to create comprehensive databases that may be used by hundreds, if
not thousands, of people. Most organizations do not staff a separate database architect position. Instead, they require DBAs to
work on both new and established database projects.
Database Analyst: Some organizations create a separate position, database analyst, which looks at databases from a higher
level. This person analyzes database designs and the changing needs of an organization, recommends additions for new projects, and
designs the tables and relationships.
Oracle DBA: A DBA who specializes in Oracle databases. Oracle DBAs handle capacity planning, evaluate database server
hardware, and manage all aspects of an Oracle database, including installation, configuration, design, and data migration.

Help-Desk/Support Analyst
Most midsize to large organizations have their own information-technology help desk, one of the most visible IT roles. The help
desk is the first line of support for computer users in the company. Computer users who are having problems or need information
can contact the help desk for assistance. Often, a help-desk worker is a junior-level employee who does not necessarily know how
to answer all of the questions that come his or her way. In these cases, help-desk analysts work with senior-level support analysts or
have a computer knowledgebase at their disposal to help them investigate the problem at hand. The help desk is a great place to
break into IT because it exposes you to all of the company's different technologies. A successful help-desk analyst has conflict-resolution
skills, active listening skills, problem-solving abilities, and a wide range of technical knowledge across hardware, software,
and networks.

Trainer
A computer trainer conducts classes to teach people specific computer skills. For example, if a new ERP system is installed in an
organization, one part of the implementation process is to teach all users how to use the new system. A trainer may work for a
software company and be contracted to come in to conduct classes when needed; a trainer may work for a company that offers
regular training sessions, or a trainer may be employed full time for an organization to handle all of their computer instruction
needs. To be successful as a trainer, you need to be able to communicate technical concepts well and have a lot of patience!

Quality Support Engineers


A quality engineer establishes and maintains a company’s quality standards and tests systems to ensure efficiency, reliability, and
performance. They are also responsible for creating documentation that reports issues and errors relating to the computer and
software systems.

This page titled 9.3: Information-Systems Operations and Administration is shared under a CC BY 3.0 license and was authored, remixed, and/or
curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative
(OERI)) .

9.4: Managing Information Systems
The management of information-systems functions is critical to the success of information systems within the organization. Here
are some of the jobs associated with the management of information systems.

Chief Information Officer (CIO)


The CIO, or chief information officer, is the head of the information-systems function. This person aligns the plans and operations
of the information systems with the strategic goals of the organization. This includes tasks such as budgeting, strategic planning,
and personnel decisions for the information-systems function. This is a high-profile position as the CIO is also the face of the
organization's IT department. This involves working with senior leaders in all parts of the organization to ensure good
communication and planning.
Interestingly, the CIO position does not necessarily require a lot of technical expertise. While helpful, it is more important for this
person to have good management and people skills and understand the business. Many organizations do not have someone with the
CIO's title; instead, the head of the information-systems function is called vice president of information systems or director of
information systems.

Functional Manager
As an information-systems organization becomes larger, many of the different functions are grouped and led by a manager. These
functional managers report to the CIO and manage the employees specific to their function. For example, in a large organization, a
group of systems analysts reports to a systems-analysis function manager. For more insight into how this might look, see the
discussion later in the chapter of how information systems are organized.

ERP Management
Organizations using an ERP require one or more individuals to manage these systems. These people make sure that the ERP system
is completely up to date, work to implement any changes to the ERP needed, and consult with various user departments on needed
reports or data extracts.

Project Managers
Information-systems projects are notorious for going over budget and being delivered late. In many cases, a failed IT project can
spell doom for a company. A project manager is responsible for keeping projects on time and budget. This person works with the
project stakeholders to keep the team organized and communicates the status of the project to management. A project manager does
not have authority over the project team; instead, the project manager coordinates schedules and resources to maximize the project
outcomes. A project manager must be a good communicator and an extremely organized person. A project manager should also
have good people skills. Many organizations require their project managers to become certified as project management
professionals (PMP).

Information-Security Officer
An information security officer is in charge of setting information-security policies for an organization and then overseeing those
policies' implementation. This person may have one or more people reporting to them as part of the information security team. As
information has become a critical asset, this position has become highly valued. The information-security officer must ensure that
the organization’s information remains secure from both internal and external threats.

This page titled 9.4: Managing Information Systems is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong
T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

9.5: Emerging Roles
As technology evolves, many new roles are becoming more common as other roles fade. For example, as we enter the age of “big
data,” we see the need for more data analysts and business-intelligence specialists. Many companies are now hiring social media
experts and mobile-technology specialists. The increased use of cloud computing and virtual-machine technologies is also breeding
demand for expertise in those areas.
Cloud system engineer: In the past, companies would typically store their data in large physical databases or even hire
database firms, but today, they turn to cloud storage as a low-cost and effective means of storing data. This is where cloud
engineers come in. They are responsible for the design, planning, management, maintenance, and support of an organization's
cloud computing environment.
Cyber Security Analyst (or engineer): As new technologies emerge, so does the number of security threats online. Cybersecurity
is a growing field that focuses on protecting organizations from digital attacks and keeping their information and networks safe.
The following are examples of some of the many cybersecurity roles:
Security Administrator: These professionals serve in high-level roles, overseeing the IT security efforts of their organization.
They create policies and procedures, identify weak areas of networks, install firewalls, and respond to security breaches.
Security Architect: Security architects design, plan, and supervise systems that thwart potential computer security threats.
They must find the strengths and weaknesses of their organizations' computer systems, often developing new security
architectures.
Security Analyst: Organizations employ a security analyst to protect computer and networking systems from cyber-attacks
and hackers and keep information and networks safe.
AI/Machine Learning Engineer: These engineers develop and maintain AI (artificial intelligence) machines and systems that
have the ability to learn and utilize existing knowledge. As more and more industries turn towards automating certain aspects of
the workforce, AI engineers will be in high demand.
Computer Vision Engineer: Computer vision engineers create and use computer vision and machine learning algorithms that
acquire, process, and analyze digital images, videos, etc. Their work is closely linked to AR (augmented reality) and VR (virtual
reality). As we see the rise of technologies such as self-driving vehicles, demand for these skills will continue to grow.
Big Data Engineer: Big Data Engineers create and manage a company's Big Data infrastructure, such as SQL engines and
tools. A big data engineer installs continuous pipelines that run to and from huge pools of filtered information from which data
scientists can pull relevant data sets for their analyses.
Health Information Technician: Health information technicians use specialized computer programs and administrative
techniques to ensure that patients' electronic health records are complete, accurate, accessible, and secure.
Mobile Application Developer: Mobile app developers create software for mobile devices. They write programs inside a
mobile development environment using programming languages such as Objective-C, C++, or Java. A mobile app developer will
typically choose an OS such as Google’s Android or Apple's iOS and develop apps for that environment.

This page titled 9.5: Emerging Roles is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

9.6: Career Path in Information Systems
These job descriptions do not represent all possible jobs within an information system organization. Larger organizations will have
more specialized roles; smaller organizations may combine some of these roles. Many of these roles may exist outside of a
traditional information-systems organization, as we will discuss below.

Figure 9.6.1 : Jobs in Information Systems - Image from Pickpic is licensed CCO-PD
Working with information systems can be a rewarding career choice. Whether you want to be involved in very technical jobs
(programmer, database administrator) or want to be involved in working with people (systems analyst, trainer), there are many
different career paths available.
Often, those in technical jobs who want career advancement find themselves in a dilemma: do they want to continue doing
technical work, where sometimes their advancement options are limited, or do they want to become a manager of other employees
and put themselves on a management career track? In many cases, those proficient in technical skills are not gifted with managerial
skills. Some organizations, especially those that highly value their technically skilled employees, will create a technical track that
exists in parallel to the management track to retain employees who are contributing to the organization. Today, most large
organizations have dual career paths - the Managerial and Technical/Professional.
Then there are people from other fields who want to get into IT. For example, a writer may want to become a technical writer, and a
salesperson may want to become a quality tester.
People have many different reasons for transitioning into the IT industry, and the timing couldn’t be better. The IT industry is
facing a massive shortage of workers, both domestic and international, and there are many employment opportunities at every
level.

Sidebar: Are Certifications Worth Pursuing?


As technology is becoming more important to businesses, hiring employees with technical skills is becoming critical. But how
can an organization ensure that the person they are hiring has the necessary skills? These days, many organizations are
including technical certifications as a prerequisite for getting hired.
Certifications are designations given by a certifying body that someone has a specific knowledge level in a specific technology.
This certifying body is often the vendor of the product itself, though independent certifying organizations, such as CompTIA,
also exist. Many of these organizations offer certification tracks, allowing a beginning certificate as a prerequisite to getting
more advanced certificates. To get a certificate, you generally attend one or more training classes and then take one or more
certification exams. Passing the exams with a certain score will qualify you for a certificate. In most cases, these classes and
certificates are not free and, in fact, can run into the thousands of dollars. Some examples of the certifications in the highest
demand include Microsoft (software certifications), Cisco (networking), SANS (security), and Oracle (database, SQL).
For many working in IT (or thinking about an IT career), determining whether to pursue one or more of these certifications is
an important question. For many jobs, such as those involving networking or security, the employer will require a certificate to
determine which potential employees have a basic level of skill. For those already in an IT career, a more advanced certificate
may lead to a promotion. In other cases, however, experience with a certain technology will negate the need for
certification. For those wondering about the importance of certification, the best solution is to talk to potential employers and
those already working in the field to determine the best choice. Perusing different job websites to see the trend of hot IT jobs
and associated requirements is a good place to start.

Organizing the Information-Systems Function
In the early years of computing, the information-systems function (generally called data processing) was placed in the
organization's finance or accounting department. As computing became more important, a separate information-systems function
was formed. However, it was still generally placed under the CFO and considered an administrative function of the company. In the
1980s and 1990s, when companies began networking internally and then linking up to the Internet, the information-systems
function was combined with the telecommunications functions and designated the information technology (IT) department. As
information technology's role continued to increase, especially with the rising risks around security and privacy, its place in the
organization also moved up the ladder. In many organizations today, the head of IT (the CIO) reports directly to the CEO or COO.
There are still places where IT reports to a VP of finance.
IT is often organized into these functions:
IT support (call support)
Security
Database
Network
Applications to support end-user apps (e.g., Office) or enterprise apps (ERP, MRP).
The size of each function varies depending on the level of outsourcing a company decides to do.
Not all IT-related tasks are done directly by IT staff. Some tasks may be done by other groups in a firm such as Marketing or
Manufacturing. For example, marketing or engineering groups may choose their own vendor to support and provide cloud services
for the company's products or services. Collaboration with IT is critical to avoid creating confusion for end-user support and
training. Some IT tasks can also be outsourced to external partners.

Outsourcing
Outsourcing, using third-party service providers to handle some of your business processes, became a popular business strategy
back in the 1980s and 1990s to combat rising labor costs and allow firms to focus on their core functions. For example, an early
function that firms outsourced is payroll. With the Internet boom and bust in 2000-2001 and the rise of the global marketplace,
outsourcing is now a common business strategy for companies of all sizes.

Figure 9.6.2 : Outsourcing. Image by Jireh Gibson is licensed Pixabay


If an organization needs a specific skill for a limited period of time, instead of training an existing employee or hiring someone
new, the job can be outsourced. Outsourcing can be used in many different situations within the information-systems function, such
as designing and creating a new website or the upgrade of an ERP system. Some organizations see outsourcing as a cost-cutting
move, contracting out a whole group or department. In some cases, outsourcing has become a necessity - the only feasible way to
grow your business, launch a product, or manage operations is by using an outside vendor for certain tasks.

Job Outlook
IT jobs are projected to grow due to the continued growth of cloud computing, rising cybersecurity concerns, and the expansion of
firms, in both computing and non-computing industries, adopting new technologies and digital platforms.
According to the Bureau of Labor Statistics, employment of computer and information systems managers is projected to grow 10%
from 2019 to 2029, while network and computer systems administrators are projected to grow 4% and computer support specialists 8%.

References
Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, Computer and Information Systems
Managers. Retrieved November 13, 2020, from https://www.bls.gov/ooh/management/computer-and-information-systems-
managers.htm
Careers in IT. Retrieved November 13, 2020, from https://www.itcareerfinder.com/it-careers/mobile-application-developer.html

This page titled 9.6: Career Path in Information Systems is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

9.7: Information-Systems Users – Types of Users
Information-Systems Users – Types of Users
Besides the people who work to create, administer, and manage information systems, there is one more significant group of people: the
users of information systems. This group represents a considerable percentage of the people involved. If the users cannot
successfully learn and use an information system, the system is doomed to failure.
One tool used to understand how users will adopt a new technology comes from a 1962 study by Everett Rogers. In his book,
Diffusion of Innovations, Rogers explains how new ideas and technology spread via communication channels over time.
Innovations are initially perceived as uncertain and even risky. To overcome this uncertainty, most people seek out others like
themselves who have already adopted the new idea or technology. Thus, the diffusion process consists of successive groups of
consumers adopting new technology (shown in blue in the graph below); the adoption rate starts slowly and then dramatically
increases once adoption reaches a certain point, until market share (the yellow curve) reaches saturation level and adoption becomes
self-sustaining.

Figure 9.4: Technology adoption user types. Image by Rogers Everett, licensed under Public Domain, via Wikimedia Commons
Rogers identified five (sections of the blue curve) specific types of technology adopters:
Innovators: Innovators are the first individuals to adopt new technology. Innovators are willing to take risks, are the youngest
in age, have the highest social class, have great financial liquidity, are very social, and have the closest contact with scientific
sources and interaction with other innovators. Risk tolerance has them adopting technologies that may ultimately fail. Financial
resources help absorb these failures (Rogers 1962 5th ed, p. 282).
Early adopters: The early adopters adopt an innovation after a technology has been introduced and proven. These individuals have the highest degree of opinion leadership among the adopter categories, which means that they can influence the opinions of the majority. They are typically younger in age, have higher social status, more financial liquidity, and more advanced education, and are more socially aware than later adopters. These people are more discreet in their adoption choices than innovators and realize that a judicious choice of adoption will help them maintain a central communication position (Rogers 1962 5th ed, p. 283).
Early majority: Individuals in this category adopt an innovation after a varying degree of time. This time of adoption is
significantly longer than the innovators and early adopters. This group tends to be slower in the adoption process, has above
average social status, has contact with early adopters, and seldom holds opinion leadership positions in a system (Rogers 1962
5th ed, p. 283).
Late majority: The late majority will adopt an innovation after the average member of the society. These individuals approach
an innovation with a high degree of skepticism, have below-average social status, very little financial liquidity, contact others in
the late majority and the early majority, and show very little opinion leadership.
Laggards: Individuals in this category are the last to adopt an innovation. Unlike those in the previous categories, individuals in this category show no opinion leadership. These individuals typically have an aversion to change agents and tend to be advanced in age. Laggards typically tend to be focused on “traditions,” are likely to have the lowest social status and the lowest financial liquidity, be the oldest of all other adopters, and be only in contact with family and close friends.
Knowledge of the diffusion theory and the five types of technology users help provide additional insight into how to implement
new information systems within an organization. For example, when rolling out a new system, IT may want to identify the
innovators and early adopters within the organization and work with them first, then leverage their adoption to drive the
implementation.
This process of diffusion of new ideas and technology usually takes months or years. But there are exceptions: the use of the internet in the 1990s, and of mobile devices in more recent years, to communicate, interact socially, and access news and entertainment spread more rapidly than possibly any other innovation in humankind's history.

References
Rogers, E. M. (1962). Diffusion of innovations. New York: Free Press

This page titled 9.7: Information-Systems Users – Types of Users is shared under a CC BY 3.0 license and was authored, remixed, and/or curated
by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

9.8: Summary
Summary
This chapter has reviewed the many different categories of individuals, from front-line help-desk workers to systems analysts to the chief information officer (CIO), who make up the people component of information systems. The world of information technology is changing so fast that new roles are being created all the time, and roles that have existed for decades are being phased out. That said, this chapter should have given you a good idea of the importance of the people component of information systems.

This page titled 9.8: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

9.9: Study Questions
Study Questions
1. Describe the role of a systems analyst.
2. What are some of the different roles of a computer engineer?
3. What are the duties of a computer operator?
4. What does the CIO do?
5. Describe the job of a DBA.
6. Explain the point of having two different career paths in information systems.
7. What are the five types of information-systems users?
8. Why would an organization outsource?

Exercises
1. Which IT job would you like to have? Do some original research and write a two-page paper describing the duties of the job
you are interested in.
2. Spend a few minutes on Dice or Monster to find IT jobs in your area. What are IT jobs currently available? Write up a two-page
paper describing three jobs, their starting salary (if listed), and the skills and education needed for the job.
3. How is the IT function organized in your school or place of employment? Create an organization chart showing how the IT
organization fits into your overall organization. Comment on how centralized or decentralized the IT function is.
4. What type of IT user are you? Take a look at the five types of technology adopters, and then write a one-page summary of
where you think you fit in this model.

This page titled 9.9: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

10: Information Systems Development


 Learning Objectives

Upon successful completion of this chapter, you will be able to:


Explain the overall process of developing a new software application;
Explain the differences between software development methodologies;
Understand the different types of programming languages used to develop software;
Understand some of the issues surrounding the development of mobile applications; and
Identify the four primary implementation policies.

People build information systems for other people to use. This chapter will look at different methods of managing an information system's development process, with special attention to software development; it will also review mobile application development and discuss end-user computing. We will look at the key trade-offs organizations face in making the critical decision to “build vs. buy or subscribe,” and at the balancing act between scope, cost, and time while delivering a high-quality project and obtaining buy-in from the users.
10.1: Introduction
10.2: Systems Development Life Cycle (SDLC) Model
10.3: Software Development
10.4: Implementation Methodologies
10.5: Summary
10.6: Study Questions

This page titled 10: Information Systems Development is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

10.1: Introduction
When someone has an idea for a new function to be performed by a computer, how does that idea become a reality? If a company
wants to implement a new business process and needs new hardware or software to support it, how do they go about making it
happen? How do they decide whether to build their own solution or buy or subscribe to a solution available in the market?
This chapter will discuss the different methods of taking those ideas and bringing them to reality, a process known as information
systems development.

This page titled 10.1: Introduction is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

10.2: Systems Development Life Cycle (SDLC) Model
Systems Development Life Cycle (SDLC) Model
The SDLC was first developed in the 1960s to manage the large projects associated with corporate systems running on mainframes. It is a very structured process designed to manage large projects involving many people's efforts, including technical, business, and support professionals. These projects are often costly to build, and they have a large impact on the organization. A failed project, or an incorrect business decision to fund the wrong project, can be a business or financial catastrophe for an organization.
The SDLC is a model that defines a process consisting of a set of phases: planning, analysis, design, implementation, and maintenance. Chapter 1 discusses that an information system (IS) includes hardware, software, database, networking, process, and people. The SDLC has often been used to manage an IS project that may include one, some, or all of the elements of an IS. Let's walk through each of the five phases of an SDLC as depicted in Figure 10.1:

Fig 10.1 - Software Development Lifecycle Model. Image by Ly-Huong Pham, Ph.D. is licensed under CC BY NC
1. Planning. In this phase, a request is initiated by someone who acts as a sponsor for this idea. A small team is assembled to
conduct a preliminary assessment of the request's merit and feasibility. The objectives of this phase are:
To determine how the request fits with the company’s strategy or business goals.
To conduct a feasibility analysis, which includes an analysis of the technical feasibility (is it possible to create this?), the
economic feasibility (can we afford to do this?), and the legal feasibility (are we allowed to do this?).
To recommend a go/no go for the request. If it is a go, then a concept proposal is also produced for management to
approve.
2. Analysis. Once the concept proposal is approved, the project is formalized with a new project team (often including members of the team from the previous phase). Using the concept proposal as the starting point, the project members work with different stakeholder groups to determine the new system's specific requirements. No programming or development is done in this step. The objectives of this phase are:
Identify and interview key stakeholders.
Document key procedures.
Develop the data requirements.
Produce a system-requirements document as the result of this phase. This document has the details needed to begin the design of the system.
3. Design. Once the system requirements are approved, the team may be reconfigured to bring in more members. In this phase, the project team takes the system-requirements document created in the previous phase and develops the specific technical details required for the system. The objectives are:
Translate the business requirements into specific technical requirements.
Design the user interface, database, data inputs and outputs, and reports.
Produce a system-design document as the result of this phase. This document will have everything a programmer will need to create the system.

4. Implementation. Once the system design is approved, the software code finally gets written, and the development effort for other elements such as hardware also happens. The purpose is to create an initial working system. The objectives are:
Develop the software code and other IS components. Using the system-design document as a guide, developers begin to code or develop all the IS project components.
Test the working system through a series of structured tests such as:
The first is a unit test, which tests individual parts of the code for errors or bugs.
Next is a system test, where the system's different components are tested to ensure that they work together properly.
Finally, the user-acceptance test allows those that will be using the software to test the system to ensure that it meets
their standards.
Iteratively test any fixes again to address any bugs, errors, or problems found during testing.
Train the users
Provide documentation
Perform necessary conversions from any previous system to the new system.
Produce, as a result, the initial working system that meets the requirements laid out in the analysis phase and the
design developed in the design phase.
5. Maintenance. This phase takes place once the implementation phase is complete. In this phase, the system must have a
structured support process in place to:
Report bugs
Deploy bug fixes
Accept requests for new features
Evaluate the priorities of reported bugs or requested features to be implemented
Identify a predictable and regular schedule to release system updates and perform backups.
Dispose of data and anything else that is no longer needed
Organizations can combine or subdivide these phases to fit their needs. For example, instead of a single planning phase, an organization can choose to have two phases, initiation and concept, or it can split implementation into two phases: implementation and testing.
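To make the testing step of the implementation phase more concrete, here is a minimal sketch of a unit test using Python's built-in unittest module. The compute_pay() function, its overtime rule, and the expected values are hypothetical examples created only for illustration.

import unittest

def compute_pay(hours_worked, hourly_rate):
    """Hypothetical payroll function: hours over 40 are paid at 1.5x the hourly rate."""
    regular = min(hours_worked, 40) * hourly_rate
    overtime = max(hours_worked - 40, 0) * hourly_rate * 1.5
    return regular + overtime

class TestComputePay(unittest.TestCase):
    def test_regular_hours(self):
        # 38 hours at $20/hour, no overtime
        self.assertEqual(compute_pay(38, 20), 760)

    def test_overtime_hours(self):
        # 45 hours at $20/hour: 40 * 20 + 5 * 30 = 950
        self.assertEqual(compute_pay(45, 20), 950)

if __name__ == "__main__":
    unittest.main()

A system test would then exercise several such components together, and a user-acceptance test would put the assembled system in front of its intended users.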

Waterfall Model
One specific SDLC-based model is the Waterfall model, whose name is often thought of as synonymous with SDLC. It is used to manage software projects, as depicted in Fig 10.2, with five phases: Requirements, Design, Implement, Verification, and Maintenance. This model stresses that each phase must be completed before the next one can begin (hence the name waterfall). For example, changes to the requirements are not allowed once the implementation phase has begun, or such changes must be sought and approved through a formal change process. They may require the project to restart from the requirements phase, since new requirements need to be approved and may cause the design to be revised before the implementation phase can begin.

Fig 10.2 Waterfall Model of System Development. Image by Peter Kemp / Paul Smit is licensed CC BY 3.0

10.2.2 https://workforce.libretexts.org/@go/page/9809
The waterfall model has been criticized for its rigid structure, which can cause teams to become risk-averse in order to avoid going back to previous phases. However, there are benefits to such a structure too. Some advantages and disadvantages of SDLC and Waterfall are:
Advantages and Disadvantages of SDLC and Waterfall

Advantages:
The robust process to control and track changes minimizes the number of risks that can derail the project unknowingly.
Standard and transparent processes help the management of large teams.
Documentation reduces the risks of losing personnel and makes it easier to add people to the project.
It is easier to trace a problem in the system to its root whenever errors are found, even after the project is completed.

Disadvantages:
It takes time to record everything, which adds cost and time to the schedule.
Too much time can be spent attending meetings, seeking approval, etc., which adds cost and time to the schedule.
Some members do not like to spend time writing, leading to additional time needed to complete a project.
It is difficult to incorporate changes or customers' feedback, since the project has to go back to one or more previous phases, leading teams to become risk-averse.

Other models are developed over time to address these criticisms. We will discuss two other models: Rapid Application
Development and Agile, as different approaches to SDLC.

Rapid Application Development (RAD)


Rapid application development (RAD) is a software-development (or systems-development) methodology that focuses less on up-front planning and more on incorporating changes on an ongoing basis. RAD focuses on quickly building a working model of the software or system, getting feedback from users, and updating the working model. After several iterations of development, a final version is developed and implemented. Let's walk through the four phases of the RAD model as depicted in Fig. 10.3.

Fig 10.3 Image Rapid Application Development Model is licensed Public domain.
1. Requirements Planning. This phase is similar to the planning, analysis, and design phases of the SDLC.
2. User Design. In this phase, the users' representatives work with the system analysts, designers, and programmers to
interactively create the system's design. One technique for working with all of these various stakeholders is the Joint
Application Development (JAD) session. A JAD session brings together all relevant users who interact with the system from different perspectives, along with other key stakeholders, including developers, for a structured discussion about the system's design. The
objectives are for users to understand and adopt the working model and for the developers to understand how the system needs
to work from the user’s perspective to provide a positive user experience.
3. Construction. In the construction phase, the tasks are similar to SDLC’s implementation phase. The developers continue to
work interactively with the users to incorporate their feedback as they interact with the working model that is being developed.
This is an interactive process, and changes can be made as developers are working on the program. This step is executed
in parallel with the User Design step in an iterative fashion until an acceptable version of the product is developed.
4. Cutover. This step is similar to some of the SDLC implementation phase tasks. The system goes live or is fully deployed. All
steps required to move from the previous state to using the new system are completed here.
Compared to the SDLC or Waterfall model, the RAD methodology is much more compressed. Many of the SDLC steps are
combined, and the focus is on user participation and iteration. This methodology is better suited for smaller projects and has the added advantage of giving users the ability to provide feedback throughout the process. SDLC requires more documentation and
attention to detail and is well suited to large, resource-intensive projects. RAD is better suited for projects that are less resource-
intensive and need to be developed quickly. Here are some of the advantages and disadvantages of RAD:
Advantages and Disadvantages of RAD

Advantages:
Increased quality due to the frequency of interacting with the users.
Reduced risk of users refusing to accept the finished product.
Improved chances of on-time, on-budget completion, as users provide updates in real time, avoiding surprises during development.
Increased interaction time between developers/experts and users.
Best suited for small to medium-sized project teams.

Disadvantages:
Risk of weak implementation of features that are not visible to the users, such as security.
Lack of control over system changes, due to the fast turnaround of working versions to address users' issues.
Lack of design, since changes being put into the system might unknowingly affect other parts of the system.
Scarce resources, as developers are tied up, which could slow down other projects.
Difficult to scale up to large teams.

Agile Development Methodologies


Agile methodologies are a group of methodologies that utilize incremental changes focusing on quality and attention to detail. Each
increment is released in a specified period of time (called a time box), creating a regular release schedule with particular objectives.
While considered a separate methodology from RAD, they share some of the same principles: iterative development, user
interaction, and changeability. The agile methodologies are based on the “Agile Manifesto,” first released in 2001.
The characteristics of agile methods include:
small cross-functional teams that include development-team members and users;
daily status meetings to discuss the current state of the project;
short time-frame increments (from days to one or two weeks) for each change to be completed; and
a working project, completed at the end of each iteration, to demonstrate to the stakeholders.
In essence, the Agile approach puts a higher value on tasks that promote interaction, build frequent working versions,
customers/user collaboration, and quick response to change and less emphasis on processes and documentation. The agile
methodologies' goal is to provide an iterative approach's flexibility while ensuring a quality product.
There are a variety of models that are built using Agile methodologies. One such example is the Scrum development model.
Scrum development model
This model is suited for small teams who work to produce a set of features within fixed-time iterations, such as two to four weeks, called sprints. Let's walk through the four key elements of a Scrum model as depicted in Fig 10.4.

Fig 10.4. The Scrum project management method. Image by Lakeworks is licensed CC BY-SA 4.0
1. Product backlog. This is a detailed breakdown list of work to be done. All the work is prioritized based on criteria such as
risks, dependencies, mission-critical, etc. Developers select their own tasks and self-organize to get the work done.
2. Sprint backlog. This is a list of the work to be done in the next sprint.
3. Sprint. This is a fixed time, such as 1-day, 2-weeks, or 4-weeks, as agreed by the team. A daily progress meeting is called a
daily scrum, typically a short 10-15 minute meeting facilitated by a scrum master whose role is to remove roadblocks for the
team.

4. Working increment of the software. This is a working version that is incrementally built with the breakdown lists at the end of
the sprints.
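As a rough illustration of how a prioritized product backlog can feed a sprint backlog, here is a minimal Python sketch. The backlog items, point estimates, priorities, and the capacity figure are all hypothetical, and real Scrum teams normally manage this in a dedicated tool rather than in code.

# Each backlog item carries an estimate (in points) and a priority score (1 = highest)
product_backlog = [
    {"task": "User login", "points": 5, "priority": 1},      # mission-critical
    {"task": "Password reset", "points": 3, "priority": 2},
    {"task": "Export report to PDF", "points": 8, "priority": 3},
    {"task": "Profile photo upload", "points": 2, "priority": 4},
]

def plan_sprint(backlog, capacity_points):
    """Select the highest-priority items that fit within the team's capacity."""
    sprint_backlog = []
    remaining = capacity_points
    for item in sorted(backlog, key=lambda i: i["priority"]):
        if item["points"] <= remaining:
            sprint_backlog.append(item)
            remaining -= item["points"]
    return sprint_backlog

# A two-week sprint with a hypothetical capacity of 10 points
for item in plan_sprint(product_backlog, capacity_points=10):
    print(item["task"], "-", item["points"], "points")

In practice the team, not a script, decides which items to pull into the sprint; the sketch only shows how priority and capacity interact.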

Lean Methodology
One last methodology we will discuss is a relatively new concept taken from the business bestseller The Lean Startup, by Eric Ries.

Fig 10.5. The Lean Methodology. David T. Bourgeois, Ph.D. is licensed CC BY-SA 2.0
This methodology focuses on taking an initial idea and developing a minimum viable product (MVP). The MVP is a working
software application with just enough functionality to demonstrate the idea behind the project. Once the MVP is developed, it is
given to potential users for review. Feedback on the MVP is generated in two forms: (1) direct observation and discussion with the
users, and (2) usage statistics gathered from the software itself. Using these two forms of feedback, the team determines whether
they should continue in the same direction or rethink the project's core idea, change the functions, or create a new MVP. This
change in strategy is called a pivot. Several iterations of the MVP are developed, with new functions added each time based on the
feedback, until a final product is completed.
The biggest difference between the lean methodology and the other methodologies is that the system's full set of requirements is
unknown when the project is launched. As each iteration of the project is released, the statistics and feedback gathered are used to
determine the requirements. The lean methodology works best in an entrepreneurial environment where a company is interested in
determining if their idea for a software application is worth developing.

References:
Manifesto for Agile Software Development (2001). Retrieved December 10, 2020, from http://agilemanifesto.org/
The Lean Startup. Retrieved December 9, 2020, from http://theleanstartup.com/

This page titled 10.2: Systems Development Life Cycle (SDLC) Model is shared under a CC BY 3.0 license and was authored, remixed, and/or
curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative
(OERI)) .

10.3: Software Development
Software Development
Many of the methodologies discussed above are used to manage software development since programming is complex, and
sometimes errors are hard to detect. We learned in chapter 2 that software is created via programming, and programming is the
process of creating a set of logical instructions for a digital device to follow using a programming language. The programming
process is sometimes called “coding” because the syntax of a programming language is not in a form that everyone can understand
– it is in “code.”
The process of developing good software is usually not as simple as sitting down and writing some code. True, sometimes a
programmer can quickly write a short program to solve a need. But most of the time, the creation of software is a resource-
intensive process that involves several different groups of people in an organization. In the following sections, we will review several different aspects of software development.

Sidebar: The project management quality triangle


When developing software or any product or service, there is tension between the developers and the different stakeholder groups, such as management, users, and investors. Fig. 10.6 illustrates the three competing requirements, time, cost, and quality, among which project managers must make tradeoffs: how quickly the software can be developed (time), how much money will be spent (cost), and how well it will be built (quality). The quality triangle is a simple concept: it states that for any product or service being developed, you can only address two of the following: time, cost, and quality.

Fig 10.6 Project Management Quality Triangle. Image by Mapto is licensed Public domain
So what does it mean that you can only address two of the three? It means that the finished product's quality depends on the three
variables: scope, schedule, and the allocated budget. Changes in any of these three variables affect the other two, hence, the quality.
For example, if a feature is added, but no additional time is added to the schedule to develop and test, the code's quality may suffer,
even if more money is added. There are times when it is not even feasible to make the tradeoff. For example, adding more people
to a project where members are so overwhelmed that they don’t have time to manage or train new people. Overall, this model helps
us understand the tradeoffs we must make when developing new products and services.

Programming Languages
One of the important decisions that a project team needs to make is to decide which programming language(s) are to be used and
associated tools in the development process. As mentioned in chapter 3, software developers create software using one of several
programming languages. A programming language is a formal language that provides a way for a programmer to create structured
code to communicate logic in a format that the computer hardware can execute. Over the past few decades, many different
programming languages have evolved to meet many different needs.
There is no one way to categorize the languages. Still, they are often grouped by type (e.g., query, scripting), chronologically by the year they were introduced (e.g., Fortran was introduced in the 1950s), by their “generation,” by how they are translated to machine code, or by how they are executed. We will discuss a few categories in this chapter.
Generations of Programming Languages
Early languages were specific to the type of hardware that had to be programmed; each type of computer hardware had a different
low-level programming language (in fact, even today, there are differences at the lower level, though higher-level programming
languages now obscure them). In these early languages, precise instructions had to be entered line by line – a tedious process.

Some common characteristics are summarized below to illustrate some differences among these generations:

First-generation languages (1GL)
Time introduced (est.): 1940s or earlier
Instructions: made up of binary numbers (0s and 1s)
Category: machine dependent; machine code
Advantage: very fast, with no need for 'translation' to 0s and 1s
Disadvantage: machine dependent, not portable
Today's usage: when code must interact with hardware directly, such as drivers (e.g., a USB driver)
Example: machine language

Second-generation languages (2GL)
Time introduced (est.): 1950s
Instructions: use a set of syntax that is readable by human programmers
Category: machine dependent; low-level assembly languages
Advantage: code can be read and written by programmers more easily than machine code
Disadvantage: must be converted to machine code and is still machine dependent
Today's usage: when code must interact with hardware directly, such as drivers (e.g., a USB driver)
Example: assembly language

Third-generation languages (3GL)
Time introduced (est.): 1950s-1970s
Instructions: the syntax is more structured and made up of more human-like language
Category: machine independent; high level
Advantages: more machine independent; more friendly to programmers
Disadvantage: may go through multiple steps to be translated into machine code
Today's usage: modern 3GLs are more commonly used; early 3GLs are used to maintain existing business or scientific programs
Examples: early 3GLs: COBOL, Fortran; modern 3GLs: C, C++, Java, Javascript

Fourth-generation languages (4GL)
Time introduced (est.): 1970s-1990s
Instructions: the syntax is friendly to non-programmers
Category: machine independent; high-level abstraction; advanced 3GLs
Advantages: easy to learn; general purpose
Disadvantage: more specialized
Today's usage: database and web development
Examples: Perl, PHP, Python, SQL, Ruby

Fifth-generation languages (5GL)
Time introduced (est.): 1980s-1990s
Instructions: still in progress
Category: logic programming
Advantage: may not need programmers to write programs
Disadvantage: still early in the adoption phase
Today's usage: limited; visual tools, artificial intelligence research
Examples: Mercury, OPS5

Statista.com reported that by early 2020, Javascript was the most used language among developers worldwide. To see the complete
list, please visit Statista.com for more details.
Sidebar: Examples of languages
First-generation language: machine code. In machine code, programming is done by directly setting actual ones and zeroes (the
bits) using binary code. Here is an example program that
adds 1234 and 4321 using machine language:

10111001 00000000
11010010 10100001
00000100 00000000
10001001 00000000
00001110 10001011
00000000 00011110
00000000 00011110
00000000 00000010
10111001 00000000
11100001 00000011
00010000 11000011
10001001 10100011
00001110 00000100
00000010 00000000

Second-generation language. Assembly language gives English-like phrases to the machine-code instructions, making it easier to
program. An assembly-language program must be run through an assembler, which converts it into machine code. Here is an
example program that adds 1234 and 4321 using assembly language:
MOV CX,1234
MOV DS:[0],CX
MOV CX,4321
MOV AX,DS:[0]
MOV BX,DS:[2]
ADD AX,BX
MOV DS:[4],AX
Third-generation languages are not specific to the type of hardware they run and are much more like spoken languages. Most third-
generation languages must be compiled, a process that converts them into machine code. Well-known third-generation languages
include BASIC, C, Pascal, and Java. Here is an example using BASIC:
A=1234
B=4321
C=A+B
END
Fourth-generation languages are a class of programming tools that enable fast application development using intuitive interfaces
and environments. Many times, a fourth-generation language has a particular purpose, such as database interaction or report-
writing. These tools can be used by those with very little formal training in programming and allow for the quick development of
applications and/or functionality. Examples of fourth-generation languages include Clipper, FOCUS, FoxPro, SQL, and SPSS.
Why would anyone want to program in a lower-level language when they require so much more work? The answer is similar to
why some prefer to drive stick-shift automobiles instead of automatic transmission: control and efficiency. Lower-level languages,
such as assembly language, are much more efficient and execute much more quickly. You have finer control over the hardware as
well. Sometimes, a combination of higher- and lower-level languages is mixed together to get the best of both worlds: the programmer will create the overall structure and interface using a higher-level language but will use lower-level languages in the parts of the program that require more precision.
Compiled vs. Interpreted
Besides classifying a programming language based on its generation, it can also be classified as compiled or interpreted language.
As we have learned, a computer language is written in a human-readable form. In a compiled language, the program code is
translated into a machine-readable form called an executable that can be run on the hardware. Some well-known compiled
languages include C, C++, and COBOL.
An interpreted language requires a runtime program to be installed to execute. This runtime program then interprets the program
code line by line and runs it. Interpreted languages are generally easier to work with but are slower and require more system
resources. Examples of popular interpreted languages include BASIC, PHP, PERL, and Python. The web languages such as HTML
and Javascript would also be considered interpreted because they require a browser to run.
The Java programming language is an interesting exception to this classification, as it is actually a hybrid of the two. A program
written in Java is partially compiled to create a program that can be understood by the Java Virtual Machine (JVM). Each type of
operating system has its own JVM, which must be installed, allowing Java programs to run on many different types of operating
systems.
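Python works in a similar hybrid fashion: the standard CPython interpreter first compiles source code into bytecode, which its virtual machine then executes. As a small illustration, the dis module in the standard library can display that bytecode; the add_numbers function below is just a made-up example.

import dis

def add_numbers(a, b):
    # A trivial function whose compiled bytecode we want to inspect
    return a + b

# Print the bytecode instructions that the Python virtual machine will execute
dis.dis(add_numbers)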
Procedural vs. Object-Oriented
A procedural programming language is designed to allow a programmer to define a specific starting point for the program and then
execute sequentially. All early programming languages worked this way. As user interfaces became more interactive and graphical,
it made sense for programming languages to evolve to allow the user to define the program's flow. The object-oriented
programming language is set up to define “objects” that can take certain actions based on user input. In other words, a procedural
program focuses on the sequence of activities to be performed; an object-oriented program focuses on the different items being
manipulated.

For example, in a human-resources system, an “EMPLOYEE” object would be needed. If the program needed to retrieve or set
data regarding an employee, it would first create an employee object in the program and then set or retrieve the values needed.
Every object has properties, which are descriptive fields associated with the object. In the example below, an employee object has
the properties “Name,” “Employee number,” “Birthdate,” and “Date of hire.” An object also has “methods,” which can take actions
related to the object. In the example, there are two methods. The first is “ComputePay(),” which will return the current amount
owed to the employee. The second is “ListEmployees(),” which will retrieve a list of employees who report to this employee.
Employee Object

Object: EMPLOYEE
First_Name
Last_Name
Employee_ID
Birthdate
Date_of_hire
ComputePay()
ListEmployees()
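A minimal sketch of how this EMPLOYEE object might look in an object-oriented language such as Python is shown below. The field values, the pay rule, and the reporting relationship are hypothetical placeholders used only to illustrate the idea of properties and methods (method names follow Python's snake_case convention).

from datetime import date

class Employee:
    """Illustrative object with properties (fields) and methods (actions)."""

    def __init__(self, first_name, last_name, employee_id, birthdate, date_of_hire, hourly_rate):
        self.first_name = first_name
        self.last_name = last_name
        self.employee_id = employee_id
        self.birthdate = birthdate
        self.date_of_hire = date_of_hire
        self.hourly_rate = hourly_rate
        self.direct_reports = []          # other Employee objects reporting to this one

    def compute_pay(self, hours_worked):
        # Hypothetical rule: straight hourly pay, no overtime
        return hours_worked * self.hourly_rate

    def list_employees(self):
        # Return the names of employees who report to this employee
        return [f"{e.first_name} {e.last_name}" for e in self.direct_reports]

# Example usage with made-up data
manager = Employee("Pat", "Lee", 1001, date(1980, 5, 17), date(2015, 3, 2), hourly_rate=60)
worker = Employee("Sam", "Cruz", 1002, date(1992, 11, 3), date(2019, 8, 12), hourly_rate=35)
manager.direct_reports.append(worker)

print(manager.compute_pay(40))    # 2400
print(manager.list_employees())   # ['Sam Cruz']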

Programming Tools
Another decision that needs to be made during the development of an IS is the set of tools needed to write programs. To write
programs, programmers need tools to enter code, check for the code's syntax, and some method to translate their code into machine
code. To be more efficient at programming, programmers use integrated tools such as an integrated development environment
(IDE) or computer-aided software-engineering (CASE) tools.
Integrated Development Environment (IDE)

For most programming languages, an IDE can be used. An IDE provides various tools for the programmer, all in one place with a
consistent user interface. IDE usually includes:
an editor for writing the program that will color-code or highlight keywords from the programming language;
a help system that gives detailed documentation regarding the programming language;
a compiler/interpreter, which will allow the programmer to run the program;
a debugging tool, which will provide the programmer details about the execution of the program to resolve problems in the
code; and
a check-in/check-out mechanism, which allows a team of programmers to work together on a project without writing over each other's code changes.
Statista.com reports that, in 2018 and 2019, 80% of software developers worldwide used a source code collaboration tool such as GitHub, 77% used a standalone IDE such as Eclipse, and 69% used Microsoft Visual Studio. For a complete list, please visit statista.com.
Computer-aided software engineering (CASE) Tools
While an IDE provides several tools to assist the programmer in writing the program, the code still must be written. Computer-
aided software engineering (CASE) tools allow a designer to develop software with little or no programming. Instead, the CASE
tool writes the code for the designer. CASE tools come in many varieties, but their goal is to generate quality code based on the
designer's input.

Build vs. Buy or Subscribe


When an organization decides that a new software program needs to be developed, they must determine if it makes more sense to
build it themselves or purchase it from an outside company. This is the “build vs. buy” decision. This ‘buy’ decision now includes
the option to subscribe instead of buying it outright.
There are many advantages to purchasing software from an outside company. First, it is generally less expensive to purchase a software package than to build it. Second, when a software package is purchased, it is available much more quickly than if the package is built in-house: software applications can take months or years to build, while a purchased package can be up and running within a month. Third, companies or consumers pay a one-time price and get to keep the software for as long as the license allows, which could be for as long as you own it or even after the vendor stops supporting it. A purchased package has already been tested and many of the bugs have already been worked out, and additional support contracts can be purchased. It is the role of a systems integrator to make various purchased systems and the organization's existing systems work together.
There are also disadvantages to purchasing software. First, the same software you are using can be used by your competitors. If a
company is trying to differentiate itself based on a business process in that purchased software, it will have a hard time doing so if
its competitors use the same software. Another disadvantage to purchasing software is the process of customization. If you
purchase a software package from a vendor and then customize it, you will have to manage those customizations every time the
vendor provides an upgrade. With the rise of security and privacy concerns, companies may lack the in-house expertise to respond quickly. Installing various updates and dealing with bugs encountered may also be a burden to IT staff and users. This can become an administrative headache.
A hybrid solution is to subscribe. Subscribing means that instead of selling products individually, vendors offer a subscription model in which users effectively rent the software and pay periodically, such as monthly or yearly. The renting model has long been used in other industries, such as movies and books, and has recently moved into high-tech industries. Companies and consumers can now subscribe to almost everything, as we discussed in earlier chapters: from additional storage in platforms such as Google Drive or Microsoft OneDrive, to software such as QuickBooks and Microsoft Office 365, to hosting and web support services such as Amazon AWS. Vendors benefit from converting one-time sales into recurring sales and from increased customer loyalty. Customers are spared the headache of installing updates, have software support and updates taken care of automatically, and know that the software continues to be updated with new features. A subscription model is now a prevalent option for both consumers and businesses.
Even if an organization determines to buy or subscribe, it still makes sense to go through many of the same analyses to compare the
costs and benefits of building it themselves. This is an important decision that could have a long-term strategic impact on the
organization.
Web Services

Chapter 3 stated that the move to cloud computing has allowed software to be looked at as a service. One option companies have these days is to license functions provided by other companies instead of writing the code themselves. These are called web services, and they can greatly simplify the addition of functionality to a website.
For example, suppose a company wishes to provide a map showing the location of someone who has called their support line. By
utilizing Google Maps API web services, they can build a Google Map right into their application. Or a shoe company could make
it easier for its retailers to sell shoes online by providing a shoe-size web service that the retailers could embed right into their
website.
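As a rough sketch of what calling a web service from code can look like, here is a minimal Python example using the requests library. The endpoint URL, parameters, and response fields are hypothetical placeholders; a real integration would follow the vendor's documented API (for Google Maps, see the Google Maps Platform documentation listed in the references).

import requests

# Hypothetical web-service endpoint that converts a street address into map coordinates
GEOCODE_URL = "https://api.example.com/geocode"

def locate_caller(address, api_key):
    """Ask the (hypothetical) web service for the latitude and longitude of an address."""
    response = requests.get(
        GEOCODE_URL,
        params={"address": address, "key": api_key},
        timeout=10,
    )
    response.raise_for_status()    # raise an error if the service call failed
    data = response.json()         # parse the JSON body returned by the service
    return data["latitude"], data["longitude"]

# Example usage with made-up values:
# lat, lng = locate_caller("1600 Main Street, Springfield", api_key="YOUR_KEY")
# print(lat, lng)

The point of the sketch is that the calling application never sees the service's internal code; it only sends a request and reads back structured data.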
Web services can blur the lines between “build vs. buy.” Companies can choose to build a software application themselves but then
purchase functionality from vendors to supplement their system.

End-User Computing or Shadow IT


In many organizations, application development is not limited to the programmers and analysts in the information-technology
department. Especially in larger organizations, other departments develop their own department-specific applications. The people
who build these are not necessarily trained in programming or application development, but they tend to be adept with computers.
A person, for example, who is skilled in a particular software package, such as a spreadsheet or database package, may be called
upon to build smaller applications for use by his or her own department. This phenomenon is referred to as end-user development,
or end-user computing, or shadow IT.
End-user computing can have many advantages for an organization. First, it brings the development of applications closer to those
who will use them. Because IT departments are sometimes quite backlogged, it also provides a means to have software created
more quickly. Many organizations encourage end-user computing to reduce the strain on the IT department.
End-user computing does have its disadvantages as well. If departments within an organization are
developing their own applications, the organization may end up with several applications that perform similar functions, which is
inefficient since it duplicates effort. Sometimes, these different versions of the same application provide different results, bringing
confusion when departments interact. These applications are often developed by someone with little or no formal training in
programming. In these cases, the software developed can have problems that have to be resolved by the IT department. End-user
computing can be beneficial to an organization, but it should be managed. The IT department should set guidelines and provide
tools for the departments who want to create their own solutions.

Communication between departments will go a long way towards the successful use of end-user computing.

Sidebar: Building a Mobile App


Software development typically includes building applications to run on desktops, servers, or mainframes. However, the web's commercialization has created additional software-development categories such as web design, content development, and web servers; web-related development effort for the internet is now called web development. Earlier web-development activities included building websites to support businesses or e-commerce systems, and they made technologies such as HTML very popular with web designers and programming languages such as Perl, Python, and Java popular with programmers. Pre-packaged websites are now available for consumers to purchase without learning HTML or hiring a web designer. For example, entrepreneurs who want to start a bakery business can now buy a pre-built website with a shopping cart, ready to start a business without incurring the costly expense of building it themselves.
With the rise of mobile phones, a new type of software development called mobile app development came into being. Statista.com forecasts that mobile app revenues will increase significantly, from $98B in 2014 to over $935B by 2023. This means that the need for mobile app developers has also increased.
In many ways, building an application for a mobile device is the same as building an application for a traditional computer.
Understanding the application requirements, designing the interface, working with users – all of these steps still need to be carried
out. The decision process to pick the right programming languages and tools remains the same.
However, there are specific differences that programmers must consider in building apps for mobile devices. They are:
The user interface must adapt to different screen sizes.
The use of fingers as pointers, or to type in text, instead of the keyboard and mouse used on a desktop.
Specific requirements from the OS vendor must be met for the app to be included in each store (i.e., Apple's App Store or Android's Play Store).
The integration with the desktop or the cloud to sync up data.
Tight integration with other built-in hardware such as cameras and biometric or motion sensors.
Less available memory, storage space, and processing power.
Mobile apps are now available for just about everything and continue to grow.

References:
Javascript was the most used language among developers worldwide (2020). Retrieved December 10, 2020, from Statista.com
Google Maps Platform Documentation. Retrieved December 10, 2020, from https://developers.google.com/maps/documentation
Programming/development tools used by software developers worldwide from 2018 and 2019 (2020). Retrieved December 10, 2020, from Statista.com
Worldwide mobile app revenues in 2014 to 2023 (2020). Retrieved December 10, 2020, from Statista.com

This page titled 10.3: Software Development is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

10.4: Implementation Methodologies
Implementation Methodologies
Once a new system is developed (or purchased), the organization must determine the best method for implementing it. Convincing
a group of people to learn and use a new system can be a challenging process. Using the new software and the business processes it
gives rise to can have far-reaching effects within the organization.
There are several different methodologies an organization can adopt to implement a new system. Four of the most popular are listed
below.
Direct cutover. In the direct-cutover implementation methodology, the organization selects a particular date that the old system
will not be used anymore. On that date, the users begin using the new system, and the old system is unavailable. The advantages
of using this methodology are that it is speedy and the least expensive. However, this method is the riskiest as well. If the new
system has an operational problem or is not properly prepared, it could prove disastrous for the organization.
Pilot implementation. In this methodology, a subset of the organization (called a pilot group) starts using the new system
before the rest of the organization. This has a smaller impact on the company and allows the support team to focus on a smaller
group of individuals.
Parallel operation. With the parallel operation, the old and new systems are used simultaneously for a limited period of time.
This method is the least risky because the old system is still being used while the new system is essentially being tested.
However, this is the most expensive methodology since work is duplicated and support is needed for both systems in full.
Phased implementation. In a phased implementation, different functions of the new application are used as functions from the
old system are turned off. This approach allows an organization to move from one system to another slowly.
The choice among these implementation methodologies depends on the complexity and importance of the old and new systems.

Change Management
As new systems are brought online, and old systems are phased out, it becomes important to manage how change is implemented.
Change should never be introduced in a vacuum. The organization should be sure to communicate proposed changes before they
happen and plan to minimize the impact of the change that will occur after implementation. Training and incorporating users' feedback are critical to increasing users' acceptance of the new system. Without gaining users' acceptance, the risk of failure is very high. Change management is a critical component of IT oversight.

Maintenance
Once a new system has been introduced, it enters the maintenance phase. In this phase, the system is in production and is being used by the organization. While the system is no longer actively being developed, changes need to be made when bugs are found or new features are requested. During the maintenance phase, IT management must ensure that the system continues to stay aligned with business priorities and has a clear process to accept requests and problem reports and to deploy updates, so that users remain satisfied with continuous improvements in the product's quality.
With the rise of privacy concerns, many companies now add policies about maintaining their customers' data or data collected during the project, such as when and how to dispose of data and where to store it.

This page titled 10.4: Implementation Methodologies is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

10.5: Summary
Developing an IS can be a costly and complex process that requires managing a group of professionals to deliver a new system on time and on budget. Several development models, from the formal SDLC process to more informal processes such as agile programming or lean methodologies, provide a framework to manage all the phases from start to finish.
Software development is about so much more than programming. Programming languages have evolved from very low-level
machine-specific languages to higher-level languages that allow a programmer to write software for a wide variety of machines.
Most programmers work with software development tools that provide them with integrated components to make the software
development process more efficient.
For some organizations, building their own software applications does not make the most sense; instead, they choose to purchase or
rent software built by a third party to save development costs and speed implementation. In end-user computing, software
development happens outside the information technology department. When implementing new software applications,
organizations need to consider several different types of implementation methodologies.
An organization's responsibilities do not end with the deployment of the software. They now include a clear and systematic process to maintain and protect customers' and projects' data to address security and privacy concerns.

This page titled 10.5: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

10.6: Study Questions
Study Questions
1. What are the steps in the SDLC methodology?
2. What is RAD software development?
3. What is the Waterfall model?
4. What makes the lean methodology unique?
5. What is the difference between the Waterfall and Agile models?
6. What is a sprint?
7. What are three differences between second-generation and third-generation languages?
8. Why would an organization consider building its own software application if it is cheaper to buy one?
9. What is the difference between the pilot implementation methodology and the parallel implementation methodology?
10. What is change management?
11. What are the four different implementation methodologies?

Exercises
1. Which software-development methodology would be best if an organization needed to develop a software tool for a small group
of users in the marketing department? Why? Which implementation methodology should they use? Why?
2. Doing your own research, find three programming languages and categorize them in these areas: generation, compiled vs.
interpreted, procedural vs. object-oriented.
3. Some argue that HTML is not a programming language. Doing your own research, find three arguments for why it is not a
programming language and three arguments for why it is.
4. Read more about responsive design using the link given in the text. Provide the links to three websites that use responsive
design and explain how they demonstrate responsive-design behavior.
5. Research the criteria and cost to put a mobile app into Apple’s App Store. Write a report.
6. Research to find out what elements to use to estimate the cost to build an app. Write a report.

This page titled 10.6: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

SECTION OVERVIEW

3: Information Systems Beyond the Organization


11: Information Systems Beyond the Organization
11.1: Introduction
11.2: The Global Firm
11.3: The Digital Divide
11.4: Summary
11.5: Study Questions

12: The Ethical and Legal Implications of Information System


12.1: Introduction
12.2: Intellectual Property
12.3: The Digital Millennium Copyright Act
12.4: Summary
12.5: Study Questions

13: Future Trends in Information Systems


13.1: Introduction
13.2: Collaborative
13.3: Internet of Things (IoT)
13.4: Future of Information Systems
13.5: Study Questions

This page titled 3: Information Systems Beyond the Organization is shared under a CC BY 3.0 license and was authored, remixed, and/or curated
by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

11: Information Systems Beyond the Organization


 Learning Objectives

Upon successful completion of this chapter, you will be able to:


Explain the concept of globalization;
Describe the role of information technology in globalization;
Identify the issues experienced by firms as they face a global economy; and
Define the digital divide and explain Nielsen’s three stages of the digital divide.

The rapid rise of the Internet has made it easier than ever to do business worldwide. This chapter will look at the impact the Internet is having on the globalization of business, and at the challenges firms must manage and the opportunities they can leverage as a result of globalization and digitization. It will also discuss the concept of the digital divide, what steps have been taken to date to alleviate it, and what more needs to be done.
11.1: Introduction
11.2: The Global Firm
11.3: The Digital Divide
11.4: Summary
11.5: Study Questions

This page titled 11: Information Systems Beyond the Organization is shared under a CC BY 3.0 license and was authored, remixed, and/or curated
by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

11.1: Introduction
In this chapter, we will look at how the internet has opened the world to globalization. We will look at where it began and fast-forward to where we are today, reviewing the influences of people, machines, and technology that enable globalization. It is now just as simple to communicate with someone on the other side of the world as it is to talk to someone next door. We will also look at the implications of globalization and its impact on the world.

What Is Globalization?
Globalization is a term from economics that refers to the integration of goods, services, and culture among the people and nations of
the world. It has roots as far back as the exploration of the New World, and it has accelerated since the turn of the 18th century due
to major improvements in transportation and technology. Globalization creates world markets: places that were once limited to
providing goods and services to their immediate area now have open access to other countries worldwide. The expansion of global
markets has increased economic activity in the exchange of goods, services, and funds. Today, the ease with which people can
connect has accelerated the speed of globalization; people no longer have to sail for a year to exchange goods or services.

Fig. 11.1 Globalization in Handshake, Hands, Laptop, Monitor. Image by Gerd Altmann is licensed CC BY-SA 2.0
The Internet has connected nations together. From its beginnings in the United States in the 1970s, through the development of the
World Wide Web and the spread of the personal computer into homes in the 1980s and 1990s, to the social networks and
e-commerce of today, the Internet has continued to increase the integration between countries, making globalization a fact of life
for citizens worldwide. The Internet is truly a worldwide phenomenon. By Q3 of 2020, approximately 4.9 billion people, or more
than half of the world's population, used the Internet. For more details, please view the data at internetworldstats.com/stats.htm.

Fig 11.2 - World Internet Usage and Population Statistics. Source: https://internetworldstats.com/stats.htm

The Network Society
In 1996, social-sciences researcher Manuel Castells published The Rise of the Network Society, in which he identified new ways of
organizing economic activity around the networks that new telecommunication technologies had made possible. This new, global
economic activity was different from the past because "it is an economy with the capacity to work as a unit in real-time on a
planetary scale" (Castells, 2000). We now live in this network society, connected to one another on a global scale.

The World Is Flat


In Thomas Friedman's seminal book, The World Is Flat (Friedman, 2005), he unpacks the impact that the personal computer, the
Internet, and communication software have had on business, specifically their impact on globalization. He begins the book by
defining the three eras of globalization:
"Globalization 1.0" occurred from 1492 until about 1800. In this era, globalization was centered around countries. It was about
how much horsepower, wind power, and steam power a country had and how creatively it was deployed. The world shrank
from size "large" to size "medium."
"Globalization 2.0" occurred from about 1800 until 2000, interrupted only by the two World Wars. In this era, the dynamic
force driving change was multinational companies. The world shrank from size "medium" to size "small."
"Globalization 3.0" is our current era, beginning in the year 2000. The convergence of the personal computer, fiber-optic
Internet connections, and software has created a "flat-world platform" that allows small groups and even individuals to go
global. The world has shrunk from size "small" to size "tiny."
According to Friedman (2005), this third era of globalization was brought about, in many respects, by information technology.
Some of the specific technologies he lists include:
The graphical user interface for the personal computer popularized in the late 1980s. Before the graphical user interface, using a
computer was relatively difficult. By making the personal computer something that anyone could use, it became commonplace
very quickly. Friedman points out that this digital storage of content made people much more productive and, as the Internet
evolved, made it simpler to communicate content worldwide.
The build-out of the Internet infrastructure during the dot-com boom during the late-1990s. During the late 1990s,
telecommunications companies laid thousands of miles of fiber-optic cable worldwide, turning network communications into a
commodity. At the same time, the Internet protocols, such as SMTP (e-mail), HTML (web pages), and TCP/IP (network
communications), became standards that were available for free and used by everyone.
The introduction of software to automate and integrate business processes. As the Internet continued to grow and become the
dominant form of communication, it became essential to build on the standards developed earlier so that the websites and
applications running on the Internet would work well together. Friedman calls this “workflow software,” by which he means
software that allows people to work together more easily and allows different software packages and databases to integrate
easily. Examples include payment-processing systems and shipping calculators.
These three technologies came together in the late 1990s to create a “platform for global collaboration.” Once these technologies
were in place, they continued to evolve. Friedman also points out a couple more technologies that have contributed to the flat-world
platform – the open-source movement (see chapter 10) and the advent of mobile technologies.
The World Is Flat was published in 2005. Since then, we have seen even more growth in information technologies that have
contributed to global collaborations. We will discuss current and future trends in chapter 13.

References
Castells, Manuel (2000). The Rise of the Network Society (2nd ed.). Blackwell Publishers, Inc., Cambridge, MA, USA.
Friedman, T. L. (2005). The world is flat: A brief history of the twenty-first century. New York: Farrar, Straus and Giroux.
Q3 2020 Internet usage. Retrieved December 5, 2020, from https://internetworldstats.com/stats.htm.

This page titled 11.1: Introduction is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

11.2: The Global Firm
The Global Firm
The new era of globalization allows virtually any business to become international. By accessing this new platform of networked
technologies, Castells's vision (Castells, 2000) of working as a unit in real time on a planetary scale can become a reality; he
believed this collective capability could benefit society. Some of the advantages for firms include the following:
Access to expertise and labor around the world. Organizations are no longer being limited by viable candidates locally and
can now hire people from the global labor pool. This also allows organizations to pay a lower labor cost for the same work
based on the prevailing wage in different countries.
Operate 24 hours a day. With employees in different time zones worldwide, an organization can literally operate around the
clock, handing off work on projects from one part of the world to another. Businesses can also keep their digital storefront (their
website) open all the time.
Access to a larger market for firm products. Once a product is being sold online, it is available for purchase from a
worldwide consumer base. Even if a company’s products do not appeal beyond its own country’s borders, being online has also
made the product more visible to consumers within that country.
Achieve market diversity. Selling into multiple markets helps companies stabilize their overall revenue sources: a gain in
revenue in one country can offset a decline on the other side of the world.
Gain more exposure to foreign investment opportunities. Globalization helps companies to become more familiar with
opportunities in the new areas that they are expanding into.
To fully take advantage of these new capabilities, companies need to understand that there are also challenges in dealing with
employees, customers from different cultures, and other countries' economies. Some of these challenges include:
Infrastructure differences. Each country has its own infrastructure, many of which are not of the same quality as the US
infrastructure. Americans currently get around 135 Mbps of download speed and 52 Mbps of upload speed through their
fixed broadband connections, good for eighth in the world and around double the global average. For every South Korea (an
average of roughly 16 Mbps), there is an Egypt (0.83 Mbps) or an India (0.82 Mbps). A business cannot depend on every country
it deals with having the same Internet speeds (a simple comparison appears in the sketch after this list). See the sidebar called
"How Does My Internet Speed Compare?"
Labor laws and regulations. Different countries (including the United States) have different laws and regulations. A company
that wants to hire employees from other countries must understand the different regulations and concerns.
Legal restrictions. Many countries have restrictions on what can be sold or how a product can be advertised. A business needs
to understand what is allowed. For example, in Germany, it is illegal to sell anything Nazi-related; in China, it is illegal to put
anything sexually suggestive online.
Language, customs, and preferences. Every country has its own (or several) unique culture(s), which a business must
consider when trying to market a product. Additionally, different countries have different preferences. For example, in some
parts of the world, people prefer to eat their french fries with mayonnaise instead of ketchup; in other parts of the world,
specific hand gestures (such as the thumbs-up) are offensive.
International shipping. Shipping products between countries promptly can be challenging. Inconsistent address formats,
dishonest customs agents, and prohibitive shipping costs are all factors that must be considered when trying to deliver products
internationally.
Volatility of currency. When buying or selling goods across borders, large fluctuations in exchange rates between currencies
such as the euro, yen, and dollar can significantly affect costs and revenues.
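To make these infrastructure differences concrete, the short Python sketch below estimates how long a 100 MB file transfer would take at the approximate average download speeds cited in the infrastructure item above. It is an illustrative calculation only; the file size and the use of national averages are simplifying assumptions.

```python
# Rough estimate of how long a 100 MB file takes to download at the
# approximate average broadband speeds cited above.
# Speeds are in megabits per second (Mbps); 1 byte = 8 bits.

FILE_SIZE_MB = 100  # file size in megabytes (an illustrative assumption)

average_speeds_mbps = {
    "United States": 135,
    "South Korea": 16,
    "Egypt": 0.83,
    "India": 0.82,
}

for country, mbps in average_speeds_mbps.items():
    seconds = (FILE_SIZE_MB * 8) / mbps  # megabits to transfer / megabits per second
    print(f"{country:>13}: {seconds:8.1f} seconds (~{seconds / 60:.1f} minutes)")
```

At the US average the transfer finishes in a few seconds, while at less than 1 Mbps it takes roughly a quarter of an hour, which is why a business cannot assume uniform Internet performance across the countries in which it operates.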
Because of these challenges, many businesses choose not to expand globally, either for labor or for customers. Whether a business
has its own website or relies on a third party, such as Amazon or eBay, the question of whether to globalize must be carefully
considered.
Globalization has changed greatly in the last several decades. It has seen positive development, with associated costs and benefits:
organizations have seen their fortunes change, and progress and modernization have been brought to various parts of the world.
However, its benefits are not necessarily evenly distributed across the world. With the global pandemic of 2020 (Covid-19),
globalization is now viewed by many as a source of risk to national supply chains of goods and services, of job losses, of a
widening inequality gap, and of health risks. It is expected that globalization post-Covid will need to mitigate these risks by moving
to a more balanced approach between independence and integration between countries (Kobrin, 2020).

Sidebar: How Does My Internet Speed Compare?
Internet speed varies by geography, such as by state and country, as reported by Statista.com. For example, as of August 2020,
Singapore's average internet speed was ~218 Mbps, while Hungary's was ~156 Mbps. Please visit Statista.com for more details.
Statista.com also reported that as of June 2020, over 42% of US households did not know the download speed of their household
internet service. Download speeds vary from 10 Mbps or less to over 100 Mbps. There are several free tools that you can use
to test your household upload and download speed, such as the Speedtest app, a free download (as of this writing).
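For readers who would rather script the measurement than use an app, the minimal sketch below uses the third-party speedtest-cli Python package (an assumption: it is not part of this text and must be installed separately, for example with pip install speedtest-cli) to report approximate download and upload speeds.

```python
# Minimal sketch: measure approximate download/upload speed with the
# third-party "speedtest-cli" package (pip install speedtest-cli).
# Results vary with network load, Wi-Fi conditions, and the test server chosen.
import speedtest

st = speedtest.Speedtest()
st.get_best_server()          # pick a nearby, low-latency test server

download_bps = st.download()  # measured in bits per second
upload_bps = st.upload()

print(f"Download: {download_bps / 1_000_000:.1f} Mbps")
print(f"Upload:   {upload_bps / 1_000_000:.1f} Mbps")
```

Running the same script at home, at school, and at a coffee shop is one way to complete the speed-comparison exercise at the end of this chapter.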

References
Castells, Manuel (2000). The Rise of the Network Society (2nd ed.). Blackwell Publishers, Inc., Cambridge, MA, USA.
Kobrin, S.J (2020). How globalization became a thing that goes bump in the night. J Int Bus Policy 3, 280–286.
https://doi.org/10.1057/s42214-020-00060-y
Statista. (2020). Countries with the fastest average fixed broadband internet speeds as of August 2020 (in Mbps). Retrieved
December 5, 2020, from https://www.statista.com/statistics/896772/countries-fastest-average-fixed-broadband-internet-speeds/.
Statista. (2020). Household internet download speed of adults in the United States as of June 2020. Retrieved December 5, 2020,
from https://www.statista.com/statistics/368545/us-state-high-speed-internet-households/.

This page titled 11.2: The Global Firm is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

11.3: The Digital Divide
The Digital Divide
As the Internet continues to make inroads across the world, it also creates a separation between those who have access to this
global network and those who do not. This separation is called the "digital divide" and is of great concern. Kiburn (2005)
summarizes this concern in an article in Crossroads:
Adopted by the ACM Council in 1992, the ACM Code of Ethics and Professional Conduct focuses on issues involving the Digital
Divide that could prevent certain categories of people - those from low-income households, senior citizens, single-parent children,
the undereducated, minorities, and residents of rural areas — from receiving adequate access to the wide variety of resources
offered by computer technology. This Code of Ethics positions the use of computers as a fundamental ethical consideration: “In a
fair society, all individuals would have equal opportunity to participate in, or benefit from, the use of computer resources regardless
of race, sex, religion, age, disability, national origin, or other similar factors.” The article discusses the digital divide in various
forms and analyzes reasons for the growing inequality in people’s access to Internet services. It also describes how society can
bridge the digital divide: the serious social gap between information “haves” and “have-nots.”
The digital divide is categorized into three stages: the economic divide, the usability divide, and the empowerment divide (Nielsen,
2006).
The economic divide is usually called the digital divide: it means that some people can afford to have a computer and Internet
access while others cannot. Because of Moore’s Law (see chapter 2), the price of hardware has continued to drop, and, at this
point, we can now access digital technologies, such as smartphones, for very little. This fact, Nielsen asserts, means that the
economic divide is a moot point for all intents and purposes, and we should not focus our resources on solving it.
The usability divide is concerned with the fact that “technology remains so complicated that many people couldn’t use a
computer even if they got one for free.” And even for those who can use a computer, accessing all the benefits of having one is
beyond their understanding. Included in this group are those with low literacy and seniors. According to Nielsen, we know how
to help these users, but we are not doing it because there is little profit.
The empowerment divide is the most difficult to solve. It is concerned with how we use technology to empower ourselves.
Very few users truly understand the power that digital technologies can give them. In his article, Nielsen explains that his (and
others’) research has shown that very few users contribute content to the Internet, use the advanced search, or even distinguish
paid search ads from organic search results. Many people will limit what they can do online by accepting the basic, default
settings of their computer and not understanding how they can truly be empowered.
Understanding the digital divide using these three stages provides an approach to developing solutions and monitoring our progress
in bridging the digital divide gap.
The digital divide can occur between countries, regions, or even neighborhoods. There are pockets with little or no Internet access
in many US cities, while just a few miles away, high-speed broadband is common. For example, in 2020, the US Federal
Communications Commission (FCC) reported that "In urban areas, 97% of Americans have access to high-speed fixed service. In
rural areas, that number falls to 65%. And on Tribal lands, barely 60% have access. All told, nearly 30 million Americans cannot
reap the benefits of the digital age.” Overall, Statista.com reported that as of August 2020, only ~85% of the US population has
internet access.
The global pandemic (Covid-19) has made Internet access an essential requirement due to the social distance or lockdown
mandates and has spotlighted this issue globally.

Challenges and efforts to bridge the Digital Divide gap


Solutions to the digital divide have had mixed success over the years. Initial efforts focused on providing Internet access and/or
computing devices, with some degree of success. However, just providing Internet access and/or computing devices is not enough
to bring true Internet access to a country, region, or neighborhood.
The World Bank and International Monetary Fund (IMF), at their annual meeting in 2020, brought together global leaders and
private innovators to discuss how to bridge the digital gap globally. Three challenges were identified:
1. Lack of infrastructure remains a major barrier to connectivity
2. Greater collaboration is needed between the public and private sectors
3. Education and training are needed to help connect people in underserved communities

In June 2020, the UN Secretary-General stated that the digital divide is now 'a Matter of Life and Death' amid the COVID-19 crisis
and called on global leaders for global cooperation to meet the goal that every person have safe and affordable access to the Internet
by 2030.
With this challenge being made acute due to the global pandemic of 2020 (Covid-19), many leaders have increased their
investment to bridge this gap in their countries. For example, the IMF reported that countries like Kenya, Ghana, Rwanda, and
Tanzania had made great progress in using mobile to connect their citizens to financial systems (IMF, 2020). Many states in the
United States have increased their funding through public or private partnerships, such as the California Closing the Divide
initiative (CA dept of education, 2020).
Continued global investment to bridge this gap remains a critical need, both during and after the global pandemic.

Sidebar: Using Gaming to Bridge the Digital Divide


Paul Kim, the Assistant Dean and Chief Technology Officer of the Stanford Graduate School of Education, designed a project to
address the digital divide for children in developing countries (Kim et al., 2011.) In their project, the researchers wanted to
understand if children can adopt and teach themselves mobile learning technology without help from teachers or other adults and
the processes and factors involved in this phenomenon. The researchers developed a mobile device called TeacherMate, which
contained a game designed to help children learn math. The unique part of this research was that the researchers interacted directly
with the children; they did not channel the mobile devices through the teachers or the schools. Another important factor to
consider: to understand the context of the children’s educational environment, the researchers began the project by working with
parents and local nonprofits six months before their visit. While the results of this research are too detailed to go into here, it can be
said that the researchers found that children can, indeed, adopt and teach themselves mobile learning technologies.
What makes this research so interesting when thinking about the digital divide is that the researchers found that, to be effective,
they had to customize their technology and tailor their implementation to the specific group they were trying to reach. One of their
conclusions stated the following:
Considering the rapid advancement of technology today, mobile learning options for future projects will only increase.
Consequently, researchers must continue to investigate their impact; we believe there is a specific need for more in-depth studies on
ICT [information and communication technology] design variations to meet different localities' challenges. To read more about Dr.
Kim’s project, locate the paper referenced in the list of references.

References
ACM (2020). ACM Code of Ethics and Professional conduct. Retrieved December 5, 2020, from https://www.acm.org/code-of-
ethics.
Digital Divide ‘a Matter of Life and Death’ amid COVID-19 Crisis, Secretary‑General Warns Virtual Meeting, Stressing Universal
Connectivity Key for Health, Development. Retrieved November 1, 2020, from www.un.org/press/en/2020/sgsm20118.doc.htm
Kiburn, Kim (2005). Challenges in HCI: Digital divide. Crossroads 12, 2 (December 2005), 2-2. DOI=10.1145/1144375.1144377
http://doi.acm.org/10.1145/1144375.1144377.
Kim, P., Buckner, E., Makany, T., & Kim, H. (2011). A comparative analysis of a game-based mobile learning model in low-
socioeconomic communities of India. International Journal of Educational Development. doi:10.1016/j.ijedudev.2011.05.008.
Nielsen, J (2006). Digital Divide: The 3 Stages. Retrieved Nov 1, 2020, from http://www.nngroup.com/articles/digital-divide-the-
three-stages/.
Statista. (2020). Internet usage in the United States. Retrieved December 5, 2020, from
https://www.statista.com/topics/2237/internet-usage-in-the-united-states/.

This page titled 11.3: The Digital Divide is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

11.4: Summary
Summary
Information technology has driven change on a global scale. As documented by Castells and Friedman, technology has given us the
ability to integrate with people worldwide using digital tools. These tools have allowed businesses to broaden their labor pools,
markets, and even operating hours. But they have also brought many new complications for businesses, which now must
understand regulations, preferences, and cultures from many different nations. This new globalization has also exacerbated the
digital divide, which can be understood through Nielsen's three stages. The 2020 global pandemic has both accentuated the
problems and increased efforts to bridge the digital divide globally.

This page titled 11.4: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

11.5: Study Questions
Study Questions
1. What does the term globalization mean?
2. What are the three eras of globalization?
3. Which technologies have had the biggest effect on globalization?
4. What are some of the advantages brought about by globalization?
5. What are the disadvantages of globalization?
6. What does the term digital divide mean?
7. What are Jakob Nielsen’s three stages of the digital divide?
8. Which country has the highest average Internet speed?
9. What are the effects of the global pandemic on the digital divide?

Exercises
1. Compare the concept of Friedman's "Globalization 3.0" with Nielsen's empowerment stage of the digital divide.
2. Do some original research to determine some of the regulations that a US company must consider before doing business in one
of the following countries: China, Mexico, Iran, and India.
3. Go to speedtest.net to determine your Internet speed. Compare your speed at home to the Internet speed at two other locations,
such as your local coffee shop, school, or place of employment. Write up a one-page summary that compares these locations.
4. Write a report assessing Nielsen's three stages based on your own experience today.
5. Go to this website https://www.ntia.doc.gov/data/digital-nation-data-explorer#sel=internetUser&disp=map or search for
"Digital Nation Data Explorer" to locate it. Report the internet usage in your state and compare it with your own experience
6. Give one example of the digital divide and describe what you would do to address it.
7. Explain how the research conducted by Manuel Castells has influenced our understanding of globalization.

This page titled 11.5: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

CHAPTER OVERVIEW

12: The Ethical and Legal Implications of Information Systems


 Learning Objectives

Upon successful completion of this chapter, you will be able to:


Describe what the term information systems ethics means;
Explain what a code of ethics is and describe the advantages and disadvantages;
Define the term intellectual property and explain the protections provided by copyright, patent, and trademark;
Describe what Creative Commons is and be able to identify what the different licenses mean; and
Describe the challenges that information technology brings to individual privacy.

The rapid changes in all the components of information systems in the past few decades have brought a broad array of new
capabilities and powers to governments, organizations, and individuals alike. This chapter will discuss the effects these new
capabilities have had, the legal and regulatory changes that have been put in place in response, and the ethical issues that
organizations and IT communities need to consider when using or developing emerging solutions and services for which
regulations are not yet fully developed.
12.1: Introduction
12.2: Intellectual Property
12.3: The Digital Millennium Copyright Act
12.4: Summary
12.5: Study Questions

This page titled 12: The Ethical and Legal Implications of Information Systems is shared under a CC BY 3.0 license and was authored, remixed,
and/or curated by Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative
(OERI)) .

12.1: Introduction
Introduction
Information systems have had an impact far beyond the world of business. In the past four decades, technology has fundamentally
altered our lives: from the way we work and play to how we communicate and even how we fight wars. Mobile phones track us as
we shop at stores and go to work. Algorithms based on consumer data allow firms to sell us products that they think we need or
want. New technologies create new situations that we have never dealt with before. They can threaten individual autonomy, violate
privacy rights, and be morally contentious. How do we handle the new capabilities that these devices empower us with?
What new laws are going to be needed to protect us from ourselves and others? This chapter will kick off with a discussion of the
impact of information systems on how we behave (ethics). This will be followed by a look at the new legal structures being put in
place, focusing on intellectual property and privacy.

Information Systems Ethics


The term ethics is defined as “a set of moral principles” or “the principles of conduct governing an individual or a group.” Since the
dawn of civilization, the study of ethics and its impact has fascinated humankind. But what do ethics have to do with information
systems?
The introduction of new technology can have a profound effect on human behavior. New technologies give us capabilities that we
did not have before, which create environments and situations that have not been specifically addressed in ethical terms. Those who
master new technologies gain new power; those who cannot master them may lose power. In 1913, Henry Ford implemented the
first moving assembly line to create his Model T cars. While this was a great step forward technologically (and economically), the
assembly line reduced human beings' value in the production process. The development of the atomic bomb concentrated
unimaginable power in the hands of one government, which then had to wrestle with the decision to use it. Today’s digital
technologies have created new categories of ethical dilemmas.
For example, the ability to anonymously make perfect copies of digital music has tempted many music fans to download
copyrighted music for their own use without making payment to the music’s owner. Many of those who would never have walked
into a music store and stolen a CD find themselves with dozens of illegally downloaded albums.
Digital technologies have given us the ability to aggregate information from multiple sources to create profiles of people. What
would have taken weeks of work in the past can now be done in seconds, allowing private organizations and governments to know
more about individuals than at any time in history. This information has value but also chips away at the privacy of consumers and
citizens.
Communication technologies like social media (Facebook, Twitter, Instagram, LinkedIn, internet blogs) give so many people access
to so much information that it is getting harder and harder to tell what is real and what is fake. Their widespread use has blurred the
line between the professional, the personal, and the private. Employers now have access to information that has traditionally been
considered private and personal, giving rise to new legal and ethical ramifications.
Some technologies, such as self-driving vehicles, drones, artificial intelligence, the digital genome, genetically modified organisms
(GMOs), and additive manufacturing methods, are transitioning into a new phase, becoming more widely used or incorporated into
consumer goods and requiring new ethical and regulatory guidelines.

Code of Ethics
One method for navigating new ethical waters is a code of ethics. A code of ethics is a document that outlines a set of acceptable
behaviors for a professional or social group; generally, it is agreed to by all members of the group. The document details different
actions that are considered appropriate and inappropriate.
A good example of a code of ethics is the Code of Ethics and Professional Conduct of the Association for Computing Machinery,
an organization of computing professionals that includes educators, researchers, and practitioners. Here is an excerpt from the
preamble:
Computing professionals' actions change the world. To act responsibly, they should reflect upon the wider impacts of their work,
consistently supporting the public good. The ACM Code of Ethics and Professional Conduct ("the Code") expresses the
profession's conscience. Additionally, the Code serves as a basis for remediation when violations occur. The Code includes
principles formulated as statements of responsibility based on the understanding that the public good is always the primary
consideration. Each principle is supplemented by guidelines, which provide explanations to assist computing professionals in
understanding and applying the principle.
Section 1 outlines fundamental ethical principles that form the basis for the remainder of the Code. Section 2 addresses additional,
more specific considerations of professional responsibility. Section 3 guides individuals who have a leadership role, whether in the
workplace or a volunteer professional capacity. Commitment to ethical conduct is required of every ACM member, and principles
involving compliance with the Code are given in Section 4.
In the ACM’s code, you will find many straightforward ethical instructions, such as the admonition to be honest and trustworthy.
But because this is also an organization of professionals that focuses on computing, there are more specific admonitions that relate
directly to information technology:
No one should enter or use another’s computer system, software, or data files without permission. One must always have
appropriate approval before using system resources, including communication ports, file space, other system peripherals, and
computer time.
Designing or implementing systems that deliberately or inadvertently demean individuals or groups is ethically unacceptable.
Organizational leaders are responsible for ensuring that computer systems enhance, not degrade, working life quality. When
implementing a computer system, organizations must consider all workers' personal and professional development, physical
safety, and human dignity. Appropriate human-computer ergonomic standards should be considered in system design and the
workplace.
One of the major advantages of creating a code of ethics is clarifying the acceptable standards of behavior for a professional group.
The varied backgrounds and experiences of the members of a group lead to various ideas regarding what is acceptable behavior.
While to many the guidelines may seem obvious, having these items detailed provides clarity and consistency. Explicitly stating
standards communicates the common guidelines to everyone in a clear manner.
Having a code of ethics can also have some drawbacks. First of all, a code of ethics does not have legal authority; in other words,
breaking a code of ethics is not a crime in itself. So what happens if someone violates one of the guidelines? Many codes of ethics
include a section that describes how such situations will be handled. In many cases, repeated violations of the code result in
expulsion from the group.
In the case of ACM: “Adherence of professionals to a code of ethics is largely a voluntary matter. However, if a member does not
follow this code by engaging in gross misconduct, membership in ACM may be terminated.” Expulsion from ACM may not impact
many individuals since membership in ACM is usually not a requirement for employment. However, expulsion from other
organizations, such as a state bar organization or medical board, could carry a huge impact.
Another possible disadvantage of a code of ethics is that there is always a chance that important issues will arise that are not
specifically addressed in the code. Technology is changing exponentially, and advances in artificial intelligence mean new ethical
issues related to machines. The code of ethics might not be updated often enough to keep up with all of the changes. However, a
good code of ethics is written in a broad enough fashion that it can address the ethical issues of potential technology changes while
the organization behind the code works on revisions.
Finally, a code of ethics could also be a disadvantage because it may not entirely reflect the ethics or morals of every member of
the group. Organizations with a diverse membership may have internal conflicts as to what is acceptable behavior. For example,
there may be a difference of opinion on the consumption of alcoholic beverages at company events. In such cases, the organization
must decide how important it is to address a specific behavior in the code.

Sidebar: Acceptable Use Policies (AUP)


Many organizations that provide technology services to a group of constituents or the public require an acceptable use policy
(AUP) to be agreed to before those services can be accessed. Like a code of ethics, it is a set of rules applied by the organization
that outlines what users may or may not do while using the organization's services. Usually, the policy requires some
acknowledgment that the rules are well understood, including the consequences of violating them. An everyday example of this is
the terms of service that must be agreed to before using the public Wi-Fi at Starbucks, McDonald's, or even a university. An AUP
is an important document because it demonstrates the organization's due diligence regarding security and the protection of
sensitive data, which helps protect the organization from legal action.
Here is an example of an acceptable use policy from Virginia Tech.
Just as with a code of ethics, these acceptable use policies specify what is allowed and what is not allowed. Again, while some of
the items listed are obvious to most, others are not so obvious:

"Borrowing" someone else's login ID and password is prohibited.
Using the provided access for commercial purposes, such as hosting your own business website, is not allowed.
Sending out unsolicited emails to a large group of people is prohibited.
Also, as with codes of ethics, violations of these policies have various consequences. In most cases, such as with Wi-Fi, violating
the acceptable use policy will mean that you will lose your access to the resource. While losing access to Wi-Fi at Starbucks may
not have a lasting impact, a university student getting banned from the university’s Wi-Fi (or possibly all network resources) could
have a much greater impact.

References
ACM Code of Ethics. Preamble. Retrieved November 10, 2020, from https://www.acm.org/code-of-ethics.

This page titled 12.1: Introduction is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

12.2: Intellectual Property
Intellectual Property
One of the domains that digital technologies have deeply impacted is the domain of intellectual property. Digital technologies have
driven a rise in new intellectual property claims and made it much more difficult to defend intellectual property.
Merriam-Webster Dictionary defines intellectual property as "property (as an idea, invention, or process) that derives from the
work of the mind or intellect." This could include song lyrics, a computer program, a new type of toaster, or even a sculpture.
Practically speaking, it is challenging to protect an idea. Instead, intellectual property laws are written to protect the tangible results
of an idea. In other words, just coming up with a song in your head is not protected, but if you write it down, it can be protected.
Protection of intellectual property is important because it gives people an incentive to be creative. Innovators with great ideas will
be more likely to pursue those ideas if they clearly understand how they will benefit. In the US Constitution, Article I, Section 8,
the authors saw fit to recognize the importance of protecting creative works:
Congress shall have the power... To promote the Progress of Science and useful Arts by securing for limited Times to Authors and
Inventors the exclusive Right to their respective Writings and Discoveries.
An important point to note here is the “limited time” qualification. While protecting intellectual property is important because of its
incentives, it is also necessary to limit the amount of benefit that can be received and allow the results of ideas to become part of
the public domain.
Outside of the US, intellectual property protections vary. You can find out more about a specific country’s intellectual property
laws by visiting the World Intellectual Property Organization.
There are many intellectual property types such as copyrights, patents, trademarks, industrial design rights, plant variety rights, and
trade secrets. In the following sections, we will review three of the best-known forms of intellectual property protection: copyright, patent,
and trademark.

Copyright
Copyright is the protection given to songs, movies, books, computer software, architecture, and other creative works, usually for a
limited time. An artist can, for example, sue if his painting is copied and sold on T-shirts without permission. A coder can sue if
another Web developer takes her code verbatim. Any work that has an "author" can be copyrighted. It covers both published and
unpublished work. Under the terms of copyright, the author of the work controls what can be done with the work, including:
Who can make copies of the work?
Who can create derivative works from the original work?
Who can perform the work publicly?
Who can display the work publicly?
Who can distribute the work?
Often, work is not owned by an individual but is instead owned by a publisher with whom the original author has an agreement. In
return for the rights to the work, the publisher will market and distribute the work and then pay the original author a portion of the
proceeds.
Copyright protection lasts for the life of the original author plus seventy years. In the case of a copyrighted work owned by a
publisher or another third party, the protection lasts for ninety-five years from the original creation date. For works created before
1978, the protections vary slightly. You can see the full details on copyright protections by reviewing the Copyright Basics
document available at the US Copyright Office’s website. See also the sidebar “History of Copyright Law.”
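As a concrete illustration of these terms, the sketch below estimates the year a work enters the public domain using only the two simplified rules stated above (life of the author plus seventy years, or ninety-five years from creation for a publisher-owned work); it deliberately ignores the pre-1978 special cases.

```python
# Simplified public-domain calculator based only on the two rules above:
#   - individual author: life of the author + 70 years
#   - publisher-owned / work-for-hire: 95 years from the creation date
# Pre-1978 works follow different rules and are not handled here.

def public_domain_year(creation_year, author_death_year=None):
    """Return the estimated year the work enters the public domain."""
    if author_death_year is not None:
        # Protection runs through the end of (death year + 70), so the work
        # becomes free the following January.
        return author_death_year + 70 + 1
    return creation_year + 95 + 1

# A novel whose author died in 1950 is protected through 2020 -> public domain in 2021.
print(public_domain_year(creation_year=1940, author_death_year=1950))  # 2021
# A corporate work created in 1928 is protected through 2023 -> public domain in 2024.
print(public_domain_year(creation_year=1928))  # 2024
```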

Obtaining Copyright Protection


In the United States, copyright is obtained by the simple act of creating the original work. In other words, when an author writes
down that song, makes that film, or designs that program, he or she automatically has the copyright. However, it is advisable to
register for a copyright with the US Copyright Office for a work that will be used commercially. A registered copyright is needed to
bring legal action against someone who has used a work without permission.

First Sale Doctrine
If an artist creates a painting and sells it to a collector who then, for whatever reason, proceeds to destroy it, does the original artist
have any recourse? What if the collector, instead of destroying it, begins making copies of it and sells them? Is this allowed?
The protections that copyright law extends to creators have an important limitation. The first sale doctrine is a part of copyright law
that addresses this, as shown below:
The first sale doctrine, codified at 17 U.S.C. § 109, provides that an individual who knowingly purchases a copy of a copyrighted
work from the copyright holder receives the right to sell, display or otherwise dispose of that particular copy, notwithstanding the
interests of the copyright owner.
So, in our examples, the copyright owner has no recourse if the collector destroys her artwork. But the collector does not have the
right to make copies of the artwork.

Fair Use
Another important provision within copyright law is that of fair use. Fair use is a limitation on copyright law that allows for the use of
protected works without prior authorization in specific cases. For example, if a teacher wanted to discuss a current event in her
class, she could pass out copies of a copyrighted news story to her students without first getting permission. Fair use allows a
student to quote a small portion of a copyrighted work in a research paper.
Unfortunately, the specific guidelines for what is considered fair use and what constitutes copyright violation are not well defined.
Fair use is a well-known and respected concept and will only be challenged when copyright holders feel that their work's integrity
or market value is being threatened. The following four factors are considered when determining if something constitutes fair use:
The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational
purposes;
The nature of the copyrighted work;
The amount and substantiality of the portion used in relation to the copyrighted work as a whole;
The effect of the use upon the potential market for, or value of, the copyrighted work.
If you are ever considering using a copyrighted work as part of something you are creating, you may be able to do so under fair
use. However, it is always best to check with the copyright owner to ensure you are staying within your rights and not infringing
upon theirs.

Sidebar: The History of Copyright Law


As noted above, current copyright law grants copyright protection for seventy years after the author’s death or ninety-five years
from the date of creation for a work created for hire. But it was not always this way.
The first US copyright law, which only protected books, maps, and charts, protected for only 14 years with a renewable term of 14
years. Over time, copyright law was revised to grant protections to other forms of creative expressions, such as photography and
motion pictures. Congress also saw fit to extend the length of the protections, as shown in the chart below. Today, copyright has
become big business, with many businesses relying on copyright-protected works for their income.

Fig 12.1 Expansion of U.S. Copyright act by Tom Bell licensed CC-BY-SA 3.0

Many now think that the protections last too long. The Sonny Bono Copyright Term Extension Act of 1998 has been nicknamed the
“Mickey Mouse Protection Act,” as it was enacted just in time to protect the copyright on the Walt Disney Company’s Mickey
Mouse character. It extended copyright terms to the life of the author plus 70 years. Because of this term extension, many works
from the 1920s and 1930s were still protected by copyright and could not enter the public domain until 2019 or later. Mickey
Mouse will not be in the public domain until 2024.

References
ACM Code of Ethics. Preamble. Retrieved November 10, 2020, from https://www.acm.org/code-of-ethics.
US copyright. Copyright basics. Retrieved November 10, 2020, from https://www.copyright.gov/.
US copyright. More information on fair use. Retrieved from https://www.copyright.gov/fair-use/more-info.html.

This page titled 12.2: Intellectual Property is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .

12.3: The Digital Millennium Copyright Act
The Digital Millennium Copyright Act
As digital technologies have changed what it means to create, copy, and distribute media, a policy vacuum has been created. In
1998, the US Congress passed the Digital Millennium Copyright Act (DMCA), which extended copyright law to consider digital
technologies. An anti-piracy statute, the DMCA makes it illegal to duplicate digital copyrighted works and to sell or freely distribute them. Two of
the best-known provisions from the DMCA are the anti-circumvention provision and the “safe harbor” provision.
The anti-circumvention provision makes it illegal to create technology to circumvent technology that has been put in place to
protect a copyrighted work. This provision includes the creation of the technology and the publishing of information that
describes how to do it. While this provision does allow for some exceptions, it has become quite controversial and has led to a
movement to have it modified.
The “safe harbor” provision limits online service providers' liability when someone using their services commits copyright
infringement. This provision allows YouTube, for example, not to be held liable when someone posts a clip from a copyrighted
movie. The provision does require the online service provider to take action when they are notified of the violation (a
“takedown” notice). For an example of how takedown works, here’s how YouTube handles these requests: YouTube Copyright
Infringement Notification.
Many think that the DMCA goes too far and ends up limiting our freedom of speech. The Electronic Frontier Foundation (EFF) is
at the forefront of this battle. For example, in discussing the anti-circumvention provision, the EFF states:
Yet, the DMCA has become a serious threat that jeopardizes fair use, impedes competition and innovation, chills free expression
and scientific research, and interferes with computer intrusion laws. If you circumvent DRM [digital rights management] locks for
non-infringing fair uses or create the tools to do so, you might be on the receiving end of a lawsuit.

Creative Commons
In chapter 2, we learned about open-source software. Open-source software has few or no copyright restrictions; the software
creators publish their code and make their software available for others to use and distribute for free. This is great for software, but
what about other forms of copyrighted works? If an artist or writer wants to make their works available, how can they go about
doing so while still protecting the integrity of their work? Creative Commons is the solution to this problem.
Creative Commons is an international nonprofit organization that provides legal tools for artists and authors around the world. The
tools offered make it simple to license artistic or literary work for others to use or distribute in a manner consistent with the creator's
intentions. Creative Commons licenses are indicated with the symbol CC. It is important to note that Creative Commons and the
public domain are not the same. When something is in the public domain, it has absolutely no restrictions on its use or distribution.
Works whose copyrights have expired, for example, are in the public domain.
By using a Creative Commons license, creators can control the use of their work while still making it widely accessible. By
attaching a Creative Commons license to their work, creators establish a legally binding license. They can choose from the following
six licenses, which grant varying permissions:
CC-BY: This is the least restrictive license. It lets others distribute, remix, adapt, and build upon the original work, in any
medium or format, even commercially, as long as they give the author credit (attribution) for the original work.
CC-BY-SA: This license restricts the distribution of the work via the “share-alike” clause. This means that others can freely
distribute, remix, adapt and build upon the work, but they must give credit to the original author, and they must share using the
same Creative Commons license.
CC-BY-NC: NC stands for “non-commercial.” This license is the same as CC-BY but adds that no one can make money with
this work - non-commercial purposes only.
CC-BY-NC-SA: This license allows others to distribute, remix, adapt, and build upon the original work for non-commercial
purposes, but they must give credit to the original author and share using the same license.
CC-BY-NC-ND: This license is the same as CC-BY-NC and adds the ND restriction, which means that no derivative works
may be made from the original.
CC0: This license allows creators to give up their copyright and put their works into the worldwide public domain. It allows others to
distribute, remix, adapt, and build upon the work in any medium or format with no conditions.

This book has been written under the Creative Commons license CC-BY. More than half a billion licensed works exist on the Web
free for students and teachers to use, build upon, and share. To learn more about Creative Commons, visit the Creative Commons
website.

Patent
Another important form of intellectual property protection is the patent. A patent creates protection for someone who invents a new
product or process. The definition of invention is quite broad and covers many different fields. Here are some examples of items
receiving patents:
circuit designs in semiconductors;
prescription drug formulas;
firearms;
locks;
plumbing;
engines;
coating processes; and
business processes.
Once a patent is granted, it provides the inventors with protection from others infringing on their patent. A patent holder has the
right to “exclude others from making, using, offering for sale, or selling the invention throughout the United States or importing the
invention into the United States for a limited time in exchange for public disclosure of the invention when the patent is granted.”
As with copyright, patent protection lasts for a limited period of time before the invention or process enters the public domain. In
the US, a patent lasts twenty years. This is why generic drugs are available to replace brand-name drugs after twenty years.

Obtaining Patent Protection


Unlike copyright, a patent is not automatically granted when someone has an interesting idea and writes it down. In most countries,
a patent application must be submitted to a government patent office. A patent will only be granted if the invention or process
being submitted meets certain conditions:
It must be original. The invention being submitted must not have been submitted before.
It must be non-obvious. You cannot patent something that anyone could think of. For example, you could not put a pencil on a
chair and try to get a patent for a pencil-holding chair.
It must be useful. The invention being submitted must serve some purpose or have some use that would be desired.
The United States Patent and Trademark Office (USPTO) is the federal agency that grants U.S. patents and registers trademarks. It
reviews patent applications to ensure that the item being submitted meets these requirements. This is not an easy job: USPTO
processes more than 600,000 patent applications and grants upwards of 300,000 patents each year. It took 75 years to issue the first
million patents. The last million patents took only three years to issue; digital technologies drive much of this innovation.

Sidebar: What Is a Patent Troll?


The advent of digital technologies has led to a large increase in patent filings and, therefore, many patents being granted. Once a
patent is granted, it is up to the patent owner to enforce it; if someone is found to be using the invention without permission, the
patent holder has the right to sue to force that person to stop and collect damages.
The rise in patents has led to a new form of profiteering called patent trolling. A patent troll is a person or organization who gains
the rights to a patent but does not actually make the invention that the patent protects. Instead, the patent troll searches for people
or companies who are using the invention in some way and sues them. In many cases, the infringement being alleged is questionable at best. For example,
companies have been sued for using Wi-Fi or for scanning documents, technologies that have been on the market for many years.
Recently, the US government has begun taking action against patent trolls. Several pieces of legislation are working their way
through Congress that will, if enacted, limit the ability of patent trolls to threaten innovation. You can learn a lot more about patent
trolls by listening to a detailed investigation titled When Patents Attack conducted by the radio program This American Life.

Trademark
A trademark is a word, phrase, logo, shape, or sound that identifies a source of goods or services. For example, the Nike “Swoosh,”
the Facebook "f," Apple's apple (with a bite taken out of it), and Kleenex (the facial tissue brand) are all trademarked. The concept
behind trademarks is to protect the consumer. Imagine going to the local shopping center to purchase a specific item from a specific
store and finding that there are several stores all with the same name!
Two types of trademarks exist – a common-law trademark and a registered trademark. As with copyright, an organization will
automatically receive a trademark if a word, phrase, or logo is being used in the normal course of business (subject to some
restrictions, discussed below). A common-law trademark is designated by placing “TM” next to the trademark. A registered
trademark has been examined, approved, and registered with the trademark office, such as the Patent and Trademark Office in the
US. A registered trademark has the circle-R (®) placed next to the trademark.
While most any word, phrase, logo, shape, or sound can be trademarked, there are a few limitations.
A trademark will not hold up legally if it meets one or more of the following conditions:
1. The trademark is likely to be confused with a mark in an existing registration or prior application.
2. The trademark is merely descriptive for the goods/services. For example, trying to register the trademark “blue” for a blue
product you sell will not pass muster.
3. The trademark is a geographic term.
4. The trademark is a surname. You will not be allowed to trademark “Smith’s Bookstore.”
5. The trademark is ornamental as applied to the goods. For example, a repeating flower pattern that is a design on a plate cannot
be trademarked.
As long as an organization uses its trademark and defends it against infringement, the protection it affords does not expire. Thus, many organizations defend their trademarks against other companies whose branding copies theirs even slightly. For example, Chick-fil-A has trademarked the phrase “Eat Mor Chikin” and has vigorously defended it against a small business using the slogan “Eat More Kale.” Coca-Cola has trademarked its bottle's contour shape and will bring legal action against any company using a similar bottle design. Examples of trademarks that have been diluted and have now lost their protection in the US include “aspirin” (originally trademarked by Bayer), “escalator” (originally trademarked by Otis), and “yo-yo” (originally trademarked by Duncan).
Information Systems and Intellectual Property
The rise of information systems has forced us to rethink how we deal with intellectual property. From the increase in patent
applications swamping the government’s patent office to the new laws that must be put in place to enforce copyright protection,
digital technologies have impacted our behavior.
Privacy
The term privacy has many definitions, but for our purposes, privacy will mean the ability to control information about oneself. Our ability to maintain our privacy has eroded substantially in the past decades due to information systems.
Personally Identifiable Information (PII)
Information about a person that can uniquely establish that person’s identity is called personally identifiable information, or PII.
This is a broad category that includes information such as:
name;
social security number;
date of birth;
place of birth;
mother‘s maiden name;
biometric records (fingerprint, face, etc.);
medical records;
educational records;
financial information; and
employment information.
Organizations that collect PII are responsible for protecting it. The Department of Commerce recommends that “organizations
minimize the use, collection, and retention of PII to what is strictly necessary to accomplish their business purpose and mission.”
They go on to state that “the likelihood of harm caused by a breach involving PII is greatly reduced if an organization minimizes
the amount of PII it uses, collects, and stores” (NIST SP 800-122). Organizations that do not protect PII can face penalties, lawsuits, and loss of
business. In the US, most states now have laws requiring organizations that have had security breaches related to PII to notify
potential victims, as does the European Union.
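To make this minimization advice concrete, here is a minimal, hypothetical Python sketch of stripping or masking PII before a record is kept for analytics. The field names and masking rules are invented for illustration and are not taken from any regulation or standard.

```python
# Hypothetical example: minimize PII before storing a record for analytics.
# Field names and masking rules are illustrative only.

PII_FIELDS = {"ssn", "date_of_birth", "mothers_maiden_name", "medical_record"}

def minimize_record(record: dict) -> dict:
    """Return a copy of the record with PII fields removed
    and the name reduced to an initial."""
    cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
    if "name" in cleaned:
        cleaned["name"] = cleaned["name"][0] + "."   # keep only an initial
    return cleaned

customer = {
    "name": "Maria Lopez",
    "ssn": "123-45-6789",
    "date_of_birth": "1990-04-02",
    "zip_code": "95112",
    "purchase_total": 42.50,
}

print(minimize_record(customer))
# {'name': 'M.', 'zip_code': '95112', 'purchase_total': 42.5}
```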
Just because companies are required to protect your information does not mean they are restricted from sharing it. In the US,
companies can share your information without your explicit consent (see sidebar below), though not all do so. The FTC urges
companies that collect PII to create a privacy policy and post it on their website. California requires a privacy policy for any
website that does business with a resident of the state.
While US privacy laws seek to balance consumer protection with promoting commerce, in the European Union privacy is considered a fundamental right that outweighs the interests of commerce. This has led to much stricter privacy protection in the EU, but it also makes commerce between the US and the EU more difficult.
Non-Obvious Relationship Awareness
Digital technologies have given us many new capabilities that simplify and expedite the collection of personal information. Every time we come into contact with digital technologies, information about us is being made available. From our location to our web-surfing habits, from our criminal record to our credit report, we are constantly being monitored. This information can then be aggregated to create profiles of each of us. While much of the information collected was available in the past, collecting and combining it took time and effort. Today, detailed information about us is available for purchase from different companies, and even information not categorized as PII can be aggregated in such a way that an individual can be identified.
Fig 12.2: Non-obvious relationship awareness (NORA). Image by David Bourgeois, Ph.D., is licensed CC BY-NC-SA 4.0.
First commercialized by big casinos looking to find cheaters, non-obvious relationship awareness (NORA) is used by both government agencies and private organizations, and it is big business. In some settings, NORA can bring many benefits, such as in law enforcement: by identifying potential criminals more quickly, crimes can be solved sooner or even prevented before they happen. But these advantages come at a price: our privacy.
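The sketch below is a deliberately tiny, hypothetical illustration of the NORA idea: two data sets that each seem harmless are joined on shared attributes (name and ZIP code here) to single out one individual. The data and matching rule are invented.

```python
# Hypothetical illustration of aggregating separate data sets to identify a person.
hotel_guests = [
    {"name": "A. Smith", "zip": "89109", "birth_year": 1980},
    {"name": "B. Jones", "zip": "10001", "birth_year": 1975},
]
loyalty_members = [
    {"member": "A. Smith", "zip": "89109", "favorite_game": "blackjack"},
]

def correlate(guests, members):
    """Join two independent data sets on shared quasi-identifiers."""
    matches = []
    for g in guests:
        for m in members:
            if g["name"] == m["member"] and g["zip"] == m["zip"]:
                matches.append({**g, **m})   # merged profile of one individual
    return matches

print(correlate(hotel_guests, loyalty_members))
```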
Restrictions on Data Collecting
Information privacy or data protection laws provide legal guidelines for obtaining, using, and storing data about a country's citizens. The European Union has had the General Data Protection Regulation (GDPR) in force since 2018. The US does not have a comprehensive information privacy law but has adopted sectoral laws.
Children’s Online Privacy Protection Act (COPPA)
Websites collecting information from children under the age of thirteen are required to comply with the Children’s Online Privacy Protection Act (COPPA), which is enforced by the Federal Trade Commission (FTC). To comply with COPPA, organizations must make a good-faith effort to determine the age of those accessing their websites and, if users are under thirteen years old, must obtain parental consent before collecting any information from them.
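As a rough illustration only (not legal guidance), a site might gate data collection on a self-reported birth date and a parental-consent flag, as in this hypothetical Python sketch:

```python
from datetime import date

def may_collect_data(birth_date: date, has_parental_consent: bool) -> bool:
    """Hypothetical COPPA-style gate: collect data from users under 13
    only if verifiable parental consent has been obtained."""
    today = date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= 13 or has_parental_consent

print(may_collect_data(date(2015, 6, 1), has_parental_consent=False))  # False while this child is under 13
print(may_collect_data(date(2000, 6, 1), has_parental_consent=False))  # True: user is an adult
```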
Family Educational Rights and Privacy Act (FERPA)
The Family Educational Rights and Privacy Act (FERPA) is a US law that protects the privacy of student education records. In brief, this law specifies that parents have a right to their child’s educational information until the child either reaches the age of eighteen or begins attending school beyond the high school level; at that point, control of the information passes to the child. While this law is not specifically about the digital collection of information on the Internet, the educational institutions collecting student information are at a higher risk of disclosing it improperly because of digital technologies. This became especially apparent during the Covid-19 pandemic, when face-to-face classes at educational institutions transitioned to online classes. Institutions need to have policies in place that protect student privacy during video meetings and recordings.
Health Insurance Portability and Accountability Act (HIPAA)
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is the law that specifically singles out records related to
health care as a special class of personally identifiable information. This law gives patients specific rights to control their medical
records, requires health care providers and others who maintain this information to get specific permission to share it, and imposes
penalties on the institutions that breach this trust. Since much of this information is now shared via electronic medical records, the
protection of those systems becomes paramount.
In the US, if you key in data, you generally own the right to store and use it, even if the data was collected without permission, except where it is regulated by laws and rules such as those above. Very few states recognize an individual’s right to privacy; California is the exception. The California Online Privacy Protection Act of 2003 (OPPA) requires operators of commercial websites or online services that collect personal information on California residents through a website to conspicuously post a privacy policy on the site.
Sidebar: Do Not Track
When it comes to getting permission to share personal information, the US and the EU have different approaches. In the US, the “opt-out” model is prevalent: the default assumption is that you have agreed to share your information with the organization, and you must explicitly tell them if you do not want your information shared. No laws prohibit sharing your data (beyond some specific categories of data, such as medical records). In the European Union, the “opt-in” model is required to be the default: you must give your explicit permission before an organization can share your information.
To combat this sharing of information, the Do Not Track initiative was created. As its creators explain:
Do Not Track is a technology and policy proposal that enables users to opt-out of tracking by websites they do not visit, including
analytics services, advertising networks, and social platforms. At present, few of these third parties offer a reliable tracking opt-out,
and tools for blocking them are neither user-friendly nor comprehensive. Much like the popular Do Not Call registry, Do Not Track
provides users with a single, simple, persistent choice to opt-out of third-party web tracking.
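Technically, a browser with Do Not Track enabled adds a DNT: 1 header to each request; whether a site honors it is up to the site. Below is a minimal, hypothetical Flask sketch in which a placeholder analytics call is skipped when the header is present.

```python
# Minimal, hypothetical sketch of honoring the "DNT: 1" request header in Flask.
# record_analytics() is a placeholder for a real tracking/analytics call.
from flask import Flask, request

app = Flask(__name__)

def record_analytics(path: str) -> None:
    print(f"tracking visit to {path}")   # stand-in for a real analytics service

@app.route("/")
def home():
    if request.headers.get("DNT") != "1":
        record_analytics(request.path)   # only track users who have not opted out
    return "Welcome!"

if __name__ == "__main__":
    app.run()
```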
References:
EFF. Unintended consequences - 16 years under DMCA (2014). Retrieved November 10, 2020, from https://www.eff.org/wp/unintended-consequences-16-years-under-dmca
EFF. Do not track. Retrieved November 10, 2020, from http://donottrack.us/
Guide to Protecting the Confidentiality of Personally Identifiable Information (PII). National Institute of Standards and
Technology. US Department of Commerce Special Publication 800-122. http://csrc.nist.gov/publications/nistpubs/800-122/sp800-
122.pdf
US Patent and Trademark Office, "What is a patent?" Retrieved November 10, 2020, from www.uspto.gov/patents/
This page titled 12.3: The Digital Millennium Copyright Act is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by
Ly-Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .
12.4: Summary
Summary
The rapid changes in information technology in the past few decades have brought a broad array of new capabilities and powers to governments, organizations, and individuals alike. These new capabilities have required thoughtful analysis and the creation of new norms, regulations, and laws. In this chapter, we have seen how intellectual property and privacy have been affected by these new capabilities and how the regulatory environment has changed to address them.
This page titled 12.4: Summary is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .
12.5: Study Questions
Study Questions
1. What does the term information systems ethics mean?
2. What is a code of ethics? What is one advantage and one disadvantage of a code of ethics?
3. What does the term intellectual property mean? Give an example.
4. What protections are provided by a copyright? How do you obtain one?
5. What is fair use?
6. What protections are provided by a patent? How do you obtain one?
7. What does a trademark protect? How do you obtain one?
8. What does the term personally identifiable information mean?
9. What protections are provided by HIPAA, COPPA, and FERPA?
10. How would you explain the concept of NORA?
Exercises
1. Provide one example of how information technology has created an ethical dilemma that would not have existed before the
advent of information technology.
2. Find an example of a code of ethics or acceptable use policy related to information technology and highlight five points that you
think are important.
3. Find an example of work done under a CC license.
4. Do some original research on the effort to combat patent trolls. Write a two-page paper that discusses this legislation.
5. Give an example of how NORA could be used to identify an individual.
6. How are intellectual property protections different across the world? Pick two countries and do some original research, then
compare the patent and copyright protections offered in those countries to those in the US. Write a two- to three-page paper
describing the differences.
This page titled 12.5: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .
CHAPTER OVERVIEW
13: Future Trends in Information Systems

Learning Objectives

Upon successful completion of this chapter, you will be able to:

Describe future trends in information systems.
This final chapter presents an overview of some new or recently introduced technologies and their advances. From wearable technology, virtual reality, and the Internet of Things to quantum computing and artificial intelligence, this chapter provides a look forward to what the next few years may bring to potentially transform how we learn, communicate, do business, work, and play.
13.1: Introduction
13.2: Collaborative
13.3: Internet of Things (IoT)
13.4: Future of Information Systems
13.5: Study Questions
This page titled 13: Future Trends in Information Systems is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .
13.1: Introduction
Introduction
Information systems have evolved at a rapid pace ever since their introduction in the 1950s. Today, devices that we can hold in one
hand are more powerful than the computers used to land a man on the moon. The Internet has made the entire world accessible to
people, allowing us to communicate and collaborate like never before. In this chapter, we will examine current trends and look
ahead to what is coming next.
Global
The first trend to note is the continuing expansion of globalization due to the commercialization of the internet. The use of the Internet is growing worldwide, and with it, the use of digital devices. Significant growth is forecast for all regions, with some, such as Asia and Latin America, growing faster than others.
The United Nations June 2020 “Report of the Secretary-General: Roadmap for Digital Cooperation” reports that 86.6% of people in developed countries are online, while only 19% of people in the least developed countries are, with Europe being the region with the highest usage rate and Africa the lowest.
Chapter 11 discussed that by Q3 of 2020, approximately 4.9 billion people, or more than half of the world’s population, used the internet, representing growth of 1,266% for the world total since 2000, with Asia at 2,136% and Latin America at 2,489%. Even the slowest-growing region still grew by more than 200%. For more details, please view the data at https://internetworldstats.com/stats.htm.
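These growth figures are simple percentage changes relative to the year-2000 user counts. As a rough check, assuming roughly 360 million internet users worldwide in 2000 (an approximate figure not stated above) and about 4.9 billion in 2020, the arithmetic looks like this:

```python
# Percent growth = (current - baseline) / baseline * 100
users_2000 = 0.36e9   # ~360 million users in 2000 (approximate assumption)
users_2020 = 4.9e9    # ~4.9 billion users in 2020 (approximate)

growth_pct = (users_2020 - users_2000) / users_2000 * 100
print(f"{growth_pct:.0f}%")   # roughly 1,261%, in line with the ~1,266% cited above
```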
Social Media
Social media is one of the most popular internet activities worldwide. Statista.com reports that as of January 2020, the global usage
rate for social media is 49%, and people spend about 144 minutes per day on social media. Even then, there are still billions of
people that remain unconnected, according to datareportal.com. For more details, please read the entire report of Digital 2020.
As of October 2020, Statista.com also reports that Facebook remains the most popular social network globally, with about 2.7B monthly active users, followed by YouTube and WhatsApp with 2B each, WeChat at 1.2B, Instagram at 1.1B, TikTok at 689M, and Twitter at 353M.
For more details, please view this report at Statista.com.
Personalization
With the continued increase in internet and e-commerce usage, users have moved beyond the simple, unique ringtones on mobile phones. They now expect a more personalized experience in products, services, entertainment, and learning, such as highly targeted, just-in-time recommendations finely tuned to their preferences based on vendors’ data. For example, Netflix recommends shows its subscribers might want to watch. Wearable devices from vendors such as Apple, Google, and Amazon make personalized recommendations for exercise, meditation, and diet, among others, based on your current health conditions.
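A toy, hypothetical sketch of the idea behind such recommendations: find the user whose history overlaps most with yours and suggest what they watched that you have not. The data and similarity rule are invented; real services such as Netflix use far more sophisticated models.

```python
# Toy "users who liked what you liked" recommender; data is invented for illustration.
watch_history = {
    "ana":   {"Show A", "Show B", "Show C"},
    "ben":   {"Show A", "Show B", "Show D"},
    "carla": {"Show E"},
}

def recommend(user: str) -> set:
    """Recommend shows watched by the most similar other user."""
    mine = watch_history[user]
    best_match = max(
        (u for u in watch_history if u != user),
        key=lambda u: len(watch_history[u] & mine),   # overlap in watched shows
    )
    return watch_history[best_match] - mine

print(recommend("ana"))   # {'Show D'} -- ben overlaps most with ana
```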
Mobile
Perhaps the most impactful trend in digital technologies in the last decade has been the advent of mobile technologies. Beginning
with the simple cell phone in the 1990s and evolving into the smartphones and tablets of today, mobile growth has been
overwhelming.
Since smartphones were introduced in the 1990s, this new industry has exploded into a trillion-dollar economy, with $484B spent on smartphones, $176B on mobile advertising, $118B on apps, $77B on accessories, and $25B on wearables (Statista, 2020). For more details, please view The Trillion-Dollar Smartphone Economy.
Wearables
The wearable market, now a $25B economy, includes specific-purpose products such as fitness bands, smart socks, eyewear, and hearing aids. We are now seeing a convergence between general-purpose devices such as computers and televisions and portable devices such as smartwatches and smartphones. It is also anticipated that wearable products will touch many different aspects of consumers’ lives; examples include smart clothing such as Neviano smart swimsuits and Levi’s Jacquard jacket (Lifewire, 2020).
Advances in artificial intelligence, sensors, and robotics will also extend wearables to front-line workers. Exoskeletons such as Ekso’s EVO can assist workers who have to carry heavy items, such as firefighters and warehouse workers, or serve the health industry by providing mobility to people whose mobility is limited.
References
EVO is designed to provide power without pain. Retrieved December 10, 2020, from https://eksobionics.com/ekso-evo/.
Statista. The Trillion-Dollar Smartphone Economy (2019). Retrieved December 11, 2020, from
https://www.statista.com/chart/20258/estimated-sales-of-smartphones-and-related-products-and-services/.
Lifewire. The 7 Best Smart Clothes of 2020. Retrieved December 11, 2020, from https://www.lifewire.com/best-smart-clothes-
4176104.
Report of the Secretary-General Roadmap for Digital Cooperation (June 2020.) Retrieved December 11, 2020, from
https://www.un.org/en/content/digital-cooperation-roadmap/.
World Internet Usage and Population Statistics 2020 Year-Q3 Estimates (2020). Retrieved December 11, 2020, from
https://internetworldstats.com/stats.htm.
This page titled 13.1: Introduction is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .
13.2: Collaborative
Collaborative
Collaborators as free content-providers
Internet usage has continued to give rise to collaborative efforts among consumers and businesses worldwide. Consumers have gained influence by sharing reviews of products and services; it is now common for people to look up other people’s reviews before buying a product or visiting a restaurant, via sites such as Yelp, instead of relying only on information from vendors. Businesses have also leveraged consumers’ collaboration to contribute to the content of a product. For example, the smartphone app Waze is a community-based tool that keeps track of the route you are traveling and how fast you are making your way to your destination. In return for providing your data, you benefit from the data sent by all the other app users: Waze routes you around traffic and accidents based on real-time reports. In these examples, businesses rely on users spending their free time writing reviews or sharing data with other people; in essence, they monetize people’s time and content.
Shared economy collaborators
New types of companies such as Airbnb and Uber incorporate consumers into their business models and share a fraction of the revenues with them. These companies monetize assets owned by everyday people. For example, Airbnb uses its technology platform to let people rent out rooms and houses they own, and Uber popularized the gig economy by having people drive their own cars for hire. This trend is expected to continue and expand into other industries, such as advertising.
Telecommunication
Personal communication
Communication technologies such as Voice-over-IP (VoIP) have given consumers a means to communicate with each other for free through services such as Microsoft Skype and WhatsApp, instead of paying for expensive traditional phone lines. The combined use of smartphones, VoIP, and more powerful servers, among other factors, has made landlines outdated and expensive: by 2019, the share of households with a landline had decreased to less than 40%, down from 90% in 2004 (Statista.com, 2019).
Entertainment
The above trend continues to affect other industries, such as consumers’ exodus from cable or pay-TV services to streaming services, a phenomenon called ‘cutting the cord’ driven by the rise of companies such as Netflix and Hulu. By 2022, the number of households not paying for TV services in North America is estimated to grow to around 55.1 million (Statista.com, 2019). The convergence of TV, computers, and entertainment will continue as technologies become easier to use and the infrastructure that delivers data, such as 5G networks, becomes faster.
Virtual environment
Tele-work
Telecommuting has been a trend that ebbs and flows as companies experiment with technologies that allow their workers to work from home. However, with the Covid-19 pandemic, telecommuting became essential as people worldwide worked from home to comply with national or regional stay-at-home orders. The debate over the merits of telework has been set aside, and its adoption has spread to many industries that had previously eschewed this use of technology. For example, therapy counseling and medical visits with primary care providers can now be done remotely. The post-pandemic work environment may not necessarily be the same as it was: organizations have now gained valuable insights about having most, if not all, of their workforce work from home. In one year, Zoom, the name of a previously little-known company providing video communications, became a household word, gaining a 37% usage rate, with Microsoft Teams trailing at 19%, Skype at 17%, Google Hangouts at 9%, and Slack at 7% (Statista, 2020).
Immersion - virtual reality
Tele-work allows us to see other people while we remain in our physical world. Virtual reality (VR) goes further, giving us the perception of being physically in another world. Research into building VR has been going on since the 1990s or even earlier. One example is CAVE2, also known as the Next-Generation CAVE (NG-CAVE), a National Science Foundation-funded research project that grew out of the original CAVE of 1992 and allows researchers to ‘walk around in a human brain’ or ‘fly over Mars.’ Please watch this video on YouTube or search for the keyword ‘CAVE2’ for more details.
Technologies are not yet mature enough to give us a 100% immersive experience, but they may already be good enough for some products on a smaller scale, such as gaming or training. For example, if we use a VR goggle to play a game, we become a character in that game. The same technology can be used to train police officers.
Figure 13.2.1 : A woman using the Manus VR glove development kit in 2016. (CC BY-SA 4.0; Manus VR via Wikipedia)
3D Printing
3D printing completely changes our current thinking of what a printer is, and of the notion of printing itself. We typically use printers to print reports, letters, or pictures on physical paper. A 3-D printer allows you to print virtually any 3-D object based on a model of that object designed on a computer. 3-D printers work by building up layer upon layer of the model using malleable materials such as different types of glass, metals, wax, or even food ingredients.
3-D printing is quite useful for prototyping the designs of products to determine their feasibility and marketability. It has also been used to create working prosthetic legs and handguns. ICON can print a 500-square-foot home in 48 hours for about $10,000, NASA wants to print pizzas for astronauts, and we can now print cakes too. In 2020, the US Air Force produced its first 3D-printed metal part for aircraft engines.
This technology can potentially affect the global value chain for developing products: entrepreneurs can build prototypes in their garages, and it can provide solutions to some social challenges. For example, producing a prototype of a 3D object for research and engineering can now be done in-house using a 3D printer, which speeds up development time, and tiny homes can be built at a fraction of the cost of a traditional home.
With the rising consumer demand for more personalization (as discussed earlier), this technology may help businesses deliver on this need through shoes, clothing, and even 3D-printed cars.
References
CAVE2 immerses scientists and engineers in their research -- literally! - Science Nation (2013). Retrieved December 10, 2020,
from https://www.youtube.com/watch?v=kjAviW2alpA.
I Printed a 3D Gun (2013). Retrieved December 10, 2020, from https://mashable.com/2013/06/02/3d-printed-gun/.
NASA astronauts may soon be able to 3D-print pizzas in space (2017). Retrieved December 11, 2020, from
https://www.zdnet.com/article/nasa-astronauts-may-soon-be-able-to-3d-print-pizzas-in-space/.
Printed homes (2018). Retrieved December 11, 2020, from https://www.iconbuild.com/hom.
US Air Force produces the first 3D-printed metal part for aircraft engines (2020.) Retrieved December 11, 2020, from
https://www.flightglobal.com/fixed-wing/us-air-force-produces-first-3d-printed-metal-part-for-aircraft-engines/139643.article.
Statista. Cord Cutting (2019). Retrieved December 11, 2020, from https://www.statista.com/topics/4527/cord-cutting/.
Statista. Most used collaboration tools used for remote work in the United States in 2020. Retrieved December 10, 2020, from
https://www.statista.com/statistics/1123023/top-collaboration-tools-for-remote-workers-in-the-us/.
This page titled 13.2: Collaborative is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham, Tejal
Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .
13.3: Internet of Things (IoT)
Internet of Things (IoT)
Rouse (2019) explains that IoT is implemented as a set of web-enabled physical objects, or things, embedded with software, hardware, sensors, and processors that collect and send the data they acquire from their environments. A ‘thing’ could be just about anything: a machine, an object, an animal, or even a person, as long as it has an embedded unique ID and is web-enabled.
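A minimal, hypothetical Python sketch of that pattern: a uniquely identified 'thing' samples its environment and packages the reading to send to a collection service. The device ID, sensor stand-in, and payload format are invented for illustration.

```python
# Hypothetical IoT pattern: a uniquely identified device packages sensor readings as JSON.
import json, time, random

DEVICE_ID = "thermostat-0042"   # embedded unique ID (illustrative)

def read_temperature() -> float:
    return 20.0 + random.random() * 5   # stand-in for a real sensor driver

def build_reading() -> str:
    """Package one sensor sample as JSON, ready to send to a collection service."""
    return json.dumps({
        "device": DEVICE_ID,
        "temp_c": round(read_temperature(), 1),
        "ts": time.time(),
    })

# In a real device this payload would be sent (e.g., over HTTP or MQTT) to the vendor's cloud.
print(build_reading())
```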
In a report by McKinsey & Company on the Internet of Things (Chui et al., 2010), six broad applications are identified:
Tracking behavior. When products are embedded with sensors, companies can track these products' movements and even
monitor interactions with them. Business models can be fine-tuned to take advantage of this behavioral data. Some insurance
companies, for example, are offering to install location sensors in customers’ cars. That allows these companies to base the
price of policies on how a car is driven and where it travels.
Enhanced situational awareness. Data from large numbers of sensors, for example, in infrastructure (such as roads and
buildings), or to report on environmental conditions (including soil moisture, ocean currents, or weather), can give decision-
makers a heightened awareness of real-time events, particularly when the sensors are used with advanced display or
visualization technologies. Security personnel, for instance, can use sensor networks that combine video, audio, and vibration
detectors to spot unauthorized individuals who enter restricted areas.
Sensor-driven decision analysis. The Internet of Things also can support longer-range, more complex human planning and
decision making. The technology requirements – tremendous storage and computing resources linked with advanced software
systems that generate various graphical displays for analyzing data – rise accordingly.
Process optimization. Some industries, such as chemical production, are installing legions of sensors to bring much greater
granularity to monitoring. These sensors feed data to computers, which in turn analyze the data and then send signals to
actuators that adjust processes – for example, by modifying ingredient mixtures, temperatures, or pressures.
Optimized resource consumption. Networked sensors and automated feedback mechanisms can change usage patterns for
scarce resources, such as energy and water. This can be accomplished by dynamically changing the price of these goods to
increase or reduce demand.
Complex autonomous systems. The most demanding use of the Internet of Things involves the rapid, real-time sensing of
unpredictable conditions and instantaneous responses guided by automated systems. This kind of machine decision-making
mimics human reactions, though at vastly enhanced performance levels. The automobile industry, for instance, is stepping up
the development of systems that can detect imminent collisions and take evasive action.
IoT has evolved since the 1970s, and by 2020 it had become most associated with the smart home, with products such as smart thermostats, smart locks, lights, home security systems, and home appliances. For example, Amazon Echo, Google Home, and Apple’s HomePod are smart-home hubs that manage all the smart IoT devices in the home. More and more IoT devices will continue to be offered as vendors seek to make everything ‘smart.’
Autonomous
A trend that is emerging is autonomous robots and vehicles. By combining software, sensors, and location technologies, devices that can operate themselves to perform specific functions are being developed. These take the form of creations such as medical nanotechnology robots (nanobots), self-driving cars, self-driving trucks, drones, and other uncrewed aerial vehicles (UAVs).
A nanobot is a robot whose components are on a nanometer scale, which is one-billionth of a meter. While still an emerging field, it shows promise for applications in medicine; for example, a set of nanobots could be introduced into the human body to combat cancer or a specific disease. In March of 2012, Google introduced the world to its driverless car by releasing a video on YouTube showing a blind man driving the car around the San Francisco area (search for “Self-Driving Car Test: Steve Mahan”). The car combines several technologies, including a laser radar system, with equipment worth about $150,000.
By 2020, 38 states had enacted some legislation allowing various activities, ranging from conducting studies and limited pilot testing to full deployment of commercial motor vehicles without a human operator; the details can be found at ghsa.org.
The Society of Automotive Engineers (SAE, 2018) has designed a zero-to-five rating system detailing the varying levels of automation; the higher the level, the more automated the vehicle is. (A brief sketch encoding these levels follows the list.)
Level Zero: No Automation – The driver does all the driving without any help from the vehicle.
Level One: Driver Assistance – The vehicle helps steer or speed up/slow down, but the driver still does the driving.
Level Two: Partial Automation – The vehicle helps with one or more systems, but the driver still does the driving.
Level Three: Conditional Automation – The vehicle handles steering and braking/acceleration, but the driver, still sitting in the driver's seat, must monitor and be ready to intervene as necessary.
Level Four: High Automation – The vehicle completes all driving duties without driver intervention, but only in limited conditions (e.g., local taxis).
Level Five: Full Automation – The vehicle completes all duties without a driver, on all roads, in all conditions.
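As promised above, here is a small, hypothetical Python sketch that encodes these six levels and the simplified rule that a human must still monitor the vehicle at levels 0 through 3:

```python
# Hypothetical encoding of the SAE driving-automation levels described above.
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_must_monitor(level: SAELevel) -> bool:
    """At levels 0-3 a human driver still has to monitor or be ready to take over."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_monitor(SAELevel.FULL_AUTOMATION))     # False
```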
Consumers have begun to see features from levels 1 to 3 integrated into today’s non-autonomous cars, and this trend is expected to continue.
A UAV, often referred to as a “drone,” is a small airplane or helicopter that can fly without a pilot. Instead of a pilot, drones are either flown autonomously by onboard computers or operated by a person using a remote control. While most drones today are used for military or civil applications, there is a growing market for personal drones: for a few hundred dollars, a consumer can purchase one for personal use.
Commercial use of UAVs is beginning to emerge. Companies such as Amazon plan to deliver packages to customers using drones, and Walmart plans to use drones to carry items in its stores. This sector is forecasted to become a $12.6B worldwide market by 2025 (Statista.com, 2019).
References:
Autonomous Vehicles. Retrieved December 10, 2020, from https://www.ghsa.org/state-laws/issues/autonomous%20vehicles.
Chui, M. and Roberts R (2010, March 1). The Internet of Things. Retrieved December 10, 2020, from
https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-internet-of-things.
Rouse, Margaret (2019). Internet of things (IoT). IoT Agenda. Retrieved December 11, 2020, from
https://internetofthingsagenda.techtarget.com/definition/Internet-of-Things-IoT.
SAE International Releases Updated Visual Chart for Its “Levels of Driving Automation” Standard for Self-Driving Vehicles
(2018). Retrieved December 10, 2020, from https://www.sae.org/news/press-room/2018/12/sae-international-releases-updated-
visual-chart-for-its-%E2%80%9Clevels-of-driving-automation%E2%80%9D-standard-for-self-driving-vehicles.
Statista. Commercial Drones are Taking Off (2019). Retrieved December 11, 2020, from
https://www.statista.com/chart/17201/commecial-drones-projected-growth/.
This page titled 13.3: Internet of Things (IoT) is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T.
Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .
13.4: Future of Information Systems
Future of Information Systems
Quantum computer
Today’s computers use bits as data units. A bit's value can only be either 0 or 1, as we discussed in Chapter 2. Quantum computers use qubits, which can represent a combination of both 0 and 1 simultaneously, leveraging the principles of quantum physics. This is a game-changer for computing and will disrupt all aspects of information technology. The benefits include a significant speed increase in calculations that will enable solutions for problems that are unsolvable today. However, many technical problems remain to be solved, since all the elements of an information system will need to be re-imagined. Google announced the first real proof of a working quantum computer in 2019 (Menard et al., 2020). Menard et al. also indicate that the industries that would benefit most from this new type of computer are those with complex problems to solve, such as pharmaceuticals, autonomous vehicles, and cybersecurity, or those requiring intense mathematical modeling, such as finance and energy. For a full report, please visit McKinsey.com.
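As a rough illustration of "a combination of both 0 and 1": a single qubit can be described by two amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1. The Python sketch below only simulates that idea classically; it is not how real quantum computers are programmed.

```python
# Classical toy model of one qubit: two amplitudes whose squared magnitudes sum to 1.
import math, random

alpha = 1 / math.sqrt(2)   # amplitude for |0>
beta = 1 / math.sqrt(2)    # amplitude for |1>  -> an equal superposition

p_zero = abs(alpha) ** 2
p_one = abs(beta) ** 2
print(p_zero, p_one)        # ~0.5 and ~0.5

def measure() -> int:
    """Measurement collapses the superposition to 0 or 1 with these probabilities."""
    return 0 if random.random() < p_zero else 1

print([measure() for _ in range(10)])   # a random mix of 0s and 1s
```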
Blockchain
A blockchain is a set of blocks, or a list of records, linked using cryptography to record transactions and track assets in a network. Anything of value can be considered an asset and be tracked: examples include a house, cash, patents, or a brand. Once a transaction is recorded, it cannot be changed retroactively; hence, a blockchain is considered highly secure.
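A minimal, hypothetical Python sketch of why recorded transactions are hard to alter: each block stores a hash of the previous block, so tampering with an old record breaks every later link. Real blockchains add consensus, digital signatures, and much more.

```python
# Toy hash-linked chain illustrating why recorded transactions are hard to alter.
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]

def add_block(data: str) -> None:
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1, "data": data,
                  "prev_hash": block_hash(prev)})

def chain_is_valid() -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

add_block("Alice pays Bob 5")
add_block("Bob pays Carol 2")
print(chain_is_valid())                   # True

chain[1]["data"] = "Alice pays Bob 500"   # tamper with history
print(chain_is_valid())                   # False -- a later hash no longer matches
```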
Blockchain has many applications, but it is most associated with bitcoin because bitcoin was the first application built on blockchain technology. Bitcoin and blockchain are sometimes mistakenly assumed to be the same thing, but they are not.
Bitcoin is digital money, or a cryptocurrency. It is an open-source application built using blockchain technology, and it is meant to eliminate the need for a central bank, since people can send bitcoins directly to one another. Simply put, bitcoin keeps track of a list of who sends how many bitcoins to another person. One difference from today’s money is that a bitcoin’s value fluctuates, since it trades much like a stock. Anyone can buy bitcoin or other cryptocurrencies on exchanges such as Coinbase. Bitcoin and other cryptocurrencies are accepted by a few organizations, such as Wikimedia, Microsoft, and Whole Foods. However, bitcoin’s adoption is still uncertain; if adoption by major companies accelerates, then banking locally and globally will change significantly.
Some early businesses have begun to use blockchain as part of their operations. Kroger uses IBM Blockchain to trace food from farms to its shelves so it can respond to food recalls quickly (IBM.com). Amazon Managed Blockchain is a fully managed service that makes it easy to create and manage scalable blockchain networks.
Artificial Intelligence (AI)
Artificial intelligence (AI) comprises many technologies that aim to duplicate the functions of the human brain. It has been researched since the 1950s and has seen an ebb and flow of interest. Understanding and duplicating the human brain is a complex interdisciplinary effort that involves multiple fields, such as computer science, linguistics, mathematics, neuroscience, biology, philosophy, and psychology. One approach is to organize the technologies as below; commercial solutions have been introduced in each area:
1. Expert systems: also known as decision support systems or knowledge management systems. These solutions have been widely deployed for decades and were discussed in earlier chapters, including knowledge management, decision support, customer relationship management, and financial modeling.
2. Robotics: this trend is more recent, even though robotics has been researched for decades. Robots can come in different shapes, such as a familiar object, an animal, or a human, and can be as tiny or as big as they are designed to be:
1. A nanobot is a robot whose components are on the scale of about a nanometer.
2. A robot with artificial skin designed to look like a human is called a humanoid. Humanoids are being deployed in limited situations, such as assistants to police or to senior citizens who need help. Two popular robots are Atlas from Boston Dynamics and the humanoid Sophia from Hanson Robotics.
Consumer products such as the smart vacuum iRobot Roomba are now widely available. The adoption of certain types of robots has accelerated in some industries due to the pandemic: Spot, the dog-like robot from Boston Dynamics, has been used to patrol for social distancing.
Figure 13.4.1 : Sophia, First Robot Citizen at the AI for Good Global Summit 2018. Image by ITU Pictures is licensed under CC
BY 2.0
3. Natural language: voice as a form of communication with our smart devices is now the norm; examples include Apple’s Siri and Amazon’s Alexa.
4. Vision: significant progress has been made in camera technologies and in solutions to store and manipulate visual images. Examples include advanced security systems, drones, face recognition, and smart glasses.
5. Learning systems: learning systems allow a computer (e.g., a robot) to react to situations based on the immediate feedback it receives or on the collection of feedback stored in its system. Simple forms of these learning systems can be found today in customers' online-chat support, also known as an ‘AI bot’ (a toy sketch follows this list). One such example is IBM’s Watson Assistant.
6. Neural networks: this is a collection of hardware and software technologies. The hardware includes wearable devices that allow humans to control machines using thoughts, such as Honda Motor’s Brain-Machine Interface. This is still in the research phase, but its results could impact many industries, such as healthcare.
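As referenced in item 5, here is a toy, hypothetical sketch of the keyword-matching logic behind the simplest kind of support 'AI bot'; the rules and phrases are invented, and commercial assistants such as IBM's Watson Assistant use far more sophisticated language understanding.

```python
# Toy keyword-based support bot; the rules and phrases are invented for illustration.
RESPONSES = {
    "password": "You can reset your password from the account settings page.",
    "refund": "Refund requests are handled within 5 business days.",
    "hours": "Our support line is open 9am-5pm, Monday through Friday.",
}

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Let me connect you with a human agent."

print(reply("How do I change my password?"))
print(reply("I want a refund for my order"))
print(reply("Do you ship to Canada?"))
```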
The goal of 100% duplicating a human brain has not been achieved yet, since no AI system has passed the test known as the Turing Test, devised by Alan Turing to answer the question “Can a machine think?” Turing is widely considered a founder of the AI field, and his test assesses a machine’s ability to exhibit intelligent behavior equivalent to that of a human. The test does not look for correct answers but rather for answers that closely resemble those a human would give.
Figure 13.4.2 : Alan Turing Aged 16. Image is licensed Public Domain
Even though AI has not yet been able to duplicate a human brain, its advances have introduced many AI-based technologies, such as AI bots and robotics, across many industries. AI progress has contributed to many practical business information systems that we discussed throughout this book, such as voice recognition, cameras, robots, and autonomous cars. It has also raised concerns over how ethical the development of some AI technologies is, as we discussed in previous chapters.
Advances in artificial intelligence depend on the continuous effort to collect vast amounts of data, information, and knowledge, on advances in hardware, and on sophisticated methods to analyze both unconnected and connected large datasets to make inferences and create new knowledge, all supported by secure, fast networks.
References
Boston Dynamics’ dog-like robot Spot is being used on coronavirus social distancing patrol (2020). Retrieved December 13, 2020,
from https://www.cnbc.com/2020/05/15/boston-dynamics-dog-like-robot-spot-used-on-social-distancing-patrol.html.
Changing your idea of what robots can do. Retrieved December 13, 2020, from https://www.bostondynamics.com/.
Honda's Brain-Machine Interface: controlling robots by thoughts alone (2009). Retrieved December 11, 2020, from
https://newatlas.com/honda-asimo-brain-machine-interface-mind-
control/11379/#:~:text=Honda%20Research%20Institute%2C%20Japan%2C%20has,using%20nothing%20more%20than%20thou
ght.&text=Then%2C%20the%20doors%20will%20be,and%20act%20directly%20upon%20them.
Kroger uses IBM Blockchain technology for farm to fork food traceability. Retrieved December 11, 2020, from
https://mediacenter.ibm.com/media/Kroger+uses+IBM+Blockchain+technology+for+farm+to+fork+food+traceability/0_527q9xfy.
Menard A., Ostojic I., and Patel M. (2020, February 6). A game plan for quantum computing. Retrieved December 10, 2020, from
https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/a-game-plan-for-quantum-computing.
The smarter AI assistant for business. Retrieved December 11, 2020, from https://www.ibm.com/cloud/watson-assistant-2/
This page titled 13.4: Future of Information Systems is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-
Huong T. Pham, Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .
13.5: Study Questions
Summary
Information systems have changed how we work, play, and learn since the internet was introduced to the masses. We may be at a tipping point now, with many significant advances in technologies that have been researched for decades converging at roughly the same time, as described in the trends above.
The adoption of many technologies has also been accelerated by the 2020 Covid-19 pandemic. Organizations will need to determine how they want to move forward to leverage opportunities and manage risks should any of the above trends become a reality.
As the world of information technology moves forward, we will be constantly challenged by new capabilities and innovations that
will both amaze and disgust us. As we learned in chapter 12, many times, the new capabilities and powers that come with these
new technologies will test us and require a new way of thinking about the world. Businesses and individuals alike need to be aware
of these coming changes and prepare for them.
Study Questions
1. Which countries are the biggest users of the Internet? Social media? Mobile?
2. Which region had the largest Internet growth (in %) as of this year?
3. Identify the top three social media networks by active users?
4. How many people worldwide still need to be connected?
5. Explain what a virtual environment is.
6. What are two different applications of wearable technologies?
7. What are two different applications of collaborative technologies?
8. What capabilities do printable technologies have?
9. What makes something an Internet of Things (IoT) device?
10. What is a UAV?
11. What is Conditional Automation?
12. What is a nanobot?
13. What is a humanoid?
14. Explain what Artificial intelligence is.
Exercises
1. If you were going to start a new technology business, which of the emerging trends do you think would be the biggest
opportunity? Do some original research to estimate the market size.
2. What privacy concerns could be raised by collaborative technologies such as Zoom?
3. Do some research about the first handgun printed using a 3-D printer and report some of the concerns raised.
4. Write up an example of how the Internet of Things might provide a business with a competitive advantage.
5. How do you think wearable technologies could improve overall healthcare?
6. What potential problems do you see with a rise in the number of driverless cars? Do some independent research and write a
two-page paper that describes where driverless cars are legal and what problems may occur.
7. Seek out the latest presentation by Mary Meeker on “Internet Trends” (if you cannot find it, the video from 2019 is available
https://www.youtube.com/watch?v=G_dwZB5h56E at the time of this writing). Write a one-page paper describing what the top
three trends are, in your opinion.
8. Visit ghsa.org to find what level of support your state has given to autonomous vehicles. Write a summary of the different levels
of support for all states.
This page titled 13.5: Study Questions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Ly-Huong T. Pham,
Tejal Desai-Naik, Laurie Hammond, & Wael Abdeljabbar (ASCCC Open Educational Resources Initiative (OERI)) .
Index
Network Representations Parallel Operation 4.1: Introduction to Data and Databases
5.7: Network Representations 10.4: Implementation Methodologies
Quality Support Engineers
Network Safety Password Security 9.3: Information-Systems Operations and
5.14: Network Security Administration
6.3: Tools for Information Security
Network Security Quantitative Data
Patent 4.1: Introduction to Data and Databases
5.11: Reliable Network 12.3: The Digital Millennium Copyright Act
5.14: Network Security Quantum computer
Patent Troll
Network Security Threats 13.4: Future of Information Systems
12.3: The Digital Millennium Copyright Act
5.14: Network Security
PC revolution
Network Symbols
1.3: The Role of Information Systems
5.7: Network Representations

3 https://workforce.libretexts.org/@go/page/15306
R Software Development Transaction Processing System
RAD 10.3: Software Development 7.4: Using Information Systems for Competitive
Software Licenses Advantage
10.2: Systems Development Life Cycle (SDLC)
Model 3.4: Software Creation Types of Users
RAM SQL 9.7: Information-Systems Users – Types of Users
2.3: Sidebar- Moore’s Law 4.3: Structured Query Language Types of users of information systems
Relational Database SSD 9: The People in Information System
4.2: Examples of Data 2.3: Sidebar- Moore’s Law
4.5: Sidebar- The Difference between a Database and Starbucks value chain model U
a Spreadsheet UAV
7.3: Competitive Advantage
removal media Strategy and internet 13.3: Internet of Things (IoT)
2: Hardware Unstructured Decision
7.3: Competitive Advantage
Robotics Structured Decision 7: Leveraging Information Technology (IT) for
13.4: Future of Information Systems Competitive Advantage
7: Leveraging Information Technology (IT) for
Role of information systems Competitive Advantage
1: What Is an Information System? Supply Chain Management V
1.3: The Role of Information Systems
3.2: Types of Software Value Chain
Support Analyst 7.3: Competitive Advantage
S 9.1: Introduction Video Communications
Satellite 9.3: Information-Systems Operations and 5.12: The Changing Network Environment Network
5.9: Internet Connections Administration Trends
SCRUM System Security Video streaming
10.2: Systems Development Life Cycle (SDLC) 6.1: Introduction 13.2: Collaborative
Model Systems Analyst Virtual environment
SDLC model 9.2: The Creators of Information Systems 13.2: Collaborative
10.2: Systems Development Life Cycle (SDLC) Systems Development Life Cycle Virtualization
Model 10.2: Systems Development Life Cycle (SDLC) 3.3: Cloud Computing
Security Model
Vision Systems
6.6: Security vs. Availability
13.4: Future of Information Systems
Security Administrator T
9.5: Emerging Roles Tablet W
Separate Networks 2.5: Other Computing Devices
Wael Abdeljabbar
5.10: The Network as a Platform Converged Technical Certifications
Networks About the Book
9.6: Career Path in Information Systems
Shadow IT WAN
Technology
10.3: Software Development 5.6: LANs, WANs, and the Internet
1: What Is an Information System? 5.7: Network Representations
Smart Home Technology Technology development
5.13: Technology Trends in the Home
Waterfall
7.3: Competitive Advantage 10.2: Systems Development Life Cycle (SDLC)
Smart phones growth Telecommunication Model
13.1: Introduction
13.2: Collaborative Wearables
Smartphones Telework 13.1: Introduction
2.5: Other Computing Devices
13.2: Collaborative Web 1.0
SOC The Digital Divide 1.3: The Role of Information Systems
6.5: Fighters in the War Against Cybercrime- The
Modern Security Operations Center
11.3: The Digital Divide Web 2.0
Social Media growth Threat of new entrants 1: What Is an Information System?
7.3: Competitive Advantage 1.3: The Role of Information Systems
13.1: Introduction
Threat of substitutes Wireless Broadband
Software 5.13: Technology Trends in the Home
7.3: Competitive Advantage
1: What Is an Information System?
3: Software Threat Vectors WISP
5.5: Providing Resources in a Network 5.14: Network Security 5.13: Technology Trends in the Home
Software component of Information Trademark World Wide Web
12.3: The Digital Millennium Copyright Act 1.3: The Role of Information Systems
Systems 5.2: A Brief History of the Internet
1.2: Identifying the Components of Information Trainer
Systems 9.3: Information-Systems Operations and
Write computer programs
Administration 3: Software
Software Developer
9.2: The Creators of Information Systems

4 https://workforce.libretexts.org/@go/page/15306
Glossary
Amdahl's law | A law or argument used to find the maximum expected improvement to an overall system when only part of the system is improved.

Analog data | Data that is represented in a physical way.

Assembler | Computer program which translates assembly language to an object file or machine language format.

Assembly Language | Low-level programming language for a computer, or other programmable device, in which there is a very strong (generally one-to-one) correspondence between the language and the architecture’s machine code instructions.

Cache memory | Random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. This memory is typically integrated directly with the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU.

Compiler | Computer program (or a set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language), with the latter often having a binary form known as object code.

Computer performance | Characterized by the amount of useful work accomplished by a computer system or computer network compared to the time and resources used.

Digital data | Discrete, discontinuous representations of information or works, as contrasted with continuous, or analog, signals, which behave in a continuous manner or represent information using a continuous function.

Direct memory access | A method that allows an input/output (I/O) device to send or receive data directly to or from the main memory, bypassing the CPU to speed up memory operations. The process is managed by a chip known as a DMA controller (DMAC).

Input / Output | The process of input or output, encompassing the devices, techniques, media, and data used.

Interrupt | A hardware signal that breaks the flow of program execution and transfers control to a predetermined storage location so that another procedure can be followed or a new operation carried out.

Machine Language | Set of instructions executed directly by a computer’s central processing unit (CPU).

Micro architecture | A description of the electrical circuitry of a computer, central processing unit, or digital signal processor that is sufficient for completely describing the operation of the hardware.

Multicore | A type of architecture where a single physical processor contains the core logic of two or more processors, packaged into a single integrated circuit.

Multiprocessor | Refers to the ability of a system to support more than one processor and/or the ability to allocate tasks between them.

Peripheral devices | Any auxiliary device such as a computer mouse or keyboard that connects to and works with the computer in some way. Other examples of peripherals are image scanners, tape drives, microphones, loudspeakers, webcams, and digital cameras.

Polling | Refers to actively sampling the status of an external device by a client program as a synchronous activity. Polling is most often used in terms of input/output (I/O), and is also referred to as polled I/O or software-driven I/O.

Quantization | The process of mapping a large set of input values to a (countable) smaller set.

Sampling | The reduction of a continuous signal to a discrete signal. A common example is the conversion of a sound wave (a continuous signal) to a sequence of samples (a discrete-time signal).

Glossary has no license indicated.
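As a point of reference for the Amdahl's law entry above, the law is commonly written in the following standard form (shown here for reference; the symbols are the conventional ones, not notation taken from this text):

\[
S_{\text{overall}} = \frac{1}{(1 - p) + \dfrac{p}{s}}
\]

where \(p\) is the fraction of execution time that benefits from the improvement and \(s\) is the speedup of that fraction. For example, if 50% of a workload is sped up by a factor of 4, the overall speedup is \(1 / (0.5 + 0.5/4) = 1.6\), which illustrates why improving only part of a system yields limited overall gains.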
Detailed Licensing
Overview
Title: Information Systems for Business
Webpages: 128
All licenses found:
CC BY 3.0: 97.7% (125 pages)
CC BY 4.0: 1.6% (2 pages)
Undeclared: 0.8% (1 page)
By Page
Information Systems for Business - CC BY 3.0
  Front Matter - CC BY 3.0
    TitlePage - CC BY 3.0
    InfoPage - CC BY 3.0
    ASCCC OERI - CC BY 3.0
    Table of Contents - Undeclared
    About the Book - CC BY 3.0
    Licensing - CC BY 4.0
    Preface - CC BY 3.0
  1: What is an Information System? - CC BY 3.0
    1: What Is an Information System? - CC BY 3.0
      1.1: Introduction - CC BY 3.0
      1.2: Identifying the Components of Information Systems - CC BY 3.0
      1.3: The Role of Information Systems - CC BY 3.0
      1.4: Can Information Systems Bring Competitive Advantage? - CC BY 3.0
      1.5: Summary - CC BY 3.0
      1.6: Study Questions - CC BY 3.0
    2: Hardware - CC BY 3.0
      2.1: Introduction - CC BY 3.0
      2.2: Tour of a Digital Device - CC BY 3.0
      2.3: Sidebar- Moore’s Law - CC BY 3.0
      2.4: Removable Media - CC BY 3.0
      2.5: Other Computing Devices - CC BY 3.0
      2.6: Summary - CC BY 3.0
      2.7: Study Questions - CC BY 3.0
    3: Software - CC BY 3.0
      3.1: Introduction to Software - CC BY 3.0
      3.2: Types of Software - CC BY 3.0
      3.3: Cloud Computing - CC BY 3.0
      3.4: Software Creation - CC BY 3.0
      3.5: Summary - CC BY 3.0
      3.6: Study Questions - CC BY 3.0
    4: Data and Databases - CC BY 3.0
      4.1: Introduction to Data and Databases - CC BY 3.0
      4.2: Examples of Data - CC BY 3.0
      4.3: Structured Query Language - CC BY 3.0
      4.4: Designing a Database - CC BY 3.0
      4.5: Sidebar- The Difference between a Database and a Spreadsheet - CC BY 3.0
      4.6: Big Data - CC BY 3.0
      4.7: Data Warehouse - CC BY 3.0
      4.8: Data Mining - CC BY 3.0
      4.9: Database Management Systems - CC BY 3.0
      4.10: Enterprise Databases - CC BY 3.0
      4.11: Knowledge Management - CC BY 3.0
      4.12: Sidebar- What is data science? - CC BY 3.0
      4.13: Summary - CC BY 3.0
      4.14: Study Questions - CC BY 3.0
    5: Networking and Communication - CC BY 3.0
      5.1: Introduction to Networking and Communication - CC BY 3.0
      5.2: A Brief History of the Internet - CC BY 3.0
      5.3: Networking Today - CC BY 3.0
      5.4: How has the Human Network Influenced you? - CC BY 3.0
      5.5: Providing Resources in a Network - CC BY 3.0
      5.6: LANs, WANs, and the Internet - CC BY 3.0
      5.7: Network Representations - CC BY 3.0
      5.8: The Internet, Intranets, and Extranets - CC BY 3.0
      5.9: Internet Connections - CC BY 3.0
      5.10: The Network as a Platform Converged Networks - CC BY 3.0
      5.11: Reliable Network - CC BY 3.0
      5.12: The Changing Network Environment Network Trends - CC BY 3.0
      5.13: Technology Trends in the Home - CC BY 3.0
      5.14: Network Security - CC BY 3.0
      5.15: Summary - CC BY 3.0
      5.16: Study Questions - CC BY 3.0
    6: Information Systems Security - CC BY 3.0
      6.1: Introduction - CC BY 3.0
      6.2: The Information Security Triad- Confidentiality, Integrity, Availability (CIA) - CC BY 3.0
      6.3: Tools for Information Security - CC BY 3.0
      6.4: Threat Impact - CC BY 3.0
      6.5: Fighters in the War Against Cybercrime- The Modern Security Operations Center - CC BY 3.0
      6.6: Security vs. Availability - CC BY 3.0
      6.7: Summary - CC BY 3.0
      6.8: Study Questions - CC BY 3.0
  2: Information Systems for Strategic Advantage - CC BY 3.0
    7: Leveraging Information Technology (IT) for Competitive Advantage - CC BY 3.0
      7.1: Introduction - CC BY 3.0
      7.2: The Productivity Paradox - CC BY 3.0
      7.3: Competitive Advantage - CC BY 3.0
      7.4: Using Information Systems for Competitive Advantage - CC BY 3.0
      7.5: Investing in IT for Competitive Advantage - CC BY 3.0
      7.6: Summary - CC BY 3.0
      7.7: Study Questions - CC BY 3.0
    8: Business Processes - CC BY 3.0
      8.1: Introduction - CC BY 3.0
      8.2: What Is a Business Process? - CC BY 3.0
      8.3: Summary - CC BY 3.0
      8.4: Study Questions - CC BY 3.0
    9: The People in Information System - CC BY 3.0
      9.1: Introduction - CC BY 3.0
      9.2: The Creators of Information Systems - CC BY 3.0
      9.3: Information-Systems Operations and Administration - CC BY 3.0
      9.4: Managing Information Systems - CC BY 3.0
      9.5: Emerging Roles - CC BY 3.0
      9.6: Career Path in Information Systems - CC BY 3.0
      9.7: Information-Systems Users – Types of Users - CC BY 3.0
      9.8: Summary - CC BY 3.0
      9.9: Study Questions - CC BY 3.0
    10: Information Systems Development - CC BY 3.0
      10.1: Introduction - CC BY 3.0
      10.2: Systems Development Life Cycle (SDLC) Model - CC BY 3.0
      10.3: Software Development - CC BY 3.0
      10.4: Implementation Methodologies - CC BY 3.0
      10.5: Summary - CC BY 3.0
      10.6: Study Questions - CC BY 3.0
      10.7: Summary - CC BY 3.0
  3: Information Systems Beyond the Organization - CC BY 3.0
    11: Information Systems Beyond the Organization - CC BY 3.0
      11.1: Introduction - CC BY 3.0
      11.2: The Global Firm - CC BY 3.0
      11.3: The Digital Divide - CC BY 3.0
      11.4: Summary - CC BY 3.0
      11.5: Study Questions - CC BY 3.0
    12: The Ethical and Legal Implications of Information System - CC BY 3.0
      12.1: Introduction - CC BY 3.0
      12.2: Intellectual Property - CC BY 3.0
      12.3: The Digital Millennium Copyright Act - CC BY 3.0
      12.4: Summary - CC BY 3.0
      12.5: Study Questions - CC BY 3.0
    13: Future Trends in Information Systems - CC BY 3.0
      13.1: Introduction - CC BY 3.0
      13.2: Collaborative - CC BY 3.0
      13.3: Internet of Things (IoT) - CC BY 3.0
      13.4: Future of Information Systems - CC BY 3.0
      13.5: Study Questions - CC BY 3.0
  Back Matter - CC BY 3.0
    Index - CC BY 3.0
    Glossary - CC BY 3.0
    Detailed Licensing - CC BY 4.0