Compilation notes for Unit 1
SRM University
COMPILATION NOTES
15IT213: IT FUNDAMENTALS
15IT213 P IT Fundamentals 2 0 0 2
PURPOSE
Any discipline of engineering, when learned through formal education programs, necessitates a
specially designed course that covers the fundamentals of the various focus areas of that
discipline. With this in mind, the course on IT Fundamentals is designed to provide students
with fundamental know-how of different topics in Information Technology, in addition to
stressing the need for interpersonal skills development.
INSTRUCTIONAL OBJECTIVES
TEXT BOOK
REFERENCES
2. “Introduction to Information Technology”, ITL Education Solutions Ltd., Pearson
Education, 2nd Edition, 2006
3. http://www.ischool.utexas.edu/~adillon/BookChapters/sociotechnical.html (User
Centeredness and Advocacy)
4. http://www.veryard.com/orgmgt/vsm.pdf (IT Systems Model)
5. www.hcibib.org/
UNIT 1
PERVASIVE THEMES IN IT
The pervasive nature of information technology means that there is a wide range of available
careers and professions at every level of professional development, suited to virtually any
interest. Information Technology is pervasive, and people in almost all working environments
will be immersed in it in one form or another. At the same time, this pervasiveness can lead to a
misplaced sense of confidence in our ability to understand and manage its uses.
USER CENTEREDNESS
What do we mean by “user”? A user is a person who will use the system for performing tasks
that are part of his/her job or leisure activities.
The approaches in User-Centered Design (UCD) vary from Participatory Design (PD) to model-
based engineering. No matter the approach, UCD is not the simple, clear-cut route to successful
systems development that it is sometimes made out to be. What do user-centered design and
participatory design mean? And what does "user" mean? Depending on which discipline you
represent, whether you are an academic or a practitioner, and whether your user population is
well defined or not, these concepts are interpreted differently.
In order to discuss the practical consequences of user-centered design, we need to define these
concepts. Based on the international standard ISO/DIS 13407 (Human Centered Design Process
for Interactive Systems), an approach to software and hardware design can be characterized by
the four basic principles given below:
1. An appropriate allocation of function between user and system,
2. Active involvement of users,
3. Iteration of design solutions, and
4. Multidisciplinary design teams.
In addition, it is important for a truly user-centered design approach to ground the design
process in observations of the users' practices in the real world. Similarly, we regard
Participatory Design as a specific mode of User-Centered Design which implies the involvement
of the users not only at the beginning and/or the end of the process, but throughout the whole
design process. In a PD approach the users actually participate in, and are in charge of, making
the design decisions.
PD suggests a bespoke product with the participants being the real users of the system being
developed. UCD can apply to either kind of product - but in one case you might be working with
the real users, while in others working with “representative” users.
People who are only circumstantially affected by the system are considered stakeholders,
e.g. managers or support personnel.
Video Documentation
Video is a very useful medium for analysis and for visualizing current and future scenarios.
Video is also a very efficient medium for showing developers how users actually use the
products that they have designed. When a developer has seen a user make the same error a
couple of times, there is no need for further convincing that the design must be changed. In order
to facilitate fruitful and informative interviews, and to learn about the hierarchies of the
workplace in advance, spending time at the workplace with a video camera can be very useful.
CASE STUDY:
Why FOCUS on The User?
All Web sites have users. A user is anyone who visits a site. Depending on the type of site, its
goals, and its audience, a user can be many different things: a reader, a shopper, an
information gatherer, a content contributor, or a host of other things.
For a Web site to be successful, there must be FOCUS on the user from the beginning to the
end. A site is nothing without its users, and every effort should be made to understand the users
and meet their goals when developing a site. If the users' goals aren't met, the site will be a
failure. It's as simple as that.
Get to Know The User
Anyone with a Web site wants it to succeed, to achieve the goals laid out for it. One of the first
steps toward achieving these goals is determining the audience for the Web site and getting to
know those who will use the site. It is nearly impossible to design and build a Web site properly
without having a clear understanding of who will be using the site and what for. There are many
ways to learn about the users of a site. These include:
Interviewing potential users or existing customers
Analysis of the competition
Surveys
Focus Groups
Statistical observation
Depending on the situation and the particularities of the project, getting to know the users can be
fairly easy or rather complicated. At the very least the effort needs to be made; even a little
understanding goes a long way. Sounds practical, doesn't it? Of course, but it's surprising how
often sites get built with no effort put into learning about the people who will be using the site.
Key difference: Data and information are interrelated. Data usually refers to raw, unprocessed data: the basic
form of data, data that hasn't been analyzed or processed in any manner. Once the data is analyzed, it is
considered information. Information is "knowledge communicated or received concerning a particular fact or
circumstance." Information is a sequence of symbols that can be interpreted as a message; it provides knowledge
or insight about a certain matter.
Data and information are interrelated. In fact, they are often mistakenly used interchangeably.
Data represents 'values of qualitative or quantitative variables, belonging to a set of items.' It
may be in the form of numbers, letters, or a set of characters, and it is often collected via
measurements. In data computing or data processing, data is represented in a structure, such as
tabular data, a data tree, a data graph, etc.
Example: Each student's test score is one piece of data. The class's average score or the school's
average score is the information that can be concluded from the given data.
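A minimal sketch (not part of the original notes) of how raw data becomes information, written in Python for illustration:

# Hypothetical example: raw data (individual test scores) is processed
# into information (the class average).
scores = [72, 85, 90, 64, 78]          # data: unprocessed values
average = sum(scores) / len(scores)    # processing step
print("Class average:", average)       # information: insight derived from the data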
ICT and computers are NOT the same thing. An ICT system is a set-up consisting of hardware,
software, data and the people who use them. It very often also includes communications
technology, such as the Internet. Computers are the hardware that is often part of an ICT
system. ICT Systems are used in a whole host of places such as offices, shops, factories, aircraft
and ships in addition to being used in activities such as communications, medicine and farming.
They are everyday and ordinary yet extraordinary in how they can add extra power to what we
do and want to do.
ICT systems have become important because by using them we are:
More productive - we can complete a greater number of tasks in the same time at reduced
cost by using computers than we could prior to their invention.
Able to deal with vast amounts of information and process it quickly.
Able to transmit and receive information rapidly.
Types of ICT system
There are three main types of ICT system:
Information systems
This type of ICT system is focused on managing data and information. Examples of these are a
sports club membership system or a supermarket stock system.
Control Systems
These ICT systems have controlling machines as their main aim. They use input, process and
output, but the output may be moving a robot arm to weld a car chassis rather than information.
Communications Systems
The output of these ICT systems is the successful transport of data from one place to another.
Input, output & system diagrams
What comes out of an ICT system is largely dependent on what you put into the system. The
acronym GIGO is a good way of thinking about this.
GIGO can be interpreted in 2 ways:
1. Good Input, Good Output
ICT systems work by taking inputs (instructions and data), processing them and producing
outputs that are stored or communicated in some way. The higher the quality and the better
thought-out the inputs, the more useful the outputs will be.
2. Garbage In, Garbage Out
ICT systems cannot function properly if the inputs are inaccurate or faulty; they will either not be
able to process the data at all, or will output data which is erroneous or useless. That's why the
term GIGO is sometimes used to stand for "Garbage In, Garbage Out".
GIGO is a useful term to remember in the exam - it can help explain many issues such as why
validation is needed and why accurate data is valuable.
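To make the link between GIGO and validation concrete, the short sketch below rejects "garbage" input before it can be processed. The field name and plausibility rule are assumptions chosen purely for illustration.

# Minimal validation sketch: reject garbage input before processing it.
def validate_age(raw):
    """Return an int age, or raise ValueError for garbage input."""
    age = int(raw)                      # raises ValueError for non-numeric input
    if not 0 <= age <= 120:             # assumed plausibility rule
        raise ValueError("age out of range")
    return age

for raw in ["34", "abc", "-5"]:
    try:
        print(raw, "->", validate_age(raw))
    except ValueError as err:
        print(raw, "-> rejected:", err)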
NETWORKING
Computer networking is the engineering discipline concerned with communication between
computer systems or devices. Networking, routers, routing protocols, and networking over the
public Internet have their specifications defined in documents called RFCs. Computer
networking is sometimes considered a sub-discipline of telecommunications, computer science,
information technology and/or computer engineering. Computer networks rely heavily upon the
theoretical and practical application of these scientific and engineering disciplines.
A computer network is any set of computers or devices connected to each other with the ability
to exchange data. Examples of networks are:
Local area network (LAN), which is usually a small network constrained to a small geographic
area.
Wide area network (WAN), which is usually a larger network that covers a large geographic area.
Wireless LANs and WANs (WLAN & WWAN), which are the wireless equivalents of the LAN
and WAN.
All networks are interconnected to allow communication, using a variety of different kinds of
media, including twisted-pair copper wire cable, coaxial cable, optical fiber, and various
wireless technologies. The devices can be separated by a few meters (e.g. via Bluetooth) or
nearly unlimited distances (e.g. via the interconnections of the Internet).
Views of networks
Users and network administrators often have different views of their networks. Often, users that
share printers and some servers form a workgroup, which usually means they are in the same
geographic location and are on the same LAN. A community of interest has less of a connotation
of being in a local area, and should be thought of as a set of arbitrarily located users who share a
set of servers, and possibly also communicate via peer-to-peer technologies.
Network administrators see networks from both physical and logical perspectives. The physical
perspective involves geographic locations, physical cabling, and the network elements (e.g.,
routers, bridges and application layer gateways) that interconnect the physical media. Logical
networks, called subnets in the TCP/IP architecture, map onto one or more physical media. For
example, a common practice in a campus of buildings is to make a set of LAN cables in each
building appear to be a common subnet, using virtual LAN (VLAN) technology.
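A small sketch of the subnet idea using Python's standard ipaddress module; the addresses and prefix are invented for illustration.

import ipaddress

subnet = ipaddress.ip_network("192.168.1.0/24")   # a logical subnet
host_a = ipaddress.ip_address("192.168.1.25")     # example addresses (assumed)
host_b = ipaddress.ip_address("192.168.2.25")

print(host_a in subnet)   # True  - same logical network
print(host_b in subnet)   # False - belongs to a different subnet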
Both users and administrators will be aware, to varying extents, of the trust and scope
characteristics of a network. Again using TCP/IP architectural terminology, an intranet is a
community of interest under private administration usually by an enterprise, and is only
accessible by authorized users (e.g. employees) (RFC 2547). Intranets do not have to be
connected to the Internet, but generally have a limited connection. An extranet is an extension of
an intranet that allows secure communications to users outside of the intranet (e.g. business
partners, customers) (RFC 3547).
Informally, the Internet is the set of users, enterprises, and content providers that are
interconnected by Internet Service Providers (ISP). From an engineering standpoint, the Internet
is the set of subnets, and aggregates of subnets, which share the registered IP address space and
exchange information about the reachability of those IP addresses using the Border Gateway
Protocol. Typically, the human-readable names of servers are translated to IP addresses,
transparently to users, via the directory function of the Domain Name System (DNS).
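The DNS directory function can be observed directly; the sketch below resolves a human-readable name to an IP address using Python's standard socket module. The host name is just an example, and the call needs a working Internet connection.

import socket

# Translate a human-readable server name into an IP address via DNS.
hostname = "www.example.com"              # example name, assumed reachable
ip_address = socket.gethostbyname(hostname)
print(hostname, "resolves to", ip_address)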
Over the Internet, there can be business-to-business (B2B), business-to-consumer (B2C) and
consumer-to-consumer (C2C) communications. Especially when money or sensitive information
is exchanged, the communications are apt to be secured by some form of communications
security mechanism. Intranets and extranets can be securely superimposed onto the Internet,
without any access by general Internet users, using secure Virtual Private Network (VPN)
technology.
HUMAN-COMPUTER INTERACTION (HCI)
Goals
A basic goal of HCI is to improve the interactions between users and computers by making
computers more usable and receptive to the user's needs. Specifically, HCI is concerned with:
methodologies and processes for designing interfaces (i.e., given a task and a class of
users, design the best possible interface within given constraints, optimizing for a desired
property such as learnability or efficiency of use)
methods for implementing interfaces (e.g. software toolkits and libraries; efficient
algorithms)
techniques for evaluating and comparing interfaces
developing new interfaces and interaction techniques
developing descriptive and predictive models and theories of interaction
A long term goal of HCI is to design systems that minimize the barrier between the human's
cognitive model of what they want to accomplish and the computer's understanding of the user's
task.
Professional practitioners in HCI are usually designers concerned with the practical application
of design methodologies to real-world problems. Their work often revolves around designing
graphical user interfaces and web interfaces.
Researchers in HCI are interested in developing new design methodologies, experimenting with
new hardware devices, prototyping new software systems, exploring new paradigms for
interaction, and developing models and theories of interaction.
Design Methodologies
A number of diverse methodologies outlining techniques for human–computer interaction design
have emerged since the rise of the field in the 1980s. Most design methodologies stem from a
model for how users, designers, and technical systems interact. Early methodologies, for
example, treated users' cognitive processes as predictable and quantifiable and encouraged
design practitioners to look to cognitive science results in areas such as memory and attention
when designing user interfaces. Modern models tend to focus on a constant feedback and
conversation between users, designers, and engineers and push for technical systems to be
wrapped around the types of experiences users want to have, rather than wrapping user
experience around a completed system.
User-centered design: user-centered design (UCD) is a modern, widely practiced design
philosophy rooted in the idea that users must take center-stage in the design of any computer
system. Users, designers and technical practitioners work together to articulate the wants, needs
and limitations of the user and create a system that addresses these elements. Often, user-
centered design projects are informed by ethnographic studies of the environments in which
users will be interacting with the system.
Principles of User Interface Design: these are seven principles that may be considered at any
time during the design of a user interface in any order, namely Tolerance, Simplicity, Visibility,
Affordance, Consistency, Structure and Feedback.
1. Use both knowledge in the world and knowledge in the head. People work better when the
knowledge they need to do a task is available externally – either explicitly or through the
constraints imposed by the environment. But experts also need to be able to internalize regular
tasks to increase their efficiency. So systems should provide the necessary knowledge within the
environment and their operation should be transparent to support the user in building an
appropriate mental model of what is going on.
2. Simplify the structure of tasks. Tasks need to be simple in order to avoid complex problem
solving and excessive memory load. There are a number of ways to simplify the structure of
tasks. One is to provide mental aids to help the user keep track of stages in a more complex task.
Another is to use technology to provide the user with more information about the task and better
feedback. A third approach is to automate the task or part of it, as long as this does not detract
from the user’s experience. The final approach to simplification is to change the nature of the
task so that it becomes something more simple. In all of this, it is important not to take control
away from the user.
3. Make things visible: bridge the gulfs of execution and evaluation. The interface should make
clear what the system can do and how this is achieved, and should enable the user to see clearly
the effect of their actions on the system.
4. Get the mappings right. User intentions should map clearly onto system controls. User actions
should map clearly onto system events. So it should be clear what does what and by how much.
Controls, sliders and dials should reflect the task – so a small movement has a small effect and a
large movement a large effect.
5. Exploit the power of constraints, both natural and artificial. Constraints are things in the world
that make it impossible to do anything but the correct action in the correct way. A simple
example is a jigsaw puzzle, where the pieces only fit together in one way. Here the physical
constraints of the design guide the user to complete the task.
6. Design for error. To err is human, so anticipate the errors the user could make and design
recovery into the system.
7. When all else fails, standardize. If there are no natural mappings then arbitrary mappings
should be standardized so that users only have to learn them once. It is this standardization
principle that enables drivers to get into a new car and drive it with very little difficulty – key
controls are standardized. Occasionally one might switch on the indicator lights instead of the
windscreen wipers, but the critical controls (accelerator, brake, clutch, steering) are always the
same.
PROGRAMMING LANGUAGES
Computer programming (often shortened to programming or coding) is the process of writing,
testing, debugging/troubleshooting, and maintaining the source code of computer programs. This
source code is written in a programming language. The code may be a modification of an
existing source or something completely new, the purpose being to create a program that exhibits
a certain desired behavior (customization). The process of writing source code requires
expertise in many different subjects, including knowledge of the application domain, specialized
algorithms, and formal logic.
Within software engineering, programming (the implementation) is regarded as one phase in a
software development process.
In some specialist applications or extreme situations a program may be written or modified
(known as patching) by directly storing the numeric values of the machine code instructions to
be executed into memory.
There is an ongoing debate on the extent to which the writing of programs is an art, a craft or an
engineering discipline.[1] Good programming is generally considered to be the measured
application of all three, with the goal of producing an efficient and maintainable software
solution (the criteria for "efficient" and "maintainable" vary considerably). The discipline differs
from many other technical professions in that programmers generally do not need to be licensed
or pass any standardized (or governmentally regulated) certification tests in order to call
themselves "programmers" or even "software engineers".
Another ongoing debate is the extent to which the programming language used in writing
programs affects the form that the final program takes. This debate is analogous to that
surrounding the Sapir-Whorf hypothesis in linguistics
Programming languages
Different programming languages support different styles of programming (called programming
paradigms). The choice of language used is subject to many considerations, such as company
policy, suitability to task, availability of third-party packages, or individual preference. Ideally,
the programming language best suited for the task at hand will be selected. Trade-offs from this
ideal involve finding enough programmers who know the language to build a team, the
availability of compilers for that language, and the efficiency with which programs written in a
given language execute.
Allen Downey, in his book "How To Think Like A Computer Scientist", wrote: The details look
different in different languages, but a few basic instructions appear in just about every language:
input: Get data from the keyboard, a file, or some other device.
output: Display data on the screen or send data to a file or other device.
math: Perform basic mathematical operations like addition and multiplication.
conditional execution: Check for certain conditions and execute the appropriate sequence of statements.
repetition: Perform some action repeatedly, usually with some variation.
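A minimal sketch of these five kinds of instruction in one language (Python is used here purely for illustration; any language would do, and the data values are made up):

# input: get data (here from a hard-coded list; input() or a file would also work)
numbers = [4, 7, 10, 3]

total = 0
for n in numbers:            # repetition: act on each value in turn
    total = total + n        # math: basic arithmetic

if total > 20:               # conditional execution
    message = "large total"
else:
    message = "small total"

print("total =", total, "-", message)   # output: display data on the screen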
Modern programming
Quality requirements:
Whatever the approach to software development may be, the program must finally satisfy
some fundamental properties; bearing them in mind while programming reduces the costs in
terms of time and/or money spent on debugging, further development and user support. Although
quality programming can be achieved in a number of ways, the following five properties are
among the most relevant:
Efficiency: refers to the consumption of system resources (computer processor, memory, slow
devices, networks and, to some extent, even user interaction), which must be as low as possible.
Reliability: the results of the program must be correct, which not only implies a correct code
implementation but also reduction of error propagation (e.g. resulting from data conversion) and
prevention of typical errors (overflow, underflow or zero division).
Robustness: a program must anticipate situations of data type conflict and all other
incompatibilities which would otherwise result in run-time errors and stop the program. The
focus of this aspect is the interaction with the user and the handling of error messages (see the
sketch after this list).
Portability: the program should work as-is in any software and hardware environment, or at
least without significant reprogramming.
Readability: the purpose of the main program and of each subroutine must be clearly defined,
with appropriate comments and self-explanatory choices of symbolic names (constants, variables,
function names, classes, methods, ...).
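As referenced in the robustness item above, the sketch below shows, with assumed inputs, one common way to keep a program running when it meets a data type conflict or a zero division instead of stopping with a run-time error.

def safe_divide(a, b):
    """Divide a by b, handling the typical errors mentioned above."""
    try:
        return float(a) / float(b)
    except ValueError:            # data type conflict, e.g. "ten" instead of a number
        return None
    except ZeroDivisionError:     # division by zero
        return None

print(safe_divide("10", "4"))     # 2.5
print(safe_divide("10", "0"))     # None - handled, program keeps running
print(safe_divide("ten", "4"))    # None - handled, program keeps running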
Algorithmic Complexity
The academic field and engineering practice of computer programming are largely concerned
with discovering and implementing the most efficient algorithms for a given class of problem.
For this purpose, algorithms are classified into orders using so-called Big O notation, O(n),
which expresses resource use, such as execution time or memory consumption, in terms of the
size of an input. Expert programmers are familiar with a variety of well-established algorithms
and their respective complexities and use this knowledge to choose algorithms that are best
suited to the circumstances.
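As a brief illustration of choosing between algorithms by complexity, the generic sketch below contrasts linear search, which is O(n), with binary search on sorted data, which is O(log n); it is not an algorithm taken from the text.

def linear_search(items, target):          # O(n): may inspect every element
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):   # O(log n): halves the range each step
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

data = list(range(0, 1000, 2))             # sorted example data
print(linear_search(data, 512), binary_search(data, 512))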
Methodologies
The first step in every software development project should be requirements analysis, followed
by modeling, implementation, and failure elimination (debugging). There are many differing
approaches to each of these tasks. One approach popular for requirements analysis is Use Case
analysis.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-
Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both
OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural),
functional languages, and logic languages.
Debugging is most often done with IDEs like Visual Studio, NetBeans, and Eclipse. Separate
debuggers like gdb are also used.
MULTIMEDIA
Multimedia refers to content that uses a combination of different content forms. This contrasts
with media that use only rudimentary computer displays such as text-only or traditional forms of
printed or hand-produced material. Multimedia includes a combination of text, audio, still
images, animation, video, or interactivity content forms.
Multimedia may be broadly divided into linear and non-linear categories. Linear active content
progresses, often without any navigational control for the viewer, such as a cinema presentation.
Non-linear content uses interactivity to control progress, as with a video game or self-paced
computer-based training. Hypermedia is an example of non-linear content.
The various formats of technological or digital multimedia may be intended to enhance the users'
experience, for example to make it easier and faster to convey information, or, in entertainment
or art, to transcend everyday experience.
Enhanced levels of interactivity are made possible by combining multiple forms of media
content. Online multimedia is increasingly becoming object-oriented and data-driven, enabling
applications with collaborative end-user innovation and personalization on multiple forms of
content over time. Examples of these range from multiple forms of content on Web sites, like
photo galleries with both images (pictures) and titles (text) updated by users, to simulations
whose coefficients, events, illustrations, animations or videos are modifiable, allowing the multimedia
"experience" to be altered without reprogramming. In addition to seeing and hearing, Haptic
technology enables virtual objects to be felt. Emerging technology involving illusions
of taste and smell may also enhance the multimedia experience.
Usage / Application
Creative industries
Commercial uses
Entertainment and fine arts
Education
Journalism
Engineering
Industry
Medicine
Mathematical and scientific research
Document imaging
Disabilities
WEB TECHNOLOGY
Introduction
There are many Web technologies, from simple to complex, and explaining each in detail is
beyond the scope of this article. However, to help you get started with developing your own Web
sites, beyond simple WYSIWYG designing of Web pages in FrontPage, this article provides
brief definitions of the major Web technologies along with links to sites where you can find more
information, tutorials, and reference documentation.
Markup Languages
Markup is used in text and word processing documents to describe how a document should
look when displayed or printed. The Internet uses markup to define how Web pages should look
when displayed in a browser or to define the data contained within a Web document.
There are many different types of markup languages. For example, Rich Text Format (RTF)
is a markup language that word processors use. This section describes the most common markup
languages that are used on the Internet.
HTML
HTML stands for Hypertext Markup Language. HTML is the primary markup language that is
used for Web pages. HTML tells the browser what to display on a page. For example, it specifies
text, images, and other objects and can also specify the appearance of text, such as bold or italic
text.
The World Wide Web Consortium (W3C) defines the specification for HTML. The current
versions of HTML are HTML 4.01 and XHTML 1.1.
Note DHTML stands for Dynamic HTML. DHTML combines cascading style sheets (CSS)
and scripting to create animated Web pages and page elements that respond to user interaction.
CSS
CSS stands for cascading style sheets. Cascading style sheets provide the ability to change the
appearance of text (such as fonts, colors, spacing) on Web pages. Using CSS, you can also
position elements on the page, make certain elements hidden, or change the appearance of the
browser, such as changing the color of scroll bars in Microsoft Internet Explorer.
Cascading style sheets can be used similar to FrontPage Themes. For example, you can apply a
cascading style sheet across all the pages in a Web site to give the site a uniform look and feel.
Then all you need to do is to change the CSS style formatting in a single file to change the look
and feel of an entire Web site.
XML
XML stands for Extensible Markup Language. Similar to HTML, XML is a markup language
designed for the Internet. However, unlike HTML, which was designed to define formatting of
Web pages, XML was designed to describe data. You can use XML to develop custom markup
languages.
As with HTML, the W3C defines the specifications for XML. See Extensible Markup
Language on the W3C Web site.
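A small sketch of XML describing data, parsed with Python's standard xml.etree.ElementTree module; the element names (books, book, title, price) are invented for illustration.

import xml.etree.ElementTree as ET

# A custom markup fragment: the tags describe the data, not its formatting.
xml_text = """
<books>
  <book><title>IT Fundamentals</title><price>450</price></book>
  <book><title>Computer Networks</title><price>620</price></book>
</books>
"""

root = ET.fromstring(xml_text)
for book in root.findall("book"):
    print(book.find("title").text, "-", book.find("price").text)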
XSLT
XSLT is an abbreviation for XSL Transformations. XSLT uses the Extensible Stylesheet
Language (XSL), which you use to define the appearance of an XML document or change an
XML document into another kind of document—XML, HTML, or another markup language
format.
As with other Web markup languages, the W3C defines the specifications for XSL and XSLT.
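The sketch below shows an XSL transformation turning a small XML document into simple HTML. It assumes the third-party lxml package is installed (Python's standard library alone does not apply XSLT), and the document and stylesheet are invented for illustration.

from lxml import etree   # third-party package, assumed installed

xml_doc = etree.XML("<books><book><title>IT Fundamentals</title></book>"
                    "<book><title>Computer Networks</title></book></books>")

xslt_doc = etree.XML("""
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html><body>
      <xsl:for-each select="books/book">
        <p><xsl:value-of select="title"/></p>
      </xsl:for-each>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt_doc)           # compile the stylesheet
html_result = transform(xml_doc)           # apply it to the XML document
print(str(html_result))                    # the transformed HTML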
INFORMATION ASSURANCE AND SECURITY
Perhaps security has always been an issue for humans. Ancient humans must have found locations
where they could secure their families from attacks by wild animals or from unfriendly
neighbors. They built houses to keep themselves safe and secure from bad weather. Computing
technology has given the topic of security an all new meaning. Computing technology creates
new opportunities and new tools for attack, and crime has unfortunately followed. The 2006
eCrime Survey of United States companies, universities and governmental groups reported that
the average reporting business lost over $740,000 to computer security incidents in 2005.
The term “computer security” isn’t quite proper, because a computer is only a container. It is
actually the information contained within the computer that needs to be secured. By
securing a computer we are, in fact, assuring the security of the information. Therefore, another
common name for a course like this is Information Assurance.
Many of our computer security mechanisms aren’t new at all, although technology has clearly
introduced many new concepts. From the beginning, security starts with that which we are trying
to secure, namely the asset. The assets we secure on a computer might be important company
secrets that could be valuable to a competing company, personal information such as the
course grades of a university student, or even money, since banks now process most money
electronically.
The term threat refers to any potentially harmful circumstance or event. Security systems are
created for the purpose of protecting assets against threats. If our computer is threatened by
viruses, then our security system may include such things as antivirus software, network
firewalls and file protection mechanisms to guard against this threat. Sometimes computer
scientists confuse the term “threat” with “vulnerability”.
However, the difference is that a threat comes from outside the asset, while a vulnerability is a
weakness within the security system intended to protect the asset. No system is ever completely
secure, because every system has vulnerabilities. However, damage does not occur unless there
is an attack which is defined as a deliberate attempt to exploit a vulnerability. Any mechanism
that is designed to guard against a vulnerability is called a mitigation.
Some computer scientists make the distinction between security and safety in terms of intent.
Intentional attacks, such as theft and vandalism, are generally the purpose of security; while
unintentional attacks, like floods and fire, relate more to safety. Often it is difficult to separate
the concepts of security and safety, since many security systems mitigate against both kinds of
vulnerabilities.
Computer security begins with even a simple single computer. Every operating system contains
flaws that provide vulnerabilities to an attacker. Microsoft Windows operating systems are well
known for security problems, but vulnerabilities have been published for others as well. Flaws in
software extend to application software also. There are vulnerabilities in applications from email
clients to web browsers. While software flaws are a major source of vulnerabilities, perhaps the
greatest cause of security problems is simply user ignorance. Users often behave in ways that are
insecure; from using their family name as a password to revealing their login information to a
stranger, humans are often the greatest security vulnerability.
Vulnerabilities multiply when a single computer is connected to a local area network (LAN).
LANs introduce the potential for poor (shoddy) network configuration. For example, many
people install wireless hubs in their homes without establishing security protocols and
passwords. In such a situation anyone sitting within the transmission range of the wireless hub
has unauthorized access.
Even though there are vulnerabilities in every computer and every LAN, the greatest number of
vulnerabilities appear only when the computer is connected to the Internet. The Internet is
vulnerable to attackers from all over the world - some attackers who are extremely sophisticated
and some attackers, called script kiddies, who are not. In addition, there are known weaknesses
with network protocols and flaws in the countless servers on the Internet; all of these are
vulnerabilities for the computer user connected to the Internet.
It is impossible to list all computer security attacks. It is even difficult to classify attacks.
Throughout this course we will examine types of attacks such as Denial of Service (DoS),
password cracking, and social engineering (including phishing), as well as look at the causes of
viruses. But the attacks constantly change. In September 2004, there were 100 known phishing
websites; by December the number had increased to 700.
With all of these security-related problems and weaknesses, how do we mitigate the
vulnerabilities? There are five major categories of defense to be studied:
(1) Prevention - prevent an attack from ever occurring,
(2) Deterrence - deter the attacker in such a way that the risk to the attacker is not worth the
potential benefit.
(3) Deflection - sometimes it is possible to avoid attack by convincing the attacker to attack
elsewhere.
(4) Detection - sometimes attacks are unpreventable, but detecting that an attack has taken place
is often important to best survive the attack.
(5) Recovery - once an attack has been detected the computer and/or user should attempt to
recover by repairing damages.
Prevention is only a goal; it is not possible to prevent attacks on any practical computer. So we
approximate prevention by constructing security systems based upon deflection, deterrence,
detection and recovery. Think about all of the security mechanisms you know and which kind (or
kinds) of defense they provide. File encryption is a deterrence, and possibly a deflection toward
files that aren’t encrypted. File backups are for recovery. The second word in the name of IDS
(Intrusion Detection Systems) identifies their defense category.
The CIA triad (confidentiality, integrity and availability) is one of the core principles of
information security. (The members of the classic InfoSec triad - confidentiality, integrity and
availability - are interchangeably referred to in the literature as security attributes, properties,
security goals, fundamental aspects, information criteria, critical information characteristics and
basic building blocks.) There is continuous debate about extending this classic trio. Other
principles such as Accountability have sometimes been proposed for addition – it has been
pointed out that issues such as Non-Repudiation do not fit well within the three core concepts,
and as regulation of computer systems has increased (particularly amongst the Western nations)
Legality is becoming a key consideration for practical security installations.
Confidentiality
Confidentiality is necessary for maintaining the privacy of the people whose personal
information is held in the system.
Integrity
In information security, data integrity means maintaining and assuring the accuracy and
consistency of data over its entire life-cycle. [16] This means that data cannot be modified in an
unauthorized or undetected manner. This is not the same thing as referential
integrity in databases, although it can be viewed as a special case of consistency as understood in
the classic ACID model of transaction processing. Integrity is violated when a message is
actively modified in transit. Information security systems typically provide message integrity in
addition to data confidentiality.
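A minimal sketch of checking integrity with a cryptographic digest, using Python's standard hashlib module: if the data changes, the digest changes, so unauthorized or undetected modification becomes visible. The message contents are invented for illustration.

import hashlib

original = b"Transfer 100 to account 42"
digest = hashlib.sha256(original).hexdigest()            # stored or sent alongside the data

tampered = b"Transfer 900 to account 42"
print(hashlib.sha256(tampered).hexdigest() == digest)    # False - modification detected
print(hashlib.sha256(original).hexdigest() == digest)    # True  - data unchanged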
Availability
For any information system to serve its purpose, the information must be available when it is
needed. This means that the computing systems used to store and process the information, the
security controls used to protect it, and the communication channels used to access it must be
functioning correctly. High availability systems aim to remain available at all times, preventing
service disruptions due to power outages, hardware failures, and system upgrades. Ensuring
availability also involves preventing denial-of-service attacks, such as a flood of incoming
messages to the target system essentially forcing it to shut down.
Authenticity
In computing, e-Business, and information security, it is necessary to ensure that the data,
transactions, communications or documents (electronic or physical) are genuine. It is also
important for authenticity to validate that both parties involved are who they claim to be. Some
information security systems incorporate authentication features such as "digital signatures",
which give evidence that the message data is genuine and was sent by someone possessing the
proper signing key.
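Digital signatures rely on public-key cryptography, but the simpler shared-key sketch below (Python's standard hmac module, with an assumed shared secret) illustrates the same idea: evidence that a message is genuine and was produced by someone holding the proper key.

import hmac, hashlib

shared_key = b"assumed-shared-secret"          # known only to sender and receiver
message = b"order #1234: 10 units"

# Sender attaches an authentication tag computed with the shared key.
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time.
expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))      # True - message accepted as genuine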
Non-repudiation
In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also
implies that one party of a transaction cannot deny having received a transaction nor can the
other party deny having sent a transaction.
It is important to note that while technology such as cryptographic systems can assist in non-
repudiation efforts, the concept is at its core a legal concept transcending the realm of
technology. It is not, for instance, sufficient to show that the message matches a digital signature
signed with the sender's private key, and thus only the sender could have sent the message and
nobody else could have altered it in transit. The alleged sender could in return demonstrate that
the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has
been compromised. The fault for these violations may or may not lie with the sender himself, and
such assertions may or may not relieve the sender of liability, but the assertion would invalidate
the claim that the signature necessarily proves authenticity and integrity and thus prevents
repudiation.
Security classification for information
An important aspect of information security and risk management is recognizing the value of
information and defining appropriate procedures and protection requirements for the
information. Not all information is equal and so not all information requires the same degree of
protection. This requires information to be assigned a security classification.
The first step in information classification is to identify a member of senior management as the
owner of the particular information to be classified. Next, develop a classification policy. The
policy should describe the different classification labels, define the criteria for information to be
assigned a particular label, and list the required security controls for each classification.
Some factors that influence which classification information should be assigned include how
much value that information has to the organization, how old the information is and whether or
not the information has become obsolete. Laws and other regulatory requirements are also
important considerations when classifying information.
The Business Model for Information Security enables security professionals to examine security
from a systems perspective, creating an environment where security can be managed holistically,
allowing actual risks to be addressed.
The type of information security classification labels selected and used will depend on the nature
of the organization, with examples being:
In the business sector, labels such as: Public, Sensitive, Private, Confidential.
In the government sector, labels such as: Unclassified, Sensitive But
Unclassified, Restricted, Confidential, Secret, Top Secret and their non-English
equivalents.
In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green,
Amber, and Red.
All employees in the organization, as well as business partners, must be trained on the
classification schema and understand the required security controls and handling procedures for
each classification. The classification of a particular information asset that has been assigned
should be reviewed periodically to ensure the classification is still appropriate for the
information and to ensure the security controls required by the classification are in place.
Access control
Access to protected information must be restricted to people who are authorized to access the
information. The computer programs, and in many cases the computers that process the
information, must also be authorized. This requires that mechanisms be in place to control the
access to protected information. The sophistication of the access control mechanisms should be
in parity with the value of the information being protected – the more sensitive or valuable the
information the stronger the control mechanisms need to be. The foundation on which access
control mechanisms are built start with identification and authentication.
Identification is an assertion of who someone is or what something is. If a person makes the
statement "Hello, my name is John Doe" they are making a claim of who they are. However,
their claim may or may not be true. Before John Doe can be granted access to protected
information it will be necessary to verify that the person claiming to be John Doe really is John
Doe. Typically the claim is in the form of a username. By entering that username you are
claiming "I am the person the username belongs to".
Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to
make a withdrawal, he tells the bank teller he is John Doe—a claim of identity. The bank teller
asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the
license to make sure it has John Doe printed on it and compares the photograph on the license
against the person claiming to be John Doe. If the photo and name match the person, then the
teller has authenticated that John Doe is who he claimed to be. Similarly, by entering the correct
password, the user is providing evidence that they are the person the username belongs to.
There are three different types of information that can be used for authentication:
Something you know: things such as a PIN, a password, or your mother's maiden name.
Something you have: a driver's license or a magnetic swipe card.
Something you are: biometrics, including palm prints, fingerprints, voice prints and retina
(eye) scans.
Strong authentication requires providing more than one type of authentication information (two-
factor authentication). The username is the most common form of identification on computer
systems today and the password is the most common form of authentication. Usernames and
passwords have served their purpose but in our modern world they are no longer
adequate. Usernames and passwords are slowly being replaced with more sophisticated
authentication mechanisms.
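Passwords ("something you know") should never be stored in plain text; a common approach is to store a salted hash and compare hashes at login. The sketch below uses Python's standard hashlib and os modules and is a simplified illustration under assumed values, not a production scheme.

import hashlib, hmac, os

def hash_password(password, salt):
    # PBKDF2 with SHA-256; many iterations slow down guessing attacks.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)                          # random salt stored with the hash
stored_hash = hash_password("correct horse", salt)

def authenticate(attempt):
    return hmac.compare_digest(hash_password(attempt, salt), stored_hash)

print(authenticate("correct horse"))           # True  - claim of identity verified
print(authenticate("wrong guess"))             # False - authentication fails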
Authorization
After a person, program or computer has successfully been identified and authenticated then it
must be determined what informational resources they are permitted to access and what actions
they will be allowed to perform (run, view, create, delete, or change). This is
called authorization. Authorization to access information and other computing services begins
with administrative policies and procedures. The policies prescribe what information and
computing services can be accessed, by whom, and under what conditions. The access control
mechanisms are then configured to enforce these policies. Different computing systems are
equipped with different kinds of access control mechanisms—some may even offer a choice of
different access control mechanisms. The access control mechanism a system offers will be
based upon one of three approaches to access control or it may be derived from a combination of
the three approaches.
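Once a user is authenticated, authorization decides what they may do. The sketch below is a toy access-control table (the usernames, resource and actions are invented) illustrating how such a policy can be enforced in code.

# Toy access-control policy: who may perform which actions on which resource.
permissions = {
    ("alice", "grades.db"): {"view", "change"},
    ("bob",   "grades.db"): {"view"},
}

def is_authorized(user, resource, action):
    return action in permissions.get((user, resource), set())

print(is_authorized("alice", "grades.db", "change"))   # True
print(is_authorized("bob",   "grades.db", "change"))   # False - denied
print(is_authorized("carol", "grades.db", "view"))     # False - no policy entry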
To be effective, policies and other security controls must be enforceable and upheld. Effective
policies ensure that people are held accountable for their actions. All failed and successful
authentication attempts must be logged, and all access to information must leave some type of
audit trail.
Also, the need-to-know principle needs to be in effect when talking about access control. The
need-to-know principle gives a person the access rights needed to perform their job functions.
This principle is used in government when dealing with different clearances. Even though two
employees in different departments have a top-secret clearance, they must have a need to know
in order for information to be exchanged. Within the need-to-know principle, network
administrators grant the employee the least amount of privileges, to prevent employees from
accessing or doing more than what they are supposed to. Need-to-know helps to enforce the
confidentiality-integrity-availability (C-I-A) triad; it directly impacts the confidentiality area of
the triad.
Cryptography
Information security uses cryptography to transform usable information into a form that renders
it unusable by anyone other than an authorized user; this process is called encryption.
Information that has been encrypted (rendered unusable) can be transformed back into its
original usable form by an authorized user, who possesses the cryptographic key, through the
process of decryption. Cryptography is used in information security to protect information from
unauthorized or accidental disclosure while the information is in transit (either electronically or
physically) and while information is in storage.
Cryptography provides information security with other useful applications as well, including
improved authentication methods, message digests, digital signatures, non-repudiation, and
encrypted network communications. Older, less secure applications such as telnet and ftp are
slowly being replaced with more secure applications such as ssh that use encrypted network
communications. Wireless communications can be encrypted using protocols such
as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such
as ITU-T G.hn) are secured using AES for encryption and X.1035 for authentication and key
exchange. Software applications such as GnuPG or PGP can be used to encrypt data files and
email.
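A short sketch of encryption and decryption with a symmetric key. It assumes the third-party Python cryptography package is installed, and the secret message is invented; only a holder of the key can recover the original information.

from cryptography.fernet import Fernet   # third-party package, assumed installed

key = Fernet.generate_key()              # the cryptographic key an authorized user holds
cipher = Fernet(key)

secret = b"student grades: A, B+, A-"
token = cipher.encrypt(secret)           # encryption: usable -> unusable without the key
print(token)                             # ciphertext, unreadable to unauthorized users

recovered = cipher.decrypt(token)        # decryption by the authorized key holder
print(recovered == secret)               # True - original information restored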