COS101 Full Course Material
MODULE 1
Basic Computing Concepts
A computer can be described as an electronic device that accepts data as input, processes the data
based on a set of predefined instructions called a program, and produces the result of these operations
as output, called information. From this description, a computer can be referred to as an Input-
Process-Output (IPO) system, pictorially represented in Figure 1:
Figure 1: The Input-Process-Output (IPO) model (Input → Process → Output)
Abacus
The abacus was invented to replace the old methods of counting. It is an instrument known to
have been used for counting as far back as 500 B.C. in Europe, China, Japan and India and it is
still being used in some parts of China today.
The abacus qualifies as a digital instrument because it uses beads as counters to calculate in discrete
form. It is made of a board that consists of beads that slide on wires. The abacus is divided by a
wooden bar or rod into two zones. Perpendicular to this rod are wires arranged in parallel, each
one representing a positional value. Each zone is divided into two levels - upper and lower. Two
beads are arranged on each wire in the upper zone, while five beads are arranged on each wire in
the lower zone.
The abacus can be used to perform arithmetic operations such as addition and subtraction
efficiently.
Note that the abacus is really just a representation of the human fingers: the 5 lower rings
on each rod represent the 5 fingers and the 2 upper rings represent the 2 hands.
Blaise Pascal
Pascal was born at Clermont, France in 1623 and died in Paris in 1662. Pascal was a Scientist as
well as a Philosopher. He started to build his mechanical machine in 1640 to aid his father in
calculating taxes. He completed the first model of his machine in 1642 and it was presented to the
public in 1645.
The machine, called the Pascal machine or Pascaline, was a small box with eight dials that resembled
the analog telephone dial. Each dial is linked to a rotating wheel that displays the digits in a
register window. Pascal's main innovative idea was the linkage provided for the wheels, such that
an arrangement was made for a carry from one wheel to its left neighbour when the wheel passed
from a display of 9 to 0. The machine could add and subtract directly.
Figure: A Pascaline opened up so that the gears and cylinders which rotated to display the
numerical result can be observed
Figure 1.4: Jacquard's Loom showing the threads and the punched cards
Figure 1.5: By selecting particular cards for Jacquard's loom you defined the woven pattern
[photo © 2002 IEEE]
Charles Babbage
Charles Babbage was born in Totnes, Devonshire on December 26, 1792 and died in London on
October 18, 1871. He was educated at Cambridge University where he studied Mathematics. In
1828, he was appointed Lucasian Professor at Cambridge. Charles Babbage started work on his
Analytical Engine when he was a student. His objective was to build a program-controlled,
mechanical, digital computer incorporating a complete arithmetic unit, store, punched card input
and a printing mechanism.
The program was to be provided by the set of Jacquard cards. However, Babbage was unable to
complete the implementation of his machine because the technology available at his time was not
adequate to see him through. Moreover, he did not plan to use electricity in his design. It is
noteworthy that Babbage’s design features are very close to the design of the modern computer.
Babbage is also credited with ideas that shaped the modern postal system, the cowcatcher on trains,
and an early ophthalmoscope, an instrument still used today to examine the eye.
Figure1.6: A small section of the type of mechanism employed in Babbage's Difference
Engine [photo © 2002 IEEE]
Herman Hollerith
Hollerith was born at Buffalo, New York in 1860 and died at Washington in 1929. Hollerith
founded a company which merged with two other companies to form the Computing Tabulating
Recording Company, which in 1924 changed its name to International Business Machines (IBM)
Corporation, a leading company in the manufacturing and sale of computers today.
Hollerith, while working at the Census Department in the United States of America, became
convinced that a machine based on cards could assist in the purely mechanical work of tabulating
population and similar statistics. He left the Census Department in 1882 to start work on the
Punched Card Machine, which came to be known as the Hollerith desk.
This machine system consisted of a punch, a tabulator with a large number of clock-like counters
and a simple electrically activated sorting box for classifying data in accordance with values
punched on the card. The principle he used was simply to represent logical and numerical data in
the form of holes on cards.
His system was installed in 1889 in the United States Army to handle Army medical statistics. He
was asked to install his machines to process the 1890 Census in the USA. This he did, and in two
years the processing of the census data, which used to take ten years, was completed. Hollerith's
machines were also used in other countries such as Austria, Canada, Italy, Norway and Russia.
Figure 1.7: Hollerith desks [photo courtesy The Computer Museum]
J. V. Atanasoff
One of the earliest attempts to build an all-electronic digital computer occurred in 1937 by J. V.
Atanasoff, a professor of physics and mathematics at Iowa State University. By 1941 he and his
graduate student, Clifford Berry, had succeeded in building a machine that could solve 29
simultaneous equations with 29 unknowns. This machine was the first to store data as a charge on
a capacitor, which is how today's computers store information in their main memory. It was also
the first to employ binary arithmetic. However, the machine was not programmable, it lacked a
conditional branch, its design was appropriate for only one type of mathematical problem, and it
was not further pursued after World War II.
Figure 1.8: The Atanasoff-Berry Computer [photo © 2002 IEEE]
Howard Aiken
Howard Aiken of Harvard was the principal designer of the Mark I. The Harvard Mark I computer
was built as a partnership between Harvard and IBM in 1944. This was the first programmable
digital computer made in the U.S. But it was not a purely electronic computer. Instead the Mark I
was constructed out of switches, relays, rotating shafts, and clutches. The machine weighed 5 tons,
incorporated 500 miles of wire, was 8 feet tall and 51 feet long, and had a 50ft rotating shaft
running its length, turned by a 5 horsepower electric motor. The Mark I ran non-stop for 15 years.
Figure 1.10: One of the four paper tape readers on the Harvard Mark I
Grace Hopper
Grace Hopper was one of the primary programmers for the Mark I. Hopper found the first
computer "bug": a dead moth that had gotten into the Mark I and whose wings were blocking the
reading of the holes in the paper tape. The word "bug" had been used to describe a defect since at
least 1889 but Hopper is credited with coining the word "debugging" to describe the work to
eliminate program faults.
Bill Gates
William (Bill) H. Gates was born on October 28, 1955 in Seattle, Washington, USA. Bill Gates
decided to drop out of college so he could concentrate all his time on writing programs for Intel
8080-based personal computers (PCs). This early experience put Bill Gates in the right place at
the right time once IBM decided to standardize on the Intel microprocessors for their line of PCs
in 1981. Gates founded a company called Microsoft Corporation (together with Paul G. Allen),
which released its first operating system, MS-DOS 1.0, in August 1981, and the last of that line,
MS-DOS 6.22, in April 1994. Bill Gates announced Microsoft Windows on November 10, 1983.
Philip Emeagwali
Philip Emeagwali was born in 1954, in the Eastern part of Nigeria. He had to leave school because
his parents couldn't pay the fees and he lived in a refugee camp during the civil war. He won a
scholarship to university. He later migrated to the United States of America. In 1989, he invented
the formula that used 65,000 separate computer processors to perform 3.1 billion calculations per
second.
Philip Emeagwali, a supercomputer and Internet pioneer is regarded as one of the fathers of the
internet because he invented an international network which is similar to, but predates that of the
Internet. He also discovered mathematical equations that enable the petroleum industry to recover
more oil. Emeagwali won the 1989 Gordon Bell Prize, computation's Nobel Prize, for inventing a
formula that lets computers perform the fastest computations, a work that led to the reinvention of
supercomputers.
MODULE 2
Basic Component of Computer
The components of a computer refer to the physical and non-physical parts of the system. A computer
system can be divided into hardware, software and humanware.
The Hardware
The hardware refers to the physical components and the devices which make up the visible
computer. It can be divided into two: Central Processing Unit (CPU) and the Peripherals. The
CPU is responsible for all processing that the computer does while the peripherals are responsible
for feeding data into the system and for collecting information from the system.
The CPU consists of Main storage, ALU and Control Unit. The main storage is used for storing
data to be processed as well as the instructions for processing them. The ALU is the unit for
arithmetic and logical operations. The control unit ensures the smooth operation of the other
hardware units. It fetches an instruction, decodes (interprets) it and issues commands to
the units responsible for executing the instructions.
The peripherals are in three categories: Input devices, Output devices and auxiliary storage
devices.
Input devices are used for supplying data and instructions to the computer. Examples are the
keyboard, mouse, joystick, microphone, scanner, webcam, etc.
Output devices are used for obtaining results (information) from the computer. Examples are
printers, the Video Display Unit (VDU), loudspeakers, projectors, etc.
Auxiliary Storage Devices are used for storing information on a long-term basis. Examples are
hard disk, flash disk, magnetic tape, memory card, solid state drive (SSD), etc.
Figure 2.1: Hardware part of a computer system, showing the Central Processing Unit (Main
Memory, Arithmetic and Logic Unit, Control Unit) and the peripherals (Input Unit, Output Unit
and Auxiliary Storage Unit)
The Software
System software refers to programs, commonly written by computer manufacturers, which have a direct
effect on the control, performance and ease of use of the computer system. Examples are
Operating Systems, Language Translators, Utilities and Service Programs, and Database
Management Systems (DBMS).
Operating System is a collection of program modules which form an interface between the
computer hardware and the computer user. Its main function is to ensure a judicious and efficient
utilization of all the system resources (such as the processor, memory, peripherals and other system
data) as well as to provide programming convenience for the user. Examples are Unix, Linux,
Windows, macOS (Macintosh), and the Disk Operating System (DOS).
Language Translators are programs which translate programs written in non-machine languages
such as FORTRAN, C, Pascal, and BASIC into their machine language equivalents. Examples of
language translators are assemblers, interpreters, compilers and preprocessors.
Assemblers: This is a program that converts a program written in assembly language (low-
level language) into its machine language equivalent.
Interpreter: This is a program that converts a program written in a high level language (HLL)
into its machine language (ML) equivalent one line at a time. Languages like BASIC are
normally interpreted.
Compiler: This is a program that translates program written in high level language (HLL)
into machine language (ML) equivalent all at once. Compilers are normally called by the
names of the high-level language they translate. For instance, COBOL compiler,
FORTRAN compiler etc.
Preprocessor: This is a language translator that takes a program in one HLL and produces an
equivalent program in another HLL. For example, there are many preprocessors that map
structured versions of FORTRAN into conventional FORTRAN.
Database Management System (DBMS) is a complex program that is used for creation, storage,
retrieving, securing and maintenance of a database. A database can be described as an organized
collection of related data relevant to the operations of a particular organization. The data are stored
usually in a central location and can be accessed by different authorized users.
Linker is a program that takes several object files and libraries as input and produces one
executable object file.
Loader is a program that places an executable object file into memory and makes it ready for
execution. Both the linker and the loader are provided by the operating system.
Merge Utility: This is used to combine two or more already ordered files together to
produce a single file.
Copy Utility: This is used mainly for transferring data from one storage medium to another,
for example from disk to tape.
Debugging Facilities: These are used for detecting and correcting errors in programs.
Text Editors: These provide facilities for the creation and amendment of programs from the
terminal.
Application Software
These are programs written by a user to solve individual application problems. They do not have
any effect on the efficiency of the computer system. An example is a program to calculate the
grade point average of all the 100L students. Application software can be divided into two, namely:
Application Packages and User's Application Programs. When application programs are written
in a very generalized and standardized nature such that they can be adopted by a number of
different organizations or persons to solve similar problems, they are called Application Packages.
There are a number of micro-computer based packages. These include word processors (such as
Ms-word, WordPerfect, WordStar); Database packages (such as Oracle, Ms-access, Sybase, SQL
Server, and Informix); Spreadsheet packages (such as Lotus 1-2-3 and Ms-Excel); Graphic
packages (such as CorelDraw, Fireworks, Photoshop etc), and Statistical packages (such as SPSS).
User's Application Program is a program written by the user to solve a specific problem which is
not generalized in nature. Examples include writing a program to find the roots of a quadratic
equation, a payroll application program, and a program to compute students' results; a short
sketch of such a program is shown below.
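As a simple illustration, here is a minimal sketch of such a user's application program, written in Python (the function name and sample coefficients are chosen here purely for illustration):

```python
import cmath  # cmath.sqrt also handles a negative discriminant (complex roots)

def quadratic_roots(a, b, c):
    """Return the two roots of ax^2 + bx + c = 0, where a must be non-zero."""
    if a == 0:
        raise ValueError("'a' must be non-zero for a quadratic equation")
    discriminant = b * b - 4 * a * c
    root = cmath.sqrt(discriminant)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

# Example: x^2 + 2x - 3 = 0 factors as (x - 1)(x + 3), so the roots are 1 and -3
print(quadratic_roots(1, 2, -3))
```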
The Human-ware
Although the computer system is automatic in the sense that, once initiated, it can continue to
work on its own under the control of a stored sequence of instructions (a program) without human
intervention, it is not automatic in the sense that it has to be initiated by a human being, and the
instructions specifying the operations to be carried out on the input data are given by human
beings. Therefore, apart from the hardware and software, the third element that can be identified
in a computer system is the human-ware. This term refers to the people that work with the
computer system. The components of the human-ware in a computer system include the system
analyst, the programmer, the data entry operator, end users, etc.
DPM
Data Processing Manager (DPM) supervises every other person that works with him or her and is
answerable directly to the management of the organization in which he or she works.
A Programmer is the person that writes the sequence of instructions to be carried out by the
computer in order to accomplish a well-defined task. The instructions are given in computer
programming languages.
A data entry operator is the person that enters data into the system via a keyboard or any input
device attached to a terminal. There are other ancillary staff that perform other functions, such as
controlling access to the computer room and controlling the flow of jobs in and out of the computer
room.
An end-user is one for whom a computerized system is being implemented. The end-user interacts
with the computerized system in the day-to-day operations of the organization. For example, a
cashier in a bank who receives cash from customers or pays money to customers interacts with
the banking information system.
MODULE 3
Boolean Algebra, Fundamentals of Truth tables and Precedence
Algebra
Algebra means reunion of broken parts. It is the study of mathematical symbols and the rules for
manipulating those symbols. Algebra can be regarded as elementary, abstract or modern depending
on the level or field of study.
Algebra has computations similar to arithmetic, but with letters standing for numbers, which allows
proofs of properties that are true regardless of the numbers involved. For example, in the quadratic
equation ax² + bx + c = 0, the coefficients a, b, c can be any numbers (a ≠ 0). Algebra is used in many
studies, for example, elementary algebra, linear algebra, Boolean algebra, and so on.
1.1 Polynomials
A polynomial involves operations of addition, subtraction, multiplication, and non-negative
integer exponents of terms consisting of variables and coefficients. For example, x² + 2x − 3 is a
polynomial in the single variable x. Polynomials can be rewritten using the commutative, associative
and distributive laws.
An important part of algebra is the factorization of polynomials, that is, expressing a given polynomial
as a product of other polynomials that cannot be factored any further. Another important part of
algebra is the computation of polynomial greatest common divisors. For example, x² + 2x − 3 can be
factored as (x − 1)(x + 3); a worked sketch is shown below.
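As a small worked sketch (assuming the third-party SymPy library is installed; the names below are chosen for illustration), the polynomial above can be factored and a polynomial greatest common divisor computed as follows:

```python
from sympy import symbols, factor, expand, gcd

x = symbols('x')

p = x**2 + 2*x - 3
print(factor(p))                 # (x - 1)*(x + 3)
print(expand((x - 1)*(x + 3)))   # x**2 + 2*x - 3, recovering the original polynomial

q = x**2 - 1                     # factors as (x - 1)*(x + 1)
print(gcd(p, q))                 # x - 1, the greatest common divisor of p and q
```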
Boolean algebra can be used to describe logic circuits; it is also used to reduce the complexity of digital
circuits by simplifying the logic circuits. Boolean algebra is also referred to as Boolean logic. It
was developed by George Boole in the 1840s and is greatly used in computations and
in computer operations. The name Boolean comes from the name of the author.
Boolean algebra is a logical calculus of truth values. It somewhat resembles the arithmetic algebra
of real numbers but with a difference in its operators and operations. Boolean operations involve
the set {0, 1}, that is, the numbers 0 and 1. Zero [0] represents “false” or “off” and One [1]
represents “true” or “on”.
1 – True, on
0 – False, off
This has proved useful in programming computer devices, in the selection of actions based on
conditions set.
1. AND
The AND operator is represented by a period or dot between the two operands, e.g. X.Y.
The Boolean multiplication operator is known as the AND function in the logic domain;
the function evaluates to 1 only if both the independent variables have the value 1.
2. OR
The OR operator is represented by an addition sign. Here the operation + is different from
that defined in normal arithmetic algebra of numbers. E.g. X+Y
The + operator is known as the OR function in the logic domain; the function has a value
of 1 if either or both of the independent variables has the value of 1.
3. NOT
The NOT operator is represented by X' or X̅.
This operator negates whatever value is contained in or assigned to X. It changes its value
to the opposite value. For instance, if the value contained in X is 1, X' gives 0 as the result
and if the value stored in X is 0, X' gives 1 as the result. In some texts, NOT may also be
represented as X̅. A short Python sketch of these three operators is given below.
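A minimal Python sketch (the upper-case function names are chosen here just to mirror the operator names) showing how the three operators behave on the values 0 and 1:

```python
def AND(x, y):
    """Boolean multiplication X.Y: 1 only if both inputs are 1."""
    return x & y

def OR(x, y):
    """Boolean addition X+Y: 1 if either or both inputs are 1."""
    return x | y

def NOT(x):
    """Negation X': flips 0 to 1 and 1 to 0."""
    return 1 - x

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 0), OR(0, 1))    # 0 1
print(NOT(0), NOT(1))        # 1 0
```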
To better understand these operations, a truth table is presented for the result of each of the
operations on any two variables.
Truth Tables
A truth table is a mathematical table used in logic to compute the functional values of
logical expressions for each of their functional arguments. It is used specifically in connection with
Boolean algebra and Boolean functions. Truth tables can be used to tell whether a propositional
expression is logically valid. In a truth table, the output is completely dependent on the input. It is
composed of a column for each input variable and another column for the corresponding output.
Each row of the truth table therefore contains one possible configuration of the input variables
(for instance, X=true, Y=false) and the result of the operation for those values.
Truth tables are a means of representing the results of a logic function using a table. They are
constructed by defining all possible combinations of the inputs to a function in the Boolean algebra,
and then calculating the output for each combination in turn. The basic truth table shows the
various operators and the result of their operations involving two variables only. More complex
truth tables can be built from the knowledge of the foundational truth tables. The number of input
combinations in a Boolean function is determined by the number of variables in the function and
is computed using the formula 2ⁿ, where n is the number of variables.
For example, a function with two variables has 2² = 4 input combinations, one with three
variables has 2³ = 8 input combinations, and so on; a short sketch that enumerates these
combinations is shown below.
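The 2ⁿ input combinations can be generated mechanically; a minimal Python sketch (the function name is chosen for illustration) is shown below:

```python
from itertools import product

def input_combinations(n):
    """Return the 2**n rows of input values for an n-variable truth table."""
    return list(product([0, 1], repeat=n))

print(input_combinations(2))        # 4 rows: (0, 0), (0, 1), (1, 0), (1, 1)
print(len(input_combinations(3)))   # 8 rows for a three-variable function
```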
AND
X Y X.Y
0 0 0
0 1 0
1 0 0
1 1 1
OR
X Y X+Y
0 0 0
0 1 1
1 0 1
1 1 1
NOT
X X'
0 1
1 0
The NOT operation is a unary operator; it accepts only one input.
Example:
• Draw a truth table for A+BC.
A B C BC A+BC
0 0 0 0 0
0 0 1 0 0
0 1 0 0 0
0 1 1 1 1
1 0 0 0 1
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
• Draw a truth table for AB+BC.
A B C AB BC AB+BC
0 0 0 0 0 0
0 0 1 0 0 0
0 1 0 0 0 0
0 1 1 0 1 1
1 0 0 0 0 0
1 0 1 0 0 0
1 1 0 1 0 1
1 1 1 1 1 1
J = f(A,B,C) = AB̅C̅ + A̅B̅C̅
A B C A̅ B̅ C̅ AB̅C̅ A̅B̅C̅ J
0 0 0 1 1 1 0 1 1
0 0 1 1 1 0 0 0 0
0 1 0 1 0 1 0 0 0
0 1 1 1 0 0 0 0 0
1 0 0 0 1 1 1 0 1
1 0 1 0 1 0 0 0 0
1 1 0 0 0 1 0 0 0
1 1 1 0 0 0 0 0 0
Basic Logic Gates
Logic gates can be viewed as black boxes with binary inputs (independent variables) and binary outputs
(dependent variables). Logic also refers to both the study of modes of reasoning and the use of valid
reasoning; in the latter sense, logic is used in most intellectual activities. Logic in computer science
has emerged as a discipline of its own and has been extensively applied in fields such as Artificial
Intelligence, which in turn provide a rich source of problems in formal and informal logic.
Boolean logic is considered a fundamental part of computer hardware, particularly of the system's
arithmetic and logic structures, and relates to the operators AND, NOT and OR.
Logic gates
A logic gate is an elementary building block of a digital circuit. Complex electronic circuits are
built using the basic logic gates. At any given moment, every terminal of the logic gate is in one
of the two binary conditions low (0) or high (1), represented by different voltage levels.
Other gates (NAND, NOR, XOR and XNOR) are based on the three basic gates.
The AND gate
The AND gate behaves in the manner of the logical "and": the output is "true" when both inputs are
"true." Otherwise, the output is "false."
The OR gate
The OR gate gets its name from the fact that it behaves after the way of the logical "or." The output
is "true" if either or both of the inputs are "true." If both inputs are "false," then the output is "false."
The NOT gate
A logical inverter, sometimes called a NOT gate to differentiate it from other types of electronic
inverter devices, has only one input. It reverses the logic state of its input.
As previously considered, the actions of the AND, OR and NOT gates correspond to the AND, OR
and NOT operators.
More complex functions can be constructed from the three basic gates, with the aid of De Morgan's laws.
The NAND gate
The NAND gate operates as an AND gate followed by a NOT gate. It acts in the manner of the
logical operation "and" followed by negation. The output is "false" if both inputs are "true."
Otherwise, the output is "true". It finds the AND of two values and then finds the opposite of the
resulting value.
The NOR gate
The NOR gate is a combination of an OR gate followed by an inverter. Its output is "true" if both
inputs are "false." Otherwise, the output is "false". It finds the OR of two values and then finds the
complement of the resulting value.
The XOR gate
The XOR (exclusive-OR) gate acts in the same way as the logical "either/or." The output is "true"
if either, but not both, of the inputs are "true." The output is "false" if both inputs are "false" or if
both inputs are "true." Another way of looking at this circuit is to observe that the output is 1 if the
inputs are different, but 0 if the inputs are the same.
Z= (𝐴 ⊕ 𝐵)
XOR gate
A B Z
0 0 0
0 1 1
1 0 1
1 1 0
The XNOR gate
The XNOR (exclusive-NOR) gate is a combination of an XOR gate followed by an inverter. Its
output is "true" if the inputs are the same, and "false" if the inputs are different. It performs the
operation of an XOR gate and then inverts the resulting value.
Z = (A ⊕ B)'
XNOR gate
A B Z
0 0 1
0 1 0
1 0 0
1 1 1
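To tie these gates back to the operators above, here is a minimal Python sketch (the function names are chosen for illustration) that builds NAND, NOR, XOR and XNOR out of the three basic gates and prints the truth table of any two-input gate:

```python
from itertools import product

# The three basic gates
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a

# Derived gates, built only from the basic ones
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XOR(a, b):  return OR(AND(a, NOT(b)), AND(NOT(a), b))  # 1 when the inputs differ
def XNOR(a, b): return NOT(XOR(a, b))                      # 1 when the inputs are the same

def print_truth_table(gate):
    """Print A, B and the gate output Z for all 2**2 = 4 input combinations."""
    print(gate.__name__, ": A B Z")
    for a, b in product([0, 1], repeat=2):
        print("       ", a, b, gate(a, b))

for g in (NAND, NOR, XOR, XNOR):
    print_truth_table(g)
```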
1.0 Self-Assessment
a. List at least five (5) types of gates
b. Mention the logic function associated with each gate
c. Draw the truth table associated with each gate
2.0 Tutor Marked Assessment
a. Draw the physical representation of the AND, OR, NOT and XNOR logic gates.
b. Draw the logic circuit for the following function: i. Z = ABC
With the combinations of several logic gates, complex operations can be performed by electronic
devices. Arrays (arrangement) of logic gates are found in digital integrated circuits (ICs).
As IC technology advances, the required physical volume for each individual logic gate decreases
and digital devices of the same or smaller size become capable of performing much more
complicated operations at increased speed.
Combination of gates
A B C A̅ A̅BC
0 0 0 1 0
0 0 1 1 0
0 1 0 1 0
0 1 1 1 1
1 0 0 0 0
1 0 1 0 0
1 1 0 0 0
1 1 1 0 0
A goes into the NOT gate and is inverted; after this, it goes into the AND gate along with the
variables B and C. The final output at the output terminal of the AND gate is A̅BC. More complex
circuitry can be developed using the symbolic representation in this same manner.
Q = (A + B)' + BC
A B C D E Q
0 0 0 1 0 1
0 0 1 1 0 1
0 1 0 0 0 0
0 1 1 0 1 1
1 0 0 0 0 0
1 0 1 0 0 0
1 1 0 0 0 0
1 1 1 0 1 1
Basically there are three variables A, B and C; do not be confused by the presence of D and E,
which are intermediate outputs. Variables A and B go into a NOR gate (output D), while B goes
into an AND gate along with variable C (output E). The same B defined earlier is reused so as not
to waste resources or have repetition. The outputs of the NOR and AND gates then serve as inputs
to the OR gate, whose output is Q; a short sketch that evaluates this circuit is given below.
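A minimal Python sketch (reusing the simple gate functions defined earlier in these notes) that evaluates Q = (A + B)' + BC for every input combination and reproduces the table above:

```python
from itertools import product

def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOR(a, b): return NOT(OR(a, b))

print("A B C D E Q")
for a, b, c in product([0, 1], repeat=3):
    d = NOR(a, b)   # D: output of the NOR gate fed by A and B
    e = AND(b, c)   # E: output of the AND gate fed by B and C
    q = OR(d, e)    # Q = (A + B)' + BC
    print(a, b, c, d, e, q)
```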
Q=A̅B +B
Q= (ABC)(DE)
Self-Assessment
i. Combine gates together to draw 4 logic circuits, combining at least 3 gates together in each.
ii. Draw the logic gate and associated logic circuits for the following functions
A X = A̅BC̅D + FG
B Z= ABC + CDE + ACF
i)
ii)
DIVERSE AND GROWING COMPUTERS AND DIGITAL
APPLICATIONS.
In this era, computers help students better understand basic concepts. Actively including everyone
and having diversity in computing brings an advantage, owing to the immense knowledge and
perspectives of many different groups.
Diversity in tech can lead to better thinking, greater innovation, productivity and profit. It can
also foster creativity, as people with different backgrounds contribute to building technology.
Two issues that cause the lack of diversity are:
Diversity is critical to success in any field. Diversity of perspectives and experiences results in
robust thinking and approaches that can help yield solutions and products that meet the needs of
a diverse customer base, which often improves the value of a product across the spectrum of
users. Diversity is often linked to positive outcomes, such as greater innovation, productivity,
and profit.
A recent industry report identified a "massive economic opportunity" associated with increasing
the ethnic and gender diversity of the Nigerian technology workforce, with the potential to add
470 to 570 billion to the Nigerian tech sector and to support the creation of jobs and the
improvement of products. The report identifies underrepresentation of African workers in the
tech industry compared to the Nigerian workforce as a whole, accounting for 7 and 8 percent of
tech workers, respectively, compared to 12 and 16 percent of all Nigerian workers.
The lack of diversity in computer science and in the information technology sector of the
economy, especially among women and underrepresented minorities, is a well-recognized
challenge. These representation rates are even smaller than those reported for the tech industry as
a whole when considering diversity in computing occupations among Bachelor's degree holders
from all Nigerian institutions.
(2) Software: these include Cloud Computing (e.g. Infrastructure as a Service (IaaS),
Platform as a Service (PaaS), Software as a Service (SaaS)), Edge Computing, Artificial
Intelligence (AI) and Machine Learning (ML), Blockchain and Distributed Ledger
Technology, and Virtual and Augmented Reality (VR/AR).
(3) Networking: 5G and 6G Networks, Wi-Fi 6 and future wireless standards, Network
Function Virtualization (NFV), Software-Defined Networking (SDN), and the Internet of
Bodies (IoB)
The applications/uses can be seen in different areas, such as Healthcare Technology (e.g.,
telemedicine, medical imaging), Financial Technology (FinTech), Cybersecurity and Threat
Intelligence, Environmental Monitoring and Sustainability, and Smart Cities and Infrastructure.
Some technological trends include: Digital Transformation, Remote Work and Virtual
Teams, Gamification and Esports, Extended Reality (XR) and the Metaverse, and Human-Computer
Interaction (HCI). Some of the emerging technologies are: Quantum Computing,
Nanotechnology, Biotechnology and Bioinformatics, Synthetic Intelligence and Swarm
Intelligence.
The social impact of diverse and growing computers can be seen in the following: Digital
Divide and Inclusion, AI Ethics and Bias, Cybersecurity Awareness, Online Safety and
Harassment, and Technology Addiction.
New innovations representing growth in computing include:
1. 3D Printing and Additive Manufacturing
2. Autonomous Drones
3. Self-Driving Cars
4. Brain-Computer Interfaces (BCIs)
5. Smart Homes and Buildings
These aforementioned new innovations showcase the diversity and growth in the computer
industry.
Digital Applications: Digital applications refer to software programs or platforms that run on
digital devices such as computers, smartphones, tablets, or other electronic devices. They can be
web-based, mobile-based, or desktop-based, and are designed to perform specific tasks or
provide various services.
This encompasses various types of software, platforms, and tools that serve diverse purposes.
In the area of productivity, digital applications are used in Microsoft Office, Google Workspace
(Docs, Sheets, Slides), Apple productivity apps (Pages, Numbers, Keynote), Trello, Asana,
Evernote, Dropbox and Slack.
Digital applications are also important in communication in the following ways: email clients
(Gmail, Outlook), messaging apps (WhatsApp, Facebook Messenger, Telegram), video
conferencing tools (Zoom, Skype, Google Meet), social media platforms (Facebook, Twitter,
LinkedIn, Instagram), and collaboration tools (Microsoft Teams, Slack).
For entertainment purposes, digital applications can function in areas such as:
1. Streaming services (Netflix, YouTube, Hulu)
2. Music platforms (Spotify, Apple Music)
3. Gaming consoles (PlayStation, Xbox)
4. Mobile games (Pokémon Go, Candy Crush)
5. Virtual reality (VR) and augmented reality (AR) experiences
In Education:
1. Learning management systems (LMS) like Canvas, Blackboard
2. Online course platforms (Coursera, Udemy)
3. Educational apps (Duolingo, Khan Academy)
4. Digital textbooks and e-books
5. Virtual classrooms and webinars
In Health and Wellness:
2. Health apps (MyFitnessPal, Headspace)
3. Telemedicine platforms (Teladoc, American Well)
4. Medical records management (Epic Systems)
5. Mental health support apps (Calm, BetterHelp)
In the field of art and creative work, digital applications feature in the following:
1. Graphic design software (Adobe Creative Cloud)
2. Video editing tools (Adobe Premiere, Final Cut Pro)
3. Photo editing apps (Lightroom, Photoshop)
4. Music production software (Ableton, Logic Pro)
5. Writing and publishing platforms (Medium, WordPress)
For the safety and security of computer users, digital applications can also be useful as:
1. Antivirus software (Norton, McAfee)
2. Password managers (LastPass, 1Password)
3. VPNs (ExpressVPN, NordVPN)
4. Firewall software
5. Identity theft protection (LifeLock)
Computer system utilities are vital to the smooth processing and execution of programs and
operations; digital applications are therefore also useful as file management tools (Google
Drive, Dropbox), system cleaning and optimization software (CCleaner), backup and recovery
tools (Acronis, Backblaze), network monitoring software, and weather and news apps.
Understanding the challenges and concerns associated with information processing is crucial to
ensuring that its benefits are equitably distributed and its negative consequences mitigated.
Positive Impacts:
1. Increased efficiency and productivity
2. Improved accessibility and connectivity
3. Enhanced collaboration and knowledge sharing
4. Economic growth and job creation
5. Better decision-making and problem-solving
Negative Impacts:
1. Information overload and noise
2. Data privacy and security concerns
3. Social isolation and decreased human interaction
4. Dependence on technology
5. Job displacement and automation
The social impacts include:
4. Social change: Information processing facilitates social movements, advocacy, and activism.
5. Employment: The information processing industry creates jobs and drives economic growth.
6. Privacy and security: Information processing raises concerns about data privacy and security.
Some key industries where the use of information processing cannot be overemphasized
include:
1. Technology And Software
2. Finance And Banking
3. Healthcare And Biotechnology
4. Education And Research
5. Government And Public Sector
6. Media And Entertainment
7. E-Commerce And Retail
8. Manufacturing And Logistics
9. Energy And Utilities
10. Non-Profit And Social Impact Organizations
The emerging trends in the area of information processing include:
1. Artificial Intelligence (AI)
2. Cloud Computing
3. Internet of Things (IoT)
4. Blockchain and Distributed Ledger Technology
5. Quantum Computing
6. Extended Reality (XR)
7. 5G Networks
8. Cybersecurity and Threat Intelligence
9. Data Analytics and Visualization
10. Autonomous Systems
The information processing sector continues to evolve, driving innovation and transformation
across various industries and aspects of society.
There are several types of information processing, based on the following criteria:
(A) Cognitive Information Processing:
1. Perception: Interpreting sensory information.
2. Attention: Focusing on relevant information.
3. Memory: Encoding, storing, and retrieving information.
4. Learning: Acquiring new knowledge and skills.
5. Language Processing: Understanding and generating language.
6. Problem-Solving: Identifying and resolving problems.
7. Decision-Making: Selecting from available options.
The important theories and models governing modern information processing include:
1. Information Processing Theory (IPT)
2. Cognitive Load Theory (CLT)
3. Working Memory Model (WMM)
4. Attention Restoration Theory (ART)
5. Global Workspace Theory (GWT)
An Information Processing System (IPS) consists of several components that work together to
process and manage information; these components interact to form a comprehensive
Information Processing System (IPS). These are the key components:
Other Components:
1. Database Management System (DBMS): Managing and Storing data.
2. Knowledge Base: Storing and retrieving knowledge.
3. Decision Support System (DSS): Supporting decision-making process.
4. Artificial Intelligence (AI) and Machine Learning (ML) components.
The Internet: Its Application and Impact on the World Today
Introduction
The internet’s fascinating history began in the 1960s, when the United States Department of
Defense launched the Advanced Research Projects Agency Network (ARPANET), connecting
four computers to facilitate communication between government and academic researchers.
This pioneering project laid the groundwork for the modern internet. In the 1970s, the Internet
Protocol (IP) was developed, enabling different networks to communicate and paving the way
for the internet’s expansion. The 1980s marked a significant milestone in the internet's
evolution, with the introduction of the Domain Name System (DNS), which simplified website
addresses, and Internet Relay Chat (IRC), which enabled real-time online communication. The
World Wide Web (WWW), invented by Tim Berners-Lee in 1989, revolutionized the internet
by making content accessible to a broader audience.
During the 1990s, there was rapid growth in internet development, as dial-up internet brought
connectivity to homes, email transformed communication, and search engines like Yahoo! and
Google simplified information retrieval. E-commerce emerged, allowing online shopping to
become a staple of modern life. In the 2000s, social media platforms like Facebook, Twitter,
and Instagram redefined social interactions, while mobile internet and smartphones made
access ubiquitous. Cloud computing enabled remote data storage and processing, and big data
analytics provided valuable insights from vast amounts of data.
Today, the internet’s impact on the world is profound. On the positive side, it has bridged
geographical divides, democratized knowledge, driven economic growth, and transformed
education and healthcare. However, it also poses significant challenges, including
cybersecurity threats, social isolation, misinformation, and environmental concerns. The next
subsection will explore the application and impact of the internet in detail.
Revolution of Communication
The internet has transformed various industries, including finance, media, healthcare,
education, and governance. The internet has revolutionised modern life, seamlessly integrating
into various aspects of daily routines. Its impact is profound, transforming the way we
communicate, work, learn, and entertain ourselves. To begin with, communication has been
redefined by the internet. Social media platforms like Facebook, Twitter, Instagram, and
LinkedIn have connected people worldwide, fostering global relationships and communities.
Email has become an essential tool for personal and professional communication, while video
conferencing services like Zoom, Skype, and Google Meet enable virtual meetings and remote
collaborations. Messaging apps like WhatsApp, WeChat, and Telegram facilitate instant
communication, bridging geographical divides.
Educational Advancements
In another development, the internet has transformed education, making learning more
accessible and convenient. Online courses and degree programs are available through platforms
like Coursera, Udemy, and edX. Virtual classrooms enable remote learning, while digital
resources like e-books, academic journals, and educational websites provide invaluable
information. Online tutoring services offer personalized learning experiences, catering to
individual needs.
Entertainment has also been transformed by the internet. Streaming services like Netflix,
YouTube, Amazon Prime, and Hulu provide endless options for movies, TV shows, and music.
Online gaming has become a vibrant industry, with multiplayer games, esports, and virtual
reality experiences captivating audiences. Music and podcast streaming services like Spotify,
Apple Music, and podcasts offer diverse content, while online communities and forums
connect people with shared interests.
The domain of health and social care has equally benefited from the internet. Telemedicine
enables virtual consultations and remote healthcare services, expanding access to medical care.
Online health information resources provide valuable insights for medical research and health
education. Electronic health records and secure data storage ensure confidential and efficient
healthcare management. Wearable devices and mobile apps track vital signs, promoting health
monitoring and wellness.
Fintech
The finance sector has been transformed by the internet. Online banking enables mobile
banking, digital payments, and financial transactions. Cryptocurrency, including Bitcoin and
Ethereum, has emerged as a digital alternative to traditional currency. Mobile wallets like
Apple Pay, Google Pay, and PayPal facilitate secure transactions. Investment platforms offer
online trading, brokerage services, and financial analysis.
Transportation and logistics companies are being transformed by the Internet. Ride-hailing
services like Uber and Lyft have revolutionized urban mobility. Food delivery platforms like
GrubHub and UberEats have made ordering and delivery seamless. Navigation services like
Google Maps and Waze provide real-time traffic updates and directions. Supply chain
management has been optimized with digital tracking, logistics, and inventory management.
Even in developing countries such as Nigeria, businesses and consumers are taking advantage of
internet enablers to achieve real-time logistics services.
In many ways, government and public services have also been impacted by the internet. E-
government portals provide citizens with easy access to services, licenses, and permits. Digital
democracy initiatives enable online voting systems, public forums, and participatory
governance. Public records are now accessible online, promoting transparency and
accountability. Emergency services have been enhanced with online reporting, emergency
response systems, and disaster management.
Broadly speaking, the internet has significantly impacted the world of work and business.
Remote work arrangements have become increasingly popular, with telecommuting and virtual
offices enabling flexible work environments. E-commerce has thrived, with online shopping,
digital payments, and entrepreneurship on the rise. Digital marketing has emerged as a crucial
aspect of business strategy, with online advertising, social media marketing, and SEO driving
business growth. Cloud computing has enabled remote data storage, processing, and
collaboration.
Conclusion
The internet’s evolution has been a remarkable journey, transforming modern society and
redefining the way we communicate, access information, and interact with the world. As the
internet evolves, its potential to shape the future of humanity remains limitless. The internet’s
potential for positive impact remains immense, and its advancements continue to shape the
world. Its applications continue to expand, driving innovation, progress, and global
connectivity.
Introduction
Instead of being a single subject, computing is a collection of disciplines with diverse areas of
interest. Computing is any goal-oriented activity that makes use of, benefits from, or creates
computers. Therefore, computing comprises the design and development of computer hardware
and software components for a range of applications, such as processing, organizing, and
managing different types of information; conducting computer-intensive scientific research;
creating and utilizing communications and entertainment media; finding and compiling
information relevant to a particular objective, among other things.
There is an endless variety of applications for computers and the field of computing has
undergone tremendous growth and diversification in recent years, transforming the way we
live, work, and interact. Computing discipline encompasses a broad range of specialties, each
with its unique focus, methodologies, and applications. Understanding these areas is crucial for
navigating the complex landscape of modern technology. This discussion aims to provide an
overview of the various areas of computing discipline, explaining each in easy-to-understand
language. By exploring these fields, we can gain insight into the exciting developments and
innovations shaping our digital world.
The computing discipline is made up of different domains that are interconnected and
constantly evolving. The various areas/programmes of computing are responsible for
driving innovation and technological advancements. They are identified and discussed below:
Computer Networks
Computer Networks involve connecting devices to share resources and communicate. This
includes:
Software Engineering
Web Development
Artificial Intelligence focuses on creating intelligent machines that can think, learn, and act
like humans. AI involves:
Cybersecurity
Cybersecurity protects computer systems and data from unauthorized access. This includes:
Data Science
Database Systems
Information Systems
Operating Systems
o Memory Management: Allocating system memory.
o File Systems: Organizing stored data.
Conclusion
From the foregoing, it is clear that the computing discipline encompasses a rich tapestry of
specialized fields, each driving innovation and progress in its unique way. From Artificial
Intelligence to Web Development, these areas have transformed industries, revolutionized
communication, and improved lives. As technology continues to evolve, understanding these
computing disciplines is essential for harnessing their potential and addressing the challenges
of the digital age. By recognizing the interconnectedness of these fields, we can foster
collaboration, inspire new breakthroughs, and shape a brighter future for all. As we move
forward, the possibilities offered by computing discipline will continue to expand, empowering
us to create, innovate, and thrive in an increasingly digital world.
THE JOB SPECIALIZATIONS FOR COMPUTING SCIENCE PROFESSIONALS
1 Software engineer: is a professional who designs, develops, tests and maintains software
applications.
3 A user experience researcher: is a data expert who analyzes members of target audiences to
understand what they look for when using a digital program.
4 A video game designer: is a professional responsible for creating visual elements for video
games for mobile devices, computers and gaming systems. Using programming languages and
graphics design, the video game designer builds characters and settings that coincide with the
game's storylines, and they test the game for functionality, easy navigation and visual appeal. The
professional works closely with animators and programmers to build the game, and they strategize
ways to advertise it to encourage consumers to purchase it.
6 A database administrator: is a professional who oversees activities in the software databases that
a company uses to store and organize information such as user login credentials, client interactions
and survey results. To maintain the confidentiality of the records, the database administrator ensures
the structures are working effectively, and they install security procedures to identify threats,
remove viruses and restore lost data. The administrator may also install updates on the databases
to boost their performance and expand their capability.
10 Systems engineer: is an industry expert who creates a process for conceptualizing, developing
and implementing a system, such as a new software application or piece of computer hardware.
To maximize the efficiency of the process, the systems engineer compiles a list of necessary resources,
collaborating with professionals and establishing parameters to evaluate the success of the project.
They also prioritize the safety and security of their products and lend technical expertise to assist
other technology specialists on their team.
Computer technology has come a long way in the past few decades. The world of computing is
ever-changing and growing at an exponential rate; it would be impossible to predict all the
advances in the field that are sure to occur in upcoming years. However, there are certain key areas
where experts anticipate significant progress. One example is Artificial Intelligence (AI). AI has
already begun to make its mark in various sectors, from healthcare to finance, but this is just the
beginning. Experts believe that AI will soon become ubiquitous across industries as well as in
everyday life.
Another trend expected to continue into the near future is cloud computing. It is a form of
distributed computing where resources are shared among many users who access them through the
internet. This type of technology has enabled businesses and individuals to store data online and
make use of powerful analysis tools without having to purchase hardware and software.
Cloud computing has enabled businesses and individuals alike to access data faster than ever
before while reducing their need for physical storage space. Going forward, cloud computing is
likely to become even more popular and prevalent due to technological advancements like 5G
networks providing increased speed and reliability.
Over the coming ten years, improvements in personal computer technology are anticipated to
concentrate on better user experience, boosted connectivity and improved performance. The
widespread use of artificial intelligence and machine learning, the creation of even faster and more
effective processors, and the incorporation of virtual and augmented reality technologies into
personal computing are all predictions for the future of the personal computer.
Machine learning and deep learning are two of the most powerful emerging technologies in
computer technology. These methods allow computers to learn from data, identify patterns and
make predictions without being explicitly programmed. This has applications across many
industries, including healthcare, finance, retail, education and more.
The potential for machine learning and deep learning is immense. With their ability to analyze
vast amounts of data quickly and accurately, they could revolutionize the way we interact with
machines. For example, they can be used to build smarter robots that understand instructions better,
or medical AI systems that accurately diagnose diseases. As these technologies become more
advanced over time, their impact on our lives will only increase.
In addition to this potential growth in capabilities, machine learning and deep learning also offer
a cost-efficient solution for businesses looking to stay ahead of the competition. By freeing up
resources previously spent on manual tasks and automating them via ML/DL algorithms,
companies can save money while still achieving highly accurate results. It is clear why so many
tech giants are investing heavily in the development of these two promising technologies; the
possibilities seem endless.
Additionally, it is anticipated that new technologies like 5G and the Internet of Things (IoT) will
have a big impact on personal computers.
The Internet of Things (IoT) is a quickly growing technology that links physical objects to the
internet. It involves connecting everyday items such as light bulbs, washing machines and security
systems so that they can interact with each other in real time or on an automated cycle.
This means we will be able to control our home appliances remotely from anywhere in the world
without having to be physically present. This could save us time, energy and money by allowing
us to manage devices efficiently even when we are not at home. IoT also has potential applications
for healthcare, smart cities and connected vehicles.
Personal computers are going to become more ingrained in our daily lives, acting as hubs for
managing and controlling various devices and services as smart devices and connected homes
become more commonplace.
Quantum computers have the potential to greatly improve computing even at this early stage of
development. In addition, they will be able to solve complex problems that traditional computers
currently cannot solve.