COS101 Full Course Material

The document provides an overview of basic computing concepts, describing a computer as an Input-Process-Output (IPO) system that transforms raw data into meaningful information through various operations. It also outlines the historical development of computers, highlighting key figures and inventions from the abacus to modern computing, including contributions from notable individuals like Charles Babbage and Grace Hopper. Additionally, it explains the components of a computer system, categorizing them into hardware, software, and humanware, with a focus on the roles of the CPU, peripherals, and different types of software.


MODULE 1
Basic Computing Concepts
A computer can be described as an electronic device that accepts data as input, processes the data
based on a set of predefined instructions, called a program, and produces the results of these
operations as output, called information. From this description, a computer can be referred to as an
Input-Process-Output (IPO) system, pictorially represented in Figure 1.1:
INPUT → PROCESS → OUTPUT

Figure 1.1: IPO Representation of a computer System


Data are raw facts, such as a score in an examination or the name of a student, for example 55 or
Malik respectively. There are three types of data: numeric, alphabetic, and alphanumeric.
Numeric data consist of the digits 0 – 9 (such as 31), while alphabetic data consist of any of the
English language alphabets in upper and lower case (e.g. Toyin). Alphanumeric data can consist
of numbers, alphabets or special characters, as in a vehicle plate number (e.g. AE 731 LRN).
Information: data as described above carry no meaning; however, when they are transformed into
a more meaningful and useful form, the result is called information. The transformation process
involves a series of operations performed by the computer on the raw data fed into the system.
The operations can be arithmetic (such as addition, subtraction, multiplication, and division),
logical comparison, or character manipulation (as in text processing).
Logical comparison means testing whether one data item is greater than, equal to, or less than
another item; based on the outcome of the comparison, a specified action can be taken. The
output of the processing can be in the form of reports, which can be displayed or printed.
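As a minimal illustration of the IPO model, consider the hedged Python sketch below; the pass mark of 50 and the sample data are assumed for illustration and are not part of the course text. It accepts raw data as input, processes them with a logical comparison, and outputs a small report:

    # A minimal sketch of the Input-Process-Output (IPO) model.
    # The pass mark of 50 and the sample data are assumed example values.

    def process_score(name: str, score: int) -> str:
        """Transform raw data (a name and a score) into information (a report)."""
        passed = score >= 50                      # logical comparison
        status = "PASS" if passed else "FAIL"     # action based on the outcome
        return f"Student {name} scored {score}: {status}"  # character manipulation

    # INPUT -> PROCESS -> OUTPUT
    report = process_score("Malik", 55)  # input: the raw data 'Malik' and 55
    print(report)                        # output: Student Malik scored 55: PASS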

The History of the Computer


In the early days of man, fingers and toes were used for counting. Later on, sticks and pebbles
were used. Permanent records of the results of counting were kept by putting marks on the ground,
on walls and so on, using charcoal, chalk, and plant juice.
The historical development of computing traced here focuses on the digital computer, from the
abacus to the modern electronic computer. Some of the people whose contributions to the
development of the computer have been widely acknowledged are discussed below:

Abacus
The abacus was invented to replace the old methods of counting. It is an instrument known to
have been used for counting as far back as 500 B.C. in Europe, China, Japan and India and it is
still being used in some parts of China today.
The abacus qualifies as a digital instrument because it uses beads as counters to calculate in discrete
form. It is made of a board that consists of beads that slide on wires. The abacus is divided by a
wooden bar or rod into two zones. Perpendicular to this rod are wires arranged in parallel, each
one representing a positional value. Each zone is divided into two levels, upper and lower. Two
beads are arranged on each wire in the upper zone, while five beads are arranged on each wire in
the lower zone.

The abacus can be used to perform arithmetic operations such as addition and subtraction
efficiently.

Figure 1.2: Modern abacus.

Note that the abacus is really just a representation of the human fingers: the 5 lower rings
on each rod represent the 5 fingers and the 2 upper rings represent the 2 hands.
Blaise Pascal
Pascal was born at Clermont, France in 1623 and died in Paris in 1662. Pascal was a Scientist as
well as a Philosopher. He started to build his mechanical machine in 1640 to aid his father in
calculating taxes. He completed the first model of his machine in 1642 and it was presented to the
public in 1645.
The machine, called the Pascal machine or Pascaline, was a small box with eight dials that resembled
analog telephone dials. Each dial was linked to a rotating wheel that displayed the digits in a
register window. Pascal's main innovative idea was the linkage provided for the wheels, such that
a carry was passed from one wheel to its left neighbour when the wheel moved from a display of
9 to 0. The machine could add and subtract directly.

Figure 1.3: Pascal's Pascaline [photo © 2002 IEEE]

A Pascaline opened up so you can observe the gears and cylinders which rotated to display
the numerical result

Joseph Marie Jacquard


In 1801 the Frenchman Joseph Marie Jacquard invented a power loom that could base its weave
(and hence the design on the fabric) upon a pattern automatically read from punched wooden cards,
held together in a long row by rope. Descendants of these punched cards have been in use ever
since.

Figure 1.4: Jacquard's Loom showing the threads and the punched cards

Figure 1.5: By selecting particular cards for Jacquard's loom, you defined the woven pattern
[photo © 2002 IEEE]
Charles Babbage
Charles Babbage was born in Totnes, Devonshire on December 26, 1792 and died in London on
October 18, 1871. He was educated at Cambridge University where he studied Mathematics. In
1828, he was appointed Lucasian Professor at Cambridge. Charles Babbage started work on his
analytic engine when he was a student. His objective was to build a program-controlled,
mechanical, digital computer incorporating a complete arithmetic unit, store, punched card input
and a printing mechanism.
The program was to be provided by a set of Jacquard cards. However, Babbage was unable to
complete the implementation of his machine because the technology available in his time was not
adequate to see him through. Moreover, he did not plan to use electricity in his design. It is
noteworthy that Babbage's design features are very close to the design of the modern computer.
Babbage also invented the modern postal system, cowcatchers on trains, and the ophthalmoscope,
which is still used today to examine the eye.
Figure 1.6: A small section of the type of mechanism employed in Babbage's Difference
Engine [photo © 2002 IEEE]

Augusta Ada Byron


Ada Byron was the daughter of the famous poet Lord Byron and a friend of Charles Babbage (Ada
later became the Countess of Lovelace by marriage). Though she was only 19, she was
fascinated by Babbage's ideas, and through letters and meetings with Babbage she learned enough
about the design of the Analytic Engine to begin fashioning programs for the still unbuilt machine.
While Babbage refused to publish his knowledge for another 30 years, Ada wrote a series of
"Notes" wherein she detailed sequences of instructions she had prepared for the Analytic Engine.
The Analytic Engine remained unbuilt, but Ada earned her spot in history as the first computer
programmer. Ada invented the subroutine and was the first to recognize the importance of looping.

Herman Hollerith
Hollerith was born in Buffalo, New York in 1860 and died in Washington in 1929. Hollerith
founded a company which merged with two other companies to form the Computing Tabulating
Recording Company, which in 1924 changed its name to International Business Machines (IBM)
Corporation, a leading company in the manufacturing and sale of computers today.
While working at the Census Department in the United States of America, Hollerith became
convinced that a machine based on cards could assist in the purely mechanical work of tabulating
population and similar statistics. He left the Census in 1882 to start work on the Punch Card
Machine, installations of which are also called Hollerith desks.
This machine system consisted of a punch, a tabulator with a large number of clock-like counters
and a simple electrically activated sorting box for classifying data in accordance with values
punched on the card. The principle he used was simply to represent logical and numerical data in
the form of holes on cards.

His system was installed in 1889 in the United States Army to handle Army medical statistics. He
was asked to install his machines to process the 1890 Census in the USA. This he did, and in two
years the processing of the census data, which previously took ten years, was completed.
Hollerith's machines were used in other countries such as Austria, Canada, Italy, Norway and Russia.
Figure 1.7: Hollerith desks [photo courtesy The Computer Museum]

John Von Neumann


Von Neumann was born on December 28, 1903 in Budapest, Hungary and died in Washington D.C.
on February 8, 1957. He was a great mathematician with significant contributions to the theory
of games and strategy, set theory and the design of high-speed computing machines. In 1933, he
was appointed one of the first six professors of the School of Mathematics in the Institute for
Advanced Study at Princeton, USA, a position he retained until his death.
Neumann, together with some other people, presented a paper titled "Preliminary Discussion of the
Logical Design of an Electronic Computing Instrument", popularly known as the Von Neumann
machine. This paper contains the revolutionary ideas on which present-day computers are based.
The machine has storage, control, arithmetic and input/output units. It was to be a general-purpose,
electronic computing machine, and it introduced the concept of the stored program. This concept
implied that the operations of the computer were to be controlled by a program stored in the
memory of the computer, consisting of codes that intermixed data with instructions.
As a result, computations could proceed at electronic speed and the same set of operations or
instructions could be performed repeatedly. The paper also introduced the concept of the program
counter, a high-speed register which, whenever an instruction is fetched, automatically contains
the address of the instruction to be executed next.

J. V. Atanasoff
One of the earliest attempts to build an all-electronic digital computer was made in 1937 by J. V.
Atanasoff, a professor of physics and mathematics at Iowa State University. By 1941 he and his
graduate student, Clifford Berry, had succeeded in building a machine that could solve 29
simultaneous equations with 29 unknowns. This machine was the first to store data as a charge on
a capacitor, which is how today's computers store information in their main memory. It was also
the first to employ binary arithmetic. However, the machine was not programmable; it lacked a
conditional branch, its design was appropriate for only one type of mathematical problem, and it
was not further pursued after World War II.
Figure 1.8: The Atanasoff-Berry Computer [photo © 2002 IEEE]

Howard Aiken
Howard Aiken of Harvard was the principal designer of the Mark I. The Harvard Mark I computer
was built as a partnership between Harvard and IBM in 1944. This was the first programmable
digital computer made in the U.S. But it was not a purely electronic computer. Instead the Mark I
was constructed out of switches, relays, rotating shafts, and clutches. The machine weighed 5 tons,
incorporated 500 miles of wire, was 8 feet tall and 51 feet long, and had a 50ft rotating shaft
running its length, turned by a 5 horsepower electric motor. The Mark I ran non-stop for 15 years.

Figure 1.9: The Harvard Mark I: An electro-mechanical computer

Figure 1.10: One of the four paper tape readers on the Harvard Mark I

Grace Hopper
Grace Hopper was one of the primary programmers for the Mark I. Hopper found the first
computer "bug": a dead moth that had gotten into the Mark I and whose wings were blocking the
reading of the holes in the paper tape. The word "bug" had been used to describe a defect since at
least 1889 but Hopper is credited with coining the word "debugging" to describe the work to
eliminate program faults.

Figure 1.11: The first computer bug [photo © 2002 IEEE]


In 1953 Grace Hopper invented the first high-level language, "Flow-matic". This language
eventually became COBOL, which was the language most affected by the infamous Y2K problem.
A high-level language is designed to be more understandable by humans than the binary
language understood by the computing machinery. A high-level language is worthless without a
program, known as a compiler, to translate it into the binary language of the computer, and
hence Grace Hopper also constructed the world's first compiler. Grace remained active as a Rear
Admiral in the Navy Reserve until she was 79.

Bill Gates
William (Bill) H. Gates was born on October 28, 1955 in Seattle, Washington, USA. Bill Gates
decided to drop out of college so he could concentrate all his time on writing programs for personal
computers (PCs) based on the Intel 8080. This early experience put Bill Gates in the right place at
the right time once IBM decided to standardize on Intel microprocessors for their line of PCs
in 1981. Gates founded Microsoft Corporation (together with Paul G. Allen), which
released its first operating system, MS-DOS 1.0, in August 1981 and the last of that line
(MS-DOS 6.22) in April 1994. Bill Gates announced Microsoft Windows on November 10, 1983.

Philip Emeagwali
Philip Emeagwali was born in 1954, in the eastern part of Nigeria. He had to leave school because
his parents couldn't pay the fees, and he lived in a refugee camp during the civil war. He won a
scholarship to university and later migrated to the United States of America. In 1989, he invented
the formula that used 65,000 separate computer processors to perform 3.1 billion calculations per
second.
Philip Emeagwali, a supercomputer and Internet pioneer, is regarded as one of the fathers of the
internet because he invented an international network which is similar to, but predates, the
Internet. He also discovered mathematical equations that enable the petroleum industry to recover
more oil. Emeagwali won the 1989 Gordon Bell Prize, computation's Nobel Prize, for inventing a
formula that lets computers perform the fastest computations, work that led to the reinvention of
supercomputers.
MODULE 2
Basic Components of a Computer
The components of a computer refer to the physical and non-physical parts of the system. A computer
system can be divided into hardware, software and humanware.
The Hardware
The hardware refers to the physical components and devices which make up the visible
computer. It can be divided into two: the Central Processing Unit (CPU) and the peripherals. The
CPU is responsible for all the processing that the computer does, while the peripherals are responsible
for feeding data into the system and for collecting information from the system.
The CPU consists of the main storage, the ALU and the control unit. The main storage is used for
storing data to be processed as well as the instructions for processing them. The ALU is the unit for
arithmetic and logical operations. The control unit ensures the smooth operation of the other
hardware units: it fetches an instruction, decodes (interprets) it and issues commands to the units
responsible for executing it.
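To make the control unit's fetch-decode-execute cycle concrete, here is a hedged Python sketch of a toy machine; the three-instruction set (LOAD, ADD, PRINT) is invented purely for illustration and is not part of the course material:

    # A toy illustration of the fetch-decode-execute cycle.
    # The instruction set (LOAD, ADD, PRINT) is invented for this sketch.

    program = [           # main storage holds the instructions
        ("LOAD", 5),      # put 5 into the accumulator
        ("ADD", 3),       # add 3 to the accumulator (the ALU's job)
        ("PRINT", None),  # send the accumulator to the output unit
    ]

    accumulator = 0
    program_counter = 0   # address of the next instruction to execute

    while program_counter < len(program):
        opcode, operand = program[program_counter]  # fetch
        program_counter += 1                        # now points at the next instruction
        if opcode == "LOAD":                        # decode and execute
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand                  # arithmetic performed by the ALU
        elif opcode == "PRINT":
            print(accumulator)                      # prints 8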

The peripherals are in three categories: Input devices, Output devices and auxiliary storage
devices.

Input devices are used for supplying data and instructions to the computer. Examples are the
keyboard, mouse, joystick, microphone, scanner, webcam, etc.

Output devices are used for obtaining results (information) from the computer. Examples are printers,
the Video Display Unit (VDU), loudspeakers, projectors, etc.

Auxiliary storage devices are used for storing information on a long-term basis. Examples are the
hard disk, flash disk, magnetic tape, memory card, solid-state drive (SSD), etc.

A simple model of the hardware part of a computer system is shown below:

[Diagram: the peripherals (input unit, auxiliary storage unit and output unit) exchange data with
the Central Processing Unit, which comprises the main memory, the arithmetic and logic unit,
and the control unit.]
Figure 2.1: Hardware part of a computer system

The Software


Software is the general name for programs. A program consists of a sequence of instructions
required to accomplish a well-defined task. Examples of such tasks include:
1. Finding the average score of a student
2. Computing the net pay of an employee
3. Solving a set of simultaneous linear equations
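As an illustration, the first task above can be accomplished by a very short program. A minimal Python sketch follows, in which the four subject scores are assumed sample data:

    # A minimal program for task 1: finding the average score of a student.
    # The subject scores are assumed sample data.

    scores = [55, 70, 62, 48]               # scores in four subjects
    average = sum(scores) / len(scores)     # arithmetic processing
    print(f"Average score: {average:.2f}")  # prints: Average score: 58.75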
It is the software that enables the hardware to be put into effective use. There are two main
categories of software – System software and Application software.

System Software

System software are programs, commonly written by computer manufacturers, which have a direct
effect on the control, performance and ease of use of the computer system. Examples are
operating systems, language translators, utility and service programs, and Database
Management Systems (DBMS).

An Operating System is a collection of program modules which form an interface between the
computer hardware and the computer user. Its main function is to ensure judicious and efficient
utilization of all the system resources (such as the processor, memory, peripherals and other system
data) as well as to provide programming convenience for the user. Examples are Unix, Linux,
Windows, Macintosh, and the Disk Operating System (DOS).

Language translators are programs which translate programs written in non-machine languages,
such as FORTRAN, C, Pascal, and BASIC, into their machine language equivalents. Examples of
language translators are assemblers, interpreters, compilers and preprocessors.

 Assemblers: This is a program that converts a program written in assembly language (low-level
language) into its machine language equivalent.

 Interpreter: This is a program that converts a program written in a high-level language (HLL)
into its machine language (ML) equivalent one line at a time. A language like BASIC is
normally interpreted.

 Compiler: This is a program that translates a program written in a high-level language (HLL)
into its machine language (ML) equivalent all at once. Compilers are normally called by the
names of the high-level languages they translate, for instance the COBOL compiler, the
FORTRAN compiler, etc.

 Preprocessor: This is a language translator that takes a program in one HLL and produces an
equivalent program in another HLL. For example, there are many preprocessors that map
structured versions of FORTRAN into conventional FORTRAN.
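The contrast between an interpreter (translate and execute one line at a time) and a compiler (translate the whole program first, run it later) can be sketched in a few lines of Python. The two-command toy language below (ADD and PRINT) is invented purely for illustration:

    # A toy language: each line is either "ADD <a> <b>" or "PRINT <n>".
    source = ["ADD 2 3", "PRINT 7"]

    def interpret(lines):
        """Interpreter: translate and execute one line at a time."""
        for line in lines:
            op, *args = line.split()
            if op == "ADD":
                print(int(args[0]) + int(args[1]))
            elif op == "PRINT":
                print(int(args[0]))

    def compile_to_python(lines):
        """Compiler: translate the whole program at once, for later execution."""
        body = []
        for line in lines:
            op, *args = line.split()
            if op == "ADD":
                body.append(f"print({args[0]} + {args[1]})")
            elif op == "PRINT":
                body.append(f"print({args[0]})")
        return "\n".join(body)

    interpret(source)                   # executes immediately: 5 then 7
    target = compile_to_python(source)  # translation happens all at once
    exec(target)                        # the translated program runs later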

A Database Management System (DBMS) is a complex program that is used for the creation, storage,
retrieval, securing and maintenance of a database. A database can be described as an organized
collection of related data relevant to the operations of a particular organization. The data are
usually stored in a central location and can be accessed by different authorized users.

A linker is a program that takes several object files and libraries as input and produces one
executable object file.

A loader is a program that places an executable object file into memory and makes it ready for
execution. Both the linker and the loader are provided by the operating system.

Utility and Service Programs


These are programs which provide facilities for performing common computing tasks of a routine
nature. The following are some examples of commonly used utility programs:
 Sort Utility: This is used for arranging the records of a file in a specified sequence (alphabetic,
numerical or chronological) of a particular data item within the records. The data item is
referred to as the sort key.

 Merge Utility: This is used to combine two or more already ordered files together to
produce a single file.

 Copy Utility: This is used mainly for transferring data from one storage medium to another,
for example from disk to tape.

 Debugging Facilities: These are used for detecting and correcting errors in programs.

 Text Editors: These provide facilities for the creation and amendment of programs from the
terminal.

 Benchmark Programs: These are standardized collections of programs that are used to
evaluate hardware and software. For example, a benchmark might be used to compare the
performance of two different computers on identical tasks, or to assess the comparative
performance of two operating systems.

Application Software
These are programs written by a user to solve an individual application problem. They do not have
any effect on the efficiency of the computer system. An example is a program to calculate the
grade point average of all the 100L students. Application software can be divided into two, namely
Application Packages and User's Application Programs. When application programs are written
in a very generalized and standardized form, such that they can be adopted by a number of
different organizations or persons to solve similar problems, they are called Application Packages.
There are a number of microcomputer-based packages. These include word processors (such as
MS Word, WordPerfect, WordStar); database packages (such as Oracle, MS Access, Sybase, SQL
Server, and Informix); spreadsheet packages (such as Lotus 1-2-3 and MS Excel); graphics
packages (such as CorelDraw, Fireworks, Photoshop); and statistical packages (such as SPSS).
A User's Application Program is a program written by the user to solve a specific problem which is
not generalized in nature. Examples include a program to find the roots of a quadratic
equation, a payroll application program, and a program to compute students' results.

The Human-ware
Although the computer system is automatic in the sense that, once initiated, it can continue to work
on its own without human intervention, under the control of a stored sequence of instructions
(a program), it is not automatic in the sense that it has to be initiated by a human being,
and the instructions specifying the operations to be carried out on the input data are given by human
beings. Therefore, apart from the hardware and software, the third element that can be identified
in a computer system is the human-ware. This term refers to the people that work with the
computer system. The components of the human-ware in a computer system include the system
analyst, the programmer, the data entry operator, end users, etc.

Organizational Structure of a Typical Computer Installation


The following diagram shows the organizational structure of a typical computer installation

[Diagram: the Data Processing Manager (DPM) heads two teams, System Development (system
analysts and programmers) and Operations (operators, including data entry operators, and
control clerks).]

Figure 2.2: Organizational Structure of a typical computer installation

The Data Processing Manager (DPM) supervises every other person that works with him and is
answerable directly to the management of the organization in which he works.

A System Analyst is a person that studies an existing system in operation in order to ascertain
whether or not computerization of the system is necessary and/or cost-effective. When it is
found necessary, he designs a computerized procedure and specifies the functions of the various
programs needed to implement the system.

A Programmer is the person that writes the sequence of instructions to be carried out by the
computer in order to accomplish a well-defined task. The instructions are given in computer
programming languages.
A data entry operator is the person that enters data into the system via a keyboard or any input
device attached to a terminal. There are other ancillary staff that perform other functions, such as
controlling access to the computer room and controlling the flow of jobs into and out of the
computer room.

An end-user is one for whom a computerized system is being implemented. End-users interact
with the computerized system in the day-to-day operations of the organization. For example, a
cashier in a bank who receives cash from customers or pays money to customers interacts with
the banking information system.
MODULE 3
Boolean Algebra, Fundamentals of Truth tables and Precedence
Algebra
Algebra means "reunion of broken parts". It is the study of mathematical symbols and the rules for
manipulating those symbols. Algebra can be regarded as elementary, abstract or modern depending
on the level or field of study.

Algebra has computations similar to arithmetic, but with letters standing for numbers, which allows
proofs of properties that are true regardless of the numbers involved. An example is the quadratic
equation ax² + bx + c = 0, where a, b, c can be any numbers (a ≠ 0). Algebra takes many forms,
for example elementary algebra, linear algebra, Boolean algebra, and so on.

Polynomials
A polynomial involves the operations of addition, subtraction, multiplication, and non-negative
integer exponents of terms consisting of variables and coefficients. For example, x² + 2x − 3 is a
polynomial in the single variable x. Polynomials can be rewritten using the commutative, associative
and distributive laws.
An important part of algebra is the factorization of polynomials: expressing a given polynomial
as a product of other polynomials that cannot be factored any further. Another important part of
algebra is the computation of polynomial greatest common divisors. For example, x² + 2x − 3 can
be factored as (x − 1)(x + 3).
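As a quick illustration, both factorization and polynomial greatest common divisors can be computed symbolically in Python; this is a hedged sketch using the third-party sympy library, which is not mentioned in the course text:

    # Requires the third-party sympy library: pip install sympy
    from sympy import symbols, factor, gcd

    x = symbols("x")

    p = x**2 + 2*x - 3
    print(factor(p))   # (x - 1)*(x + 3)

    q = x**2 - 1       # shares the factor (x - 1) with p
    print(gcd(p, q))   # x - 1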

Boolean Algebra

Boolean algebra is the branch of algebra in which the values of the variables are truth values,
denoted by 1 and 0 (true and false respectively).

Boolean algebra can be used to describe logic circuits; it is also used to reduce the complexity of
digital circuits by simplifying their logic. Boolean algebra is also referred to as Boolean logic. It
was developed by George Boole in the 1840s and is greatly used in computations and
in computer operations. The name Boolean comes from the name of its author.

Boolean algebra is a logical calculus of truth values. It somewhat resembles the arithmetic algebra
of real numbers, but differs in its operators and operations. Boolean operations involve
the set {0, 1}, that is, the numbers 0 and 1. Zero [0] represents "false" or "off" and one [1]
represents "true" or "on".

1 – True, on
0 – False, off
This has proved useful in programming computer devices, where actions are selected based on
set conditions.

Basic Boolean operations

1. AND
The AND operator is represented by a period or dot between the two operands, e.g. X.Y.

The Boolean multiplication operator is known as the AND function in the logic domain;
the function evaluates to 1 only if both independent variables have the value 1.

2. OR
The OR operator is represented by an addition sign, e.g. X+Y. Here the operation + is different
from that defined in the normal arithmetic algebra of numbers.
The + operator is known as the OR function in the logic domain; the function has the value 1
if either or both of the independent variables have the value 1.
3. NOT
The NOT operator is represented by X' or X̅.
This operator negates whatever value is contained in or assigned to X, changing it to the
opposite value. For instance, if the value contained in X is 1, X' gives 0 as the result,
and if the value stored in X is 0, X' gives 1 as the result.
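These three operations map directly onto a few lines of Python; the following is a minimal sketch over the set {0, 1}:

    # The three basic Boolean operations over the set {0, 1}.

    def AND(x: int, y: int) -> int:
        return x & y      # Boolean multiplication: 1 only if both are 1

    def OR(x: int, y: int) -> int:
        return x | y      # 1 if either or both are 1

    def NOT(x: int) -> int:
        return 1 - x      # negation: 1 becomes 0, 0 becomes 1

    print(AND(1, 1), OR(0, 1), NOT(1))  # prints: 1 1 0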

To better understand these operations, a truth table is presented below for the result of each
operation on two variables.

Truth Tables

A truth table is a mathematical table used in logic to compute the functional values of
logical expressions for each combination of their functional arguments. It is used specifically in
connection with Boolean algebra and Boolean functions. Truth tables can be used to tell whether a
propositional expression is logically valid. In a truth table, the output is completely dependent on
the input. It is composed of a column for each input and another column for the corresponding
output. Each row of the truth table therefore contains one possible configuration of the input
variables (for instance, X=true, Y=false) and the result of the operation for those values.

Applications of truth tables

1. Truth tables can be used in analyzing arguments.
2. They are used to reduce basic Boolean operations in computing.
3. They are used to test the validity of statements. In validating statements, the following three
steps can be followed:
a. Represent each premise (an input) with a symbol (a variable).
b. Represent the conclusion (the final result) with a symbol (a variable).
c. Draw a truth table with columns for each premise (input) and a column for the
conclusion (result).

Truth tables are a means of representing the results of a logic function using a table. They are
constructed by defining all possible combinations of the inputs to a function in the Boolean algebra,
and then calculating the output for each combination in turn. The basic truth tables show the
various operators and the results of their operations on two variables only. More complex
truth tables can be built from knowledge of these foundational truth tables. The number of input
combinations in a Boolean function is determined by the number of variables in the function and
is computed using the following formula:

Number of input combinations = 2ⁿ, where n is the number of variables.

For example, a function with two variables has 2² = 4 input combinations, while one with three
variables has 2³ = 8 input combinations, and so on.
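These tables can also be generated mechanically. The hedged Python sketch below enumerates all 2ⁿ input combinations with itertools.product and prints the truth table of the first worked example that follows, A + BC:

    # Generate a truth table for a Boolean function of n variables.
    from itertools import product

    def truth_table(func, names):
        """Print one row for each of the 2^n combinations of input values."""
        print(" ".join(names), "out")
        for values in product((0, 1), repeat=len(names)):  # 2^n rows
            print(" ".join(str(v) for v in values), " ", func(*values))

    # The worked example A + BC (OR is +, AND is juxtaposition).
    truth_table(lambda a, b, c: a | (b & c), ["A", "B", "C"])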

AND
X Y X.Y
0 0 0
0 1 0
1 0 0
1 1 1

OR
X Y X+Y
0 0 0
0 1 1
1 0 1
1 1 1

NOT
X X'
0 1
1 0
The NOT operation is a unary operator; it accepts only one input.

Example:
• Draw a truth table for A+BC.

A B C BC A+BC
0 0 0 0 0
0 0 1 0 0
0 1 0 0 0
0 1 1 1 1
1 0 0 0 1
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1

• Draw a truth table for AB+BC.

A B C AB BC AB+BC
0 0 0 0 0 0
0 0 1 0 0 0
0 1 0 0 0 0
0 1 1 0 1 1
1 0 0 0 0 0
1 0 1 0 0 0
1 1 0 1 0 1
1 1 1 1 1 1

• Draw a truth table for A(B+D).


A B D B+D A(B+D)
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 1 0
1 0 0 0 0
1 0 1 1 1
1 1 0 1 1
1 1 1 1 1

J = f(A,B,C) = AB̅C̅ + A̅B̅C̅

A B C A̅ B̅ C̅ AB̅C̅ A̅B̅C̅ J
0 0 0 1 1 1 0 1 1
0 0 1 1 1 0 0 0 0
0 1 0 1 0 1 0 0 0
0 1 1 1 0 0 0 0 0
1 0 0 0 1 1 1 0 1
1 0 1 0 1 0 0 0 0
1 1 0 0 0 1 0 0 0
1 1 1 0 0 0 0 0 0
Basic Logic Gates

A logic gate can be viewed as a black box with binary inputs (independent variables) and a binary
output (dependent variable). Logic also refers both to the study of modes of reasoning and to the
use of valid reasoning; in the latter sense, logic is used in most intellectual activities. Logic in
computer science has emerged as a discipline and has been extensively applied in the field of
Artificial Intelligence, which in turn provides a rich source of problems in formal and
informal logic.

Boolean logic is considered fundamental to computer hardware, particularly the system's
arithmetic and logic structures, through the operators AND, NOT and OR.

Logic gates

A logic gate is an elementary building block of a digital circuit. Complex electronic circuits are
built using the basic logic gates. At any given moment, every terminal of a logic gate is in one
of two binary conditions, low (0) or high (1), represented by different voltage levels.

There are 3 basic logic gates: AND, OR and NOT.

The other gates, NAND, NOR, XOR and XNOR, are derived from the 3 basic gates, as the sketch
below illustrates.
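A hedged Python sketch (redefining the AND, OR and NOT helpers from the earlier sketch so it is self-contained) shows how the derived gates can be composed from the three basic ones:

    # Deriving NAND, NOR, XOR and XNOR from the three basic gates.

    def AND(x, y): return x & y
    def OR(x, y):  return x | y
    def NOT(x):    return 1 - x

    def NAND(x, y): return NOT(AND(x, y))        # AND followed by NOT
    def NOR(x, y):  return NOT(OR(x, y))         # OR followed by NOT
    def XOR(x, y):  return OR(AND(x, NOT(y)),    # 1 if the inputs differ
                              AND(NOT(x), y))
    def XNOR(x, y): return NOT(XOR(x, y))        # 1 if the inputs are the same

    for x in (0, 1):
        for y in (0, 1):
            print(x, y, NAND(x, y), NOR(x, y), XOR(x, y), XNOR(x, y))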

The AND gate


The AND gate is so called because, if 0 is called "false" and 1 is called "true," the gate acts in the
same way as the logical "and" operator. The following illustration and table show the circuit
symbol and logic combinations for an AND gate.

The output is "true" when both inputs are "true." Otherwise, the output is "false."
The OR gate

The OR gate gets its name from the fact that it behaves like the logical "or". The output
is "true" if either or both of the inputs are "true". If both inputs are "false", then the output is "false".

The NOT gate

A logical inverter, sometimes called a NOT gate to differentiate it from other types of electronic
inverter devices, has only one input. It reverses the logic state of its input.

As previously considered, the actions of the AND, OR and NOT gates correspond to the AND, OR
and NOT operators.
More complex functions can be constructed from the three basic gates, with the help of DeMorgan's laws.

The NAND gate

The NAND gate operates as an AND gate followed by a NOT gate. It acts in the manner of the
logical operation "and" followed by negation. The output is "false" if both inputs are "true."
Otherwise, the output is "true". It finds the AND of two values and then finds the opposite of the
resulting value.

The NOR gate

The NOR gate is a combination of an OR gate followed by an inverter. Its output is "true" if both
inputs are "false." Otherwise, the output is "false". It finds the OR of two values and then finds the
complement of the resulting value.
The XOR gate

The XOR (exclusive-OR) gate acts in the same way as the logical "either/or." The output is "true"
if either, but not both, of the inputs are "true." The output is "false" if both inputs are "false" or if
both inputs are "true." Another way of looking at this circuit is to observe that the output is 1 if the
inputs are different, but 0 if the inputs are the same.

Z = A ⊕ B

XOR gate

A B Z
0 0 0
0 1 1
1 0 1
1 1 0

The XNOR gate

The XNOR (exclusive-NOR) gate is a combination of an XOR gate followed by an inverter. Its
output is "true" if the inputs are the same, and "false" if the inputs are different. It performs the
operation of an XOR gate and then inverts the resulting value.

Z = (A ⊕ B)'

XNOR gate

A B Z
0 0 1
0 1 0
1 0 0
1 1 1

Self-Assessment
a. List at least five (5) types of gates.
b. Mention the logic function associated with each gate.
c. Draw the truth table associated with each gate.
Tutor Marked Assessment
a. Draw the physical representation of the AND, OR, NOT and XNOR logic gates.

b. Draw the logic circuit and truth table for

I. Z= ABC,

II. W= (P.Q̅) (R+S̅)

Combinatorial Logic Circuits

With combinations of several logic gates, complex operations can be performed by electronic
devices. Arrays (arrangements) of logic gates are found in digital integrated circuits (ICs).

As IC technology advances, the required physical volume of each individual logic gate decreases,
and digital devices of the same or smaller size become capable of performing much more
complicated operations at increased speed.

Combination of gates

A B C A̅ A̅BC
0 0 0 1 0
0 0 1 1 0
0 1 0 1 0
0 1 1 1 1
1 0 0 0 0
1 0 1 0 0
1 1 0 0 0
1 1 1 0 0
A goes into the NOT gate and is inverted; after this, it goes into the AND gate along with the
variables B and C. The final output at the output terminal of the AND gate is A̅BC. More complex
circuitry can be developed from the symbolic representation in this same manner.

Q = (A + B)' + BC

A B C D E Q
0 0 0 1 0 1
0 0 1 1 0 1
0 1 0 0 0 0
0 1 1 0 1 1
1 0 0 0 0 0
1 0 1 0 0 0
1 1 0 0 0 0
1 1 1 0 1 1

Basically there are 3 variables, A, B and C; do not be confused by the presence of D and E, which
are intermediate outputs. Variables A and B go into a NOR gate, whose output is D, while B goes
into an AND gate along with variable C, giving E. The input B is reused from the one defined
earlier so as not to waste resources or introduce repetition. The outputs of the NOR and AND
gates (D and E) serve as inputs to the OR gate, which produces Q.
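The same circuit can be checked programmatically. A minimal sketch that computes the intermediate gate outputs D and E and the final output Q for every input combination:

    # Evaluate the circuit Q = (A + B)' + BC, printing the intermediate columns.
    from itertools import product

    print("A B C D E Q")
    for a, b, c in product((0, 1), repeat=3):
        d = 1 - (a | b)   # D: output of the NOR gate on A and B
        e = b & c         # E: output of the AND gate on B and C
        q = d | e         # Q: output of the final OR gate
        print(a, b, c, d, e, q)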

Q=A̅B +B
Q= (ABC)(DE)

Self-Assessment
i. Combine gates to draw 4 logic circuits, combining at least 3 gates in each.
ii. Draw the logic gates and associated logic circuits for the following functions:
a. X = A̅BC̅D + FG
b. Z = ABC + CDE + ACF

Tutor Marked Assessment


Write out the logic function of the gates below:

i)

ii)
DIVERSE AND GROWING COMPUTERS AND DIGITAL
APPLICATIONS.

Diverse And Growing Computers


"Diverse and Growing Computers" simply refers to the wide range of computing devices,
systems and technologies that are constantly evolving to meet various needs and applications.

Similarly, It is the increasing representation of underrepresented groups (such as women, people


of color, individuals with disabilities in the field of computer science and technology industry,
and the development of more inclusive design teams and technologies.
Thus, Data diversity refers to the range of different types of elements in a dataset. The causes
behind the growing importance of computer could be traced to their versatility, Efficiency, ability
to process vast amounts of information at incredible speeds, crucial technology, highway and
telecommunication services. Etc. computer save our valuable time in any work.
Diversity in computing is very important to drastically increase creativity, income, connection
and untimely better projects that fit all sorts of people.

In this era, students understand basic concepts better with the help of the computer. Actively
including everyone and having diversity in computing brings an advantage, due to the immense
knowledge and perspectives of many different groups.

Diversity in tech can lead to better thinking, greater innovation, productivity and profit. It
can also foster creativity, as people with different backgrounds contribute to building technology.
Two issues that cause the lack of diversity are:

1. Pipeline: the lack of early access to resources.

2. Culture: exclusivity and discrimination in the workplace.
The lack of diversity can also be attributed to limited early exposure to resources, as students who
do not already have computer skills upon entering college are at a disadvantage in computing
majors. There is also the issue of discrimination and harassment in the workplace, which
affects all underrepresented groups. For example, studies have shown that 50% of women
report experiencing sexual harassment in tech companies.
Enrollment growth also affects diversity in computing. The developments below showcase the
diversity and growth in the computing field, spanning from hardware and software advancements
to social impact and emerging technologies.

Diversity is critical to success in any field. Diversity of perspectives and experiences results in
robust thinking and approaches that can help yield solutions and products that meet the needs of
a diverse customer base, which often improves the value of a product across the spectrum of
users. Diversity is often linked to positive outcomes, such as greater innovation, productivity,
and profit.

A recent industry report identified a "massive economic opportunity" associated with increasing
the ethnic and gender diversity of the Nigerian technology workforce, with the potential to add
470 to 570 billion to the Nigerian tech sector and to support the creation of jobs and the
improvement of products. The report identifies underrepresentation of African workers in the
tech industry compared to the Nigerian workforce as a whole, accounting for 7 and 8 percent of
tech workers, respectively, compared to 12 and 16 percent of all Nigerian workers.

The lack of diversity in computer science and in the information technology sector of the
economy, especially among women and underrepresented minorities, is a well-recognized
challenge. These representation rates are even smaller than those reported for the tech industry as
a whole when considering diversity in computing occupations among bachelor's degree holders
from all Nigerian institutions.

Computing is diverse and growing as it evolves in the following aspects:


(1) Hardware: emerging architectures (e.g., quantum computing, neuromorphic computing),
artificial intelligence processors (e.g., TPUs, GPUs), Internet of Things (IoT) devices,
wearable technology (e.g., smartwatches, fitness trackers), and autonomous vehicles.

(2) Software: cloud computing (e.g., Infrastructure as a Service (IaaS), Platform as a Service
(PaaS), Software as a Service (SaaS)), edge computing, Artificial Intelligence (AI) and
Machine Learning (ML), blockchain and distributed ledger technology, and virtual and
augmented reality (VR/AR).

(3) Networking: 5G and 6G networks, Wi-Fi 6 and future wireless standards, Network
Function Virtualization (NFV), Software-Defined Networking (SDN), and the Internet of
Bodies (IoB).

Their applications and uses can be seen in different areas, such as healthcare technology (e.g.,
telemedicine, medical imaging), financial technology (FinTech), cybersecurity and threat
intelligence, environmental monitoring and sustainability, and smart cities and infrastructure.
Technological trends include digital transformation, remote work and virtual teams,
gamification and esports, Extended Reality (XR) and the metaverse, and Human-Computer
Interaction (HCI). Some of the emerging technologies are quantum computing,
nanotechnology, biotechnology and bioinformatics, and synthetic and swarm intelligence.
The social impact of diverse and growing computers can be seen in the following: the digital
divide and inclusion, AI ethics and bias, cybersecurity awareness, online safety and
harassment, and technology addiction.

The new innovations that represent growth in computing include:
1. 3D Printing and Additive Manufacturing
2. Autonomous Drones
3. Self-Driving Cars
4. Brain-Computer Interfaces (BCIs)
5. Smart Homes and Buildings
These innovations showcase the diversity and growth in the computer industry.

Digital Applications: Digital applications refer to software programs or platforms that run on
digital devices such as computers, smartphones, tablets, or other electronic devices. They can be
web-based, mobile-based, or desktop-based, and are designed to perform specific tasks or
provide various services.
Digital applications encompass various types of software, platforms, and tools that serve diverse
purposes. In the area of productivity, digital applications include Microsoft Office, Google
Workspace (Docs, Sheets, Slides), Apple productivity apps (Pages, Numbers, Keynote), Trello,
Asana, Evernote, Dropbox and Slack.
Digital applications are also important in communication, in the form of email clients (Gmail,
Outlook), messaging apps (WhatsApp, Telegram, Facebook Messenger), video conferencing tools
(Zoom, Skype, Google Meet), social media platforms (Facebook, Twitter, LinkedIn, Instagram)
and collaboration tools (Microsoft Teams, Slack).
For entertainment purposes, digital applications function in areas such as:
1. Streaming services (Netflix, YouTube, Hulu)
2. Music platforms (Spotify, Apple Music)
3. Gaming consoles (PlayStation, Xbox)
4. Mobile games (Pokémon Go, Candy Crush)
5. Virtual reality (VR) and augmented reality (AR) experiences

In Education:
1. Learning management systems (LMS) like Canvas, Blackboard
2. Online course platforms (Coursera, Udemy)
3. Educational apps (Duolingo, Khan Academy)
4. Digital textbooks and e-books
5. Virtual classrooms and webinars

In Health and Wellness:


1. Fitness trackers (Fitbit, Apple Watch)

2. Health apps (MyFitnessPal, Headspace)
3. Telemedicine platforms (Teladoc, American Well)
4. Medical records management (Epic Systems)
5. Mental health support apps (Calm, BetterHelp)

In Banking and Finance:


1. Mobile banking apps (Bank of America, Chase)
2. Digital payment platforms (PayPal, Venmo)
3. Investment apps (Robinhood, Fidelity)
4. Accounting software (QuickBooks, Xero)
5. Cryptocurrency exchanges (Coinbase, Binance)

In the field of art and creative work, digital applications feature in the following:
1. Graphic design software (Adobe Creative Cloud)
2. Video editing tools (Adobe Premiere, Final Cut Pro)
3. Photo editing apps (Lightroom, Photoshop)
4. Music production software (Ableton, Logic Pro)
5. Writing and publishing platforms (Medium, WordPress)

For the safety and security of computer users, digital applications are also useful in the form of:
1. Antivirus software (Norton, McAfee)
2. Password managers (LastPass, 1Password)
3. VPNs (ExpressVPN, NordVPN)
4. Firewall software
5. Identity theft protection (LifeLock)

Computer system utilities are vital for the smooth processing and execution of programs and
operations. Digital applications are thus useful as file management tools (Google Drive,
Dropbox), system cleaning and optimization software (CCleaner), backup and recovery tools
(Acronis, Backblaze), network monitoring software, and weather and news apps.

INFORMATION PROCESSING AND ITS ROLES IN SOCIETY


What is Information Processing?
Information processing refers to the collection, storage, retrieval, manipulation, and
dissemination of information using various technologies, such as computers, software, and
communication networks.
Information processing plays a vital role in modern society and its impact is multifaceted.
It has transformed modern society, and its impact will continue to evolve. Addressing the
challenges and concerns associated with information processing is crucial to ensuring that its
benefits are equitably distributed and its negative consequences mitigated.

Roles of Information Processing.


1. Decision-making: Accurate information processing enables informed decisions in various
sectors, including business, healthcare, and governance.
2. Communication: Effective information processing facilitates communication, collaboration,
and knowledge sharing.
3. Education: Information processing supports learning, skill development, and access to
knowledge.
4. Innovation: Processing information drives research, development, and innovation.
5. Economic growth: Efficient information processing contributes to economic growth,
competitiveness, and job creation.
6. Healthcare: Accurate information processing improves healthcare outcomes, disease diagnosis,
and treatment.
7. Governance: Information processing enables transparent governance, public engagement, and
policy-making.
8. Research and Development: Information processing facilitates scientific research, data
analysis, and innovation.

Some of the important positive, negative and social impacts of information processing include
the following:

Positive Impacts:
1. Increased efficiency and productivity
2. Improved accessibility and connectivity
3. Enhanced collaboration and knowledge sharing
4. Economic growth and job creation
5. Better decision-making and problem-solving

Negative Impacts:
1. Information overload and noise
2. Data privacy and security concerns
3. Social isolation and decreased human interaction
4. Dependence on technology
5. Job displacement and automation

The Social impacts are:

1. Accessibility: Information processing makes information accessible to a wider audience.
2. Connectivity: Global connectivity fosters international collaboration and cultural exchange.
3. Empowerment: Access to information empowers individuals, promoting autonomy and
self-determination.
4. Social change: Information processing facilitates social movements, advocacy, and activism.
5. Employment: The information processing industry creates jobs and drives economic growth.
6. Privacy and security: Information processing raises concerns about data privacy and security.

The following are information processing professionals:


1. Data Analysts
2. Software Developers
3. Information Architects
4. Data Scientists
5. IT Managers
6. Cybersecurity Experts
7. Network Administrators
8. Database Managers
9. Artificial Intelligence/Machine Learning Engineers
10. Information Systems Managers

Some key industries where the use of information processing cannot be overemphasized
include:
1. Technology And Software
2. Finance And Banking
3. Healthcare And Biotechnology
4. Education And Research
5. Government And Public Sector
6. Media And Entertainment
7. E-Commerce And Retail
8. Manufacturing And Logistics
9. Energy And Utilities
10. Non-Profit And Social Impact Organizations

The emerging trends in the area of information processing include:
1. Artificial Intelligence (AI)
2. Cloud Computing
3. Internet of Things (IoT)
4. Blockchain and Distributed Ledger Technology
5. Quantum Computing
6. Extended Reality (XR)
7. 5G Networks
8. Cybersecurity and Threat Intelligence
9. Data Analytics and Visualization
10. Autonomous Systems

The information processing sector continues to evolve, driving innovation and transformation
across various industries and aspects of society in the underlisted areas:

(A) Hardware and Architecture:


1. Artificial Intelligence (AI) processors
2. Quantum Computing
3. Neuromorphic Computing
4. Graphene-based Computing
5. 3D Stacked Processing
6. Heterogeneous System Architecture (HSA)
7. High-Performance Computing (HPC)

(B) Software and Programming:


1. Machine Learning (ML) frameworks
2. Deep Learning algorithms
3. Natural Language Processing (NLP)
4. Internet of Things (IoT) development
5. Cloud Computing platforms
6. Containerization (e.g., Docker)
7. Serverless Computing.

There are several types of information processing, classified according to the following criteria:
(A) Cognitive Information Processing:
1. Perception: Interpreting sensory information.
2. Attention: Focusing on relevant information.
3. Memory: Encoding, storing, and retrieving information.
4. Learning: Acquiring new knowledge and skills.

5. Language Processing: Understanding and generating language.
6. Problem-Solving: Identifying and resolving problems.
7. Decision-Making: Selecting from available options.

(B) Computational Information Processing:


1. Data Processing: Manipulating and transforming data.
2. Algorithmic Processing: Executing step-by-step instructions.
3. Parallel Processing: Processing multiple tasks simultaneously.
4. Distributed Processing: Processing tasks across multiple systems.
5. Cloud Computing: Processing data remotely through cloud services.

(C) Neural Information Processing:


1. Artificial Neural Networks (ANNs): Modeling human brain function.
2. Deep Learning: Multilayer neural networks for complex tasks.
3. Natural Language Processing (NLP): Processing human language.
4. Image Processing: Interpreting and manipulating visual data.
5. Signal Processing: Analyzing and manipulating signals.

(D) Sensory Information Processing: -


1. Visual Processing: Interpreting visual information.
2. Auditory Processing: Interpreting sound information.
3. Tactile Processing: Interpreting touch information.
4. Olfactory Processing: Interpreting smell information.
5. Gustatory Processing: Interpreting taste information.

Other Types are


1. Emotional Processing: Recognizing and managing emotions.
2. Social Information Processing: Interpreting social cues.
3. Spatial Processing: Understanding spatial relationships.
4. Temporal Processing: Understanding time and sequence.
5. Multimodal Processing: Integrating multiple information sources.

The important theories and models governing modern information processing include:
1. Information Processing Theory (IPT)
2. Cognitive Load Theory (CLT)
3. Working Memory Model (WMM)
4. Attention Restoration Theory (ART)
5. Global Workspace Theory (GWT)

An Information Processing System (IPS) consists of several components that work together to
process and manage information. These components interact to form a comprehensive system.
The key components are:

(A) Hardware Components:


1. Input Devices: Keyboard, mouse, scanner, microphone, etc.
2. Central Processing Unit (CPU): Brain of the computer, executes instructions.
3. Memory (RAM): Temporary storage for data and programs.
4. Storage Devices: Hard disk, solid-state drive, flash drive, etc.
5. Output Devices: Monitor, printer, speaker, etc.

(B) Software Components:


1. Operating System (OS): Manages hardware and software resources.
2. Application Software: Programs that perform specific tasks (e.g., word processing).
3. Utility Software: Programs that manage and maintain the system (e.g., antivirus).
4. Firmware: Permanent software stored in non-volatile memory.

(C) Data Components:


1. Input Data: Raw data entered into the system.
2. Processed Data: Transformed data after processing.
3. Output Data: Resultant data after processing.
4. Stored Data: Permanent data stored in storage devices.

(D) Process Components:


1. Input Process: Capturing and converting input data.
2. Processing: Executing instructions and transforming data.
3. Output Process: Presenting processed data.
4. Storage Process: Storing and retrieving data.

(E) Communication Components:


1. Network Interface: Connecting to other systems or networks.
2. Communication Protocols: Rules governing data exchange.
3. Data Transmission: Sending and receiving data.

(F) Human Components:


1. User: Interacting with the system.
2. Operator: Managing and maintaining the system.
3. Analyst: Designing and developing the system.

Other Components:

1. Database Management System (DBMS): Managing and Storing data.
2. Knowledge Base: Storing and retrieving knowledge.
3. Decision Support System (DSS): Supporting decision-making process.
4. Artificial Intelligence (AI) and Machine Learning (ML) components.

The Internet: Its Application and Impact on the World Today

Introduction

The internet’s fascinating history began in the 1960s, when the United States Department of
Defense launched the Advanced Research Projects Agency Network (ARPANET), connecting
four computers to facilitate communication between government and academic researchers.
This pioneering project laid the groundwork for the modern internet. In the 1970s, the Internet
Protocol (IP) was developed, enabling different networks to communicate and paving the way
for the internet’s expansion. The 1980s marked a significant milestone in the internet's
evolution, with the introduction of the Domain Name System (DNS), which simplified website
addresses, and Internet Relay Chat (IRC), which enabled real-time online communication. The
World Wide Web (WWW), invented by Tim Berners-Lee in 1989, revolutionized the internet
by making content accessible to a broader audience.

During the 1990s, there was rapid growth in internet development, as dial-up internet brought
connectivity to homes, email transformed communication, and search engines like Yahoo! and
Google simplified information retrieval. E-commerce emerged, allowing online shopping to
become a staple of modern life. In the 2000s, social media platforms like Facebook, Twitter,
and Instagram redefined social interactions, while mobile internet and smartphones made
access ubiquitous. Cloud computing enabled remote data storage and processing, and big data
analytics provided valuable insights from vast amounts of data.

Today, the internet’s impact on the world is profound. On the positive side, it has bridged
geographical divides, democratized knowledge, driven economic growth, and transformed
education and healthcare. However, it also poses significant challenges, including
cybersecurity threats, social isolation, misinformation, and environmental concerns. The next
subsection will explore the application and impact of the internet in detail.

The Internet: Its Application and Impact

Revolution of Communication

The internet has transformed various industries, including finance, media, healthcare,
education, and governance. The internet has revolutionised modern life, seamlessly integrating
into various aspects of daily routines. Its impact is profound, transforming the way we
communicate, work, learn, and entertain ourselves. To begin with, communication has been
redefined by the internet. Social media platforms like Facebook, Twitter, Instagram, and
LinkedIn have connected people worldwide, fostering global relationships and communities.
Email has become an essential tool for personal and professional communication, while video
conferencing services like Zoom, Skype, and Google Meet enable virtual meetings and remote
collaborations. Messaging apps like WhatsApp, WeChat, and Telegram facilitate instant
communication, bridging geographical divides.

Educational Advancements

In another development, the internet has transformed education, making learning more
accessible and convenient. Online courses and degree programs are available through platforms
like Coursera, Udemy, and edX. Virtual classrooms enable remote learning, while digital
resources like e-books, academic journals, and educational websites provide invaluable
information. Online tutoring services offer personalized learning experiences, catering to
individual needs.

Arts and Entertainment

Entertainment has also been transformed by the internet. Streaming services like Netflix,
YouTube, Amazon Prime, and Hulu provide endless options for movies, TV shows, and music.
Online gaming has become a vibrant industry, with multiplayer games, esports, and virtual
reality experiences captivating audiences. Music and podcast streaming services like Spotify
and Apple Music offer diverse content, while online communities and forums connect people
with shared interests.

Health and Social Care

The domain of health and social care has equally benefited from the internet. Telemedicine
enables virtual consultations and remote healthcare services, expanding access to medical care.
Online health information resources provide valuable insights for medical research and health
education. Electronic health records and secure data storage ensure confidential and efficient
healthcare management. Wearable devices and mobile apps track vital signs, promoting health
monitoring and wellness.

Fintech

The finance sector has been transformed by the internet. Online banking enables mobile
banking, digital payments, and financial transactions. Cryptocurrency, including Bitcoin and
Ethereum, has emerged as a digital alternative to traditional currency. Mobile wallets like
Apple Pay, Google Pay, and PayPal facilitate secure transactions. Investment platforms offer
online trading, brokerage services, and financial analysis.

Transportation and Logistics

Transportation and logistics companies are being transformed by the internet. Ride-hailing
services like Uber and Lyft have revolutionized urban mobility. Food delivery platforms like
GrubHub and UberEats have made ordering and delivery seamless. Navigation services like
Google Maps and Waze provide real-time traffic updates and directions. Supply chain
management has been optimized with digital tracking, logistics, and inventory management.
Even in developing countries such as Nigeria, companies are taking advantage of internet
enablers to deliver real-time logistics services.

Government, Public Services, and the World of Work

In many ways, government and public services have also been impacted by the internet. E-
government portals provide citizens with easy access to services, licenses, and permits. Digital
democracy initiatives enable online voting systems, public forums, and participatory
governance. Public records are now accessible online, promoting transparency and
accountability. Emergency services have been enhanced with online reporting, emergency
response systems, and disaster management.

Broadly speaking, the internet has significantly impacted the world of work and business.
Remote work arrangements have become increasingly popular, with telecommuting and virtual
offices enabling flexible work environments. E-commerce has thrived, with online shopping,
digital payments, and entrepreneurship on the rise. Digital marketing has emerged as a crucial
aspect of business strategy, with online advertising, social media marketing, and SEO driving
business growth. Cloud computing has enabled remote data storage, processing, and
collaboration.

Conclusion

The internet’s evolution has been a remarkable journey, transforming modern society and
redefining the way we communicate, access information, and interact with the world. As the
internet evolves, its potential for positive impact remains immense, and its applications
continue to expand, driving innovation, progress, and global connectivity.

The Different Areas/Programs of the Computing Discipline

Introduction

Instead of being a single subject, computing is a collection of disciplines with diverse areas of
interest. Computing is any goal-oriented activity that makes use of, benefits from, or creates
computers. Therefore, computing comprises the design and development of computer hardware
and software components for a range of applications, such as processing, organizing, and
managing different types of information; conducting computer-intensive scientific research;
creating and utilizing communications and entertainment media; finding and compiling
information relevant to a particular objective, among other things.

There is an endless variety of applications for computers, and the field of computing has
undergone tremendous growth and diversification in recent years, transforming the way we
live, work, and interact. The computing discipline encompasses a broad range of specialties,
each with its unique focus, methodologies, and applications. Understanding these areas is crucial for
navigating the complex landscape of modern technology. This discussion aims to provide an
overview of the various areas of computing discipline, explaining each in easy-to-understand
language. By exploring these fields, we can gain insight into the exciting developments and
innovations shaping our digital world.

Areas/Programs of the Computing Discipline

The computing discipline is made up of different domains that are interconnected and
constantly evolving. The various areas/programmes of computing drive innovation and
technological advancement. They are identified and discussed below:

Computer Networks

Computer Networks involve connecting devices to share resources and communicate. This
includes:

o Local Area Networks (LANs): Connecting devices within a building.


o Wide Area Networks (WANs): Connecting devices across cities or countries.
o Internet of Things (IoT): Connecting everyday devices to the internet.
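
As a small illustration of these ideas, the sketch below uses Python's standard socket module to open a TCP connection across a network and exchange data following the rules of the HTTP protocol; the host name is just an example.

```python
# A minimal sketch of network communication with Python's socket library.
import socket

# Connect to a web server over TCP (host and port are illustrative).
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    # Data transmission: send a request that follows the HTTP protocol.
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                 b"Connection: close\r\n\r\n")
    # Receive the reply one chunk at a time until the server closes.
    reply = b""
    while chunk := conn.recv(4096):
        reply += chunk

print(reply.decode(errors="replace")[:200])  # first 200 characters
```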

Software Engineering

Software Engineering develops reliable, efficient software. This involves:

o Design Patterns: Reusing proven solutions.


o Testing: Ensuring software quality.
o Version Control: Managing software changes.
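
To illustrate the testing idea, here is a minimal sketch using Python's built-in unittest module; the grade function being tested is a hypothetical example, not part of any library.

```python
# A minimal automated-testing sketch with Python's unittest module.
import unittest

def grade(score):
    """Convert a numeric score to a pass/fail label (hypothetical)."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return "pass" if score >= 50 else "fail"

class TestGrade(unittest.TestCase):
    def test_pass_boundary(self):
        self.assertEqual(grade(50), "pass")

    def test_fail(self):
        self.assertEqual(grade(49), "fail")

    def test_rejects_invalid_score(self):
        with self.assertRaises(ValueError):
            grade(150)

if __name__ == "__main__":
    unittest.main()
```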

Web Development

Web Development builds websites and web applications. This includes:

o Front-end Development: Creating user interfaces.


o Back-end Development: Building server-side logic.
o Web Design: Crafting visually appealing websites.
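
The sketch below shows, under simplifying assumptions, how front end and back end meet: Python's built-in http.server module acts as a tiny back end that serves one hand-written HTML page (the front end). It is an illustration, not a production setup.

```python
# A minimal web back end using Python's built-in http.server module.
from http.server import HTTPServer, BaseHTTPRequestHandler

# The "front end": a hand-written HTML page sent to the browser.
PAGE = b"<html><body><h1>Hello from the back end</h1></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "back end": server-side logic that answers each request.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # Visit http://localhost:8000 in a browser to see the page.
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```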

Artificial Intelligence (AI)

Artificial Intelligence focuses on creating intelligent machines that can think, learn, and act
like humans. AI involves:

o Machine Learning: Teaching computers to learn from data.


o Natural Language Processing: Enabling computers to understand human language.
o Robotics: Building robots that can interact with their environment.
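
To make "teaching computers to learn from data" concrete, here is a minimal machine-learning sketch written in plain Python: a nearest-centroid classifier. The points and labels are invented for illustration; real systems use far larger datasets and richer models.

```python
# A toy nearest-centroid classifier: "learning" is just averaging the
# training points of each class; prediction picks the closest class.

def centroid(points):
    """Average of a list of (x, y) points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(examples):
    """Learn from data: compute one centroid per class label."""
    by_label = {}
    for point, label in examples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Classify a new point by its nearest class centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

examples = [((1, 1), "small"), ((2, 1), "small"),
            ((8, 9), "large"), ((9, 8), "large")]
model = train(examples)
print(predict(model, (2, 2)))   # -> small
print(predict(model, (7, 8)))   # -> large
```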

Cybersecurity

Cybersecurity protects computer systems and data from unauthorized access. This includes:

o Encryption: Securing data with codes.


o Firewalls: Blocking unauthorized access.
o Threat Detection: Identifying potential security risks.
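
As a small illustration of "securing data with codes", the sketch below uses Python's standard hashlib library to store a one-way hash of a password instead of the password itself. The fixed salt is for demonstration only; real systems generate a random salt per user and use dedicated password-hashing functions.

```python
# A minimal password-hashing sketch with Python's hashlib library.
import hashlib

def hash_password(password, salt):
    """Derive a one-way SHA-256 digest from a password and a salt."""
    return hashlib.sha256(salt + password.encode()).hexdigest()

salt = b"example-salt"                  # illustrative; use a random salt in practice
stored = hash_password("s3cret", salt)  # only the digest is kept

# A login attempt is verified by hashing it the same way.
attempt = "s3cret"
print(hash_password(attempt, salt) == stored)  # -> True
```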

Data Science

Data Science extracts insights from large datasets. This involves:

o Data Mining: Discovering patterns in data.


o Data Visualization: Presenting data in a clear format.
o Statistical Analysis: Interpreting data using statistics.
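
For a taste of statistical analysis, the short sketch below uses Python's built-in statistics module on a small invented dataset.

```python
# A minimal statistical-analysis sketch with Python's statistics module.
import statistics

scores = [55, 62, 47, 70, 55, 81, 63, 58]   # invented sample data

print("mean:  ", statistics.mean(scores))    # central tendency
print("median:", statistics.median(scores))  # middle value
print("stdev: ", statistics.stdev(scores))   # spread of the data
print("mode:  ", statistics.mode(scores))    # most common value -> 55
```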

Database Systems

Database Systems manage and store data efficiently. This includes:

o Database Design: Creating structured databases.


o Data Retrieval: Accessing data quickly.
o Data Security: Protecting sensitive data.
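
The sketch below illustrates these ideas with Python's built-in sqlite3 module: it designs one small table, stores rows, and retrieves them with SQL. The table and data are invented; the "?" placeholders also hint at data security, since they guard against SQL injection.

```python
# A minimal database sketch with Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")   # an in-memory database for the demo
conn.execute("CREATE TABLE students (name TEXT, score INTEGER)")

# Data storage: insert rows with "?" placeholders (safer than string
# concatenation, which is vulnerable to SQL injection).
rows = [("Ada", 64), ("Bayo", 72), ("Chika", 55)]
conn.executemany("INSERT INTO students VALUES (?, ?)", rows)

# Data retrieval: query the rows back in sorted order.
for name, score in conn.execute(
        "SELECT name, score FROM students ORDER BY score DESC"):
    print(name, score)

conn.close()
```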

Human-Computer Interaction (HCI)

Human-Computer Interaction designs user-friendly interfaces. This involves:

o User Experience (UX) Design: Creating intuitive interfaces.


o User Interface (UI) Design: Building visually appealing interfaces.
o Accessibility: Ensuring technology is usable by everyone.

Information Systems

Information Systems integrate technology into organizations. This includes:

o Business Intelligence: Analysing data for business decisions.


o Enterprise Resource Planning (ERP): Managing business operations.
o Supply Chain Management: Coordinating logistics.

Operating Systems

Operating Systems manage computer hardware resources. This includes:

o Process Management: Managing running programs.

o Memory Management: Allocating system memory.
o File Systems: Organizing stored data.
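
As a glimpse of these services from a program's point of view, the sketch below asks the operating system a few questions through Python's built-in os module.

```python
# A minimal sketch of querying operating-system services via os.
import os

print("current process id:", os.getpid())       # process management
print("working directory: ", os.getcwd())       # file-system location
print("directory contents:", os.listdir("."))   # files organized on disk
```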

Conclusion

From the foregoing, it is clear that the computing discipline encompasses a rich tapestry of
specialized fields, each driving innovation and progress in its unique way. From Artificial
Intelligence to Web Development, these areas have transformed industries, revolutionized
communication, and improved lives. As technology continues to evolve, understanding these
computing disciplines is essential for harnessing their potential and addressing the challenges
of the digital age. By recognizing the interconnectedness of these fields, we can foster
collaboration, inspire new breakthroughs, and shape a brighter future for all. As we move
forward, the possibilities offered by the computing discipline will continue to expand, empowering
us to create, innovate, and thrive in an increasingly digital world.

THE JOB SPECIALIZATIONS FOR COMPUTING SCIENCE PROFESSIONALS

1. Software engineer: is a professional who designs, develops, tests, and maintains software
applications.

2. Cybersecurity analyst: is a professional responsible for protecting an organization’s computer
systems and networks from cyber threats, working closely with upper management. The
cybersecurity analyst recommends safeguards to keep information confidential and authorizes
internal employees to use parts of the network. They also develop procedures to respond to
emergencies and to restore or back up digital items.

3. A user experience researcher: is a data expert who analyzes members of target audiences to
understand what they look for when using a digital program.

4. A video game designer: is a professional responsible for creating visual elements for video
games for mobile devices, computers, and gaming systems. Using programming languages and
graphic design, the video game designer builds characters and settings that coincide with the
game’s storylines, and they test the game for functionality, easy navigation, and visual appeal.
The professional works closely with animators and programmers to build the game, and they
strategize ways to advertise it to encourage consumers to purchase it.

5. A business intelligence analyst: is a professional who evaluates the operations of a company
to identify ways to make it more successful. With expertise in data science, the business
intelligence analyst determines whether the company is making progress toward its goals by
assessing the resources it uses and the challenges it has faced. Another responsibility is
performing competitor analysis, which helps the professional stay informed about the industry
and develop strategies to outperform competing businesses.

6. A database administrator: is a professional who oversees activities in the software databases
that a company uses to store and organize information, such as user login credentials, client
interactions, and survey results. To maintain the confidentiality of the records, the database
administrator ensures the structures are working effectively, and they install security
procedures to identify threats, remove viruses, and restore lost data. The administrator may also
install updates on the databases to boost their performance and expand their capability.

7. A frontend developer: is a programmer who specializes in the design and implementation of
the user interface (UI) of websites and applications. They primarily focus on the part of a
website that users interact with directly, using languages such as HTML, CSS, and JavaScript.
Frontend developers ensure that the visual elements are responsive and functional, creating a
seamless and engaging user experience. They also collaborate with designers and back-end
developers to integrate and optimize web applications.
8. Full stack developer: is a software developer who is proficient in both front-end and back-end
development. They work on all layers of a web application, including the user interface (UI),
server, database, and server-side applications. Full stack developers are skilled in various
technologies and programming languages, enabling them to handle all aspects of development,
from designing the user experience to managing databases and server logic. Their versatility
allows them to contribute to both the client side and server side of an application.

9. A mobile application developer: is a software engineer who specializes in creating
applications for mobile devices such as smartphones and tablets. They design, build, test, and
maintain mobile apps for various platforms, primarily iOS and Android. Mobile developers use
programming languages like Swift, Kotlin, or Java and frameworks like React Native or Flutter
to develop user-friendly, functional, and efficient mobile applications.

10. Systems engineer: is an industry expert who creates a process for conceptualizing, developing,
and implementing a system, such as a new software application or piece of computer hardware.
To maximize the efficiency of the process, the systems engineer compiles a list of necessary
resources, collaborates with other professionals, and establishes parameters to evaluate the
success of the project. They also prioritize the safety and security of their products and lend
technical expertise to assist other technology specialists on their team.

11. A network security engineer: is an IT professional who installs safeguards to protect a
computer network from harm, which can include viruses and malware. Network security
engineers analyze the performance of the network to identify malfunctions and prevent them
from recurring, and they conduct tests to see how vulnerable the network is to external threats.

THE FUTURE OF COMPUTING

Computer technology has come a long way in the past few decades. The world of computing is
ever-changing and growing at an exponential rate; it would be impossible to predict all the
advances in the field that are sure to occur in upcoming years. However, there are certain key
areas where experts anticipate significant progress. One example is artificial intelligence (AI).
AI has already begun to make its mark in various sectors, from healthcare to finance, but this is
just the beginning. Experts believe that AI will soon become ubiquitous across industries as
well as in everyday life.

Another trend expected to continue into the near future is cloud computing. It is a form of
distributed computing where resources are shared among many users who access them through the
internet. This type of technology has enabled businesses and individuals to store data online and
make use of powerful analysis tools without having to purchase hardware and software.

Cloud computing has enabled businesses and individuals alike to access data faster than ever
before while reducing their need for physical storage space. Going forward, cloud computing is
likely to become even more popular and prevalent due to technological advancements like 5G
networks providing increased speed and reliability.

Over the coming ten years, improvements in personal computer technology are anticipated to
concentrate on better user experiences, boosted connectivity, and improved performance. The
widespread use of artificial intelligence and machine learning, the creation of even faster and
more effective processors, and the incorporation of virtual and augmented reality technologies
into personal computing are all predictions for the future of the personal computer.

Machine learning and deep learning are two of the most powerful emerging technologies in
computing. These methods allow computers to learn from data, identify patterns, and make
predictions without being explicitly programmed. This has applications across many
industries, including healthcare, finance, retail, education, and more.

The potential of machine learning and deep learning is immense. With their ability to analyze
vast amounts of data quickly and accurately, they could revolutionize the way we interact with
machines. For example, they can be used to build smarter robots that understand instructions
better or medical AI systems that accurately diagnose diseases. As these technologies become
more advanced over time, their impact on our lives will only increase.

In addition to this potential growth in capabilities, machine learning and deep learning also
offer a cost-efficient solution for businesses looking to stay ahead of the competition. By
automating manual tasks with ML/DL algorithms, companies can free up resources previously
spent on those tasks, saving money while still achieving highly accurate results. It is clear why
so many tech giants are investing heavily in the development of these two promising
technologies; the possibilities seem endless.

Additionally, it is anticipated that new technologies like 5G and the Internet of Things (IoT)
will have a big impact on personal computers.

The Internet of Things (IoT) is a quickly growing technology that links physical objects to the
internet. It involves connecting everyday items such as light bulbs, washing machines, and
security systems so that they can interact with each other in real time or on an automated cycle.

This means we will be able to control our home appliances remotely from anywhere in the
world without having to be physically present. This could save us time, energy, and money by
allowing us to manage devices efficiently even when we are not at home. IoT also has potential
applications in healthcare, smart cities, and connected vehicles.

Personal computers are going to become more ingrained in our daily lives, acting as hubs for
managing and controlling various devices and services as smart devices and connected homes
become more commonplace.

Quantum computers have the potential to greatly improve computing even at this early stage of
development. In addition, they may eventually be able to solve complex problems that
traditional computers currently cannot.
