Computer Science
In order to process data, computers follow a set of instructions known as a computer program.
To carry out instructions, computers follow what's called the Fetch, Decode and Execute cycle:
• Fetch: retrieves the next instruction from memory.
• Decode: inspects the program/code and works out what it needs to do.
• Execute: carries out the instruction. This could involve going back to memory to grab some data, performing a calculation, or storing some info in memory.
On modern CPUs, this cycle happens billions of times per second; the rate at which it repeats is known as the CPU's Clock Speed.
E.g. 3 GHz processor = 3 billion cycles per second.
Components of a CPU
• Control Unit (CU): sends signals to control how data moves around the CPU, and coordinates CPU operations.
• Arithmetic Logic Unit (ALU): carries out calculations and makes logical decisions.
• Registers: tiny, super-fast pieces of onboard memory inside the CPU, each with a very specific purpose.
• Cache: a small amount of very fast memory located in/close to the CPU. It provides fast access to frequently used instructions and data. Reading/writing from cache is faster than from RAM.
Today, computers are Stored-Program, meaning that the programs are changeable.
In 1945 a mathematician & physicist called John von Neumann described the first design for computers that can store changeable programs. This is what's known today as the Von Neumann Architecture.
Key Characteristics:
• Central Processing Unit (CPU)
• Single Control Unit (CU)
• Arithmetic Logic Unit (ALU)
• Onboard Cache
• Internal Clock
Registers used in the architecture:
• Memory Address Register (MAR): holds the address of where data is to be fetched from/stored into.
• Memory Data Register (MDR): holds any data which has been fetched from memory/is about to be written into memory.
• Program Counter (PC): holds the memory address of the next instruction to be executed.
• Accumulator (ACC): holds the results of calculations performed by the ALU.
Data lines (buses) carry the data between the CPU and memory.
The performance of a CPU is affected by various factors. The three most important are:
1. Clock Speed
• Measured in Hertz (Hz).
• Number of cycles per second.
• Modern processors operate at billions of cycles per second, or Gigahertz (GHz).
• 3.2 GHz clock speed = up to 3.2 billion instructions fetched per second!
2. Cache Size
• Temporary storage of data & instructions being read/written.
• Stores copies of recent data and instructions.
• Faster to read/write from than Main Memory.
• More efficient than having to go back and forth to Main Memory.
3. Number of Cores
• A core is, in simple terms, a complete CPU system.
• A Quad-Core CPU, for example, would have 4 separate processing units, each with its own Registers, CU, ALU etc.
• A CPU with more cores has more power to run multiple programs at the same time.
Many people would call such a chip a CPU, which is fine in everyday use. But for exams, it consists of more than one core, or central processing unit, and so 'CPU' would be incorrect. A more appropriate name for it would be a Chip Multiprocessor, or CMP.
An Embedded System is a computer system with a dedicated function within a larger mechanical or electrical system. 98% of processors are manufactured as part of an
embedded system. A very early example of embedded systems is the Apollo guidance computer that sent men to the moon. Common examples of Embedded Systems are:
• MP3 players
• Mobile Phones
• Traffic lights
• Factory controllers
• Hybrid vehicles
• MRI scanners.
As Embedded Systems are dedicated to a specific purpose, design engineers can optimise the design to:
• Reduce the size of the product
• Reduce the cost of the product
• Increase its reliability for the given task
Quiz Corrections
1. A small amount of fast temporary memory within the processor: Cache.
2. Control Unit (CU): sends signals to the CPU, input and output devices.
3. Clock: generates a pulse to co-ordinate all components.
Primary
• Storage is Volatile (lost when computer is switched off), except for ROM.
• Stores data/instructions currently in use.
• Faster read/write times.
• Data directly accessed by CPU.
• Relatively small storage capacity (GB).
• E.g. RAM, ROM, Cache.
Secondary
• Storage is Non-Volatile (data is saved when computer is off).
• Stores long-term data, or data used to run the computer, such as the OS.
• Slower read/write times.
• Data can only be accessed through Main Memory.
• Relatively large storage capacity (TB).
• E.g. CDs, DVDs, Floppy Disks, Hard Drives, SSDs.
Primary storage is mainly required because of its speed, which can reach tens or hundreds of times that of Secondary storage. A 3 GHz processor would need to fetch and execute 3 billion instructions per second, and if it were to load each and every one of them from the computer's Hard Drive, it would quickly wear it out. That's why, upon booting, the operating system is loaded into Primary Storage, i.e. RAM, for faster data retrieval.
When you open a program, it gets loaded into the memory, from the hard drive. But what does the
computer do when the RAM is full? It turns to Virtual memory.
Virtual Memory is space in a computer's secondary storage that is used to store running
programs when the RAM is full.
When you turn your computer on, the bootstrap program will load the Operating System from the
disk into RAM. As instructions are fetched one at a time, that means some of the instructions are
not likely to be fetched in the near future. Therefore, one solution is to transfer instructions that are
not being used to a space on the hard disk, and this is known as Virtual Memory. When these
instructions are needed again, a different program can be swapped out of RAM into virtual memory
to make room for the instructions that are now needed. This gives the impression that a computer
has more memory than it actually has. However, the hard drive operates at a much slower speed than RAM, so the system slows down whenever data has to be swapped between RAM and the Hard Drive.
Secondary storage is needed because ROM is read only and RAM is volatile.
Secondary storage is needed for:
• Storage of programs and data when the power is turned off.
• Semi-permanent storage of data that can change.
• Backup of data files.
• Archive of data files.
Every Secondary storage device consists of 2 main parts, the Drive and the Media.
• Drive: The device that reads and writes data from secondary storage.
• Media: The medium that the data is actually stored on.
There are 3 main types of secondary storage:
Optical Storage
1. Compact Disk
• Comes with an Optical Drive and media such as:
• Compact Disk Read-Only Memory, or CD-ROM
• Compact Disk Recordable, or CD-R (write once)
• Compact Disk Re-Writable, or CD-RW
For read-only disks, the surface of the disk is physically burnt to create pits and lands, representing 1s and 0s. Physically burning the surface is what makes a disk read-only.
Magnetic Storage
1. Hard Drives
• Cheap
• Large Capacity
• Slow
• Fragile
The Magnetic property of Hard Drives is what allows data to be stored on it.
Magnets have 2 poles, and this is what allows data on a Hard Drive to be stored as
1s and 0s. A mechanical Drive Head moves over the spinning disk to Read/Write
data. But since it is a physical component, it is slower and more likely to fail.
Solid State Storage
1. SSDs
SSDs work by having a floating gate between two oxide layers. Trapping electrons in this floating gate changes its charge, and this change is what's read as a 1 or a 0. However, the oxide layer deteriorates over time, making reads/writes unreliable after a lot of use and giving the drive a limited lifespan.
There are a few factors to be considered when choosing a storage media for any purpose.
• Capacity: how much data needs to be stored?
• Speed: how quickly can data be read and transferred?
• Portability: if data needs to be transported, are size, shape and weight important?
• Durability: how robust is the media? Will it be damaged by shocks, and extreme conditions?
• Reliability: does it need to be used over and over again without failing?
• Cost: how expensive is the media per byte of storage?
Speed:
• Hard Drive: reasonable, 30-150 MB/s.
• Optical: slow; CD: 0.15 MB/s, Blu-Ray: 4.5 MB/s.
• SSD: very fast, can reach up to 7000 MB/s.
Portability:
• Hard Drive: medium, can be bulky.
• Optical: high, very light and thin.
• SSD: very high, takes very little space and can be carried in small pockets.
Durability:
• Hard Drive: medium, safe to touch but shouldn't be dropped.
• Optical: very low, must avoid dirt & scratches or may become unreadable.
• SSD: very high, contains no moving parts and has high protection.
Reliability:
• Hard Drive: high, will fail eventually, but lasts long.
• Optical: medium, as it must be used with care.
• SSD: very high, but does eventually fail.
Cost:
• Hard Drive: cheap compared to other devices of similar capacity.
• Optical: very cheap.
• SSD: expensive compared to other options; however, cost is steadily declining.
Best Uses:
• Hard Drive: additional computer storage, local/cloud servers, portable data.
• Optical: movies, video games, software installations, courses.
• SSD: computer boot drive, hard drive replacement, embedded systems (phones, watches, cameras etc.); SD cards as additional storage.
• KB to B to b:
25 KB × 1000 = 25,000 B
25,000 B × 8 = 200,000 b

Binary place values: 128 64 32 16 8 4 2 1
0 0 0 1 0 1 1 0 = 16 + 4 + 2 = 22

Denary / Binary / Hex:
0 = 0000 = 0    1 = 0001 = 1    2 = 0010 = 2    3 = 0011 = 3
4 = 0100 = 4    5 = 0101 = 5    6 = 0110 = 6    7 = 0111 = 7
8 = 1000 = 8    9 = 1001 = 9    10 = 1010 = A   11 = 1011 = B
12 = 1100 = C   13 = 1101 = D   14 = 1110 = E   15 = 1111 = F
16 = 10000 = 10

Unit     | Symbol | Binary Value | Decimal Value | Approximation               | Written
Bit      | b      | 0 or 1       | 2^-3 B        | ⅛ byte                      | Eighth of a byte
Nibble   | -      | 4 bits       | 2^-1 B        | ½ byte                      | Half a byte
Byte     | B      | 8 bits       | 10^0 B        | 1 byte                      | One byte
Kilobyte | KB     | 1024 bytes   | 10^3 B        | 1,000 bytes                 | Thousand bytes
Megabyte | MB     | 1024 KB      | 10^6 B        | 1,000,000 bytes             | Million bytes
Gigabyte | GB     | 1024 MB      | 10^9 B        | 1,000,000,000 bytes         | Billion bytes
Terabyte | TB     | 1024 GB      | 10^12 B       | 1,000,000,000,000 bytes     | Trillion bytes
Petabyte | PB     | 1024 TB      | 10^15 B       | 1,000,000,000,000,000 bytes | Quadrillion bytes
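The KB → B → b conversion above can be sketched in Python (the function name is my own; it uses the decimal 1 KB = 1,000 B convention from the worked example, not the binary 1,024 B value):

```python
def kb_to_bits(kilobytes):
    """Convert kilobytes to bytes, then bytes to bits."""
    num_bytes = kilobytes * 1000   # 1 KB = 1,000 B (decimal convention)
    num_bits = num_bytes * 8       # 1 B = 8 b
    return num_bytes, num_bits

print(kb_to_bits(25))  # (25000, 200000)
```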
We live in an analogue world, where everything is unique. When you turn a light on, if you slowed down time, you'd see
the light gradually get brighter and brighter. In computers however, data is made up of 1s and 0s. Whatever is stored on
a computer must be stored as binary data in order for the computer to be able to process it. So the light being on would
be represented as a 1, and the light off, would be a 0.
Image Files
The size of an image file depends on:
• Height in pixels (e.g. 14)
• Width in pixels (e.g. 17)
• Number of bits needed to store each pixel, the colour depth (e.g. 3)
Sound Files
The size of a sound file depends on:
• Sample Rate
• Duration
• Bit Depth
Denary → Binary
Subtract the place values (128, 64, 32, 16, 8, 4, 2, 1) from the top down until you reach 0; write a 1 where you subtract and a 0 where you can't:
Denary: 243
243 - 128 = 115 → 1
115 - 64 = 51 → 1
51 - 32 = 19 → 1
19 - 16 = 3 → 1
3 < 8 → 0
3 < 4 → 0
3 - 2 = 1 → 1
1 - 1 = 0 → 1
Binary: 1 1 1 1 0 0 1 1
Binary → Denary
1 Byte = 8 bits
0 0 0 1 0 1 1 0 = 22
128 64 32 16 8 4 2 1
16 + 4 + 2 = 22
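Both conversions can be sketched in Python (the helper names are my own; denary_to_binary follows the same subtract-the-place-values method shown above):

```python
def binary_to_denary(bits):
    """Add the place value (128, 64, ..., 1) of every position holding a 1."""
    place_values = [128, 64, 32, 16, 8, 4, 2, 1]
    return sum(v for v, bit in zip(place_values, bits) if bit == '1')

def denary_to_binary(n):
    """Subtract each place value top-down, writing 1 if it fits, else 0."""
    digits = []
    for place in [128, 64, 32, 16, 8, 4, 2, 1]:
        if n >= place:
            digits.append('1')
            n -= place
        else:
            digits.append('0')
    return ''.join(digits)

print(binary_to_denary('00010110'))  # 22
print(denary_to_binary(243))         # 11110011
```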
Denary → Hexadecimal
24:
1. Convert to Binary: 0 0 0 1 1 0 0 0 (place values 128 64 32 16 8 4 2 1)
2. Split into 2 Nibbles: 0001 and 1000 (each nibble's place values are 8 4 2 1)
3. Give a Hexadecimal digit for each Nibble: 0001 = 1, 1000 = 8 → Hex: 18
230:
1. Convert to Binary: 1 1 1 0 0 1 1 0
2. Split into 2 Nibbles: 1110 and 0110
3. Give a Hexadecimal digit for each Nibble: 1110 = E, 0110 = 6 → Hex: E6
(Hex digits run 0-9, then A-F for 10-15.)
Hexadecimal → Denary
D7:
1. Convert the 1st digit to denary: D = 13 (binary nibble 1101)
2. Multiply that value by 2^4 = 16 (a 4-bit left shift): 13 × 16 = 208
3. Convert the 2nd digit to denary: 7 = 7
4. Add the two numbers: 208 + 7 = 215
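The nibble method for both hex conversions can be sketched in Python (the helper names are my own):

```python
HEX_DIGITS = '0123456789ABCDEF'

def denary_to_hex(n):
    """Convert to 8-bit binary, split into two nibbles, map each to a hex digit."""
    bits = format(n, '08b')
    high, low = bits[:4], bits[4:]
    return HEX_DIGITS[int(high, 2)] + HEX_DIGITS[int(low, 2)]

def hex_to_denary(pair):
    """First digit is worth 16 (a 4-bit left shift); second digit is worth 1."""
    return HEX_DIGITS.index(pair[0]) * 16 + HEX_DIGITS.index(pair[1])

print(denary_to_hex(24))    # 18
print(denary_to_hex(230))   # E6
print(hex_to_denary('D7'))  # 215
```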
A Character Set is a defined list of characters (Eg: A:2&#=) recognised by the computer hardware and software,
with each character being represented by a single denary/binary value.
A 2-bit binary system can store 4 possible values: 00, 01, 10, 11.
But given that there are over 100 unique characters, including punctuation, lowercase and uppercase
letters, and symbols, we would need at least 7 bits to be able to represent each character with its own
number. In order to keep things simple, it's important that the binary values for different characters are
the same across different devices. This is where character sets, like ASCII and Unicode come into play,
to set a standard across different devices.
1. ASCII
• Stands for: American Standard Code for Information Interchange
• 7-bit character set; containing 128 different characters and binary values to represent them.
• Extended to 8-bit version called Extended-ASCII, which can store 256 characters.
• The first 31 numbers are reserved for special characters & instructions.
• Used in very old computers and second/third generation computers.
2. Unicode
• The name comes from "unique, universal and uniform character encoding".
• Originally a 16-bit character set, but as symbols and characters from more languages were added, it has grown to a code space of over 1 million possible code points, with well over 100,000 characters assigned.
• Code points are usually written in hexadecimal (e.g. U+0041 for 'A'), as long binary values are slow, cumbersome and prone to error.
• The first 128 code points are the same as ASCII, so the two sets are compatible.
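Python's built-in ord and chr functions expose these character-to-number mappings directly, which makes the idea easy to check:

```python
# Every character is stored as a number from the character set.
print(ord('A'))   # 65  (the ASCII/Unicode code for 'A')
print(ord('a'))   # 97  (lowercase letters have different codes)
print(chr(66))    # B   (the character stored as the number 66)
# Unicode extends the same idea far beyond ASCII's 128 characters:
print(ord('€'))   # 8364
```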
Graphics on a screen are made up of pixels. The more pixels on the screen, the higher the resolution and the better the quality of the picture. The higher the image resolution, the more memory is needed to store the graphic.
Vectors
• Stores the drawing instructions and calculations required to display the image, rather than each individual pixel.
• Scaling the image or zooming into it does not affect the image quality; the calculations are simply re-performed for the larger number of pixels.
• Commonly used to store CAD drawings, shapes and clipart.
• Common formats include PDF, AI, SVG.
Bitmaps
• Stores the individual binary values of each pixel.
• Scaling the image or zooming into it reveals pixelation, showing its limited resolution.
• Commonly used to store photographs and artworks.
• Common formats include JPEG, PNG, GIF.
For a black and white image, the image can be represented as a string of 1s and 0s respectively. But in order to
reproduce the image, the computer needs to know its height, width and number of bits per pixel. This can be
stored in an image's metadata, which is a binary string at the beginning of the image data.
2-bit Bitmaps
For an image to use 2 bits means that it has 4 possible different colours. The image file size can be found by multiplying the height, width and number of bits per pixel (2).
Metadata
Height 14 pixels
Width 17 pixels
Colour depth 2 bits
Example pixel values: 00 01 11 00
3-bit Bitmaps
For an image to use 3 bits means that it has 8 possible different colours. The image file size can be found by multiplying the height, width and number of bits per pixel (3).
Metadata
Height 14 pixels
Width 17 pixels
Colour depth 3 bits
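The file-size rule above can be sketched in Python (the function name is my own; the metadata itself is not counted):

```python
def image_file_size_bits(height, width, colour_depth):
    """Image file size = height x width x bits per pixel."""
    return height * width * colour_depth

# The example image from the notes: 14 x 17 pixels at 2 bits per pixel.
size = image_file_size_bits(14, 17, 2)
print(size, 'bits =', size / 8, 'bytes')  # 476 bits = 59.5 bytes
```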
Sound, being made from vibrations, is an analogue wave. For computers to store sound in binary, it needs
to be converted into a digital signal.
In order to convert the sound wave to a digital wave, the computer makes measurements of the wave at
regular intervals. The wave is split up into smaller waves, and each sample is given an approximate value.
The number of samples taken within a certain time period is known as the sample rate, or simply the sound detail. Each sample can store a certain level of data to represent the amplitude. This is known as the bit depth, and the range of values it allows is called the bandwidth.
There are 2 main factors which affect the file size of a sound file:
1. Sample Rate (Speed): How often (frequency) you record the amplitude of a sound wave.
Higher Frequency = Smoother sound.
2. Bit Depth (Detail): Represents how many different gradations of amplitude can be
represented in a digital wave form.
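The same multiply-the-factors rule gives a sound file's size. A sketch in Python (the function name and example figures are my own, assuming a mono, uncompressed recording):

```python
def sound_file_size_bits(sample_rate_hz, duration_s, bit_depth):
    """Sound file size = sample rate x duration x bit depth."""
    return sample_rate_hz * duration_s * bit_depth

# e.g. 10 seconds sampled 44,100 times per second at 16 bits per sample:
size = sound_file_size_bits(44_100, 10, 16)
print(size // 8, 'bytes')  # 882000 bytes
```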
Compression is the reduction of a file's size. It helps a computer maximise the number of files that can be stored on a device. It also allows easier transferring/streaming of files across the internet.
Lossy Compression
A form of compression that reduces the file size while reducing the quality of the image.
For example, you can lower the range of colours within the image, or colour large areas with a single colour. Both of
which lose quality in order to cut down file size. Algorithms like JPEG and GIF, although quite different, work with similar
compression techniques.
Lossy compression is mainly used for images, videos and audio files, where a little reduction in quality isn't noticeable.
Lossless Compression
A form of compression that reduces the file size while retaining the quality of the image.
For example, instead of storing the same binary values for every pixel, we could store the binary for 'white', followed
by the number of white pixels we have in a row. This method doesn’t lose any data during the compression and works
best with images of solid colour patches.
Lossless compression is also useful for executables and documents, as the code/text must be kept in its entirety.
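The white-pixel counting idea described above is run-length encoding. A minimal sketch in Python (the function name is my own):

```python
def rle_compress(pixels):
    """Store each value once, together with how many times it repeats in a row."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([p, 1])     # start a new run
    return runs

row = ['W', 'W', 'W', 'W', 'B', 'B', 'W', 'W', 'W']
print(rle_compress(row))  # [['W', 4], ['B', 2], ['W', 3]]
```

Notice how long runs of identical pixels shrink the most, which is why this method works best on images with solid colour patches.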
1. Bandwidth
• A network's bandwidth is the amount of data that can be transmitted successfully in a given time.
• It's a measure of how much data can be sent along the transmission media.
• For example, water can travel at the same speed through a straw and a pipe, but the pipe may have space for more water at that speed.
• It's measured in bits/megabits per second (b/s or Mb/s).
2. No. of Users
• Too many users/devices can slow down the speed of the network if the bandwidth is insufficient.
3. Transmission Media
• Wired connections have a higher bandwidth than wireless connections.
• Fibre Optic cables have a higher bandwidth than copper cables.
4. Error Rate
• Less reliable connections increase the number of errors when data is transferred, meaning data must be resent until it arrives correctly.
• The signal quality of wireless connections depends on the distance of devices from the wireless access point, obstacles, etc.
• The signal quality of copper cables depends on the grade of material used, which has an effect on signal interference.
• Cable length may introduce errors.
5. Latency
• The latency is the delay between transmitting and receiving the data.
• It is caused by bottlenecks in the network infrastructure.
• Eg: Not using switches to appropriately segment network traffic.
• Hardware, such as switches and transmission media may not operate at the same speed.
Client-Server Model
A server is a computer on a network that does not serve as a workstation, but is dedicated to
serving files and managing various other services.
A client:
• Makes requests to the server for data and connections.
Advantages:
• Easier to manage security of files.
• Easier to take backups of all shared data.
• Easier to install software updates to all computers.
Disadvantages:
• Can be expensive to set up and maintain.
• Requires IT specialists to maintain.
• The server is a single point of failure.
• Users will lose access if the server fails.
Peer-to-Peer Model
A peer is a computer on a network with equal status to all other peers. Peers serve their own files to each other, and each is responsible for its own security and backups. Peers normally have their own printers; a peer must be switched on for other peers to communicate with its connected printer and send it print jobs.
Switch
• Connects and sends data between multiple computers on a LAN.
• Uses network interface controller (NIC) address to route traffic.
• Divides the network into segments by forwarding traffic to the correct location.
• Learns where devices are on the network by reading the 'from' (source) address on each frame received, and forwards data to the 'to' (destination) address.
• Builds an internal index of the network's connections.
• Broadcasts data to all connections if the destination address isn't yet in its index.
Router
• Connects and sends data between networks.
• Creates a WAN from a number of LANs.
• Unable to connect to a WAN without one.
• Uses IP (Internet Protocol) address to route traffic.
The Internet is a collection of interconnected networks around the world (WAN). It is not to be confused
with the World Wide Web, which is just a service that runs on the internet.
How the Internet works:
• A home router is connected to an Internet Service Provider (ISP) router, by telephone or fibre.
• The ISP is connected to a Domain Name Server (DNS) and other routers on the backbone of the Internet.
• Those further routers are connected to their own LANs, other routers and other servers.
The DNS
When you enter a URL, it has to go through a DNS. Here are the individual steps of a DNS lookup:
1. The URL is received by a DNS resolver server.
2. The resolver then queries a DNS root name server.
3. The root server responds with the address of the top-level domain (TLD) server for .com.
4. The resolver then makes a request to the .com TLD server.
5. The TLD server then responds with the IP address of the domain's name server, google.com.
6. Lastly, the recursive resolver sends a query to the domain's name server.
7. The IP address for google.com (8.8.8.8) is then returned to the resolver from the name server.
8. The DNS resolver then responds to the web browser with the IP address of google.com.
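The eight steps can be mimicked with plain dictionaries standing in for each server (all server addresses here are illustrative, not real DNS data):

```python
# Hypothetical lookup tables, one per server in the resolution chain.
root_server = {'.com': 'tld-server.example'}        # answers step 3
tld_server = {'google.com': 'ns.google.example'}    # answers step 5
name_server = {'google.com': '8.8.8.8'}             # answers step 7

def resolve(domain):
    """Walk the chain: root server -> TLD server -> domain's name server."""
    tld = '.' + domain.rsplit('.', 1)[1]            # e.g. '.com'
    hops = [root_server[tld], tld_server[domain]]   # servers consulted on the way
    return name_server[domain], hops                # final IP plus the route taken

ip, hops = resolve('google.com')
print(ip)  # 8.8.8.8
```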
Web Servers
Most of our use of the Internet is to browse websites. Websites need to be available across the globe 24/7 while being secure against hackers. To do this, they are held on special servers called Web Servers. Web servers:
• Connect to the Internet via a router.
• Each has a unique IP address.
• Store websites (hosting).
• Deal with client requests, e.g. HTTP GET requests for a page/resource.
When you enter the URL for a website, your browser sends the URL to a Domain Name Server (DNS). The DNS converts the domain name to its corresponding IP address and sends it back to the browser. The browser, now having the IP address of the web server it wants to reach, sends a GET request to the server, which returns the desired webpage/resource to the browser.
Some web servers store data/programs to be accessed by anyone on the internet. This is called cloud storage, or simply The Cloud. The Cloud can be accessed by any device, from anywhere and at any time. It has a much larger potential storage capacity and automatic backing up of data.
A Network Topology is the arrangement of all the elements making up a network. Older examples of different
topologies include:
• Bus networks, where each PC would be connected to a central backbone. This came at the risk of the
network crashing and each user losing their connection if the backbone was damaged.
• Ring networks, where all of the PCs would be connected in a ring. This topology had the same problem as bus networks: if any of the connections were to break, all users would lose their connection. It could be solved by using a double-ring connection, but that needed more cables and additional technology.
Star Network Topology
A Star Network Topology is the most popular type of network. It consists of a central switch, which all devices are connected to. This is useful as, if any connection were to break, it would only affect that computer. The switch is also intelligent, as it makes sure that traffic goes only where intended. This is an advantage over a hub, which was used in older star networks and directs traffic to all users on the network, lowering the available bandwidth and reducing security. However, if the switch were to break, then all the computers would lose their connection.
Full Mesh Network Topology
A Full Mesh Network Topology is where every device is connected to every other device (a switch is used in such a layout since each computer needs a component that allows multiple connections). The main advantage is that if any of the connections were to break, traffic can still be routed via another route. However, Full Mesh Networks require a lot of cabling and switch hardware, making them very expensive to set up and impractical for large networks.
Partial Mesh Network Topology
A good alternative to Full Mesh Topologies is a variant called a Partial Mesh Network Topology. Here, not every computer is connected to every other computer; only some computers are directly connected to each other, creating multiple routes between devices and multiple paths to reach a destination. This is a great compromise to a full mesh network as it allows for similar functionality with fewer cables.
Ethernet
Ethernet is a standard for network technologies used for communication on wired local area networks. It includes a number of protocols (rules to manage communication) and provides reliable, error-free communication between two points on a network. It also carries data and a MAC address, the source and destination address for the data to be sent. It is a wired connection, and has constantly been evolving as network topologies have evolved. In older bus networks, a shared backbone cable meant a protocol such as CSMA/CD (Carrier-Sense Multiple Access / Collision Detection) was needed to listen for communications before transmitting and to detect when two computers transmit at the same time.
Ethernet is important in star networks as duplex communication is possible. Duplex communication is sending and receiving data at the same time, because different wires are used to transmit and receive. That is why the CSMA/CD protocol isn't needed in star networks: data frames can't collide. In mesh networks, Ethernet can also be used, but additional protocols are needed for routing between switches.
Wi-Fi
Wi-Fi is a common standard used for wireless network connections nowadays. Like Ethernet, it includes a number of protocols (rules to manage communication) and provides reliable communication between two points on a network. It also carries data and MAC addresses, the source and destination addresses for the data to be sent. It is a wireless connection, and has constantly been evolving. Because wireless devices share the same radio channel, a protocol, CSMA/CA (Carrier-Sense Multiple Access with Collision Avoidance), is needed to listen for communications before transmitting, so that two devices don't transmit at the same time.
Wireless networks are identified by a unique Service Set Identifier (SSID). The SSID has to be
used by all devices which want to connect to that network. It can be set to automatically
broadcast to any wireless device within range of a wireless access point.
There are a few steps that can be taken in order to protect your wireless network:
• The SSID is automatically set, but can be customised.
• It can be made hidden in order to make it harder to detect.
• The SSID can be protected with a password, so even if it is found devices won't be able to
gain access to the wireless network.
Encryption
Wireless networks broadcast data, making it freely available, which is why it must be encrypted to be secure. This is done by scrambling the data into cipher text using a "master key" created from the SSID of the network and the password. Data is decrypted by the receiver using the same master key, so this key is never transmitted. Protocols used for wireless encryption include WEP, WPA and WPA2. A handshaking protocol is used to ensure that the receiver has a valid master key before transmission to the device begins.
Every device on a network has a Network Interface Card (NIC). Every NIC has a Media Access Control (MAC) address. It is used to
route frames on a local area network (LAN). Traditional MAC addresses are 48-bit numbers, usually written as 12 hexadecimal digits.
Internet Protocol address or IP Address is a unique number which is used to "address" / identify a host computer or node which
communicates over IP on the Internet. There are currently two versions of IP in use today: IPv4 and IPv6.
IPv4
There are two parts to every IP address. The first part is used to identify the network the traffic needs to
go to, and the second part identifies the specific host device within that network the traffic needs to go
to. In this example, we can see the first 3 bytes, 107.56.94, is being set aside to represent the network,
and the final byte, 111, is being used to represent the host within the network. The split between
network and host identifier doesn't always have to be made in the same way as shown here.
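The 3-byte-network / 1-byte-host split from the example can be sketched in Python (the function name is my own; real networks express the split with a subnet mask, which need not fall on a byte boundary):

```python
def split_ipv4(address, network_bytes=3):
    """Split a dotted IPv4 address into its network and host parts."""
    parts = address.split('.')
    network = '.'.join(parts[:network_bytes])
    host = '.'.join(parts[network_bytes:])
    return network, host

print(split_ipv4('107.56.94.111'))  # ('107.56.94', '111')
```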
IPv6
It is 128 bit, and is written as eight groups separated by colons (:), each group made up of four hex values
representing 16-bits.
Standards are common ground between different appliances. We can already see a lot of standards in our daily life, like the type of wall plug or a connector like USB. A wall plug is also a good example of how a standard can be inconsistently applied, as it varies from country to country. If we had this level of inconsistency in computer science, then software and hardware would never be able to communicate or work together, which is why standards are vital for this field and crop up in many different areas.
A good example of a standard is character sets, with the likes of ASCII and Unicode. Different devices need to be
able to recognise the letter "A" using the same binary number, or else complications would arise. Another good
example is HTML for displaying websites on the World Wide Web. Now, any device that uses the web can be sure to open a webpage, as webpages are all written in standard HTML.
A protocol is a set of agreed rules which allow two devices to communicate. If two devices
are using the same protocol, they will be able to communicate in some form. Without standard protocols, LANs and WANs wouldn't be able to transfer data. Many different protocols exist, each with their own
purpose, and these are the only ones needed for the GCSE:
Networks are very complex and there are a lot of things to be considered when
setting up a new network, such as:
• Different applications for different tasks: web pages, email, and file transfer.
• Encryption, security and authentication of users and data.
• Connection to remote servers, and maintaining open connections.
• Peer-to-peer and client-server models.
• Splitting data transfer into smaller packets and frames.
• Sequencing packets on arrival.
• Sending packets between routers on a WAN.
• Sending frames between devices on a LAN.
• Error checking packets and frames on arrival, and requesting data to be resent if necessary.
• Using different cables: fibre optic, twisted pair, coaxial.
• Using wireless with frequencies and channels.
• Simplex and duplex transmissions.
That is why layering is used to divide the complex network into smaller, simpler
tasks that are connected and work together. Each layer provides a service to
the layer above it and the hardware and/or software in each one has a defined
job/responsibility.
For example, if we wanted to send a webpage over the internet, we could have one layer that sends the webpage itself using HTTPS, another one that handles errors at each stage using TCP, another one to correctly route traffic using IP, and another layer to construct the appropriate MAC frames and send those out correctly over the physical network. We can write software for each of the layers individually without knowing anything about the other layers.
Malware is software which is specifically designed to disrupt, damage or gain unauthorised access to a computer
system. It is normally used for fraud or identity theft, and often exploits vulnerabilities in operating system software.
E.g. Viruses, Worms, Trojan Horses, Ransomware, Spyware, Adware, Scareware etc.
Phishing is the fraudulent practice of sending emails purporting to be from reputable companies in order to induce individuals to reveal personal information. It normally works by disguising oneself as a trustworthy source in an electronic communication, such as an email or fake website, and baiting the 'customer' into revealing passwords etc.
E.g. To find passwords and credit card numbers.
A Brute force attack is a trial-and-error method of attempting passwords and PIN numbers. Automated software is used to generate a large number of consecutive guesses:
E.g. By trying every word in the dictionary.
A Denial of Service (DoS) attack is when a server is flooded with useless traffic, causing the server to become overloaded and unavailable.
Data interception and theft is the unauthorised act of stealing computer-based information from an unknowing
victim with the intent of compromising privacy or obtaining confidential information.
E.g. To sniff usernames and passwords.
SQL injection is a technique used to view or change data in a database by inserting additional code into a text input
box, creating a different search string.
E.g. entering Smith' OR '1'='1 into a text box, so the search condition is always true.
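How injected text changes the query can be seen by building the search string naively (an illustrative sketch; the table and field names are made up, and the fix in practice is to use parameterised queries):

```python
def build_query(username):
    """Naive string concatenation - this is the vulnerability."""
    return "SELECT * FROM users WHERE name = '" + username + "'"

print(build_query("Smith"))
# SELECT * FROM users WHERE name = 'Smith'
print(build_query("Smith' OR '1'='1"))
# SELECT * FROM users WHERE name = 'Smith' OR '1'='1'  <- now matches every row
```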
Ransomware is a piece of software that maliciously encrypts all the data on a network, only reversing the procedure when its demands are met, e.g. payment in bitcoins.
A Virus is a piece of software, often hidden inside another piece of software, that may lie undetected. Once active, it can spread rapidly from computer to computer and corrupt a file system.
Penetration Testing
Penetration testing is a tool used to test that networks are secure. Tests are performed under a
controlled environment by a qualified person, who deliberately tries to break into a system or
simulate a genuine cyber-attack. It checks for current vulnerabilities and explores potential ones in
order to expose weaknesses in the system so they cannot be maliciously exploited.
The person carrying out the simulated attack may use software and hardware tools to help them in
their duties. Hardware can be used to create large volumes of simulated traffic, and specialist
software can be created to simulate viruses and other malware.
Anti-malware software
The most common form of anti-malware software is given the generic title of "anti-virus software", although in practice anti-virus packages can be very powerful and will do much more than just prevent viruses.
The anti-virus package will load when the computer is turned on and will constantly check for
symptoms of an attack. If a virus or other piece of malware is detected, it will be prevented from
operating and the file will be "quarantined" so that it can't cause any harm. Many viruses actively
try to shut down the anti-malware software and may not even cause an issue until they detect that
the anti-malware software is not operating.
Firewalls
A firewall can be a piece of software that acts as a barrier between a potential attacker and the computer system. The firewall software can be held on a server, or on a standalone computer, and inspects all traffic that is going to and coming from the system's internet connection.
All traffic on the network is sent in packets, and each packet contains information in its header.
The firewall software can monitor application and network usage and has the ability to block access
from certain computer users and disable traffic that may be perceived as a threat. A firewall is not
always 100% effective – an attacker could exploit a vulnerability which bypasses the firewall. Many
anti-malware packages have this feature built in.
Although rare, a firewall may be a dedicated piece of hardware that has the sole job of checking
every single packet and will block any inappropriate traffic.
Passwords
A password is typically a string of characters used to gain access to a service or system. It is also
possible to use a biometric password, where a fingerprint reader, iris scanner or even facial
recognition software is used to validate that the user is actually genuine. Special hardware "dongles"
can also be used which should be inserted into the computer before anyone can access the
computer.
When text-based passwords are used, a password policy may be enforced by the computer system which will force a user to have a "strong" password. Password length may be checked and any short passwords will be rejected. The longer the password, the more difficult it is to guess. The password policy may also force users to change their passwords regularly and may prevent them from reusing a previous password.
Encryption
Encryption is where data is translated into a coded form so that only authorised users can read it. Users must have the key in order to decrypt the coded file.
A good example, although far too simple to be effective on a computer network, is the Caesar Cipher, named after Julius Caesar, who used it to keep his messages secret. It works by shifting each letter of the message a certain number of places to the left or right in the alphabet. The key tells us how many places the letters have been moved.
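A minimal Python sketch of the Caesar Cipher: encryption shifts each letter right by the key, decryption shifts it back.

```python
def caesar_encrypt(message, key):
    """Shift each letter 'key' places to the right in the alphabet."""
    result = ""
    for ch in message.upper():
        if ch.isalpha():
            # Wrap around the alphabet using modulo 26
            result += chr((ord(ch) - ord('A') + key) % 26 + ord('A'))
        else:
            result += ch  # leave spaces and punctuation unchanged
    return result

def caesar_decrypt(message, key):
    # Decryption is just encryption with the opposite shift
    return caesar_encrypt(message, -key)

print(caesar_encrypt("HELLO", 3))  # → KHOOR
print(caesar_decrypt("KHOOR", 3))  # → HELLO
```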
An Operating System is the interface between a user and the computer hardware. It manages the memory for the processor, loading programs into memory.
It manages the file store, making decisions about where files are going to be stored and where they're going to be loaded from again. It manages the drivers
for connected peripherals.
It allows us to interact with the computer by using applications that run on the OS. Eg: Word, Chrome, Excel etc. There are also a number of utility programs,
some of which are built into the operating system that help to maintain the computer. Eg: encryption software, compression software, defragmentation
software.
User Interfaces
GUI (Graphical User Interface)
• Windows, icons, menus, pointers (WIMP)
• Visual interaction
• Interactive
• Intuitive
• Optimised for mouse and touch gesture input
Menu
• Successive menus presented to the user
• Single options chosen at each stage
• Often with buttons on a keypad
• Eg: cashpoints and chip-and-pin devices
User Interfaces
• Graphical (GUI)
• Command Line
• Menu
• Natural Language
Operating systems have many functions that run behind the scenes. A few of them are:
1. Multitasking
When you have more than one program open and running at the same time. The processor
allocates a small amount of time to each process and cycles between them. As this happens
so quickly it appears as if multiple programs are executing simultaneously.
2. Memory Management
When the operating system handles loading programs from the hard drive into main memory (RAM). If you open an application, it gets loaded into a slot in the memory. However, if you then open a program that is too large for any single free slot, the operating system doesn't move the existing programs around in memory, as this would be slow; instead the new program is split across the free spaces. This is called memory fragmentation.
3. Device Drivers
Your computer has to be able to output to a wide range of devices. A document printed from a word processor should look the same no matter what make or model of printer you send it to. The technology behind each printer, though, is very different. To overcome this
inconsistency we use device drivers. Device drivers translate operating system instructions
into commands that the hardware will understand. Each peripheral needs a device driver,
and many are already built into the operating system.
4. User management
Modern operating systems support having more than one user, each with their own settings and preferences. The operating system will retain settings for each user, such as icons, desktop backgrounds etc.
A client-server network may impose a fixed or roaming profile for its users. A fixed profile is where the settings are the same for all users on the network and cannot be customised. A roaming profile allows users to customise their settings and preferences, which follow them whenever they log in. An operating system also manages login requests to the network.
5. File Management
File extensions (.pptx, .xlsx, .psd, .exe) tell the operating system which application to load the file into. Eg: .docx tells the OS to load Word. The operating system may present a logical structure of files in folders, and allow the user to rename, delete, copy and move files.
Encryption Utilities
Encryption utilities are used to scramble plain text into cipher
text, making it unreadable to anyone who doesn't have the
decryption key. While these utilities can be incredibly
powerful for protecting sensitive data, they can also be
vulnerable to attacks if the encryption is weak or the key is
compromised. One weakness of encryption is that it can't
protect data while it's in use. Once data has been decrypted,
it's vulnerable to being intercepted or tampered with.
Another weakness is that if the key is lost or forgotten, the
encrypted data may be permanently unreadable. This is a risk
for individuals who use encryption to protect their own data,
as well as for organizations that rely on encryption to secure
their networks and systems.
Defragmentation Utilities:
Defragmentation utilities are designed to reorganize files on
a hard disk, which can speed up access to data and improve
system performance. However, defragmentation can also be
time-consuming and resource-intensive, especially on large
or heavily used disks. In addition, defragmentation can cause
wear and tear on mechanical hard drives, shortening their
lifespan. It's generally not recommended to defragment solid-
state drives, as they don't suffer from the same
fragmentation issues and the process can actually degrade
their performance.
Compression Utilities
Compression Utilities reduce file size to take up less disk
space and download quicker over the internet. Compressed
files must be extracted before they can be read. Data can either be lost during compression (lossy) or simply represented in a more compact way (lossless). Lossy formats such as JPEG reduce the quality of an image (MP3 does the same for sound); ZIP retains the original data in a compressed format.
Example of a simplistic compression process where data is stored as counts of repeated 1s and 0s (run-length encoding).
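A minimal sketch of that idea in Python, storing each run of identical bits as a (bit, count) pair:

```python
def rle_encode(bits):
    """Run-length encode a string of 1s and 0s as (bit, count) pairs."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        # Advance j to the end of the current run of identical bits
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

# 10 bits become 3 pairs: a saving when runs are long
print(rle_encode("1111000011"))  # → [('1', 4), ('0', 4), ('1', 2)]
```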
Conclusion:
Utility system software includes encryption, defragmentation, and compression utilities.
• Encryption utilities use algorithms to scramble plain text into cipher text, which can only be decrypted with the key.
• Defragmentation utilities reorganize files on a hard disk to reduce movement of read-write head and speed up access to files,
but should not be used on solid-state drives as it reduces their lifespan.
• Compression utilities reduce the size of a file for quicker download over the internet and can either lose data or represent it
in a different way using binary. Simplistic compression can store data by the number of 1s and 0s to significantly reduce the
amount of data stored.
Ethics is not so much about if something is legal or illegal but more about whether something is morally right or wrong.
The Internet presents many legal, cultural and ethical issues. Some of the benefits of the Internet are:
• Vast repository of knowledge
• Communication
• Education
• Research
• E-commerce
It also has many drawbacks, such as:
• Increase in piracy
• Distribution of illegal images
• Offensive content
• Fraud
• Hate speech
• Dissemination of fake news
Much concern has arisen about personal privacy with the prevalence of surveillance systems and cameras today. Face and number plate recognition technology has helped with crime immensely, but many view it as another loss of privacy.
Examples of other such concerns are:
• Cameras and surveillance systems have perfected number plate and face recognition so that people can easily be
tracked.
• Electronic tagging can identify where convicted criminals are with GPS tracking.
• Black boxes in cars can monitor how people drive for insurance and accident investigation purposes.
• Mobile phone signals can be tracked, which also allows technology such as Find My iPhone to enable you to find a
lost phone or tablet.
• GPS technology is used to automatically tag date, time and location where photographs are taken, and social
networking tools allow this data to easily be shared with friends and family.
• Schools will monitor internet and computer activity of their students.
• A workplace, where work patterns and phone calls are routinely recorded.
• Modern browsers record a history of sites visited and have options to retain passwords and credit card details to make browsing easier.
This may give rise to privacy issues if other computer users access this data. An increasing range of smart devices in our homes are voice-activated and capable of listening in on our daily activities. Some people argue that the government is collecting valuable information for counter-terrorism operations using this technology.
For:
• Data encryption makes it private.
• Voice input is convenient: the next step in user interfaces.
• Processing sound data allows for additional functionality.
• Can assist disabled users.
• Intelligence may save lives.
• If you have nothing to hide then privacy is not an issue.
• Data is not recorded for other purposes.
Against:
• Invasion of privacy.
• TV may also be taking video footage.
• Data is sent over the Internet for processing.
• The data may not be used just for the purpose intended.
Tim Berners-Lee
Sir Tim Berners-Lee, inventor of the World Wide Web, criticised moves by legislators in the UK and US which he sees as
an assault on the privacy of web users. In the United States, he is concerned that the Principle of Net Neutrality, which
treats all internet traffic equally, could be watered down by the Trump administration and the Federal Communications
Commission. He also said he was shocked by the direction the US Congress and Senate had taken when they voted to
scrap laws preventing internet service providers from selling users' data.
"Privacy online is as important as the trust between a doctor and a patient, we are talking about a human right, my ability
to communicate with people on the web and to do so without being spied on. The idea that all ISPs should be required to
spy on citizens and hold the data for six months is appalling."
There are many laws which have relevance to the field of computer science and technology.
The three main ones are:
• Data Protection Act 2018
• Computer Misuse Act 1990
• Copyright Designs and Patents Act 1988
The Computer Misuse Act also makes it an offence to make, adapt, supply or obtain articles for use in unlawfully gaining access to computer material or impairing the operation of a computer.
Access is defined in the Act as:
• Altering or erasing the computer program or data
• Copying or moving the program or data
• Using the program or data
• Outputting the program or data from the computer in which it is held (whether by having
it displayed or in any other manner)
• Unlawful access is committed if the individual intentionally gains access; knowing he is not
entitled to do so; and aware he does not have consent to gain access.
Many people have a confidence behind a screen they would not have in person, resulting in cyberbullying. The rapid spread of
technology in developing countries has led to positive cultural changes, such as the rise of democracy and poverty alleviation.
However, the diffusion of technology must be carefully controlled to prevent negative cultural consequences. Challenges of
inequality for the uneven distribution of technology within a country also remain. To participate in a high-tech marketplace,
developing nations require access to computers and individuals with technical expertise.
Problems arise when nations attempt to make overly rapid advances in education, producing graduates without a satisfactory
infrastructure to support the education system. Developed nations must moderate their influence and carefully orchestrate any
interference in the third-world development. Traditionally, most computer applications are designed by developers in developed
countries.
Environmental issues are those where the manufacturing and use of computers has had a negative impact on the
environment.
Resources are needed in order for computers to be produced, distributed and used, and resources are finite and will
eventually need replenishing. Metals and plastics are used to manufacture most components in a computer. Many
computer components are either hard to recycle or contain toxic materials, such as lead.
Energy is expended in distributing equipment and in using it. Many computers, such as web servers, domain name
servers and data centres, need to be left running continuously. This requires lots of energy to maintain. Additionally,
businesses, organisations, schools and homes all now have greater access to technology.
• People have new smartphones every couple of years.
• Many organisations replace computers after three or four years.
• Many people replace older technology before it fails simply because they perceive it to be old-fashioned or out of date.
All of this means that computers have a heavy impact on the environment, which is unlikely to decrease in the near future. However, many devices are now more power efficient than their predecessors and some companies have come up with innovative ways to save power.
Technology is gradually replacing traditional manual jobs, e.g. in banking. Recent years have seen a rise in online banking, which could slowly replace the role of physical banks as digital currencies become more popular. Trends like this may displace workers.
In April 2017, this scenario came into the news, when the Labour Party claimed that a future Labour government would bring in laws preventing banks from closing high street branches. The party said that this was part of its plan to rejuvenate the high street and to protect local communities. The Consumers'
Association reports that 1,046 local branches closed in the UK between December 2015 and January
2017. The Conservatives claimed Labour's plans would see corporation tax at 28 per cent and lead
to 500 billion pounds of extra debt. Labour said it would replace government's access to banking
protocol with legislation to prevent closures. The Labour Party said that the big four banks made
more than 11 billion in profit from their high street banks in 2015 and can afford to provide this vital
customer service instead of prioritising cost-saving measures that damage communities and small
businesses. Labour pointed to research that suggested lending to small business dropped by 63 per
cent in areas with recent branch closures and the loss of the local branch significantly diminishes the
ability of deprived communities and households to access even basic financial services.
Open source programs are freely distributable, and users are free to edit the source code to further modify and develop the software.
• Users can modify and distribute the software.
• Can be installed on any number of computers.
• Support provided by the community.
• Users have access to the source code.
• May not be fully tested.
Proprietary software is the opposite; it is firmly protected by the Copyright Designs and Patents Act.
Users buy a licence to use the software, and this usually restricts the number of users or machines
that the software can be installed on.
• Users cannot modify the software.
• Protected by the Copyright Designs and Patents Act.
• Usually paid for and licensed per user or per computer.
• Supported by the developers.
• Users do not have access to the source code.
• Tested by developers prior to release, although they may run beta programmes.
Flowcharts
- Definition: A flowchart is a diagram representing the sequence of steps in an algorithm.
Example:
- Start at the top.
- Input two numbers from the keyboard.
- Check if number one is greater than number two.
- Branch based on the comparison.
- Different outputs based on the decision.
Flowchart Symbols:
- Terminal: Start or end of a process.
- Process: Initialization, processing, or calculation.
- Decision: Yes or no outcomes.
- Input/Output: Input and output of data.
- Subroutine Call: Calling a separate flowchart.
- Line: Represents control passing between shapes.
Pseudocode
- *Definition:* Represents algorithms in a language between English and programming code.
- *Example:*
- Input two numbers.
- Check if number one is greater than number two.
- Print the largest number.
- *Characteristics:*
- More generic code applicable to various programming languages.
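The two-number example above could be written in Python like this (a sketch, with the comparison wrapped in a function so it can be reused):

```python
def largest(num1, num2):
    # Decision box: is the first number greater than the second?
    if num1 > num2:
        return num1   # "yes" branch
    else:
        return num2   # "no" branch

print(largest(7, 3))  # → 7
print(largest(2, 9))  # → 9
```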
Refining Algorithms
- Process: Modify existing algorithms when needed.
- Example: Refining a flowchart for finding the largest of three numbers.
- Steps:
- Identify areas requiring modification.
- Refinement example provided.
Abstraction is a computational thinking method that removes unnecessary details and includes only
the relevant details. It is used in problem-solving and computer science to focus on what's
important. For example, when saving a file, the user only needs to know how to create, open, move,
save, and delete files, instead of where and how it's stored.
In Interface Design, abstraction is crucial when creating software for a satellite navigation device.
The user interface should include inputs and output maps, with the map being important but not
essential. Keys should be generic and easily recognized by the user. The data structures should be
designed to store and manipulate the data in the programming language, to only show those
relevant, like distance or weather.
Flow Charts can be used to represent the algorithm in a visual form, which is an abstraction of the
actual underlying code. In programming, variables and calculations are likely to be included, but
these are abstracted from the programmer.
Decomposition simply means breaking a complex problem down into smaller, more manageable parts. It's one of the
three principles of computational thinking, along with abstraction and algorithmic thinking, which you need to be aware
of at GCSE. It's used all throughout our daily lives, like when brushing teeth.
The advantages of problem decomposition include
• making a problem easier to solve.
• Different people can work on different parts of the problem at the same time, reducing development time and cost.
In a computer game, you might have some artists working on the graphics, a special effects team that's able to work on the
particle effects, and you may have an audio team working on the sound. You also have program components that,
developed in one program, can then be easily used in other programs, making future iterations of the game much easier
and quicker to develop.
Algorithmic thinking
Algorithmic thinking is one of the three core principles of computational thinking, essential for solving
problems in a systematic manner, particularly highlighted in the context of GCSE curriculum. It involves
breaking down a problem into manageable steps and creating a set of rules or an algorithm that, when
followed, leads to a solution. For example, multiplication follows a set of learned rules, which, whether
executed by humans or computers, yields consistent results.
Taking the example of a word search puzzle, rather than searching randomly, one could employ
algorithmic thinking to more efficiently locate a word like "algorithm." A methodical approach would
involve starting from the top left corner, looking for the first letter, and then sequentially checking
adjacent letters to see if they continue to form the desired word. This approach systematically checks
each letter and its neighbours, improving the chances of success compared to random searching.
Another example, if you agree to meet your friends somewhere you have never been before, you would
probably plan your route before you step out of your house. You might consider the routes available and
which route is ‘best’ - this might be the route that is the shortest, the quickest, or the one which goes
past your favourite shop on the way. You would then follow the step-by-step directions to get there.
In this case, the planning part is like algorithmic thinking, and following the directions is like
programming.
Decomposition
Decomposition is another critical aspect where the problem is broken down into simpler parts. For a
word search, you first identify the starting letter and then validate its neighbours until the word is
completed or disproved.
Abstraction
Abstraction involves focusing on essential details while omitting the superfluous. In our word search,
this might mean focusing only on the specific letters that form the word we are searching for, ignoring
other letters in the grid. Data structures such as lists within lists in Python can be utilized for efficient
storage and referencing of grid positions.
Efficiency can be further optimized by looping through rows and columns systematically. Conditional
logic is employed to ensure searches do not exceed grid boundaries, and nested loops or conditions help
in determining the continuation of the word in the right direction.
Ultimately, employing algorithmic thinking not only simplifies problem-solving by providing a clear
methodology and sequence of steps but also aids in the creation of scalable solutions that can be
applied to various problems, making it a powerful tool in programming and computational tasks.
Linear Search
Linear Search starts at the beginning of a data set and checks each item in turn to see if it is the one being searched for.
Loop through every position in the provided array.
Check if the value at the position is equal to the provided value.
If it is, return the position where the value was found.
If it's not equal, try again with the next position.
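A minimal Python sketch of a linear search (the cereal list is a made-up example):

```python
def linear_search(items, target):
    """Check each item in turn; return its index, or -1 if not found."""
    for index in range(len(items)):
        if items[index] == target:
            return index
    return -1  # reached the end without finding the target

cereals = ["Bran Flakes", "Corn Flakes", "Muesli", "Porridge"]
print(linear_search(cereals, "Muesli"))  # → 2
print(linear_search(cereals, "Toast"))   # → -1
```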
Binary Search
Binary Search is a method of finding items in a data set by calculating a midpoint and checking whether the item there is the desired one. If the desired item is lower than the midpoint item, the right half is ignored and the algorithm is repeated on the left half of the list. If it is greater than the midpoint item, the left half is ignored and the algorithm is repeated on the right half. This method requires the data to be in order of a key field and is, on average, more efficient than a linear search.
For example, in a data set of 8 breakfast cereals with indexes 0 to 7, we calculate a midpoint by adding the left pointer and the right pointer and dividing the result by 2 (ignoring any remainder). This gives 3, so we check the item at index 3. If the left and right pointers meet, the item is either at the pointer or not in the list. This goes on until the item is found.
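The pointer arithmetic above can be sketched in Python (the sorted list is a made-up example):

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search range of a sorted list."""
    left = 0
    right = len(sorted_items) - 1
    while left <= right:
        mid = (left + right) // 2       # midpoint of the current range
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            left = mid + 1              # discard the left half
        else:
            right = mid - 1             # discard the right half
    return -1                            # pointers crossed: not in the list

print(binary_search([2, 5, 8, 12, 16, 23, 38, 56], 23))  # → 5
```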
Bubble Sort
Bubble Sort organises an unordered list by comparing the first two items and swapping them if the first is larger than the second. This continues with the second and third items, and so on to the end of the list, then back to the start, repeating until a full pass makes no swaps and the whole list is in order.
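A minimal Python sketch of a bubble sort:

```python
def bubble_sort(items):
    """Swap adjacent out-of-order pairs until a full pass makes no swaps."""
    n = len(items)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                # Swap the out-of-order neighbours
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```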
Merge Sort
Merge Sort is a "divide and conquer" sorting algorithm. It divides the input array into two halves, calls itself for
the two halves, and then merges the two sorted halves. The merge step is crucial and it is what gives the
algorithm its name and efficiency. The process of merging two halves involves comparing elements of both
halves one by one and placing the smaller element into the resulting array, ensuring the resulting array is
sorted.
• Consider the array [38, 27, 43, 3, 9, 82, 10].
• Split this into [38, 27, 43] and [3, 9, 82, 10].
• Keep splitting recursively: [38] and [27, 43]; [3, 9] and [82, 10].
• Then, [27], [43], [3], [9], [82], [10].
• Now, merge [27] and [43] into [27, 43]; merge [3] and [9] into [3, 9]; merge [82] and [10] into [10, 82].
• Continue merging back up: merge [38] with [27, 43] into [27, 38, 43], and [3, 9] with [10, 82] into [3, 9, 10, 82].
• Finally, merge [27, 38, 43] and [3, 9, 10, 82] into the single sorted array [3, 9, 10, 27, 38, 43, 82].
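A minimal Python sketch of merge sort, with the merge step written out explicitly:

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 items is already sorted
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # sort each half recursively
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element of the two halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # append whatever is left over
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # → [3, 9, 10, 27, 38, 43, 82]
```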
Insertion Sort
Insertion Sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It
is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge
sort, but has its advantages in certain scenarios.
• Start from the beginning of the array and assume that the first element is
already sorted.
• Take the next element and scan backwards through the sorted portion of the
array for a proper place to insert it.
• Shift all greater elements one position to the right to make room for the new
element.
• Insert the new element at its correct position.
• Repeat until the whole array is sorted.
Eg:
• Consider the array [22, 27, 16, 2, 18, 6].
• Start with the first element [22], it's "sorted".
• Take the next element 27. Since 27 > 22, it stays where it is.
• Next, take 16. Compare to 27 (move 27 up), then 22 (move 22 up), and insert
16 before 22.
• Now, take 2 and move 27, 22, and 16 up, then insert 2 in the first position.
• Continue this way for 18 and 6, inserting each in its correct position within
the sorted part of the array.
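The steps above can be sketched in Python:

```python
def insertion_sort(items):
    # The first element is treated as a sorted sub-list of one
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift greater elements one place right to open a gap
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current   # drop the element into the gap
    return items

print(insertion_sort([22, 27, 16, 2, 18, 6]))  # → [2, 6, 16, 18, 22, 27]
```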
Merge sort is generally preferred for sorting large datasets or linked lists due to its consistent O(n log n)
performance, while insertion sort is efficient for small datasets or arrays that are already partially sorted
since it has a best-case time complexity of O(n).
1. Syntax Errors
Syntax errors occur when the code does not follow the rules of the programming language. Every
programming language has its own set of syntactic rules, which must be adhered to for the compiler
or interpreter to understand and execute the code. Syntax errors are usually caught by the compiler
or interpreter during the translation of source code into machine code.
Eg:
• Misspelling a keyword like `int` as `itn` in C.
• Forgetting to close a parenthesis or bracket.
• Using an `=` sign when you meant to use `==` in an if statement.
Syntax errors prevent the program from running until they are corrected.
2. Logic Errors
Logic errors occur when a program compiles and runs, but it does not perform as intended due to a
mistake in the way the logic or algorithm is constructed. These errors do not typically produce error
messages, making them sometimes hard to track down. They manifest as incorrect outputs based on
the given inputs.
Eg:
• Calculating average as the sum of the numbers instead of sum divided by the count of
numbers.
• Writing a loop that terminates under the wrong condition, causing incorrect or infinite loops.
• Misplacing a calculation step inside a loop, causing it to execute more times than needed.
Logic errors require careful debugging and testing to identify and resolve.
3. Runtime Errors
Runtime errors occur during the execution of a program, and they can cause the program to stop
abruptly. These errors are typically not detectable by the compiler, as they are dependent on the
runtime conditions such as user input, file availability, and system resources.
Eg:
• Trying to access an array element out of its bounds.
• Attempting to open a non-existent file.
• Dividing a number by zero.
• Running out of memory.
Runtime errors need to be handled by implementing checks and exception handling in the code to
ensure the program can deal with unforeseen issues gracefully.
4. Compilation Errors
Compilation errors are specific to compiled languages like C and Java, and they occur during the
compilation phase. These errors often relate to the syntax, but they can also involve semantic errors
where the syntax is correct, but the semantics do not fit—for example, type mismatches.
Eg:
• Declaring a variable with a type that doesn't exist.
• Using a variable without declaring it first.
• Passing the wrong type of arguments to a function.
5. Semantic Errors
Semantic errors occur when statements are not meaningful within the language, even though they
are syntactically correct. These are often harder to diagnose because the program compiles and
runs.
Eg: Using the wrong variable in a calculation, so the program runs but produces a meaningless result.
Detecting and fixing programming errors involves a combination of thorough testing, code review,
and debugging. For syntax and compilation errors, the development environment typically provides
feedback on where the errors are and sometimes how to fix them. Logic errors, on the other hand,
require careful planning and understanding of the intended program behaviour to rectify.
Trace tables are a tool used to help understand and debug computer programs by tracking the changes in variable values as the
program executes. They are particularly useful for examining how a program's state changes step-by-step during execution,
especially during loops and conditional statements. Essentially, trace tables act as a manual simulation of a program's execution,
where you document each step and the resulting state of important variables.
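For example, a trace table for this short loop (a made-up snippet) would track the variables i and total after each pass:

```python
total = 0
for i in range(1, 4):    # i takes the values 1, 2, 3
    total = total + i

# Trace table:
#  i | total
# ---+------
#  - |   0    (before the loop)
#  1 |   1
#  2 |   3
#  3 |   6
print(total)  # → 6
```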
Selection (also known as decision-making) involves choosing between two or more paths in a program
based on certain conditions. This is typically implemented using if, elif, and else statements. Selection
helps in executing different parts of the code depending on the input or the state of the program.
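A short sketch of selection in Python (the mark and grade boundaries are made-up values):

```python
mark = 64   # example input

# Selection: exactly one branch runs, chosen by the conditions
if mark >= 70:
    grade = "Distinction"
elif mark >= 50:
    grade = "Pass"
else:
    grade = "Fail"

print(grade)  # → Pass
```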
Iteration (or looping) allows the program to execute a block of code repeatedly. This is particularly
useful when you need to perform a task many times, such as processing items in a list, repeating
operations until a condition is met, or simply counting. There are two main types of iteration: count-
controlled and condition-controlled.
• In count-controlled iteration, the loop is executed a specific number of times as determined by a
counter. This type of loop uses a counter to keep track of how many times to repeat the block of
code. The loop always stops when the counter reaches a certain value.
• In condition-controlled iteration, the loop continues to execute as long as a condition remains
true. The execution is based on the state of variables in the condition, and it's not predetermined
how many times the loop will run—it stops only when the condition evaluates to False.
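The two types of iteration can be sketched as follows (the halving loop is an illustrative example):

```python
# Count-controlled: the counter i decides when to stop.
total = 0
for i in range(5):      # runs exactly 5 times, with i = 0, 1, 2, 3, 4
    total += i

# Condition-controlled: repeats while the condition stays True;
# the number of passes is not known in advance.
n = 100
halvings = 0
while n > 1:
    n = n // 2          # integer division halves n each pass
    halvings += 1
```

The for loop always runs a fixed number of times; the while loop stops only when `n > 1` becomes False.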
1. Opening Files
You use the open() function to open a file. The open() function returns a file object and is most commonly
used with two arguments: open(filename, mode).
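A minimal sketch of opening a file for writing and then reading it back ("notes.txt" is a hypothetical filename):

```python
# "w" mode creates the file (or overwrites it if it already exists).
with open("notes.txt", "w") as f:
    f.write("hello\n")

# "r" mode opens an existing file for reading.
with open("notes.txt", "r") as f:
    contents = f.read()
```

Using `with` closes the file automatically when the block ends, even if an error occurs.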
SQL can be used to retrieve data from a database table. Eg, fetching one field from one record:
SELECT population
FROM world
WHERE name='Albania'
Arrays/Lists
Data from simple text files can be efficiently loaded into arrays or lists for rapid access, although this
method doesn't allow simultaneous access by multiple users on different systems. This format is
typically used for small data sets due to its quick data manipulation capabilities.
Arrays are data structures capable of storing multiple data items. In the context of Python, they are
often treated similarly to lists, but it's important to note the distinction in contiguity. One-
dimensional arrays in Python, like the example with a 'countries' array, employ zero-based indexing.
Python allows dynamic resizing, abstracting the process from the programmer by relocating the
entire data structure in memory.
Two-dimensional arrays, resembling tables, involve two sets of indexes for rows and columns. The
'countries' example showcases accessing elements like 'Angola' at index [0][0], and integers in other
positions. The key takeaway is that arrays, whether one or two-dimensional, exhibit a static size
once initialized.
In exams, OCR uses the array keyword, adhering to zero-based indexing. The syntax involves
declaring the array size and specifying element names with corresponding indexes. This syntax
extends similarly to two-dimensional arrays, where commas separate indexes for rows and columns.
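In Python, lists stand in for arrays; a sketch of the one- and two-dimensional 'countries' examples described above (the exact values are illustrative):

```python
# One-dimensional: zero-based indexing, so the first element is [0].
countries = ["Angola", "Brazil", "Chile"]
print(countries[0])     # "Angola"

# Two-dimensional: a list of lists, indexed [row][column].
table = [["Angola", 25],
         ["Brazil", 214]]
print(table[0][0])      # "Angola" at index [0][0]
print(table[1][1])      # 214
```

Commas separate the row and column indexes in OCR's exam syntax; Python uses a pair of square brackets instead.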
While the GCSE specification covers one and two-dimensional arrays, it's noted that higher
dimensions, such as three-dimensional arrays, are feasible. Visualizing a three-dimensional array as
a cube involves providing three index values for height, length, and depth.
The discussion briefly touches on the possibility of arrays with even higher dimensions, like four or
five-dimensional arrays. A conceptual visualization involves sets of cubes, requiring additional index
values for each dimension. This complexity demonstrates the flexibility of computers, even though
humans naturally perceive a three-dimensional world. While such high-dimensional arrays are
unlikely at the GCSE level, understanding the basics opens the door to more advanced concepts,
often encountered at higher education levels like A Level.
The random module provides functions for making random selections, such as picking a random item from a sequence or generating a random number. It must be imported before use:
import random
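A short sketch of two common functions from the random module (the list of colours is an illustrative assumption):

```python
import random

# random.choice picks one item from a sequence at random.
colours = ["red", "green", "blue"]
pick = random.choice(colours)

# random.randint(a, b) returns an integer from a to b inclusive.
dice = random.randint(1, 6)
```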
1. Type Check: Ensures the data entered matches the expected data type. If a program anticipates a
number and receives a character, it may crash. To prevent errors, programmers should handle inputs
as strings initially rather than converting them directly to numbers. For example, rather than using
choice = int(input("Enter your choice:")), which can crash if the input isn't a number, handling the input
as a string first allows for safer processing.
2. Range Check: Verifies that data falls within a specified range. For instance, if a user must input a
quantity, it should be between a defined minimum and maximum, such as 1 to 10. This check
prevents values outside of the acceptable range.
3. Presence Check: Confirms that necessary data has been entered. For applications like online forms,
required fields (e.g., email address) must not be left blank. The program must be equipped to handle
situations where no data is provided, distinguishing between null, empty strings, and zeroes.
4. Format Check: Ensures data is formatted correctly. This is crucial for dates, postcodes, and item
codes that require specific formats. For instance, dates might need to be entered as DD/MM/YYYY.
Proper formatting is necessary for accurate processing and storage.
5. Length Check: Validates that data meets certain length requirements, either a specific length or a
minimum length. This is common for barcodes, which might need to be exactly 13 digits, and for
passwords, which typically require a minimum number of characters to ensure security.
Using these validation techniques helps make programs more robust, user-friendly, and error-resistant,
enhancing overall functionality and user experience.
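• Type Check
Is the input the expected data type? A minimal sketch of the string-first approach described above (the helper name read_int is an assumption):

```python
# Read input as a string first, then convert only if it is safe to do so.
def read_int(text):
    """Return the integer value of text, or None if it is not a number."""
    try:
        return int(text)
    except ValueError:   # raised when text is not a valid integer
        return None
```

Eg: read_int("7") returns 7, while read_int("abc") returns None instead of crashing.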
• Range Check
Is the input in the correct range? (Eg: 1-10, A-Z)
choice = int(input("Enter your choice: "))
if choice >= 1 and choice <= 10:
    print("In range")
• Presence Check
Has mandatory/required data been entered? (Eg: Reject blank input)
choice = input("Enter your choice: ")
if choice == "":
    print("Input required")
• Format Check
Is the data in the correct format? (Eg: a date as DD/MM/YYYY)
date = input("Enter a date (DD/MM/YYYY): ")
if len(date) == 10 and date[2] == "/" and date[5] == "/":
    print("Format looks correct")
• Length Check
Does the input have the correct (or min/max) number of characters?
choice = input("Enter your choice: ")
if len(choice) < 4 or len(choice) > 15:
    print("Must be between 4 and 15 characters")
Anticipating and handling potential misuse or errors in programming is crucial to ensure the robustness and
reliability of software. Here are some key considerations:
1. Division by Zero: Programs must prevent division by zero, which is mathematically undefined. A simple
conditional check before performing division can prevent crashes.
2. Communication Errors: Programs that rely on internet connectivity should handle potential connection
issues gracefully. This could include enabling user cancellations, reporting errors, or resuming operations
once the connection is restored.
3. Peripheral Errors: Devices like printers may encounter issues such as running out of paper or ink. Programs
should verify that outputs (like printing a receipt) were successful and allow retry options.
4. Disk Errors: Programs must handle disk-related errors such as missing files, insufficient storage space, or
data corruption. This involves verifying the existence and integrity of files before reading or writing.
5. End-of-File Handling: When processing large amounts of data from files, programs should correctly identify
and handle the end of the file to avoid errors related to unavailability of subsequent records.
6. Authentication and Security: Protecting data access through authentication (e.g., usernames and
passwords) is essential. Mechanisms for recovering lost credentials should be secure. Data encryption and
protection against automated data submissions (using tools like reCAPTCHA) and SQL injection attacks are
also critical.
Each of these areas addresses potential vulnerabilities that could affect the program's operation, emphasizing the
importance of proactive error handling and security measures to maintain functionality and protect user data.
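Point 1 above can be sketched with a simple conditional guard (the function name safe_divide is an assumption):

```python
# Check the divisor before dividing, instead of letting the program crash.
def safe_divide(a, b):
    """Return a / b, or None when b is zero."""
    if b == 0:
        return None
    return a / b
```

Eg: safe_divide(10, 2) returns 5.0, while safe_divide(1, 0) returns None rather than raising ZeroDivisionError.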
Writing maintainable code is crucial for the longevity and functionality of programs,
especially when revisiting or sharing the code with others. It ensures that both the
original programmer and others can understand and expand the code efficiently.
Key points for writing maintainable code:
1. Readability and Organization:
• Use comments to outline different sections and purposes of the code,
enhancing comprehension.
• Sensible naming of functions and variables aids in indicating their roles and
the type of data they handle.
• Avoid single-letter variable names and employ indentation consistently to
structure the code clearly.
2. Examples of Poor and Good Maintenance:
• A poorly maintained program might be hard to decipher, even what basic
operations it performs, such as handling inputs.
• In contrast, a well-maintained version of the same program uses comments
effectively, has clear function boundaries, and has descriptive names, making
the program instantly more readable and understandable.
3. Coding Practices:
• Integrate comments not for every line, but to clarify sections of the code,
especially for complex logic or unusual methods.
• Utilize whitespace and indentation to visually separate code sections and
enhance readability.
• Implement functions to avoid redundant code and maintain structure.
4. Structural Tips:
• Declare constants at the beginning of the program.
• Use descriptive names for all elements and provide explanations where
necessary, particularly when variables are declared.
These practices not only aid in making the code understandable but also facilitate
the addition of new features or collaboration among team members. By adhering to
these guidelines, programmers can ensure their code remains accessible and
functional over time.
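A short sketch showing the practices above in one place (the names and VAT rate are illustrative assumptions):

```python
# Constant declared at the top of the program, named in capitals.
VAT_RATE = 0.2

def price_with_vat(net_price):
    """Return the price including VAT, using the constant above."""
    return net_price * (1 + VAT_RATE)
```

Descriptive names, a comment, a docstring, and a constant make the code's purpose clear without further explanation.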
Thousands of wires had to be plugged in for each program, one set for each single instruction of a
problem, and this took several days to set up. Typically, programs were changed only once
every few weeks due to the complexity involved.
Later, plug boards were permanently programmed with a repertoire of between 50 and 100
commonly used instructions that could be entered as a sequence. Programs were then typically
written in binary on paper tape. Writing programs directly in zeros and ones is extremely difficult
and error-prone, so a new method of writing code was needed.
Low-level languages such as assembly were the next step. They allow programmers to express
programs using simple mnemonic commands that could be easily translated into machine code.
These languages were closely mapped to the machine architecture and written for a specific processor.
Editors - a program that allows a user to create or modify digital content. Specifically, code editors
or text editors are software tools that programmers use to write and edit source code.
Error diagnostics - the process and methods used to identify, interpret, and resolve errors in
software or hardware. In programming, compilers and interpreters often provide error diagnostics
to help developers understand syntax errors, runtime errors, and logical errors in their code.
Run-time environment - the environment in which a program or application runs. It includes all the
software and hardware components that are necessary to execute the program. This can include the
operating system, system libraries, and hardware abstraction layers.
Translators - a type of software that converts code written in one programming language into
another language. The three main types of translators are compilers, interpreters, and assemblers.
Compilers translate the entire source code into machine code before execution. Interpreters
translate and execute code line-by-line at run time. Assemblers convert assembly language, which is
closer to machine language but still readable by humans, into machine code.