CSC218 Class - 2 - Note - 2 (Final) - 035000

The document outlines the evolution of programming languages from their inception in 1843 with Ada Lovelace's work on the Analytical Engine to modern languages like Python and JavaScript. It details significant milestones, key figures, and the purposes of various programming languages, highlighting their impact on technology and software development. Additionally, it discusses the importance of number systems in computer communication and the ongoing evolution of programming languages in response to technological advancements.


(Class 2 - Note 2)

CSC 218
(FOUNDATIONS OF SEQUENTIAL PROGRAMMING)

EVOLUTION OF PROGRAMMING LANGUAGES


2.1 Computers and Numbers
2.1.1 Decimal Numbers
2.1.2 Binary Numbers
2.1.3 Octal Numbers
2.1.4 Base 16 (Hexadecimal)
2.1.5 Converting Between Number Bases
2.1.5.1 Converting from Base 10 to any Base
2.1.5.2 Converting from any Base to Base 10 (Decimal)
2.1.5.3 Hexadecimal to Binary
2.1.5.4 Binary to Hexadecimal
2.1.5.5 Conversions Between other Bases
2.2 Programming Language Classifications
2.2.1 Low Level Languages (LLL)
2.2.1.1 Machine Language
2.2.1.2 Assembly Language
2.2.2 High Level Languages (HLL)
2.3 Generations of Programming Language

History and Evolution of Major Programming Languages


In today's world of technological advancement, there are very few environments that do not involve programming languages in some way. Programming languages sit at the heart of technology: they are the part of a machine or system that allows it to function and behave as intended, and the means by which a system carries out its logical operations. They laid the foundation for worldwide connectivity. Without the birth of programming languages, modern technological advancement would not have been possible.

A brief history of programming languages and their usages.

1843: First Programming Language

Ada Lovelace devised what is regarded as the first computer program in 1843. She worked with Charles Babbage (widely known as the Father of Computers) on his proposed mechanical computer, the Analytical Engine. Lovelace analyzed how numbers could be operated on by the machine and recognized that they could represent patterns and processes, not just quantities. She then devised a procedure for the Analytical Engine to compute the Bernoulli numbers, which is considered to be the first computer program. For this, Ada Lovelace is credited as the first programmer.

1957-1959: FORTRAN, COBOL and LISP

In these three years, the programming languages FORTRAN, COBOL, and LISP were created. FORTRAN was designed for complex mathematical, scientific, and statistical operations. Its name stands for FORmula TRANslation, and the credit for its creation goes to John Backus. The language is still widely used in mathematical analysis and computation. COBOL, the Common Business Oriented Language, was developed under the guidance of Dr. Grace Murray Hopper and could run on many different types of computer. Its applications include banking, telephone systems, credit card processing, hospital and government computing, automotive systems, and traffic signals. LISP, a list processing language devised by John McCarthy, was created for work in Artificial Intelligence and designed for easy processing of symbolic data.

1970: Pascal

The credit for the creation of the Pascal language goes to Niklaus Wirth. It was named in tribute to the French mathematician, philosopher, and physicist Blaise Pascal. It is a high-level programming language with a fairly gentle learning curve. The main purpose of its development was to teach computer programming, in particular structured programming and data structures. A derivative of Pascal called Object Pascal was commonly used for Windows application development, and Pascal was also used for the Apple Lisa and early versions of Skype.

1972: C

Almost every programmer and person with a technical background has heard of the C language. Many consider it the first widely adopted high-level language: it is closer to natural language, hiding the complexity of machine code. It was developed by Dennis Ritchie at Bell Labs.

The main purpose of its creation was to implement the Unix operating system, which remains popular today as an open-source operating system. C became the foundation for many later languages, including Java, C#, JavaScript, Ruby, Go, Perl, Python, and PHP. It is primarily used in cross-platform programming, system programming, Unix programming, and game development.

1983: C++

C++ is considered an extension of the C language with support for Object-Oriented Programming, adding enhancements such as classes, virtual functions, and templates. The credit for its creation goes to Bjarne Stroustrup. It is one of the most popular and widely used languages. It is heavily used in game development tools and game engines, as well as in high-performance software such as Photoshop. C++ was also used to build the runtimes behind other languages and frameworks, such as Node.js. Its primary applications are commercial application development, embedded software, and client/server applications.

1983: Objective-C

Objective-C is an object-oriented extension of the C language devised by Brad Cox and Tom Love at Stepstone. It is a general-purpose, high-level language that adds message-passing functionality based on the Smalltalk language. Its main purpose became Apple programming: it was primarily used to write software for Apple's macOS and iOS operating systems.

1987: Perl

Perl is a scripting language designed mainly for text processing. It was developed by Larry Wall at Unisys for report processing on Unix systems. It is known for its power, modularity, and versatility. The primary uses of Perl are CGI scripting, database applications, web programming, system administration, and graphics programming. It has been used by Amazon, IMDb, and others.

1991: Python

Python is considered one of the easiest programming languages to learn. Its learning curve is gentle, and it is often recommended for beginners because the language reads close to human language. Its syntax is simple, and programmers can accomplish a lot with very little code. The language was developed by Guido van Rossum. Frameworks written in Python are used by popular social media apps like Instagram. Python is now widely used for machine learning and AI, as well as for web applications, software development, and information security.

1993: Ruby

Many popular and powerful languages emerged around 1993, and Ruby was one of them. It was designed by Yukihiro Matsumoto, influenced by Perl, Ada, Lisp, Smalltalk, and others, with a focus on being productive and enjoyable to use. It is used for web application development and is the foundation of the Ruby on Rails framework, used by Twitter, Hulu, and others. Execution of Ruby code is relatively slow, but the language lets programmers put a working program together quickly.

1995: Java

Java is one of the most widely used and popular programming languages across the globe. It was developed by James Gosling at Sun Microsystems. The language has enormous reach: it was originally intended for networking tools and consumer devices but was extended to deliver information across the World Wide Web. Java can be found everywhere, from large servers to small mobile devices. Its applications range over network programming, web applications, software development, mobile application development, and GUI development. It is also used to develop native Android apps.

1995: PHP

PHP is a widely known language for dynamic web programming. It mainly runs on web servers to generate web pages and serve the required resources. It was created by Rasmus Lerdorf. It originally stood for Personal Home Page but was later renamed PHP: Hypertext Preprocessor. Its major application is building dynamic web pages and server-side functionality. PHP has been used by popular sites and products such as Facebook, Wikipedia, and WordPress.

1995: JavaScript

JavaScript is a web scripting language known to all web developers. It was developed by Brendan Eich. It is primarily used in web pages to handle interactions in the browser; almost every web page uses JavaScript. It is a high-level scripting language that makes web pages dynamic on the client side without putting pressure on the server side. It is used for web form submission and validation, user interface interactivity, animations, and tracking.

JavaScript offloads work from the server by running on the user's computer and performing most client-side computation that does not require data from the server. Today, JavaScript has expanded into many frameworks and can be used to develop web applications, websites, server-side programs, desktop applications, and mobile applications. JavaScript has few boundaries in the modern programming environment.

2000+

Many new programming languages and frameworks were created after 2000. Most were built on older programming languages, with powerful extensions and improved security. Some honorable mentions are React, Angular, C#, Scala, Go, and Swift. C# combines C-family syntax in the tradition of C++ with the simplicity of Visual Basic; it was primarily used to develop Microsoft products and desktop applications.

Scala is a programming language that integrates functional programming with object-oriented programming. Swift is another powerful programming language, devised by Apple as the replacement for Objective-C. Like Objective-C, Swift is used to develop Apple software and applications, with greater simplicity and efficiency.

The modern world cannot do without programming languages. They have formed a foundation for powerful systems and technologies to grow. Programming languages have inspired innovation and development, and the ability to bring the physical world into the virtual one, making tasks easier and more enjoyable. Most of the programming languages prevalent today are built on the concepts of older ones.

The newer languages make work simpler and more efficient, with fewer chances of error and a higher level of security. Machine learning, data mining, and Artificial Intelligence all use programming languages as a core ingredient. Many businesses rely heavily on software to run day-to-day tasks efficiently. This programming environment will only continue to develop, with more modular, simple, and powerful languages to come.

Conclusion: Programming languages are still in the process of evolution in both industry as well as in
research. Only time will tell where this journey of programming languages will reach and what this
technology will look like when it reaches its pinnacle.

---

This table highlights key milestones in the evolution of programming languages, from the theoretical
beginnings to modern languages used in various domains.

S/No | Inventor | Year of Invention | Language Invented | Purpose of Invention | Usage | Disadvantages
1 | Ada Lovelace | 1843 | Analytical Engine program | First computer program | Theoretical calculations | Limited practical application
2 | Konrad Zuse | 1940s | Plankalkül | High-level programming | Theoretical calculations | Not widely adopted
3 | John Backus | 1957 | Fortran | Scientific computing | Numerical analysis, simulations | Limited string handling
4 | Grace Hopper (with the CODASYL committee) | 1959 | COBOL | Business applications | Business data processing | Verbose syntax
5 | John McCarthy | 1958 | Lisp | Artificial intelligence | AI research, computer science | Steep learning curve
6 | Niklaus Wirth | 1970 | Pascal | Teaching structured programming | Education, application development | Limited commercial adoption
7 | Alan Kay | 1972 | Smalltalk | Object-oriented programming | GUI development, education | Performance issues
8 | Dennis Ritchie | 1972 | C | Systems programming | Operating systems, embedded systems | Error-prone, security risks
9 | Alain Colmerauer | 1972 | Prolog | Logic programming | Artificial intelligence, expert systems | Steep learning curve
10 | John Kemeny, Thomas Kurtz | 1975 | BASIC (ANSI Standard) | Simple programming | Education, beginners | Limited functionality
11 | Niklaus Wirth | 1977 | Modula-2 | Modular programming | Systems programming, education | Limited adoption
12 | Bjarne Stroustrup | 1983 | C++ | Object-oriented extension of C | Games, systems programming | Complexity, compatibility issues
13 | Alfred Aho, Peter Weinberger, Brian Kernighan | 1977 (new AWK 1985) | AWK | Text processing | Data processing, report generation | Limited functionality
14 | Guido van Rossum | 1991 (Python 2.0 in 2000) | Python | General-purpose programming | Web development, data analysis, AI, machine learning | Slow performance
15 | Rasmus Lerdorf | 1995 | PHP | Server-side scripting | Web development, web applications | Security concerns
16 | James Gosling | 1995 (Java 1.0 in 1996) | Java | Platform-independent programming | Android apps, web development, enterprise software | Verbose syntax, performance issues
17 | Brendan Eich | 1995 | JavaScript | Client-side scripting | Web development, front-end and mobile app development | Browser compatibility issues, security concerns
18 | Rich Hickey | 2007 | Clojure | Functional programming on the JVM | Data analysis, concurrency, AI | Steep learning curve
19 | Ryan Dahl | 2009 | Node.js (a JavaScript runtime) | Server-side JavaScript | Web development, real-time applications | Callback hell, performance issues
20 | Robert Griesemer, Rob Pike, Ken Thompson | 2009 | Go (Golang) | Concurrent and parallel programming | Cloud computing, network programming, distributed systems | Limited libraries, learning curve
21 | Chris Lattner and others at Apple | 2014 | Swift | iOS and macOS app development | Mobile app development | Limited cross-platform support
22 | Graydon Hoare | 2010 (stable release 2015) | Rust | Systems programming with safety guarantees | Systems programming, web development, embedded systems | Steep learning curve, compatibility issues
1970 - 1985
This period saw the emergence of influential languages like C, Prolog, and C++, which shaped the programming landscape and continue to impact software development today.

1996 to date
This period saw the rise of languages like Java, Python, and JavaScript, and of newer languages like Go and Rust, which are designed for specific use cases and have gained popularity in various domains.

Learners will apply number-base conversion techniques for binary, octal, decimal, hexadecimal, etc. to establish the reasons for number-base conversion in computer communication:

1. Different number systems (such as decimal, binary, octal, and hexadecimal) are used to represent data in computers.
2. Converting between these bases allows us to express data in a format that suits the specific context or requirement.

Evolution of Programming Languages:


Computer and Number Systems - (as a basis of communicating with the computer: Machine Language, Assembly Language, and High Level Languages).
Over the years, computer languages have evolved from machine language to high-level languages. To communicate with computers, data must be converted into forms more readily acceptable to them. Computers understand only a simple language consisting of 0s and 1s, with a 1 representing the presence of an electrical signal in the signal path and a 0 representing its absence. Instructions are therefore coded into the computer's memory as 0s and 1s. This method of instructing the computer is called machine language.
Because the computer operates on programs coded in 0s and 1s, it is difficult for programmers to write programs that way. Programmers find it easier to write programs in a language approaching English. Such a language is called a high-level language.

2.1 COMPUTERS AND NUMBERS


When digital computers store and process data, they make use of numbers in base two. Several other
number-bases also have uses in computing and so the general idea of number bases, together with the
methods for converting from one base to another must be developed. Thus, understanding and being able
to convert between number bases is critical when dealing with low-level programming, data manipulation,
computer architecture, and networking, among other areas.

Computer Number Systems


To a computer, all information is written as a series of 0s and 1s; when we type words into a computer, it translates them into numbers. Computer number systems are how numbers are represented within a computer system's architecture.

Humans have been counting for a long time. To do so, we use systems that relate unique symbols to specific
values. This is called a number system, and it is the technique that we use to represent and manipulate
numbers. A number system must have unique symbols for every value, be consistent, provide comparable
values, and be easily reproducible.

A computer understands the positional number system where there are only a few symbols called digits, and
these symbols represent different values depending on the position they occupy in the number.

Number systems are one of the most fundamental concepts that computer scientists must learn. It’s an
important step for anyone who wants to become a computer scientist or programmer.

You are probably most familiar with the decimal system that forms the basis of how humans count. The
decimal system has a base of 10, because it provides 10 symbols to represent all numbers: 0, 1, 2, 3, 4, 5,
6, 7, 8, 9

Humans use the decimal system because we have 10 fingers to count on, but machines don’t have that
luxury. So, we’ve created other number systems that perform the same functions. Computers represent
information differently than humans, so we need different systems to represent numbers.

Computers support the following number systems:

• Binary
• Octal
• Decimal
• Hexadecimal

Reasons for Number Bases Conversion in Computer Communication


Converting between number-bases is essential in various aspects of computer communication and
programming. Let’s explore why:
Representation of Data:
1. Different number systems (such as decimal, binary, octal, and hexadecimal) are used to represent
data in computers.
2. Converting between these bases allows us to express data in a format that suits the specific context
or requirement.

Low-Level Programming:
1. In low-level programming languages (like assembly), direct manipulation of memory addresses and
registers often involves binary or hexadecimal representations.
2. Understanding and converting between these bases are crucial for efficient memory
management and bitwise operations.
Data Manipulation:
1. When performing bitwise operations (AND, OR, XOR), it’s common to work with binary
representations.
2. Converting between bases enables efficient manipulation of data at the bit level.
Computer Architecture:
1. Computer architecture relies on binary representation for instructions and data
2. Converting between bases helps engineers design efficient processors and memory systems.

Networking and Communication Protocols:


1. Network protocols often use hexadecimal or binary representations for addressing, headers, and data.
2. Converting between bases ensures proper communication across networks.

Error Detection and Correction:


1. Checksums and error-correcting codes (such as CRC) use binary
representations.
2. Converting between bases is essential for verifying data integrity.
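The bitwise point above can be illustrated with a short Python sketch. The values here are arbitrary examples, not taken from the text; `format` is used to print the same quantity in binary or hexadecimal on demand:

```python
flags = 0b1101_0110  # an example byte of status flags
mask = 0x0F          # hex mask selecting the low 4 bits

print(format(flags & mask, "04b"))  # 0110 — AND isolates the low nibble
print(format(flags | 0x01, "02X"))  # D7 — OR sets the lowest bit
print(format(flags ^ 0xFF, "08b"))  # 00101001 — XOR flips every bit of the byte
```

Note how binary literals suit bit-level reasoning while hex keeps the output compact.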
In summary, understanding and being able to convert between number bases are critical skills for anyone
working with computers, whether in programming, networking, or system design.

Advantages of Hexadecimals
Hexadecimal representation has several advantages in the context of computer science and programming:
1. Compactness:
(a) Hexadecimal (base 16) uses fewer digits to represent large values compared to decimal (base 10).
For example, the decimal number 255 is represented as FF in hexadecimal, which is more concise.
2. Direct Mapping to Binary:
(a) Each hexadecimal digit corresponds to a 4-bit binary sequence.
(b) This direct mapping makes it easy to convert between hexadecimal and binary.
For example: Hexadecimal 1A corresponds to binary 0001 1010.
3. Memory Addresses:
(a) Memory addresses in computer systems are often expressed in hexadecimal.
(b) Hexadecimal provides a convenient way to represent memory locations and offsets.
4. Color Representation:
(a) Hexadecimal is commonly used to represent colors in web design and graphics.
(b) Each color channel (red, green, blue) can be expressed as a 2-digit hexadecimal value
(e.g., #FF0000 for red).
5. Bitwise Operations:
(a) When performing bitwise operations (AND, OR, XOR), hexadecimal is useful.
(b) It simplifies working with individual bits and flags.
6. Debugging and Hex Dumps:
(a) Hexadecimal is used in debugging tools and hex dumps.
(b) It allows developers to inspect memory content and binary data more easily.
7. Representation of Binary Data:
(a) Binary files (such as executables, images, and audio) are often displayed in hexadecimal format.
(b) Hexadecimal provides a concise and readable representation of raw binary data.
In summary, hexadecimal is a versatile representation that balances readability, compactness, and direct
correspondence to binary. It is a fundamental concept in computer science and programming.
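Points 2 and 4 above can be checked with a brief Python sketch (the color value is an arbitrary example, not from the text):

```python
# Each hex digit maps to exactly four bits: 1A → 0001 1010.
print(format(0x1A, "08b"))  # 00011010

# Web color #FF0000: one two-digit hex value per channel (red, green, blue).
color = "FF0000"
r, g, b = (int(color[i:i + 2], 16) for i in (0, 2, 4))
print(r, g, b)  # 255 0 0
```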

COMPUTER AND CONVERSIONS


2.1.1 Decimal Numbers
Decimal numbers, also known as denary numbers or numbers to base 10, are the numbers in everyday use, because ten is the base of the number system. To write a number in decimal, we use the ten digit symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Suppose we have the decimal number 2,153. Let's look at just what the number means:
TH H T U
2  1 5 3
Basically, it means 2 Thousands, 1 Hundred, 5 Tens and 3 Units. This can also be expressed in powers of 10 as follows:

NUMBER:        2              1             5            3
PLACE HOLDER:  10^3           10^2          10^1         10^0
RESULT:        2×10^3 = 2000  1×10^2 = 100  5×10^1 = 50  3×10^0 = 3   (total 2153)

In the decimal system the place-holder for each digit is a power of 10. Moving from right to left in the table corresponds to an increase in magnitude by a factor of 10 at every step.
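The place-value breakdown above can be checked with a short sketch (Python is used here purely for illustration):

```python
# Expand 2153 into its place values: digit × 10^position.
digits = [2, 1, 5, 3]
terms = [d * 10 ** p for d, p in zip(digits, range(len(digits) - 1, -1, -1))]

print(terms)       # [2000, 100, 50, 3]
print(sum(terms))  # 2153
```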

2.1.2 Binary Numbers

The binary (or base 2) number system uses the two digits 0 and 1 to represent numbers and is of particular importance in computing. In a computer's memory, elements can be in one of two states, OFF or ON, corresponding to the digits 0 and 1 respectively. Each such element represents one binary digit, or bit. All internal processing and calculation in computing is done in binary. In an analogous manner to base 10, we use a weighted sum of powers of 2 to express numbers. The place-holder for each digit is therefore a power of 2, and moving from right to left corresponds to an increase in magnitude by a factor of 2 at every step. For example, consider the following table.

PLACE HOLDER:   2^3  2^2  2^1  2^0
WEIGHT:         8    4    2    1
BINARY NUMBER:  1    1    0    1

The binary number in the table, 1101, is sometimes written with the subscript "2" to indicate that it is a base 2 number, i.e. 1101₂. To obtain the decimal representation of 1101 we multiply each binary digit by its column's weight and sum the values. Starting from the right,
1 × 2^0 + 0 × 2^1 + 1 × 2^2 + 1 × 2^3 = 1 + 0 + 4 + 8 = 13
Hence 1101₂ = 13₁₀.
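The weighted-sum calculation above can be sketched in Python. The helper name `to_decimal` is illustrative; Python's built-in `int` with a base argument performs the same conversion:

```python
def to_decimal(digits: str, base: int) -> int:
    """Multiply each digit by its column weight and sum (written in Horner's form)."""
    value = 0
    for d in digits:
        value = value * base + "0123456789ABCDEF".index(d.upper())
    return value

print(to_decimal("1101", 2))  # 13
print(int("1101", 2))         # 13 — the built-in equivalent
```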

2.1.3 OCTAL NUMBERS

Octal numbers are numbers to base 8. Eight symbols are used in the octal system: 0, 1, 2, 3, 4, 5, 6, and 7. Its place holders increase in powers of 8. Octal numbers are used as a shorthand for binary. Octal used to be popular when computers employed 12-bit, 24-bit or 36-bit words for data and addressing; however, as modern computers use 16-bit, 32-bit or 64-bit words, octal is rarely used nowadays. Consider the table given below:

PLACE HOLDER:  8^2  8^1  8^0
WEIGHT:        64   8    1
OCTAL NUMBER:  1    5    5

The octal number in the table, 155, is sometimes written with the subscript "8" to indicate that it is a base 8 number, i.e. 155₈. To obtain the decimal representation of 155 we multiply each octal digit by its column's weight and sum the values. Starting from the right,
5 × 8^0 + 5 × 8^1 + 1 × 8^2 = 5 + 40 + 64 = 109
Hence, 155₈ = 109₁₀

2.1.4 BASE 16 (HEXADECIMAL)

The hexadecimal (often called hex) or base 16 number system uses sixteen symbols, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, to represent numbers. The first ten digits are the same as in the decimal system, while the remaining six, A to F, correspond to the numbers 10 to 15 respectively. A computer carries out all its operations in binary, but as numbers become large the binary representation requires increasingly many digits (0s and 1s) and becomes difficult for humans to read and write. For this reason computers often display information, such as memory addresses, in hexadecimal, as the format is more compact. In base 16, we use a weighted sum of powers of 16 to express numbers. The place-holder for each digit is therefore a power of 16, and moving from right to left corresponds to an increase in magnitude by a factor of 16 at every step. Consider the table given below:

PLACE HOLDER:  16^3  16^2  16^1  16^0
WEIGHT:        4096  256   16    1
HEX NUMBER:    1     2     B     F

The hex number in the table, 12BF, can be written with the subscript "16" to indicate that it is a base 16 number, i.e. 12BF₁₆. To obtain the decimal representation of 12BF₁₆, we multiply each hex digit by its column's weight, noting that B represents 11 and F represents 15, and sum the values, i.e.
15 × 16^0 + 11 × 16^1 + 2 × 16^2 + 1 × 16^3 = 15 + 176 + 512 + 4096 = 4799
Hence, 12BF₁₆ = 4799₁₀

2.1.5 CONVERTING BETWEEN NUMBER BASES


We will look at converting integers between different number systems; with focus on those ‘bases’ most
commonly used in computing, i.e. 2 (binary), 10 (decimal) and 16 (hex). We will also present some results
for other bases including octal (base 8). The ability to convert back and forth between different bases is a
fundamental skill required of anyone working in the area of computing.

2.1.5.1 Converting from Base 10 to Any Base


Converting from base 10 (decimal) to any other base is easy. Start with the decimal number to be
converted and repeatedly divide by the new base number retaining the remainder at each step. We
shall illustrate with some examples.
(i). Base 10 (Decimal) to Base 2 (Binary)
Example Convert the decimal number 475 to a binary number.
Solution
Start by dividing 475 by 2 and keep the remainder. Repeat the process until we can no longer
perform a division.
475 / 2 = 237, remainder 1
237 / 2 = 118, remainder 1
118 / 2 = 59, remainder 0
59 / 2 = 29, remainder 1
29 / 2 = 14, remainder 1
14 / 2 = 7, remainder 0
7 / 2 = 3, remainder 1
3 / 2 = 1, remainder 1
1 / 2 = 0, remainder 1
Now read the binary number from the bottom to the top: 111011011.
Hence 475₁₀ = 111011011₂
(ii). Base 10 (Decimal) to Base 16 (Hexadecimal)
Example: Convert the decimal number 795 to a hex number.
Solution
Start by dividing 795 by 16 and keep the remainder. Repeat the process until we can no longer
perform a division.
795 / 16 = 49, remainder 11 (= B in hex)
49 / 16 = 3, remainder 1
3 / 16 = 0, remainder 3
Now read the hex number from the bottom to the top: 31B. Hence 795₁₀ = 31B₁₆

(iii). Base 10 (Decimal) to Base 8 (Octal)


Example : Convert the decimal number 5361 to an octal number.
Solution
Start by dividing 5361 by 8 and keep the remainder. Repeat the process until we can no longer
perform a division. The octal number system is similar to decimal except that it only uses the eight
digits from 0 to 7.
5361 / 8 = 670, remainder 1
670 / 8 = 83, remainder 6
83 / 8 = 10, remainder 3
10 / 8 = 1, remainder 2
1 / 8 = 0, remainder 1
Now read the octal number from the bottom to the top: 12361. Hence, 5361₁₀ = 12361₈
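The repeated-division procedure used in the three examples above can be written as one generic routine. This is a sketch; the function name `from_decimal` is illustrative:

```python
def from_decimal(n: int, base: int) -> str:
    """Repeatedly divide by the base, keeping each remainder."""
    if n == 0:
        return "0"
    symbols = "0123456789ABCDEF"   # digit symbols for bases up to 16
    out = []
    while n > 0:
        n, r = divmod(n, base)     # quotient and remainder in one step
        out.append(symbols[r])
    return "".join(reversed(out))  # read the remainders from bottom to top

print(from_decimal(475, 2))   # 111011011
print(from_decimal(795, 16))  # 31B
print(from_decimal(5361, 8))  # 12361
```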
2.1.5.2 Converting from Any Base to Base 10 (Decimal)
Converting to base 10 (decimal) from any other base is also fairly straightforward. We shall consider the
place value method. The method is based on the “place values” of the digits in the number being converted.

To convert to base 10 we calculate as follows:

(i). Binary to Decimal


Example:
(a). Convert the binary number 11001 to a decimal number.
(b). Convert the binary number 11011101 to a decimal number.
Solution
(a) The place values of digits in a binary number are powers of 2. To convert 11001 proceed as follows:
1 × 2^4 + 1 × 2^3 + 0 × 2^2 + 0 × 2^1 + 1 × 2^0 = 16 + 8 + 0 + 0 + 1 = 25, so 11001₂ = 25₁₀.

As base 2 only uses the digits 0 and 1, this approach essentially involves adding the non-zero place values together.
(b) From the right, adding the place values corresponding to the non-zero digits in 11011101 gives:
128 + 64 + 16 + 8 + 4 + 1 = 221, so 11011101₂ = 221₁₀.
(ii). Hexadecimal to Decimal


Example:

(a). Convert the hexadecimal number 3B2 to a decimal number.
(b). Convert the hexadecimal number 4BAE to a decimal number.
Solution
(a) The place values of digits in a hex number are powers of 16. To convert 3B2 to its decimal
representation, starting from the right, multiply each digit in 3B2 by the appropriate power of 16:
2 × 16^0 + 11 × 16^1 + 3 × 16^2 = 2 + 176 + 768 = 946, so 3B2₁₆ = 946₁₀.
(b) Similarly, for 4BAE:
14 × 16^0 + 10 × 16^1 + 11 × 16^2 + 4 × 16^3 = 14 + 160 + 2816 + 16384 = 19374, so 4BAE₁₆ = 19374₁₀.
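These place-value calculations can be checked against Python's built-in `int`, which accepts an explicit base:

```python
# Binary-to-decimal examples:
print(int("11001", 2))     # 25
print(int("11011101", 2))  # 221

# Hexadecimal-to-decimal examples:
print(int("3B2", 16))      # 946
print(int("4BAE", 16))     # 19374
```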

(Iii) Decimal to Hexadecimal Conversion


Converting a decimal number to hexadecimal involves dividing the decimal number by 16 and noting down
the remainders in hexadecimal notation. Let’s break it down step by step:
1. Repeated Division and Remainder Algorithm:
(a) Divide the given decimal number by 16.
(b) Note down the remainder in hexadecimal notation (using digits 0-9 and letters A-F).
(c) Repeat the process with the quotient until the quotient becomes 15 or less.
(d) The remainders obtained in reverse order give the hexadecimal equivalent.
Example: Let’s convert the decimal number 255 to hexadecimal:
** Divide 255 by 16: Quotient = 15, Remainder = 15 (which corresponds to F in hexadecimal).
** Since the quotient is 15 (which is less than 16), we stop.
** The hexadecimal representation of 255 is FF.

General Steps:
(a) Divide the decimal number by 16.
(b) Take the remainder as the rightmost digit in the hexadecimal representation.
(c) Repeat the process with the quotient until the quotient becomes 15 or less.
(d) Combine the remainders in reverse order to get the complete hexadecimal number.
Remember that the letters A, B, C, D, E, and F are used for the values 10, 11, 12, 13, 14, and 15,
respectively.
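The repeated-division algorithm above can be sketched in Python (the function name is illustrative):

```python
def decimal_to_hex(n: int) -> str:
    """Divide by 16 repeatedly; remainders, read in reverse, give the hex digits."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    result = ""
    while n > 0:
        n, remainder = divmod(n, 16)
        result = digits[remainder] + result  # prepending reverses the order
    return result

print(decimal_to_hex(255))  # FF
print(decimal_to_hex(946))  # 3B2
```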

Hexadecimal Representation in Computer


The largest value that can be represented in one hexadecimal digit is 15. Hexadecimal uses the digits 0-9
and the letters A-F to represent values from 0 to 15. Here’s the mapping:

 0 corresponds to 0 in decimal.
 1 corresponds to 1 in decimal.
 …
 9 corresponds to 9 in decimal.
 A corresponds to 10 in decimal.
 B corresponds to 11 in decimal.
 C corresponds to 12 in decimal.
 D corresponds to 13 in decimal.
 E corresponds to 14 in decimal.
 F corresponds to 15 in decimal.

So, when you see a single hexadecimal digit like F, it represents the decimal value 15.

2.1.5.4 Binary Number to Hexadecimal Conversion


Converting a binary number to hexadecimal involves grouping the binary digits into sets of four
and then assigning each group a corresponding hexadecimal digit. Let’s break it down step by step:

Grouping Binary Digits:


1. Start from the right (the least significant bit) of your binary number.
2. Separate the binary digits into groups of four. If the total number of digits is not a multiple of four, add
leading zeros to make it a complete group.
For example, let’s convert the binary number 10110101:
Grouped as: 1011 0101

Conversion to Hexadecimal:
Convert each group of four binary digits into the equivalent hexadecimal symbol:
1011 → B
0101 → 5
Merge the hexadecimal digits in the same order as the binary digit groups:
The hexadecimal representation of 10110101 is B5.
Change Numbers Above 9 into Letters:
Conclusively, in hexadecimal, numbers greater than 9 are represented by letters A-F.
** For example, A represents 10, B represents 11, and so on.
So, if you encounter a hexadecimal digit greater than 9, replace it with the corresponding letter.
In summary, converting binary to hexadecimal is straightforward once you understand the grouping and
mapping. It allows us to represent binary numbers in a more concise manner.
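The grouping-into-nibbles procedure can be sketched in Python (an illustrative helper, not a library routine):

```python
def binary_to_hex(bits: str) -> str:
    """Pad to a multiple of 4 bits, split into nibbles, map each to a hex digit."""
    digits = "0123456789ABCDEF"
    padded = bits.zfill(-(-len(bits) // 4) * 4)  # ceiling to a multiple of 4
    nibbles = [padded[i:i + 4] for i in range(0, len(padded), 4)]
    return "".join(digits[int(nibble, 2)] for nibble in nibbles)

print(binary_to_hex("10110101"))  # 1011 -> B, 0101 -> 5, so B5
```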

Hexadecimal Number to Binary


Converting a hexadecimal number to binary is straightforward. Let’s break it down into steps:
Write Down the Hex Number:
Start with the given hexadecimal number.

Hex to Binary Conversion:


1. Each hex digit represents four binary digits (also known as a nibble).
2. Within each nibble, the bits are weighted by powers of 2 (from right to left):
3. The rightmost bit corresponds to 2^0 (1).
4. The next bit corresponds to 2^1 (2).
5. The next bit corresponds to 2^2 (4).
6. The leftmost bit corresponds to 2^3 (8).

Example: Let’s convert the hexadecimal number 1A to binary:

1. 1A:
2. 1 → decimal 1 → Binary: 0001
3. A → decimal 10 = 8 + 2 → Binary: 1010
4. Combine the binary digits: 0001 1010 (i.e. 11010 once the leading zeros are dropped)
General Steps:
1. Convert each hex digit to its binary equivalent.
2. Combine the binary digits to get the complete binary representation.
Remember that the letters A, B, C, D, E, and F represent the values 10, 11, 12, 13, 14, and 15, respectively.
So, when you encounter a letter in the hex number, replace it with the corresponding binary value.
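Expanding each hex digit into its four-bit nibble can be sketched in Python:

```python
def hex_to_binary(hex_str: str) -> str:
    """Replace each hex digit with its four-bit binary (nibble) equivalent."""
    return "".join(format(int(digit, 16), "04b") for digit in hex_str.upper())

print(hex_to_binary("1A"))  # 0001 then 1010, giving 00011010
```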

Note: In mathematics and computing, the hexadecimal numeral system is a positional numeral system that represents numbers using a radix of 16. Unlike the decimal system, which represents numbers using 10 symbols, hexadecimal uses sixteen distinct symbols, most often the digits 0-9 and the letters A-F.

2.1.5.5 Conversion Between Other Bases
The general steps for converting a base 10 or "normal" number into another base are:
 First, divide the number by the base to get the remainder. This remainder is the first, i.e. least significant, digit of the new number in the other base.
 Then repeat the process by dividing the quotient of the previous step by the base; each new remainder is the next digit.
 Repeat this process until the quotient becomes less than the base; that final quotient is the most significant digit.
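The same repeated-division steps work for any target base; a minimal Python sketch (digits beyond 9 use letters, as in hexadecimal):

```python
def decimal_to_base(n: int, base: int) -> str:
    """Repeated division: each remainder is the next digit, least significant first."""
    digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        n, remainder = divmod(n, base)
        out = digits[remainder] + out
    return out

print(decimal_to_base(255, 2))   # 11111111
print(decimal_to_base(255, 8))   # 377
print(decimal_to_base(255, 16))  # FF
```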

i). Binary to Octal


Example 10: Convert the binary number 1110101000101 to an octal number.
Solution
• Starting from the right hand side, split the number into groups of three. If necessary, pad on the left with zeros to obtain a group of three: 001 110 101 000 101.
• Convert each group of three to its octal equivalent using the binary placeholder weightings, i.e. 1, 2 and 4. For example, on the right we have 4 + 0 + 1 = 5. Reading the groups from the left gives 1 6 5 0 5, so 1110101000101₂ = 16505₈.
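The group-of-three procedure from Example 10 can be sketched in Python:

```python
def binary_to_octal(bits: str) -> str:
    """Pad to a multiple of 3 bits, split into triples, weight each triple 4-2-1."""
    padded = bits.zfill(-(-len(bits) // 3) * 3)  # pad on the left with zeros
    triples = [padded[i:i + 3] for i in range(0, len(padded), 3)]
    return "".join(str(int(triple, 2)) for triple in triples)

print(binary_to_octal("1110101000101"))  # groups 001 110 101 000 101 -> 16505
```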

Hexadecimal to Octal
Example 11: Convert the hexadecimal number 8B6E to an octal number.
Solution
One method is to convert the hex number to binary and then convert from binary to octal.
Write each hex digit as a four bit binary number: 8 → 1000, B → 1011, 6 → 0110, E → 1110, giving 1000101101101110.

Starting from the right, split the binary representation into groups of three, padding the leftmost triple with zeros if required: 001 000 101 101 101 110. Converting each triple gives 1 0 5 5 5 6, so 8B6E₁₆ = 105556₈.

Octal to Hexadecimal
Example: Convert the octal number 6473 to a hex number.
Solution
All we have to do is reverse the process in the previous example.
Write each octal digit as a three bit binary number: 6 → 110, 4 → 100, 7 → 111, 3 → 011, giving 110100111011.

Starting from the right, split the binary representation into groups of four, padding the leftmost group with zeros if required: 1101 0011 1011.
Convert each binary group to its decimal equivalent, e.g. 1011₂ = 11₁₀.
Convert each decimal value to its hex equivalent, e.g. 11₁₀ = B₁₆. This gives 6473₈ = D3B₁₆.
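Both conversions go through binary, so they can be sketched together in Python (helper names are illustrative):

```python
def hex_to_octal(hex_str: str) -> str:
    """Hex digits -> 4-bit groups, then regroup into 3-bit groups -> octal digits."""
    bits = "".join(format(int(d, 16), "04b") for d in hex_str.upper())
    bits = bits.zfill(-(-len(bits) // 3) * 3)  # pad to a multiple of 3
    return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

def octal_to_hex(oct_str: str) -> str:
    """Octal digits -> 3-bit groups, then regroup into 4-bit groups -> hex digits."""
    digits = "0123456789ABCDEF"
    bits = "".join(format(int(d, 8), "03b") for d in oct_str)
    bits = bits.zfill(-(-len(bits) // 4) * 4)  # pad to a multiple of 4
    return "".join(digits[int(bits[i:i + 4], 2)] for i in range(0, len(bits), 4))

print(hex_to_octal("8B6E"))  # 105556
print(octal_to_hex("6473"))  # D3B
```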

2.2 Programming Language Classifications


2.2.1 Low Level Languages (LLL)
2.2.1.1 Machine Language
2.2.1.2 Assembly Language
2.2.2 High Level Language (HLL)
2.3 Generations of Programming Language
1. Low-Level Language (invented 1940s)
 Purpose: direct hardware manipulation
 Benefits: efficient execution
 Mode of communication: binary or symbolic code
 How it works: directly executed or assembled
 Disadvantages: difficult to write and debug
 Example languages: Machine Language, Assembly Language

2. Machine Language (invented 1940s)
 Definition: a low-level language in which the instructions are in a form that allows the computer to perform them immediately, without any further required translation; programs can only be represented by 0s and 1s.
 Purpose: directly execute instructions
 Benefits: fast execution; shorter programs (in memory); more efficient use of memory
 Mode of communication: binary code
 How it works: directly executed by the processor
 Disadvantages: difficult to write and debug; machine-specific, so impossible to adapt to other computers; longer programs (in instructions)
 Example languages: none (machine-specific)

3. Assembly Language (invented 1940s)
 Definition: an intermediary language, higher than machine language but lower than a high-level language; it uses numbers, symbols and abbreviations instead of 0s and 1s. For example, for addition, subtraction and multiplication it uses symbols like Add, Sub and Mul.
 Purpose: symbolic representation of machine code
 Benefits: efficient coding
 Mode of communication: mnemonics
 How it works: translated into machine code by an assembler
 Disadvantages: platform-dependent; error-prone
 Example languages: x86 Assembly, ARM Assembly

4. High-Level Language (invented 1950s)
 Definition: a programming language that is abstracted from the machine architecture, making it easier to write and maintain code; it uses statements and syntax that are closer to human language.
 Purpose: abstracted programming
 Benefits: easy to write and maintain; abstracted from hardware, easier to use
 Mode of communication: statements and syntax
 How it works: compiled or interpreted
 Disadvantages: may be slower and resource-intensive
 Example languages: Python, Java, C++

2.2 PROGRAMMING LANGUAGE CLASSIFICATION


Machine Language: Machine language is the lowest-level programming language. It can only be represented by 0s and 1s.
By definition: A machine language is a programming language in which the instructions are in a form that allows the computer to perform them immediately, without any further required translation. It is also a sequence of bits (machine code) that directly controls a processor, causing it to add, compare, or move data from one place to another. The computer microprocessor can process machine code directly, without any prior transformation.

The benefits of machine language are:


i. Faster execution of the program
ii. More efficient use of memory
iii. Shorter programs (in memory)
iv. Freedom from the operating system
All of the above benefits are a direct result of programming in a language that the CPU can understand
without having to have it translated first.

The main disadvantages of machine language are:


i. Programs are more difficult to read and debug
ii. Impossible to adapt to other computers
iii. Longer programs (in instructions)
iv. Arithmetic calculations difficulty
Assembly Language is an intermediary language: higher than machine language, but lower than a high-level language. Assembly languages use numbers, symbols, and abbreviations instead of 0s and 1s. For example, for addition, subtraction and multiplication it uses symbols like Add, Sub and Mul.
Assembly language is a low-level language that helps to communicate directly with computer hardware. It uses mnemonics to represent the operations that a processor has to perform, and is an intermediate language between high-level languages like C++ and the binary language. It uses hexadecimal and binary values, and it is readable by humans.
Assembly language is a representation of machine language, enabling it to be read by humans. The main
difference between assembly language and machine language is that assembly language is one level higher
than machine language. It is more easily read by humans than machine language, but on the other hand,
computers can't read assembly language. It can be converted directly into machine code by a program
called an assembler.
Assembler is a program that performs the task of translating your assembly language program into a
sequence of machine language instructions that the CPU will understand, i.e. into binary numbers.
** A program written in assembly language is called the Source Program.
** The translated program in machine code is called the Object Program.
** Assembly language uses structured commands called mnemonics as substitutions for numbers, allowing humans to read the code more easily than looking at binary. For example, at this stage the instruction INC A may not mean much to you, but at least you can read it. If you were told that "INC" is a standard mnemonic for increment (INCrement) and A is a variable, then simply by looking at the instruction you can get a feel for what is happening.

How Assembly Language Works


Assembly languages contain mnemonic codes that specify what the processor should do. The mnemonic code written by the programmer is converted into machine language (binary) for execution. An assembler is used to convert the assembly code into machine language, and that machine code is stored in an executable file for execution.
Assembly language enables the programmer to communicate directly with hardware such as registers, memory locations, input/output devices or any other hardware components. This helps the programmer to control hardware components directly and to manage resources efficiently.

Differences between Machine Language and Assembly Language:

Machine Language:
1. It is only understood by computers, not by human beings.
2. Data are represented only in binary (0s and 1s), hexadecimal or octal form.
3. It is very difficult for human beings to understand.
4. Modifications and error fixing are impractical.
5. It is very difficult to memorize, so it is not feasible to learn directly.
6. Execution is fast because the instructions are already in binary form.
7. No translator is needed; machine language is itself the machine-understandable form.
8. It is hardware-dependent.

Assembly Language:
1. It is readable by human beings, not directly by computers.
2. Data can be represented with the help of mnemonics such as Mov, Add, Sub, End, etc.
3. It is easier for human beings to understand compared to machine language.
4. Modifications and error fixing can be done.
5. It is easier to memorize because alphabetic mnemonics are used.
6. Execution is slower compared to machine language.
7. An assembler is used as a translator to convert mnemonics into machine-understandable form.
8. It is machine-dependent and not portable.

Difference between assembly language and high level language

1. Assembly level language :
It is a low-level language that allows users to write a program using alphanumeric mnemonic codes, instead of numeric codes, for a set of instructions; an example of a large assembly language program from this era is IBM PC DOS.
2. High-level language :
It is a machine-independent language. It enables a user to write a program in a language that resembles English words and familiar mathematical symbols; COBOL was the first high-level language. Examples of high-level languages are Python, C#, etc.

Assembly Level Language:
 It needs an assembler for conversion.
 The assembly level language is converted directly to machine level language.
 It is machine dependent.
 Mnemonic codes are used.
 It is easy to access hardware components.
 The code is more compact.

High-Level Language:
 It needs a compiler/interpreter for conversion.
 The high-level language is converted to assembly level language, then to machine level language.
 It is machine-independent.
 English-like statements are used.
 It is difficult to access hardware components directly.
 The code is less compact.

Difference between HLL and LLL

Both high level language and low level language are types of programming languages. The main difference between them is that programmers can easily understand, interpret or compile a high level language, whereas a machine can easily understand a low level language. Examples of high level languages are C, C++, Java, Python, etc. Let’s see the difference between high level and low level languages:

High Level Language:
1. It is a programmer friendly language.
2. It is less memory efficient.
3. It is easy to understand.
4. Debugging is easy.
5. It is simple to maintain.
6. It is portable.
7. It can run on any platform.
8. It needs a compiler or interpreter for translation.
9. It is used widely for programming.

Low Level Language:
1. It is a machine friendly language.
2. It is highly memory efficient.
3. It is tough to understand.
4. Debugging is comparatively complex.
5. It is comparatively complex to maintain.
6. It is non-portable.
7. It is machine-dependent.
8. It needs an assembler for translation.
9. It is not commonly used nowadays in programming.

2.3 GENERATION OF PROGRAMMING LANGUAGES


There are five generations of Programming languages. They are:
First-Generation Languages : These are low-level languages like machine language.
Second-Generation Languages : These are low-level assembly languages used in kernels and device drivers.
Third-Generation Languages : These are high-level languages like C, C++, Java, Visual Basic, and
JavaScript.
Fourth Generation Languages : These are languages that consist of statements similar to statements in human language. These are used mainly in database programming and scripting. Examples of these languages include Perl, Python, Ruby, SQL, and MATLAB (Matrix Laboratory).
Fifth Generation Languages : These are the programming languages that have visual tools to develop a
program. Examples of fifth-generation languages include Mercury, OPS5, and Prolog.
The first two generations are called low-level languages. The next three generations are called high-level
languages.

1. First-Generation Language : The first-generation languages are also called machine languages/ 1G
language. This language is machine-dependent. The machine language statements are written in binary code
(0/1 form) because the computer can understand only binary language.

Advantages :
1. Fast & efficient as statements are directly written in binary language.
2. No translator is required.
Disadvantages :
1. Difficult to learn binary codes.
2. Difficult to understand – both programs & where the error occurred.

2. Second Generation Language : The second-generation languages are also called assembler languages/
2G languages. Assembly language contains human-readable notations that can be further converted to
machine language using an assembler.

Assembler – converts assembly level instructions to machine-level instructions.


Programmers can write the code using symbolic instruction codes that are meaningful abbreviations, known as mnemonics. It is also known as a low-level language.

Advantages :
1. It is easier to understand compared to machine language.
2. Modifications are easy.
3. Correction & location of errors are easy.

Disadvantages :
1. Assembler is required.
2. This language is architecture /machine-dependent, with a different instruction set for different machines.

3. Third-Generation Language : The third generation is also called procedural language / 3GL. It consists
of the use of a series of English-like words that humans can understand easily, to write instructions. It’s also
called High-Level Programming Language. For execution, a program in this language needs to be translated
into machine language using a Compiler/ Interpreter. Examples of this type of language are C, PASCAL,
FORTRAN, COBOL, etc.

Advantages :
1. Use of English-like words makes it a human-understandable language.
2. Lesser number of lines of code as compared to the above 2 languages.
3. The same code can be copied to another machine & executed on that machine by using a compiler specific to that machine.
Disadvantages :
1. Compiler/ interpreter is needed.
2. Different compilers are needed for different machines.

4. Fourth Generation Language : The fourth-generation language is also called a non-procedural language / 4GL. It enables users to access databases. Examples: SQL, FoxPro, Focus, etc. These
languages are also human-friendly to understand.

Advantages :
1. Easy to understand & learn.
2. Less time is required for application creation.
3. It is less prone to errors.
Disadvantages :
1. Memory consumption is high.
2. Has poor control over Hardware.
3. Less flexible.

5. Fifth Generation Language : The fifth-generation languages are also called 5GL. It is based on the
concept of artificial intelligence. It uses the concept that rather than solving a problem algorithmically, an
application can be built to solve it based on some constraints, i.e., we make computers learn to solve any
problem. Parallel Processing & superconductors are used for this type of language to make real artificial
intelligence. Examples: PROLOG, LISP, etc.

Advantages :
1. Machines can make decisions.
2. Programmer effort reduces to solve a problem.
3. Easier than 3GL or 4GL to learn and use.
Disadvantages :
1. Complex and long code.
2. More resources are required & they are expensive too.
