CSC218 Class - 2 - Note - 2 (Final)
CSC 218
(FOUNDATIONS OF SEQUENTIAL PROGRAMMING)
Ada Lovelace devised the first computer program in 1843. She worked with Charles Babbage (widely regarded as the Father of Computers) on his proposed mechanical computer, the Analytical Engine. Lovelace analyzed how numbers could be operated on by the machine and recognised that they could represent patterns and sequences of steps. She then devised a sequence of operations for the Babbage Engine to compute the Bernoulli numbers, which is considered to be the first computer program. For this, Ada is credited as the first computer programmer.
In the late 1950s, programming languages like FORTRAN, COBOL, and LISP were created. FORTRAN was designed for complex mathematical, scientific, and statistical operations. FORTRAN stands for "formula translation", and the credit for its creation goes to John Backus. The language is still widely used in mathematical analysis and computation. COBOL, whose development is credited to Dr. Grace Murray Hopper, is a Common Business Oriented Language that could run on many different types of computer. Its applications include banking, telephone systems, credit card processing, hospital and government computers, automotive systems, and traffic signals. LISP is a list-processing language devised by John McCarthy for applications in Artificial Intelligence; it was designed for easy processing of symbolic data such as lists and strings.
1970: Pascal
The credit for the creation of the Pascal language goes to Niklaus Wirth. It was named Pascal in tribute to the French mathematician, philosopher, and physicist Blaise Pascal. It is a high-level programming language with a fairly gentle learning curve. The main purpose of its development was to teach computer programming, in particular structured programming and data structures. A derivative of Pascal called Object Pascal was commonly used for Windows application development, and Pascal dialects were also used in the Apple Lisa and in Skype.
1972: C
Most programmers and people from technical backgrounds will have heard of the C language, and many regard it as the foundational high-level language. As a high-level language it is closer to natural language, putting away complex machine code. It was developed by Dennis Ritchie at Bell Labs. It was created mainly to implement the Unix operating system, which remains influential today through open-source Unix-like systems. C became the foundation for many languages that followed, including Java, C#, JavaScript, Ruby, Go, Perl, Python, and PHP. It is primarily used in cross-platform programming, system programming, Unix programming, and game development.
1983: C++
C++ is an extension of the C language with support for object-oriented programming. It added enhancements such as classes, virtual functions, and templates. The credit for its creation goes to Bjarne Stroustrup, and it remains one of the most popular and widely used languages. C++ is heavily used in game development, particularly to program game engines, as well as in high-performance software such as Photoshop. C++ was also used to implement runtimes for other scripting languages, such as Node.js for JavaScript. Its primary applications are commercial application development, embedded software, client/server applications, and so on.
1983: Objective C
Objective-C is an object-oriented extension of the C language devised by Brad Cox and Tom Love at Stepstone. It is a general-purpose, high-level language that adds Smalltalk-style message passing to C. Its main use became Apple programming: it was the primary language for writing software for macOS and iOS, Apple's operating systems.
1987: Perl
Perl is a scripting language designed mainly for text processing. It was developed by Larry Wall at Unisys for report processing on Unix systems. It is known for its power, modularity, and versatility. The primary uses of Perl are CGI (Common Gateway Interface) scripts, database applications, web programming, system administration, and graphics programming. The language has been used by Amazon, IMDb, and others.
1991: Python
Python is considered one of the easiest programming languages to learn. Its gentle learning curve makes it a common recommendation for coding beginners, because the language reads very close to plain English. The syntax is simple, and programmers can accomplish a lot with a few concise expressions. The language was developed by Guido van Rossum. Frameworks written in Python are used by popular social media apps such as Instagram. Python is now widely used for machine learning and AI, as well as for web applications, software development, and information security.
1993: Ruby
In the early 1990s, many popular and powerful languages were developed, and Ruby was one of them. The language was designed by Yukihiro Matsumoto and is influenced by Perl, Ada, Lisp, Smalltalk, and others. It emphasizes programmer productivity and enjoyment. Ruby is used for web application development and is the foundation of the Ruby on Rails framework; it has been used by Twitter, Hulu, and others. Code execution is relatively slow, but the language allows programmers to put a working program together quickly.
1995: Java
Java is one of the most widely used and popular programming languages across the globe. It was developed at Sun Microsystems and has an enormous range of use cases. Originally intended for networked tools and mobile devices, it was soon adapted to deliver information across the World Wide Web. Java can be found everywhere, from large servers to small mobile devices. Its applications range from network programming, web applications, software development, and mobile application development to GUI development. It is also used in the development of native Android apps.
1994: PHP
PHP is a widely known and used language for dynamic web programming. It is mainly executed on web servers to generate web pages and serve the required resources. It was created by Rasmus Lerdorf. It originally stood for Personal Home Page but was later renamed PHP: Hypertext Preprocessor. Its major application is building dynamic web pages and server-side development. The language is widely used by popular platforms such as Facebook, Wikipedia, and WordPress.
1995: JavaScript
JavaScript is a web scripting language known to all web developers. It was developed by Brendan Eich. It is primarily used within web pages to handle interactions in the web browser; almost every web page uses JavaScript. It is a high-level scripting language used to make web pages dynamic on the client side without putting extra load on the server side. It is used for web form submission and validation, user interface interactivity, animations, and tracking.
JavaScript takes load off the server by running on the user's computer and performing most of the client-side computation that does not require data from the server. Today, JavaScript has expanded into many frameworks and can be used to develop web applications, websites, server-side programs, desktop applications, and mobile applications as well. JavaScript has few boundaries in the modern programming environment.
2000+
Many new programming languages and frameworks were created after the 2000s. Most of them build on older programming languages with more powerful features and better security. Some honorable mentions are React JS, Angular, C#, Scala, Go, Swift, etc. C# combines a C++-style syntax with much of the simplicity of Visual Basic, and was primarily used to develop Microsoft products and desktop applications.
Scala is a programming language that integrates functional programming with object-oriented programming. Swift is another powerful programming language, devised by Apple as the replacement for Objective-C. Like Objective-C, Swift is used to develop Apple software and applications, with more simplicity and efficiency.
The modern world cannot function without programming languages. They have formed a foundation for powerful systems and technologies to grow. Programming languages have inspired innovation and development and given us the ability to model the physical world virtually, making tasks easier and more enjoyable. Most of the programming languages that are prevalent today are built upon the concepts of older programming languages.
The newer ones make work simpler and more efficient, with fewer chances of error and a higher level of security. Machine learning, data mining, and artificial intelligence all use programming languages as a core ingredient. Many businesses rely heavily on software to run day-to-day tasks efficiently. So this programming environment will only develop further, with more modular, simple, and powerful languages.
Conclusion: Programming languages are still evolving, in both industry and research. Only time will tell where this journey of programming languages will lead and what the technology will look like when it reaches its pinnacle.
---
This table highlights key milestones in the evolution of programming languages, from the theoretical beginnings to modern languages used in various domains.
1970-1985: This period saw the emergence of influential languages like C, Prolog, and C++, which shaped the programming landscape and continue to impact software development today.
1996 to date: This period saw the rise of languages like Java, Python, JavaScript, and newer languages like Go and Rust, which are designed for specific use cases and have gained popularity in various domains.
Learners will apply number-base conversion techniques for Binary, Octal, Decimal, Hexadecimal, etc., to establish the reasons for number-base conversion in computer communication.
1. Different number systems (such as decimal, binary, octal, and hexadecimal) are used to represent data in computers.
2. Converting between these bases allows us to express data in a format that suits the specific context or requirement, as illustrated in the sketch below.
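To make this concrete, here is a minimal Python sketch (Python is assumed purely for illustration; the notes themselves do not fix a language for these examples) showing one value written in the four bases discussed in this section:

    # One quantity, four representations, using Python's built-in conversions
    value = 109
    print(bin(value))   # 0b1101101  -> binary (base 2)
    print(oct(value))   # 0o155      -> octal (base 8)
    print(value)        # 109        -> decimal (base 10)
    print(hex(value))   # 0x6d       -> hexadecimal (base 16)

The quantity itself never changes; only the notation used to write it down changes.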
A computer can only execute instructions expressed in binary; every instruction is therefore coded into the computer's memory as 0s and 1s. This method of instructing the computer is called machine language.
Because the computer operates on programs coded in 0s and 1s, it is difficult for programmers to write programs directly in 0s and 1s. Programmers find it easier to write programs in a language that approaches English. Such a language is called a high-level language.
Humans have been counting for a long time. To do so, we use systems that relate unique symbols to specific
values. This is called a number system, and it is the technique that we use to represent and manipulate
numbers. A number system must have unique symbols for every value, be consistent, provide comparable
values, and be easily reproducible.
A computer understands the positional number system where there are only a few symbols called digits, and
these symbols represent different values depending on the position they occupy in the number.
Number systems are one of the most fundamental concepts that computer scientists must learn. It’s an
important step for anyone who wants to become a computer scientist or programmer.
You are probably most familiar with the decimal system that forms the basis of how humans count. The
decimal system has a base of 10, because it provides 10 symbols to represent all numbers: 0, 1, 2, 3, 4, 5,
6, 7, 8, 9
Humans use the decimal system because we have 10 fingers to count on, but machines don’t have that
luxury. So, we’ve created other number systems that perform the same functions. Computers represent
information differently than humans, so we need different systems to represent numbers.
The number systems most commonly used in computing are:
Binary (base 2)
Octal (base 8)
Decimal (base 10)
Hexadecimal (base 16)
Low-Level Programming:
1. In low-level programming languages (like assembly), direct manipulation of memory addresses and
registers often involves binary or hexadecimal representations.
2. Understanding and converting between these bases are crucial for efficient memory
management and bitwise operations.
Data Manipulation:
1. When performing bitwise operations (AND, OR, XOR), it is common to work with binary representations.
2. Converting between bases enables efficient manipulation of data at the bit level, as shown in the sketch below.
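As a brief illustration (a hedged Python sketch; the flag names and values are made up for this example, not taken from the notes), bitwise operations read most naturally when the operands are written in binary or hexadecimal:

    # Hypothetical permission flags, written in binary so the bit positions are obvious
    READ  = 0b001
    WRITE = 0b010
    EXEC  = 0b100

    perms = READ | WRITE             # combine flags with OR  -> 0b011 (3)
    can_write = bool(perms & WRITE)  # test a flag with AND   -> True
    toggled = perms ^ EXEC           # flip a flag with XOR   -> 0b111 (7)

    print(bin(perms), can_write, bin(toggled))

Writing the same constants in decimal (1, 2, 4) would behave identically, but the binary form makes the individual bits visible.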
Computer Architecture:
1. Computer architecture relies on binary representation for instructions and data.
2. Converting between bases helps engineers design efficient processors and memory systems.
Advantages of Hexadecimals
Hexadecimal representation has several advantages in the context of computer science and programming; a short illustration follows the list:
1. Compactness:
(a) Hexadecimal (base 16) uses fewer digits to represent large values compared to decimal (base 10).
For example, the decimal number 255 is represented as FF in hexadecimal, which is more concise.
2. Direct Mapping to Binary:
(a) Each hexadecimal digit corresponds to a 4-bit binary sequence.
(b) This direct mapping makes it easy to convert between hexadecimal and binary.
For example: Hexadecimal 1A corresponds to binary 0001 1010.
3. Memory Addresses:
(a) Memory addresses in computer systems are often expressed in hexadecimal.
(b) Hexadecimal provides a convenient way to represent memory locations and offsets.
4. Color Representation:
(a) Hexadecimal is commonly used to represent colors in web design and graphics.
(b) Each color channel (red, green, blue) can be expressed as a 2-digit hexadecimal value
(e.g., #FF0000 for red).
5. Bitwise Operations:
(a) When performing bitwise operations (AND, OR, XOR), hexadecimal is useful.
(b) It simplifies working with individual bits and flags.
6. Debugging and Hex Dumps:
(a) Hexadecimal is used in debugging tools and hex dumps.
(b) It allows developers to inspect memory content and binary data more easily.
7. Representation of Binary Data:
(a) Binary files (such as executables, images, and audio) are often displayed in hexadecimal format.
(b) Hexadecimal provides a concise and readable representation of raw binary data.
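Several of the points above can be seen directly in a few lines of Python (a hedged sketch; the byte values are arbitrary sample data, not from the notes):

    value = 255
    print(f"{value:X}")       # FF        -> compact hex form of 255
    print(f"{0x1A:08b}")      # 00011010  -> hex 1A maps directly to binary 0001 1010

    # A colour written the way web pages express it
    red, green, blue = 0xFF, 0x00, 0x00
    print(f"#{red:02X}{green:02X}{blue:02X}")   # #FF0000

    # A tiny hex dump of some raw bytes
    data = bytes([72, 101, 120, 33])
    print(data.hex())         # 48657821

The same information could be printed in decimal, but the hexadecimal forms are shorter and line up with the underlying bits.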
In summary, hexadecimal is a versatile representation that balances readability, compactness, and direct correspondence to binary. It is a fundamental concept in computer science and programming.
NUMBER:        2                 1                5               3
PLACE HOLDER:  10^3              10^2             10^1            10^0
RESULT:        2 x 10^3 = 2000   1 x 10^2 = 100   5 x 10^1 = 50   3 x 10^0 = 3

Sum: 2000 + 100 + 50 + 3 = 2153
In the decimal system the place-holder for each digit is a power of 10. Moving from right to left in the table corresponds to an increase in magnitude by a factor of 10 at every step.
The binary number in the table, 1101, is sometimes written with the subscript "2" to indicate that it is a base 2 number, i.e. 1101_2. To obtain the decimal representation of 1101 we multiply each binary digit by its column's weight and sum the values. Starting from the right,
1 x 2^0 + 0 x 2^1 + 1 x 2^2 + 1 x 2^3 = 1 + 0 + 4 + 8 = 13_10
Hence, 1101_2 = 13_10
The octal number in the table, 155, is sometimes written with the subscript "8" to indicate that it is a base 8 number, i.e. 155_8. To obtain the decimal representation of 155 we multiply each octal digit by its column's weight and sum the values. Starting from the right,
5 x 8^0 + 5 x 8^1 + 1 x 8^2 = 5 + 40 + 64 = 109_10
Hence, 155_8 = 109_10
The hex number in the table, 12BF, can be written with the subscript "16" to indicate that it is a base 16 number, i.e. 12BF_16. To obtain the decimal representation of 12BF_16, we multiply each hex digit by its column's weight, noting that B represents 11 and F represents 15, and sum the values, i.e.
15 x 16^0 + 11 x 16^1 + 2 x 16^2 + 1 x 16^3 = 15 + 176 + 512 + 4096 = 4799_10
Hence, 12BF_16 = 4799_10
As base 2 only uses the digits 0 and 1, this approach essentially involves adding the non-zero place values together.
(b) From the right, adding the place values corresponding to the non-zero digits in 11011101 gives:
1 + 4 + 8 + 16 + 64 + 128 = 221
Hence, 11011101_2 = 221_10
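The worked examples above can be checked mechanically. Below is a small, hedged Python sketch (the helper name to_decimal is mine, not from the notes) that implements the same digit-times-weight sum and compares it with Python's built-in parser:

    DIGITS = "0123456789ABCDEF"

    def to_decimal(text, base):
        """Sum of digit x base^position, starting from the rightmost digit."""
        total = 0
        for position, symbol in enumerate(reversed(text.upper())):
            total += DIGITS.index(symbol) * base ** position
        return total

    print(to_decimal("1101", 2),  int("1101", 2))    # 13 13
    print(to_decimal("155", 8),   int("155", 8))     # 109 109
    print(to_decimal("12BF", 16), int("12BF", 16))   # 4799 4799

Both columns agree, confirming the hand calculations for 1101_2, 155_8 and 12BF_16.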
(a) Convert the hexadecimal number 3B2 to a decimal number.
(b) Convert the hexadecimal number 4BAE to a decimal number.
Solution
(a) The place values of the digits in a hex number are powers of 16. To convert 3B2 to its decimal representation, starting from the right, multiply each digit in 3B2 by the appropriate power of 16:
2 x 16^0 + 11 x 16^1 + 3 x 16^2 = 2 + 176 + 768 = 946, so 3B2_16 = 946_10
(b) Similarly, for 4BAE:
14 x 16^0 + 10 x 16^1 + 11 x 16^2 + 4 x 16^3 = 14 + 160 + 2816 + 16384 = 19374, so 4BAE_16 = 19374_10
General Steps:
(a) Divide the decimal number by 16.
(b) Take the remainder as the rightmost digit of the hexadecimal representation.
(c) Repeat the process with the quotient until the quotient becomes zero.
(d) Write the remainders in reverse order (last remainder first) to obtain the complete hexadecimal number.
Remember that the letters A, B, C, D, E, and F are used for the values 10, 11, 12, 13, 14, and 15, respectively; the digit-to-value mapping is listed below, and a worked sketch follows it.
0 corresponds to 0 in decimal.
1 corresponds to 1 in decimal.
…
9 corresponds to 9 in decimal.
A corresponds to 10 in decimal.
B corresponds to 11 in decimal.
C corresponds to 12 in decimal.
D corresponds to 13 in decimal.
E corresponds to 14 in decimal.
F corresponds to 15 in decimal.
So, when you see a single hexadecimal digit like F, it represents the decimal value 15.
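Here is a minimal Python sketch of the repeated-division procedure just described (the function name decimal_to_hex is my own label for these steps, not something defined in the notes):

    HEX_DIGITS = "0123456789ABCDEF"

    def decimal_to_hex(number):
        """Repeatedly divide by 16, collecting remainders from right to left."""
        if number == 0:
            return "0"
        digits = []
        while number > 0:
            number, remainder = divmod(number, 16)
            digits.append(HEX_DIGITS[remainder])   # each remainder is the next digit
        return "".join(reversed(digits))           # last remainder is the leftmost digit

    print(decimal_to_hex(255))    # FF
    print(decimal_to_hex(4799))   # 12BF, matching the earlier worked example
    print(hex(4799))              # 0x12bf, Python's built-in check

The loop stops when the quotient reaches zero, which is exactly step (c) above.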
Conversion to Hexadecimal:
Starting from the right, split the binary number 10110101 into groups of four bits (1011 0101), then convert each group of four binary digits into the equivalent hexadecimal symbol:
1011 → B
0101 → 5
Merge the hexadecimal digits in the same order as the binary digit groups:
The hexadecimal representation of 10110101 is B5.
Change Numbers Above 9 into Letters:
Recall that in hexadecimal, values greater than 9 are represented by the letters A-F. For example, A represents 10, B represents 11, and so on. So, if a group converts to a value greater than 9, replace it with the corresponding letter.
In summary, converting binary to hexadecimal is straightforward once you understand the grouping and mapping; it lets us represent binary numbers far more concisely, as the short sketch below also shows.
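A hedged Python sketch of the grouping idea (the helper binary_to_hex is illustrative only, not part of the notes):

    def binary_to_hex(bits):
        """Pad to a multiple of 4, then map each 4-bit group to one hex digit."""
        bits = bits.zfill((len(bits) + 3) // 4 * 4)              # pad on the left with zeros
        groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
        return "".join(f"{int(group, 2):X}" for group in groups)

    print(binary_to_hex("10110101"))   # B5, as in the worked example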
Note: In mathematics and computing, the hexadecimal numeral system is a positional numeral system that represents numbers using a radix of 16. Unlike the decimal system, which represents numbers using 10 symbols (0-9), hexadecimal uses sixteen distinct symbols: 0-9 for the values zero to nine, and A-F for the values ten to fifteen.
2.1.5.5 Conversion Between Other Bases
The general steps for converting a base 10 or "normal" number into another base are:
First, divide the number by the target base and take the remainder. This remainder is the first, i.e. least significant, digit of the new number in the other base.
Then repeat the process, dividing the quotient of the previous step by the new base and recording each remainder.
Repeat this process until the quotient becomes zero; the remainders, read in reverse order, form the number in the new base. A general sketch of this procedure is given below.
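The same repeated-division idea works for any base from 2 to 16. A general, hedged Python sketch (to_base is an illustrative helper name, not part of the notes):

    DIGITS = "0123456789ABCDEF"

    def to_base(number, base):
        """Convert a non-negative decimal integer into the given base (2-16)."""
        if number == 0:
            return "0"
        result = []
        while number > 0:
            number, remainder = divmod(number, base)
            result.append(DIGITS[remainder])
        return "".join(reversed(result))

    print(to_base(109, 2))    # 1101101
    print(to_base(109, 8))    # 155
    print(to_base(4799, 16))  # 12BF

This generalizes the decimal-to-hexadecimal routine sketched earlier; only the divisor changes.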
Hexadecimal to Octal
Example 11: Convert the hexadecimal number 8B6E to an octal number.
Solution
One method is to convert the hex number to binary and then convert from binary to octal.
Write each hex digit as a four-bit binary number:
8 → 1000, B → 1011, 6 → 0110, E → 1110, giving 1000 1011 0110 1110.
Starting from the right, split the binary representation into groups of three, padding the leftmost group with zeros if required:
001 000 101 101 101 110
Converting each group of three bits to an octal digit gives 1 0 5 5 5 6, so 8B6E_16 = 105556_8. A code sketch of this method follows.
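A hedged Python sketch of the hex -> binary -> octal route (hex_to_octal is an illustrative name, not part of the notes):

    def hex_to_octal(hex_text):
        """Hex digits -> 4 bits each -> regroup into 3-bit triples -> octal digits."""
        bits = "".join(f"{int(digit, 16):04b}" for digit in hex_text)   # 4 bits per hex digit
        bits = bits.zfill((len(bits) + 2) // 3 * 3)                     # pad to a multiple of 3
        return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

    print(hex_to_octal("8B6E"))    # 105556
    print(oct(int("8B6E", 16)))    # 0o105556, Python's built-in check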
Octal to Hexadecimal
Example: Convert the octal number 6473 to a hex number.
Solution
All we have to do is reverse the process in the previous example.
Write each octal digit as a three-bit binary number:
6 → 110, 4 → 100, 7 → 111, 3 → 011, giving 110 100 111 011.
Starting from the right, split the binary representation into groups of four, padding the leftmost group with zeros if required:
1101 0011 1011
Convert each four-bit group to its decimal equivalent, e.g. 1011_2 = 11_10, and then write each value as a hex digit, e.g. 11_10 = B_16. This gives D 3 B, so 6473_8 = D3B_16. The sketch below mirrors this reverse direction.
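And the reverse direction, again as a hedged sketch (octal_to_hex is an illustrative name, not part of the notes):

    def octal_to_hex(octal_text):
        """Octal digits -> 3 bits each -> regroup into 4-bit groups -> hex digits."""
        bits = "".join(f"{int(digit, 8):03b}" for digit in octal_text)  # 3 bits per octal digit
        bits = bits.zfill((len(bits) + 3) // 4 * 4)                     # pad to a multiple of 4
        return "".join(f"{int(bits[i:i + 4], 2):X}" for i in range(0, len(bits), 4))

    print(octal_to_hex("6473"))    # D3B
    print(hex(int("6473", 8)))     # 0xd3b, Python's built-in check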
Assembly language is an intermediate language: higher-level than low-level (machine) language but lower-level than a high-level language. It uses numbers, symbols, and abbreviations (mnemonics) instead of 0s and 1s. High-level languages, by contrast, are abstracted from the hardware and easier to use.
Machine Language vs Assembly Language
1. In machine language, data are represented only in binary (0s and 1s), hexadecimal, or octal form; in assembly language, data can be represented with mnemonics such as MOV, ADD, SUB, END, etc.
2. Machine language is very difficult for human beings to understand; assembly language is easier to understand by comparison.
3. Modifications and error fixing are impractical in machine language; in assembly language they can be done readily.
4. Machine language is very difficult to memorize, so it is not practical to learn directly; assembly language is easier to memorize because letters and mnemonics are used.
5. Execution is fast in machine language because the data is already in binary form; execution of assembly language is slower by comparison.
6. In machine language it is easy to access hardware components; in assembly language it is comparatively difficult.
High-Level Language vs Low-Level Language
1. A high-level language is a programmer-friendly language; a low-level language is a machine-friendly language.
2. A high-level language is less memory efficient; a low-level language is more memory efficient.
6. A high-level language is portable; a low-level language is non-portable.
1. First-Generation Language: The first-generation languages are also called machine languages / 1GL. This language is machine-dependent. Machine language statements are written in binary code (0/1 form) because the computer can understand only binary.
Advantages :
1. Fast & efficient as statements are directly written in binary language.
2. No translator is required.
Disadvantages :
1. Difficult to learn binary codes.
2. Programs are difficult to understand, and locating errors is hard.
2. Second-Generation Language: The second-generation languages are also called assembler languages / 2GL. Assembly language contains human-readable notations that can be converted to machine language using an assembler.
Advantages :
1. It is easier to understand if compared to machine language.
2. Modifications are easy.
3. Correction & location of errors are easy.
Disadvantages :
1. Assembler is required.
2. This language is architecture/machine-dependent, with a different instruction set for different machines.
3. Third-Generation Language: The third generation is also called procedural language / 3GL. It uses a series of English-like words that humans can understand easily to write instructions. It is also called a High-Level Programming Language. For execution, a program in this language must be translated into machine language using a compiler or interpreter. Examples of this type of language are C, PASCAL, FORTRAN, COBOL, etc.
Advantages :
1. Use of English-like words makes it a human-understandable language.
2. Lesser number of lines of code as compared to the above 2 languages.
3. The same code can be copied to another machine and executed there by using a compiler specific to that machine.
Disadvantages :
1. Compiler/ interpreter is needed.
2. Different compilers are needed for different machines.
4. Fourth-Generation Language: The fourth-generation languages are also called non-procedural languages / 4GL. They enable users to access and query databases. Examples: SQL, FoxPro, Focus, etc. These languages are also human-friendly and easy to understand.
Advantages :
1. Easy to understand & learn.
2. Less time is required for application creation.
3. It is less prone to errors.
Disadvantages :
1. Memory consumption is high.
2. Has poor control over Hardware.
3. Less flexible.
5. Fifth-Generation Language: The fifth-generation languages are also called 5GL. They are based on the concept of artificial intelligence and on the idea that, rather than solving a problem algorithmically, an application can be built to solve it from a set of constraints, i.e., we make computers learn to solve the problem. Parallel processing and superconductors are used with this type of language to pursue real artificial intelligence. Examples: PROLOG, LISP, etc.
Advantages :
1. Machines can make decisions.
2. The programmer effort needed to solve a problem is reduced.
3. Easier than 3GL or 4GL to learn and use.
Disadvantages :
1. Complex and long code.
2. More resources are required & they are expensive too.