PPL Unit-1
Prepared By
Ms. K Sailaja
Asst. Professor
CSE Dept, GNITC
UNIT-I
Preliminary Concepts: reasons for studying concepts of
programming languages, programming domains,
language evaluation criteria, influences on language
design, language categories, language design
trade-offs, implementation methods, programming
environments
● Web Software:
➢ An eclectic collection of languages: markup (e.g., XHTML), scripting (e.g., PHP), and general-purpose (e.g., Java).
❖ Language Evaluation Criteria
● Readability:
➢ The ease with which programs can be read and understood.
● Writability:
➢ The ease with which a language can be used to create programs.
● Reliability:
➢ Conformance to specifications (i.e., performs to its specifications).
● Cost:
➢ The ultimate total cost.
❖ Evaluation Criteria: Readability
➔ Overall simplicity
◆A manageable set of features and constructs.
◆Minimal feature multiplicity.
◆Minimal operator overloading.
➔ Orthogonality
◆A relatively small set of primitive constructs can be combined in a relatively small number of ways.
◆Every possible combination is legal and meaningful.
➔ Data types
◆Adequate predefined data types.
❖ Evaluation Criteria: Readability
➔ Syntax considerations
◆Identifier forms: flexible composition.
◆Special words and methods of forming compound statements.
◆Form and meaning: self-descriptive constructs, meaningful keywords.
❖ Evaluation Criteria: Writability
• Simplicity and orthogonality
– Few constructs, a small number of primitives, a small
set of rules for combining them.
● Support for abstraction
-The ability to define and use complex structures or
operations in ways that allow details to be ignored.
● Expressivity
– A set of relatively convenient ways of specifying
operations.
– Strength and number of operators and predefined
functions.
❖ Evaluation Criteria: Reliability
• Type checking
– Testing for type errors.
• Exception handling
– Intercept run-time errors and take corrective measures.
• Aliasing
– Presence of two or more distinct referencing methods for
the same memory location.
• Readability and writability
– A language that does not support “natural” ways of expressing an algorithm forces the use of “unnatural” approaches, which reduces reliability.
❖ Evaluation Criteria: Cost
• Training programmers to use the
language
• Writing programs
• Compiling programs
• Executing programs
• Language implementation system
• Reliability: poor reliability leads to high
costs
• Maintaining programs
❖ Evaluation Criteria: Others
• Portability
– The ease with which programs can be moved
from one implementation to another.
• Generality
– The applicability to a wide range of
applications.
• Well-definedness
– The completeness and precision of the
language’s official definition.
❖ Influences on Language Design
• Computer Architecture
– Languages are developed around the prevalent
computer architecture, known as the von
Neumann architecture
• Programming Methodologies
– New software development methodologies (e.g.,
object-oriented software development) led to
new programming paradigms and by extension,
new programming languages
❖ Computer Architecture Influence
• Well-known computer architecture: Von Neumann
• Imperative languages, most dominant, because of von
Neumann computers
– Data and programs stored in memory
– Memory is separate from CPU
– Instructions and data are piped from memory to CPU
– Basis for imperative languages
• Variables model memory cells
• Assignment statements model piping
• Iteration is efficient
❖ The Von Neumann Architecture
[Figure: the von Neumann architecture: memory connected to the CPU (control unit and arithmetic-logic unit) by a single path carrying both instructions and data]
❖ The Von Neumann Architecture
• Fetch-execute cycle (on a von Neumann architecture computer):
initialize the program counter
repeat forever
  fetch the instruction pointed to by the counter
  increment the counter
  decode the instruction
  execute the instruction
end repeat
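The cycle above can be sketched in a few lines of code. The following is a minimal, hypothetical machine: the opcodes (LOAD/ADD/STORE/HALT) and the single accumulator are invented for illustration, but instructions and data share one memory, as the von Neumann model prescribes.

```python
# Minimal sketch of the fetch-execute cycle; the instruction set
# (LOAD/ADD/STORE/HALT) and the accumulator register are invented.
def run(memory):
    acc = 0                      # accumulator register
    pc = 0                       # initialize the program counter
    while True:                  # repeat forever
        op, arg = memory[pc]     # fetch the instruction pointed to by the counter
        pc += 1                  # increment the counter
        if op == "LOAD":         # decode the instruction, then execute it
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Cells 0-3 hold the program, cells 4-5 hold data: compute mem[5] = mem[4] + mem[4]
memory = {0: ("LOAD", 4), 1: ("ADD", 4), 2: ("STORE", 5), 3: ("HALT", None),
          4: 21, 5: 0}
print(run(memory)[5])  # 42
```

Note how the variables-model-memory-cells idea from the previous slide shows up directly: `memory[4]` and `memory[5]` behave exactly like program variables.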
Programming Methodologies Influences
• 1950s and early 1960s: Simple applications; worry about
machine efficiency
• Late 1960s: People efficiency became important; readability, better control structures
– structured programming
– top-down design and step-wise refinement
• Late 1970s: Process-oriented to data-oriented
– data abstraction
• Middle 1980s: Object-oriented programming
– Data abstraction + inheritance + polymorphism
Language Categories
• Imperative
– Central features are variables, assignment statements, and iteration
– Include languages that support object-oriented programming
– Include scripting languages
– Include the visual languages
– Examples: C, Java, Perl, JavaScript, Visual BASIC .NET, C++
• Functional
– Main means of making computations is by applying functions to given
parameters
– Examples: LISP, Scheme
• Logic
– Rule-based (rules are specified in no particular order)
– Example: Prolog
• Markup/programming hybrid
– Markup languages extended to support some programming
– Examples: JSTL, XSLT
❖Language Design Trade-Offs
•Reliability vs. cost of execution
– Example: Java demands all references to array elements be checked
for proper indexing, which leads to increased execution costs
❖ Implementation Methods
• Compilation
– Programs are translated into machine language
• Pure Interpretation
– Programs are interpreted by another program known as an interpreter
Unit-1 (PRINCIPLES OF PROGRAMMING LANGUAGES)
❖ Layered View of Computer
The operating system and language implementations are layered over the machine-language interface of a computer.
Compilation
• Translate high-level program (source language) into machine
code (machine language)
• Fast execution
• Compilation process has several phases:
– lexical analysis: converts characters in the source program into lexical
units
– syntax analysis: transforms lexical units into parse trees which
represent the syntactic structure of program
– semantic analysis & intermediate code generation: intermediate code is generated
– code generation: machine code is generated
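As a sketch of the first phase, a lexical analyzer can be driven by a token specification. The token names and the tiny assignment-statement language below are invented for illustration.

```python
import re

# Hypothetical token specification for tiny statements like "A = B * (A + C)".
TOKEN_SPEC = [
    ("ID",     r"[A-Za-z]\w*"),   # identifiers
    ("ASSIGN", r"="),             # assignment operator
    ("OP",     r"[+*]"),          # arithmetic operators
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),           # whitespace, discarded
]

def lex(source):
    """Convert the characters of the source program into lexical units."""
    pattern = "|".join(f"(?P<{name}>{regex})" for name, regex in TOKEN_SPEC)
    return [(m.lastgroup, m.group())
            for m in re.finditer(pattern, source)
            if m.lastgroup != "SKIP"]

print(lex("A = B * (A + C)"))
# [('ID', 'A'), ('ASSIGN', '='), ('ID', 'B'), ('OP', '*'), ('LPAREN', '('),
#  ('ID', 'A'), ('OP', '+'), ('ID', 'C'), ('RPAREN', ')')]
```

The resulting token list is what the syntax analyzer would consume to build a parse tree.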
The Compilation Process
[Figure: phases of compilation, from source program through lexical analysis, syntax analysis, and intermediate code generation to machine-code generation]
Von Neumann Bottleneck
• Connection speed between a computer’s
memory and its processor determines the speed
of a computer
• Program instructions often can be executed much
faster than the speed of the connection; the
connection speed thus results in a bottleneck
• Known as the von Neumann bottleneck; it is
the primary limiting factor in the speed of
computers
Pure Interpretation
• Easier implementation of programs (run-time errors can
easily and immediately be displayed)
• Slower execution (10 to 100 times slower than compiled
programs)
• Often requires more space
• Now rare for traditional high-level languages
• Significant comeback with some Web scripting languages
(e.g., JavaScript, PHP)
Pure Interpretation Process
[Figure: source program and input data fed directly to an interpreter, which produces the results]
Hybrid Implementation Systems
• A compromise between compilers and pure
interpreters
• A high-level language program is translated to an
intermediate language that allows easy
interpretation
• Faster than pure interpretation
• Examples
– Perl programs are partially compiled to detect errors before
interpretation
– Initial implementations of Java were hybrid; the intermediate form,
byte code, provides portability to any machine that has a byte code
interpreter and a run-time system (together, these are called Java
Virtual Machine)
Hybrid Implementation Process
[Figure: source program translated to intermediate code, which is then interpreted]
Programming Environments
•The collection of tools used in software development
• The old way used the console and independent tools
– UNIX/Linux
• vi for editing
• compiler
• debugger
• Integrated Development Environments provide a graphical
interface to most of the necessary tools
Programming Environments
• Eclipse
– An integrated development environment for Java, written in Java
– Support for other languages is also available
• Borland JBuilder, NetBeans
– Other integrated development environments for Java
• Microsoft Visual Studio.NET
– A large, complex visual environment
– Used to program in C#, Visual Basic .NET, JScript, and J#
Introduction to syntax and semantics
Formal Definition of Languages
• Recognizers
– A recognition device reads input strings over the alphabet of the
language and decides whether the input strings belong to the
language
– Example: syntax analysis part of a compiler
– It uses a mechanism called an ‘automaton’.
• Generators
– A device that generates sentences of a language
– One can determine if the syntax of a particular sentence is
syntactically correct by comparing it to the structure of the generator
Formal Method of Describing the
Language
Formal language-generation mechanisms, usually called grammars, are commonly used to describe the syntax of programming languages.
Formal Method of Describing the
Language
• Context-Free Grammars
– Developed by Noam Chomsky in the mid-1950s
– Language generators, meant to describe the syntax of
natural languages
– Define a class of languages called context-free
languages
• Backus-Naur Form (BNF) (1959)
– Invented by John Backus to describe Algol 58
– BNF is equivalent to context-free grammars
– BNF is a metalanguage for programming languages
BNF Fundamentals
• BNF uses abstractions for syntactic structures. A simple Java assignment, for example, might be represented by the abstraction <assign> (angle brackets are often used to delimit the names of abstractions). The actual definition of <assign> can be given by
<assign> → <var> = <expression>
Grammars and Derivations
•A grammar is a generative device for defining
languages.
Derivations
A derivation is a repeated application of production rules, starting with the start symbol and ending with a sentence; it shows how an input string is generated by the grammar.
Examples
• Production rules:
E → E + E
E → E - E
E → a | b
• Input: a - b + a
Examples
A Grammar for Simple Assignment Statements
<assign> → <id> = <expr>
<id> → A | B | C
<expr> → <id> + <expr>
| <id> * <expr>
| ( <expr> )
| <id>
For example, consider the statement A = B * ( A + C )
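Using the rules above, this statement can be generated by a leftmost derivation, rewriting the leftmost nonterminal at each step:

<assign> => <id> = <expr>
         => A = <expr>
         => A = <id> * <expr>
         => A = B * <expr>
         => A = B * ( <expr> )
         => A = B * ( <id> + <expr> )
         => A = B * ( A + <expr> )
         => A = B * ( A + <id> )
         => A = B * ( A + C )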
Parse Tree
One of the most attractive features of grammars is
that they naturally describe the hierarchical
syntactic structure of the sentences of the
languages they define. These hierarchical
structures are called parse trees.
Parse Tree
[Figure: parse tree for the statement A = B * ( A + C )]
Ambiguity in Grammars
A grammar that generates a sentential form
for which there are two or more distinct parse
trees is said to be ambiguous.
An Ambiguous Grammar for Simple Assignment Statements (example sentence: A = B + C * A)
<assign> → <id> = <expr>
<id> → A | B | C
<expr> → <expr> + <expr>
| <expr> * <expr>
| ( <expr> )
| <id>
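The ambiguity can be seen directly: A = B + C * A has two distinct leftmost derivations under this grammar, one grouping the expression as B + (C * A) and the other as (B + C) * A.

Derivation 1 (+ expanded first):
<assign> => <id> = <expr> => A = <expr>
         => A = <expr> + <expr>
         => A = <id> + <expr> => A = B + <expr>
         => A = B + <expr> * <expr>
         => A = B + <id> * <expr> => A = B + C * <expr>
         => A = B + C * <id> => A = B + C * A

Derivation 2 (* expanded first):
<assign> => <id> = <expr> => A = <expr>
         => A = <expr> * <expr>
         => A = <expr> + <expr> * <expr>
         => A = <id> + <expr> * <expr> => A = B + <expr> * <expr>
         => A = B + <id> * <expr> => A = B + C * <expr>
         => A = B + C * <id> => A = B + C * A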
Ambiguous Grammar Example
[Figure: two distinct parse trees for A = B + C * A, one grouping B + (C * A) and the other (B + C) * A]
Ambiguity in Grammars
There are several other characteristics of a grammar that are sometimes useful in determining whether it is ambiguous. They include the following:
(1) the grammar generates a sentence with more than one leftmost derivation, and
(2) the grammar generates a sentence with more than one rightmost derivation.
An Unambiguous Expression Grammar
(Operator Precedence)
• If we use the parse tree to indicate precedence
levels of the operators, we cannot have
ambiguity.
An Unambiguous Grammar for Expressions
<assign> → <id> = <expr>
<id> → A | B | C
<expr> → <expr> + <term>
| <term>
<term> → <term> * <factor>
| <factor>
<factor> → ( <expr> )
| <id>
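A sketch of how this grammar maps onto code: the recognizer below (the function names are invented) uses one function per nonterminal, with iteration standing in for the left-recursive rules; tokens are assumed to be single characters.

```python
# Recognizer for the unambiguous expression grammar above.
# Each nonterminal becomes a function returning the index just past the
# phrase it matched, or None on failure.
def is_assign(tokens):
    def expr(i):                      # <expr> -> <term> { + <term> }
        i = term(i)
        while i is not None and i < len(tokens) and tokens[i] == "+":
            i = term(i + 1)
        return i

    def term(i):                      # <term> -> <factor> { * <factor> }
        i = factor(i)
        while i is not None and i < len(tokens) and tokens[i] == "*":
            i = factor(i + 1)
        return i

    def factor(i):                    # <factor> -> ( <expr> ) | <id>
        if i is None or i >= len(tokens):
            return None
        if tokens[i] == "(":
            j = expr(i + 1)
            if j is not None and j < len(tokens) and tokens[j] == ")":
                return j + 1
            return None
        return i + 1 if tokens[i] in ("A", "B", "C") else None

    # <assign> -> <id> = <expr>, and the whole input must be consumed
    return (len(tokens) > 2 and tokens[0] in ("A", "B", "C")
            and tokens[1] == "=" and expr(2) == len(tokens))

print(is_assign(list("A=B*(A+C)")))  # True
print(is_assign(list("A=+B")))       # False
```

Because + and * live at different grammar levels, the structure the recognizer follows automatically gives * higher precedence, which is exactly the point of the unambiguous grammar.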
An Unambiguous Expression Grammar
Derivation of the sentence A = B + C * A using a leftmost derivation:
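Expanding the leftmost nonterminal of the unambiguous grammar at each step:

<assign> => <id> = <expr>
         => A = <expr>
         => A = <expr> + <term>
         => A = <term> + <term>
         => A = <factor> + <term>
         => A = <id> + <term>
         => A = B + <term>
         => A = B + <term> * <factor>
         => A = B + <factor> * <factor>
         => A = B + <id> * <factor>
         => A = B + C * <factor>
         => A = B + C * <id>
         => A = B + C * A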
An Unambiguous Expression Grammar
[Figure: the unique parse tree for A = B + C * A under the unambiguous grammar]
Associativity of Operators
• Associativity specifies the order of evaluation, left-to-right or right-to-left, when the operators have equal precedence.
EBNF(Extended BNF)
• Because of a few minor inconveniences in BNF, it has
been extended in several ways. Most extended
versions are called Extended BNF, or simply EBNF,
even though they are not all exactly the same.
• The extensions do not enhance the descriptive
power of BNF; they only increase its readability and
writability.
EBNF(Extended BNF)
Three common extensions to BNF:
• Optional parts are placed in brackets [ ]
<if_stmt> → if (<expression>) <statement> [else <statement>]
• Alternative parts of RHSs are placed inside
parentheses and separated via vertical bars
<term> → <term> (+|-|*) const
• Repetitions (0 or more) are placed inside braces { }
<ident_list> → <identifier> {,<identifier>}
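For comparison, the repetition rule above written in plain BNF needs recursion in place of the braces:

<ident_list> → <identifier>
             | <identifier> , <ident_list>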
Some variations on BNF and EBNF
Attribute Grammars
• CFGs cannot describe all of the syntax of programming languages, so attribute grammars (AGs) were designed by Knuth (1968) to describe both the syntax and the static semantics of programs.
Attribute Grammars: Definition
Additional Features of AGs:
o Let X0 → X1 … Xn be a grammar rule.
Attribute Grammars: An Example
Computing Attribute Values
Computing Attribute Values
The following is an evaluation order of the attributes for the statement A = A + B:
1. <var>.actual_type ← look-up(A)
2. <expr>.expected_type ← <var>.actual_type
3. <var>[2].actual_type ← look-up(A)
   <var>[3].actual_type ← look-up(B)
4. <expr>.actual_type ← either int or real
5. <expr>.expected_type == <expr>.actual_type is either TRUE or FALSE
Computing Attribute Values
Computing Attribute Values
A fully attributed parse tree
Describing the Meanings of Programs:
Dynamic Semantics
• Dynamic semantics describes the meaning of the expressions, statements, and program units of a language.
• There are several reasons for needing a method of describing semantics:
– Programmers need to know the meaning of the language
semantics
– Compiler writers must know the semantics of the language for
which they are writing the compiler.
– Program correctness proofs are based on formal description of
the language semantics
– Designers could detect ambiguities and inconsistencies
Dynamic Semantics
• There are various methods available to describe the
semantics of the language constructs.
• Operational Semantics
• Axiomatic Semantics
• Denotational Semantics
Operational Semantics
• Operational Semantics
– Describe the meaning of a program by executing
its statements on a machine, either simulated or
actual. The change in the state of the machine
(memory, registers, etc.) defines the meaning of
the statement
• To describe operational semantics for a high-level
language, a virtual machine is needed
• A pure interpreter could serve this purpose, but it would require complex hardware and an operating system
Operational Semantics
Basic Process:
• Understanding the operational semantics of a language ‘L’ requires two components.
• The first component is a translator that converts L’s statements into a low-level language; the other component is a virtual machine for that low-level language.
• The state changes in the virtual machine when the low-level statements are executed define the meaning of those statements.
Operational Semantics (continued)
The operational semantics approach is used in textbooks and programming language reference manuals. For example, the semantics of the C for construct can be described in terms of simpler statements, as follows:
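In the usual description (expr1, expr2, and expr3 stand for the loop's three control expressions), the construct

for (expr1; expr2; expr3)
  ...

has the same meaning as the simpler statements

      expr1;
loop: if expr2 == 0 goto out
      ...
      expr3;
      goto loop
out:  ...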
Axiomatic Semantics
This method was developed to prove the
correctness of program statements. It
describes the logical characteristics of
programs. Mathematical logic forms the basis
for axiomatic semantics
• Axioms or inference rules are defined for each
statement type in the language
• The logic expressions are called 'assertions'
or 'predicates’.
Axiomatic Semantics
• An assertion immediately before a statement describes the constraints imposed on the program variables at that point; it is called a “precondition”. An assertion immediately after a statement describes the new constraints imposed on those variables after execution; it is called a “postcondition”.
• Preconditions and postconditions are written in braces to distinguish them from program statements.
Axiomatic Semantics Form
• A weakest precondition is the least restrictive precondition that will guarantee the postcondition.
• The axiomatic semantics of a given statement is usually expressed as
{P} S {Q}
where P is the precondition, S is the statement, and Q is the postcondition.
Axiomatic Semantics Form
• Example: consider the assignment statement and postcondition
X = 2 + Y - 5 {X < 10}
• The weakest precondition is computed by substituting the right-hand side of the assignment for X in the postcondition:
2 + Y - 5 < 10
Y - 3 < 10
Y < 13
Therefore the weakest precondition is {Y < 13}.
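A quick empirical check of this result (not a proof, and the sample range is arbitrary): every Y satisfying {Y < 13} establishes the postcondition, and admitting Y = 13 would break it.

```python
# Empirical check of the weakest precondition {Y < 13}
# for the statement X = 2 + Y - 5 with postcondition {X < 10}.
def postcondition_holds(y):
    x = 2 + y - 5          # execute the statement
    return x < 10          # evaluate the postcondition {X < 10}

# Every sampled Y satisfying Y < 13 establishes the postcondition...
assert all(postcondition_holds(y) for y in range(-1000, 13))
# ...and weakening the precondition to allow Y = 13 fails:
assert not postcondition_holds(13)
print("ok")
```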
Evaluation of Axiomatic Semantics
• Developing axioms or inference rules for all of
the statements in a language is difficult
• It is a good tool for correctness proofs, and an
excellent framework for reasoning about
programs, but it is not as useful for language
users and compiler writers
Denotational Semantics
• Denotational semantics describes the meaning
of a program as a mathematical object. It is
based on recursive function theory.
Denotational Semantics
• The process of creating a denotational semantics involves the following steps:
• Define a mathematical object for each entity
in a language.
• Define a function that maps instances of
language entities to corresponding
mathematical object instances.
Denotational Semantics
• The denotational semantics of a program can be described in terms of the state changes that occur in an idealized computer.
• These state changes can be defined through
mathematical functions.
• The state of a program is defined in terms of
values of all its current variables. If ‘s’ is the
state of a program then it is expressed as set
of ordered pairs as follows
Denotational Semantics
• s = {<a1,v1>, <a2,v2>, …, <an,vn>}
where a1, a2, …, an are the variable names and v1, v2, …, vn are their current values.
• Let VARMAP be a function that returns the current value of a variable, given the variable name and a state:
VARMAP(aj, s) = vj
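These definitions are easy to sketch in code: the state is modeled as a dict of name/value pairs, and the meaning of a statement is a function from state to state (the assignment chosen below is an invented example).

```python
# The state s as a set of <name, value> pairs, modeled as a dict.
s = {"a1": 1, "a2": 5}

def VARMAP(name, state):
    """Return the current value of a variable, given its name and a state."""
    return state[name]

# The meaning of an assignment such as "a1 = a1 + a2" is a function
# mapping one state to another; the original state is left unchanged.
def meaning_of_assignment(state):
    new_state = dict(state)
    new_state["a1"] = VARMAP("a1", state) + VARMAP("a2", state)
    return new_state

print(VARMAP("a2", s))                         # 5
print(VARMAP("a1", meaning_of_assignment(s)))  # 6
```

Treating the meaning as a pure function on states, rather than as a mutation, is what lets denotational semantics compose the meanings of statements mathematically.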
Denotational Semantics vs. Operational Semantics