Logic Assignment
Modal logic is a type of formal logic primarily developed in the 1960s that extends
classical propositional and predicate logic to include operators expressing modality. A modal—a word
that expresses a modality—qualifies a statement. For example, the statement "John is happy" might be
qualified by saying that John is usually happy, in which case the term "usually" is functioning as a modal.
The traditional alethic modalities, or modalities of truth, include possibility ("Possibly, p", "It is possible
that p"), necessity ("Necessarily, p", "It is necessary that p"), and impossibility ("Impossibly, p", "It is
impossible that p").
A formal modal logic represents modalities using modal operators. For example, "It might rain today"
and "It is possible that rain will fall today" both contain the notion of possibility. In a modal logic this is
represented as an operator, "Possibly", attached to the sentence "It will rain today".
These systems are usually built on propositional logic extended with new symbols.
The symbols of modal logic consist of a countably infinite set P of propositional variables,
logical connectives, parentheses, and the modal operator □. The choice of logical
connectives depends on the development of propositional logic one wants to follow; below I
choose negation and implication. The set of modal formulas is defined recursively as follows.
Every propositional variable is a formula. If φ and ψ are formulas then so are ¬φ, φ → ψ, and
□φ. All formulas are obtained by repeated application of these constructions. Proofs use
the modus ponens inference rule: from φ and φ → ψ infer ψ. The modal proof system also
must include rules and axioms for the modal operator. There is only one extra inference rule,
called generalization, necessitation, or □-introduction: from φ infer □φ. There are several
possibilities for extra axioms, resulting in different modal logics:
• the logic K is obtained by adding the distribution axiom: □(φ → ψ) → (□φ → □ψ);
• the logic K4 is obtained by adding the distribution axiom and the K4 axiom: □φ → □□φ;
• the provability logic GL is obtained by adding the distribution and K4 axioms and the Löb axiom: □(□φ → φ) → □φ.
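The relational (Kripke) semantics behind these systems can be sketched in Python. The particular model below (the worlds, the accessibility relation, and the valuation) is invented purely for illustration; □φ holds at a world exactly when φ holds at every accessible world:

```python
# A toy Kripke model: worlds, an accessibility relation, and a valuation
# saying which propositional variables hold at which worlds (all invented).
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
valuation = {"w1": {"p"}, "w2": {"p"}, "w3": set()}

def holds(formula, world):
    """Evaluate a modal formula at a world. Formulas are nested tuples:
    a variable name, ("not", f), ("implies", f, g), or ("box", f)."""
    if isinstance(formula, str):                      # propositional variable
        return formula in valuation[world]
    op = formula[0]
    if op == "not":
        return not holds(formula[1], world)
    if op == "implies":
        return (not holds(formula[1], world)) or holds(formula[2], world)
    if op == "box":       # "necessarily": true at all accessible worlds
        return all(holds(formula[1], v) for v in access[world])
    raise ValueError(f"unknown operator {op!r}")

print(holds(("box", "p"), "w2"))   # True: p holds at every world accessible from w2
print(holds(("box", "p"), "w1"))   # False: p fails at the accessible world w3
```

Note that □φ is vacuously true at w3, which has no accessible worlds.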
Fuzzy logic
Fuzzy Logic is a form of multi-valued logic derived from fuzzy set theory to deal with reasoning that is
approximate rather than precise. Fuzzy logic is not a vague logic system, but a system of logic for dealing
with vague concepts. As in fuzzy set theory the set membership values can range (inclusively) between 0
and 1, in fuzzy logic the degree of truth of a statement can range between 0 and 1 and is not
constrained to the two truth values true/false as in classic predicate logic.
Examples of fuzzy logic: In a fuzzy logic washing machine, the controller detects the type and amount of
laundry in the drum and allows only as much water to enter the machine as is really needed for the
loaded amount. Less water heats up quicker, which means less energy consumption. Additional
properties:
• Imbalance compensation: in the event of an imbalance, the machine calculates the maximum possible
spin speed, sets this speed, and starts spinning.
• Water level adjustment: Fuzzy automatic water level adjustment adapts water and energy
consumption to the individual requirements of each wash programme, depending on the amount of
laundry and type of fabric.
Fuzzy logic is an approach to computing based on "degrees of truth" rather than the
usual "true or false" (1 or 0) Boolean logic on which the modern computer is based.
In logic, fuzzy logic is a form of many-valued logic in which the truth value of variables
may be any real number between 0 and 1, both inclusive. It is employed to handle the
concept of partial truth, where the truth value may range between completely true and
completely false.[1] By contrast, in Boolean logic, the truth values of variables may only
be the integer values 0 or 1.
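To make degrees of truth concrete, here is a minimal sketch using the common min/max/complement (Zadeh) definitions of the fuzzy connectives; other definitions exist, and the example values are invented:

```python
def f_and(a, b): return min(a, b)      # fuzzy conjunction
def f_or(a, b):  return max(a, b)      # fuzzy disjunction
def f_not(a):    return 1.0 - a        # fuzzy negation

# Degrees of truth, not just 0 or 1 (illustrative values):
warm, cold = 0.2, 0.8
print(f_and(warm, cold))   # 0.2
print(f_or(warm, cold))    # 0.8
print(f_not(warm))         # 0.8
```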
Fuzzification is the process of assigning the numerical input of a system to fuzzy sets
with some degree of membership. This degree of membership may be anywhere within
the interval [0,1]. If it is 0 then the value does not belong to the given fuzzy set, and if it
is 1 then the value completely belongs within the fuzzy set. Any value between 0 and 1
represents the partial degree to which the value belongs to the set. These fuzzy sets
are typically described by words, and so by assigning the system input to fuzzy sets, we
can reason with it in a linguistically natural manner.
For example, suppose the meanings of the expressions cold, warm, and hot are
represented by membership functions mapping a temperature scale. A given point on
that scale then has three "truth values", one for each of the three functions. Consider a
particular temperature at which the "hot" function evaluates to zero: that temperature
may be interpreted as "not hot", i.e. it has zero membership in the fuzzy set "hot". If the
"warm" function evaluates to 0.2 and the "cold" function to 0.8, the same temperature
may be described as "slightly warm" and "fairly cold": it has 0.2 membership in the fuzzy
set "warm" and 0.8 membership in the fuzzy set "cold". The degrees of membership
assigned to each fuzzy set are the result of fuzzification.
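This fuzzification step can be sketched in Python. The membership-function shapes and temperature breakpoints below are invented for illustration; they are merely chosen so that 12 degrees comes out 0.8 cold, 0.2 warm, and not hot, matching the example values above:

```python
def triangular(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical temperature sets (breakpoints chosen purely for illustration).
def cold(t): return max(0.0, min(1.0, (20 - t) / 10))   # fully cold at <= 10 degrees
def warm(t): return triangular(t, 10, 20, 30)            # peaks at 20 degrees
def hot(t):  return max(0.0, min(1.0, (t - 20) / 10))   # fully hot at >= 30 degrees

t = 12  # an example temperature
print(cold(t), warm(t), hot(t))   # 0.8 0.2 0.0
```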
Advantages of fuzzy logic
• The system can work with any type of input, whether it is imprecise, distorted, or noisy.
• The construction of fuzzy logic systems is easy and understandable.
• Fuzzy logic builds on the mathematical concepts of set theory, and its reasoning is quite simple.
• It provides an efficient solution to complex problems in many fields, as it resembles human reasoning and decision making.
• The algorithms can be described with little data, so little memory is required.
Fuzzy logic is defined as a many-valued logic in which the truth values of variables may be any
real number between 0 and 1. It handles the concept of partial truth. In real life, we may come across
situations where we cannot decide whether a statement is true or false. At such times, fuzzy logic offers
very valuable flexibility for reasoning.
Intuitionistic logic
A type of logic which rejects the law of excluded middle or, equivalently, the law of
double negation elimination and/or Peirce's law. It is the foundation of intuitionism.
Intuitionistic logic is designed to capture a kind of reasoning in which certain
nonconstructive moves are disallowed. Proving the existence of an x satisfying ϕ(x) means that
you have to give a specific x, together with a proof that it satisfies ϕ.
Proving that ϕ or ψ holds requires that you can prove one or the other. Formally
speaking, intuitionistic logic is what you get if you restrict a proof system for classical
logic in a certain way. From the mathematical point of view, these are just formal
deductive systems, but, as already noted, they are intended to capture a kind of
mathematical reasoning. One can take this to be the kind of reasoning that is justified
on a certain philosophical view of mathematics (such as Brouwer’s intuitionism); one
can take it to be a kind of mathematical reasoning which is more “concrete” and
satisfying (along the lines of Bishop’s constructivism); and one can argue about whether
or not the formal description captures the informal motivation. But whatever
philosophical positions we may hold, we can study intuitionistic logic as a formally
presented logic; and for whatever reasons, many mathematical logicians find it
interesting to do so.
Łukasiewicz logic
Łukasiewicz's [1930] three-valued Aussagenkalkül allows propositions to have the values '1', '0', '1/2'
(see [9]). For convenience in comparison with standard truth-value semantics, these will be interpreted
in what follows as 'true' (T), 'false' (F), and 'undetermined' (U). The exact axiomatization of
Łukasiewicz's logic is unimportant for present purposes, but several versions of the theory have been
offered. Proof of the internal determinacy metatheorem requires only a consideration of the logic's
characteristic nonstandard truth-value semantics, which can be given as truth tables or matrix
definitions by cases of some choice of primitive propositional connectives. Here negation and the
conditional are taken as primitive, to which the other truth functions are reducible in the usual way.
1) (¬P → P) → P
2) P → (¬P → Q)
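The characteristic matrix semantics can be sketched in Python using the standard Łukasiewicz truth functions, ¬p = 1 − p and p → q = min(1, 1 − p + q), over the values {0, 1/2, 1}:

```python
from fractions import Fraction

values = [Fraction(0), Fraction(1, 2), Fraction(1)]   # F, U, T

def l_not(p):
    # Lukasiewicz negation: 1 - p
    return 1 - p

def l_implies(p, q):
    # Lukasiewicz conditional: min(1, 1 - p + q)
    return min(Fraction(1), 1 - p + q)

# Tabulate the conditional over all value pairs.
for p in values:
    for q in values:
        print(p, "->", q, "=", l_implies(p, q))

# U -> U evaluates to T, so P -> P remains a tautology; by contrast,
# formula 1) above, (not P -> P) -> P, takes the value U when P = U:
p = Fraction(1, 2)
print(l_implies(l_implies(l_not(p), p), p))   # 1/2
```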
The term probabilistic in our context refers to the use of probabilistic representations and reasoning
mechanisms grounded in probability theory.
The aim of probabilistic logic (also probability logic and probabilistic reasoning) is to combine the capacity
of probability theory to handle uncertainty with the capacity of deductive logic to exploit the
structure of formal argument. The result is a richer and more expressive formalism with a broad
range of possible application areas. Probabilistic logics attempt to find a natural extension of
traditional logic truth tables: the results they define are derived through probabilistic
expressions instead. A difficulty with probabilistic logics is that they tend to multiply the
computational complexities of their probabilistic and logical components.
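To make the flavor concrete, one classic building block of probabilistic logic is bounding the probability of a compound statement from the probabilities of its parts (the Fréchet bounds); the sketch below uses invented example probabilities:

```python
def conjunction_bounds(p_a, p_b):
    """Frechet bounds on P(A and B), given only P(A) and P(B)."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

def disjunction_bounds(p_a, p_b):
    """Frechet bounds on P(A or B), given only P(A) and P(B)."""
    return max(p_a, p_b), min(1.0, p_a + p_b)

# With P(A) = 0.75 and P(B) = 0.5, the conjunction is only pinned to a range:
lo, hi = conjunction_bounds(0.75, 0.5)
print(lo, hi)   # 0.25 0.5
```

Instead of a single truth value, the conclusion carries an interval of probabilities, which illustrates why combining the two formalisms inflates computational cost.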
Compiler Assignment
Bottom-up parsing
A parser is a compiler or interpreter component that breaks data into smaller
elements for easy translation into another language. A parser takes input in the
form of a sequence of tokens, interactive commands, or program instructions and
breaks them up into parts that can be used by other components in programming.
Bottom-up parsing starts from the leaf nodes of a tree and
works upward until it reaches the root node. Here, we start from a sentence and then apply
production rules in reverse in order to reach the start symbol.
Bottom-up parsing constructs the parse tree for an input string beginning from the
bottom (the leaves) and working towards the top (the root). A bottom-up parser
reduces the string to the start symbol of the grammar. During a
reduction, a specific substring matching the right side or body of a production is
replaced by the non-terminal at the head of that production. Bottom-up parsing
constructs a rightmost derivation in reverse order while scanning the input from left to
right.
As the name suggests, bottom-up parsing starts with the input symbols and tries to construct
the parse tree up to the start symbol.
Example:
Input string : a + b * c
Production rules:
S → E
E → E + T
E → E * T
E → T
T → id
Read the input and check whether any production matches (treating a, b, and c as id tokens):
a + b * c
T + b * c
E + b * c
E + T * c
E * c
E * T
E
S
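As an illustrative sketch (not a full parser), the reduction sequence above can be reproduced with a deliberately naive shift-reduce loop that reduces whenever a production's right-hand side appears on top of the stack; a real LR parser instead consults a parse table to choose between shifting and reducing. The final S → E step is left out here, and the parse accepts when the stack holds just E:

```python
# Grammar productions, tried in order against the top of the stack.
productions = [
    ("E", ("E", "+", "T")),
    ("E", ("E", "*", "T")),
    ("E", ("T",)),
    ("T", ("id",)),
]

def shift_reduce(tokens):
    stack, pos = [], 0
    while True:
        # Reduce while some production's right-hand side sits on top of the stack.
        for lhs, rhs in productions:
            if tuple(stack[-len(rhs):]) == rhs:
                stack[-len(rhs):] = [lhs]
                print("reduce:", stack + tokens[pos:])
                break
        else:
            if pos == len(tokens):
                break
            stack.append(tokens[pos]); pos += 1   # shift the next input symbol
            print("shift: ", stack + tokens[pos:])
    return stack

print(shift_reduce(["id", "+", "id", "*", "id"]))   # ['E']
```

Note that with this grammar, + and * have equal precedence and associate to the left, so reducing E + T before shifting * (as in the sequence above) is the grammar's intended parse.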
LR parsers don’t need left-factored grammars and can also handle left-recursive grammars.
A bottom-up parser generates the parse tree for the given
input string with the help of grammar productions by compressing (reducing)
substrings to non-terminals, i.e. it starts from the input string and ends at the
start symbol. It uses the reverse of the rightmost derivation.
Bottom-up parsers are further classified into 2 types: the LR parser and the operator
precedence parser.
(i). LR parser:
The LR parser is a bottom-up parser which generates the parse tree for the given string using an unambiguous grammar. It follows the
reverse of the rightmost derivation.
LR parser is of 4 types:
(a). LR(0)
(b). SLR(1)
(c). LALR(1)
(d). CLR(1)
E→E+T|T
T→T*F|F
F → ( E ) | id
A rightmost derivation for id + id * id is shown below:
E ⇒rm E + T ⇒rm E + T * F ⇒rm E + T * id ⇒rm E + F * id ⇒rm E + id * id
⇒rm T + id * id ⇒rm F + id * id ⇒rm id + id * id
• Reduce: the parser reduces the RHS of a production to its LHS. The handle always appears on top of the
stack.
Shift-reduce parsing uses two unique steps for bottom-up parsing. These steps are
known as shift-step and reduce-step.
Shift step: The shift step refers to the advancement of the input pointer to the
next input symbol, which is called the shifted symbol. This symbol is pushed onto
the stack. The shifted symbol is treated as a single node of the parse tree.
Reduce step: When the parser finds a complete grammar rule (RHS) and
replaces it with its (LHS), it is known as the reduce-step. This occurs when the top of the
stack contains a handle. To reduce, a POP function is performed on the stack,
which pops off the handle and replaces it with the LHS non-terminal symbol.
LR Parser
There are three widely used algorithms available for constructing an LR parser: SLR(1) (simple LR), LALR(1) (look-ahead LR), and CLR(1) (canonical LR).
Top-down (LL) parser vs. bottom-up (LR) parser:
• LL: uses the stack for designating what is still to be expected. LR: uses the stack for designating what is already seen.
• LL: builds the parse tree top-down. LR: builds the parse tree bottom-up.
• LL: continuously pops a nonterminal off the stack and pushes the corresponding right-hand side. LR: tries to recognize a right-hand side on the stack, pops it, and pushes the corresponding nonterminal.
• LL: reads the terminals when it pops one off the stack. LR: reads the terminals while it pushes them onto the stack.
• LL: performs a pre-order traversal of the parse tree. LR: performs a post-order traversal of the parse tree.
Code generator
Code generation can be considered the final phase of compilation. After code
generation, an optimization process can still be applied to the code, but that can be seen
as part of the code generation phase itself. The code generated by the compiler is
object code in some lower-level programming language, for example assembly
language. We have seen that the source code written in a higher-level language is
transformed into a lower-level language, resulting in lower-level object code. The code
generator should take the following into consideration:
Target language : The code generator has to be aware of the nature of the
target language for which the code is to be transformed. That language may
facilitate some machine-specific instructions to help the compiler generate the
code in a more convenient way. The target machine can have either CISC or
RISC processor architecture.
IR Type : Intermediate representation has various forms. It can be in Abstract
Syntax Tree (AST) structure, Reverse Polish Notation, or 3-address code.
Selection of instruction : The code generator takes Intermediate
Representation as input and converts (maps) it into target machine’s instruction
set. One representation can have many ways (instructions) to convert it, so it
becomes the responsibility of the code generator to choose the appropriate
instructions wisely.
Register allocation : A program has a number of values to be maintained during
execution. The target machine’s architecture may not allow all of these values
to be kept in CPU registers. The code generator decides which values to keep
in registers, and also which registers to use to keep these
values.
Ordering of instructions : At last, the code generator decides the order in which
the instruction will be executed. It creates schedules for instructions to execute
them.
The code generator produces the target code for three-address statements. It uses
registers to store the operands of the three-address statements.
Consider the three-address statement x := y + z. It can be translated into the following code sequence:
MOV y, R0
ADD z, R0
MOV R0, x
A code-generation algorithm:
The algorithm takes a sequence of three-address statements as input. For each three-address
statement of the form x := y op z, perform the following actions:
1. Invoke a function getreg to find a location L where the result of the computation y op
z should be stored.
2. Consult the address descriptor for y to determine y', the current location of y. If the value of y is
currently in both memory and a register, prefer the register as y'. If the value of y is not already in L,
then generate the instruction MOV y', L to place a copy of y in L.
3. Generate the instruction OP z', L, where z' is the current location of z. If z is
in both a register and a memory location, prefer the register. Update the address descriptor of x
to indicate that x is in location L. If x was already in L, update its descriptor and remove x from
all other descriptors.
4. If the current values of y and/or z have no next uses, are not live on exit from the block, and are in
registers, alter the register descriptors to indicate that, after execution of x := y op z,
those registers will no longer contain y and/or z.
Generating Code for Assignment Statements:
The assignment statement d:= (a-b) + (a-c) + (a-c) can be translated into the following
sequence of three address code:
1. t:= a-b
2. u:= a-c
3. v:= t +u
4. d:= v+u
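As an illustrative sketch (the tuple format and the one-register strategy are invented, not a real compiler's), a toy generator can turn such a sequence of three-address statements into assembly-like code by emitting a load/operate/store pattern per statement:

```python
# A toy generator for three-address statements (dest, src1, op, src2).
# Register handling is deliberately simplistic: every statement uses R0
# and stores the result back to memory, with none of getreg's reuse.
OPS = {"+": "ADD", "-": "SUB", "*": "MUL"}

def gen(statements):
    code = []
    for dest, src1, op, src2 in statements:
        code.append(f"MOV {src1}, R0")        # load the first operand into R0
        code.append(f"{OPS[op]} {src2}, R0")  # apply the operator to R0
        code.append(f"MOV R0, {dest}")        # store the result into dest
    return code

three_address = [("t", "a", "-", "b"),
                 ("u", "a", "-", "c"),
                 ("v", "t", "+", "u"),
                 ("d", "v", "+", "u")]
for line in gen(three_address):
    print(line)
```

A generator using getreg and the address/register descriptors described above would avoid the redundant loads and stores this naive version emits (e.g. v is stored and immediately reloaded).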
The final phase of compilation is code generation. An optimization process can be applied to the code
after code generation. The compiler generates object code in a lower-level
programming language, such as assembly language. The higher-level source code is
transformed into a lower-level language, producing lower-level object code with the
following property:
– The meaning intended by the programmer in the original source program should carry forward through each
compilation stage until code generation.
The principal tasks of the code generator are:
1. instruction selection,
2. register allocation and assignment,
3. instruction ordering.
Since producing optimal target code is in general intractable, code generators rely on:
– approximation algorithms
– heuristics
– conservative estimates
• The requirements imposed on a code generator are severe.
• The target program must preserve the semantic meaning of the source program.
• The target program must be of high quality; that is, it must make effective use of the
available resources of the target machine.
• Compilers that need to produce efficient target programs, include an optimization phase
prior to code generation.
• The optimizer maps the IR into IR from which more efficient code can be generated.
• In general, the code optimization and code-generation phases of a compiler, often referred to
as the back end, may make multiple passes over the IR before generating the target program.
• Register allocation and assignment involves deciding what values to keep in which
registers.
• The most important criterion for a code generator is that it produce correct code.
Issues in the design of a code generator include:
• Target programs
• Memory management
• Instruction selection
• Register allocation