

DA and AD Converters
A voltage, current, or other quantity that conveys information is called a signal.
• Signals are ultimately derived from the world outside the electronic system but may come from either real‐time
information sources or information‐storage sources.
• An analog signal is a form of electrical energy (voltage, current, or electromagnetic power) where there is
(ideally) a linear relationship between the electrical quantity and the value that the signal represents.
• To use the power of digital electronics, one must convert from analog to digital form on the experimental
measurement end and convert from digital to analog form on the control or output end of a laboratory system.
In contrast to an analog signal, which can be represented mathematically as a continuous function of time v(t), digital
signals are invariably expressed as discrete numbers called samples associated with discrete times separated by
intervals called sampling intervals.


Converters
There are two basic types of converters: digital-to-analog (DACs or D/As) and analog-to-digital (ADCs or A/Ds).
• In the case of DAC, they output an analog voltage that is a proportion of a reference voltage, the proportion
based on the digital word applied.
• In the case of the ADC, a digital representation of the analog voltage, applied to its input, is outputted
according to the reference voltage.
• The digital input or output is arranged in words of varying widths; each binary digit is a bit, and bits are commonly grouped in fours (nibbles) or eights (bytes).

Digital-To-Analog Architecture
• A Digital-to-Analog converter [DA or DAC] is an
electronic circuit that accepts a digital number at its
input and produces a corresponding analog signal
(usually a voltage) at the output.
• When data is in binary form, the 0s and 1s may take several voltage forms; in TTL logic, for example, a logic 0 may be any value up to 0.8 volts, while a logic 1 may be a voltage from 2 to 5 volts.
• The data can be converted to clean digital form using
gates designed to be on or off, depending on the
value of the incoming signal.


DAC Method – Shift Keying


This method is used to send computer information over transmission channels that require analog signals. In each of
these systems, an electromagnetic carrier wave is used to carry the information over distances. This basic process is given
the name "shift-keying" to differentiate it from the purely analog systems.
Amplitude Shift Keying
• It is a technique in which the carrier signal is analog, and data to be
modulated is digital. The amplitude of the analog carrier signal is modified
to reflect binary data.
• When modulated, the output is zero when the binary data is 0 and is the carrier signal when the data is 1.
• The frequency and phase of the carrier signal remain constant.

Frequency Shift Keying


• In this modulation, the frequency of the analog carrier signal is modified to
reflect binary data.
• The output of a frequency-shift-keyed wave is high in frequency for a binary high ('1') input and low in frequency for a binary low ('0') input.
• The amplitude and phase of the carrier signal remain constant.

Phase Shift Keying


• In this modulation, the phase of the analog carrier signal is modified to
reflect binary data.
• The amplitude and frequency of the carrier signal remain constant.

DAC Approaches
• Kelvin Divider (String DAC)
o Simplest voltage output DAC with a resistor
string and a set of switches.
o It uses the buffer, Op-Amp.
o Number of resistors and switches: 2^N
o The output impedance is code-dependent, so small code changes can produce transient glitches at the output.

V_O = V_R × Σ (i = 0 to N−1) [ b_i / 2^(i+1) ]
o Wherein:
▪ 𝑁 – number of bits
▪ 𝑉𝑂 – Voltage output
▪ 𝑉𝑅 – Reference voltage
▪ 𝑏𝑖 – Digital equivalent


Example: How many resistors and switches would be required to implement an 8-bit resistor-string DAC?
Resistors = 2^N = 2^8 = 256        Switches = Σ (i = 0 to N−1) 2^i = 2^8 − 1 = 255
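To make the component count and the output equation concrete, here is a minimal Python sketch (an illustration, not part of the handout) that assumes b_0 is the most significant bit, as in the summation above:

def string_dac(bits, v_ref):
    """Ideal Kelvin-divider (string) DAC: V_O = V_R * sum(b_i / 2^(i+1)),
    with bits[0] taken as the most significant bit."""
    return v_ref * sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

n = 8
print("resistors:", 2 ** n)        # 2^N = 256
print("switches :", 2 ** n - 1)    # sum of 2^i for i = 0..N-1 = 255
print(string_dac([1, 0, 0, 0, 0, 0, 0, 0], 5.0))   # 2.5, i.e. half of V_R for code 10000000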
• Weighted Sum DAC
o One way to achieve D/A conversion is to use a
summing amplifier.
o This approach is not satisfactory for many bits
because it requires too much precision in the
summing resistors.
o Inputs are weighted in the summing amplifier to
produce the corresponding analog voltage
I_O = V_R × Σ (i = 1 to N) [ b_i / (2^(i−1) × R) ]
V_O = −R_f × I_O
o Wherein:
▪ 𝑁 – number of bits
▪ 𝐼𝑂 – Current output
▪ 𝑉𝑂 – Voltage output
▪ 𝑉𝑅 – Reference voltage
▪ R_f – Feedback resistor
▪ 𝑅 – Junction Resistor
▪ 𝑏𝑖 – Digital equivalent
Example: Find output voltage and current for a binary-weighted resistor DAC of 4 bits with the given condition of 𝑅 =
10𝑘Ω, 𝑅𝑓 = 5𝑘Ω, 𝑉𝑅 = −10𝑣 and applied binary word is 1001.
Current output 𝐼𝑂 :
I_O = (−10 V) × [ 1/(2^0 × 10 kΩ) + 0/(2^1 × 10 kΩ) + 0/(2^2 × 10 kΩ) + 1/(2^3 × 10 kΩ) ] = −0.001125 A

Voltage output 𝑉𝑂 :
V_O = −R_f × I_O = −(5 kΩ × −0.001125 A) = 5.625 V
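The two equations can be checked numerically; the short Python sketch below is illustrative only and simply reproduces the worked example:

def weighted_sum_dac(bits, v_ref, r, r_f):
    """Binary-weighted summing DAC: I_O = V_R * sum(b_i / (2^(i-1) * R)), V_O = -R_f * I_O.
    bits[0] corresponds to b_1 (the MSB) in the formula above."""
    i_o = v_ref * sum(b / (2 ** i * r) for i, b in enumerate(bits))
    return i_o, -r_f * i_o

i_o, v_o = weighted_sum_dac([1, 0, 0, 1], v_ref=-10.0, r=10e3, r_f=5e3)
print(i_o, v_o)   # about -0.001125 A and 5.625 V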

• R-2R Ladder
o The summing amplifier with the R-2R ladder of
resistances shown produces the output where
the D's take the value 0 or 1.
o The digital inputs could be TTL voltages which
close the switches on a logical 1 and leave it
grounded for a logical 0.
o It has two modes: Voltage mode & Current
mode.
4-bit R-2R Ladder:
V_O = V_R × (R_f / R) × [ D_0/16 + D_1/8 + D_2/4 + D_3/2 ]
o Wherein:
▪ 𝑉𝑂 – Voltage output
▪ 𝑉𝑅 – Reference voltage
▪ R_f – Feedback resistor
▪ 𝑅 – Junction Resistor
▪ 𝐷𝑛 – Digital equivalent (4 bits)


Example: Find the voltage for an R-2R DAC of 4 bits with given condition of 𝑅 = 8𝑘Ω, 𝑅𝑓 = 2𝑘Ω, 𝑉𝑅 = 12𝑣 and
applied binary word is 1100.
V_O = (12 V) × (2 kΩ / 8 kΩ) × [ 0/16 + 0/8 + 1/4 + 1/2 ] = (12 V)(0.25)(0.75) = 2.25 V
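A quick numeric check of the R-2R equation, again as an illustrative Python sketch rather than anything from the handout, reproduces the 2.25 V result:

def r2r_dac_4bit(d, v_ref, r, r_f):
    """4-bit R-2R ladder: V_O = V_R*(R_f/R)*(D0/16 + D1/8 + D2/4 + D3/2).
    d = [D3, D2, D1, D0], written in the same order as the applied binary word."""
    d3, d2, d1, d0 = d
    return v_ref * (r_f / r) * (d0 / 16 + d1 / 8 + d2 / 4 + d3 / 2)

print(r2r_dac_4bit([1, 1, 0, 0], v_ref=12.0, r=8e3, r_f=2e3))   # 2.25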

Analog-To-Digital Architectures
• An Analog to Digital converter [AD or ADC] is an
electronic circuit that accepts an analog input signal
(usually a voltage) and produces a corresponding
digital number at the output.
• The basic principle of operation is to use the
comparator principle to determine whether to turn
on a bit of the binary number output.
• Most ADC chips also include some of the support
circuitry, such as a clock oscillator for the sampling
clock, reference, the sample and hold function, and
output data latches.

ADC Method - Pulse Code Modulation (PCM)


It is a method used to convert an analog signal into a digital signal so that the signal can be transmitted through a digital communication network.
Sampling
• This is the technique of collecting data at instantaneous values of the message signal so that the original signal can later be reconstructed.
• Pulse amplitude modulation is a technique in which the
amplitude of each pulse is controlled by the instantaneous
amplitude of the modulation signal.

Quantizing
• It is the process of mapping each sample to the nearest of a finite set of discrete levels, confining the data to a limited number of values.
• The sampled output, when given to the quantizer, is rounded to these levels, which removes redundant bits and compresses the value.

Encoder
• The digitization of the analog signal is done by the encoder.
• It designates each quantized level by a binary code.
• The sampling done here is the sample-and-hold process.
• A sample-and-hold circuit is usually used with an ADC to sample
the input analog signal and hold the sampled signal.
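The three PCM stages can be sketched in a few lines of Python; the sampling rate, bit depth, and test signal below are arbitrary illustrative choices, not values from the handout:

import math

fs, bits = 8000, 4                     # assumed sampling rate and word length
levels = 2 ** bits

samples = [math.sin(2 * math.pi * 50 * n / fs) for n in range(8)]        # sampling (PAM values)
codes = [min(levels - 1, int((s + 1) / 2 * levels)) for s in samples]    # quantizing to 16 levels
words = [format(c, "04b") for c in codes]                                # encoding to 4-bit words

print(list(zip([round(s, 3) for s in samples], words)))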


ADC Approaches
• Successive-approximation ADC (SAR)
o In this ADC, the normal counter is replaced
with a successive approximation register.
o This is designed to reduce the conversion time and to increase the speed of operation.
o The successive approximation registers count
by changing the bits.

Example: Calculate the digital and voltage value of a 4-bit ADC with 𝑉𝑖𝑛 = 0.6𝑣 and 𝑉𝑅 = 1𝑣.
Step 1: Determine the number of states and the resolution.
N = 2^n = 2^4 = 16
Res = V_R / N = 1 V / 16 = 0.0625 V

Step 2: Set each MSB weight by repeatedly dividing V_R by 2 until the value reaches the resolution at the last bit, MSB(0).
MSB(3) = 1 V / 2 = 0.5 V
MSB(2) = MSB(3) / 2 = 0.25 V
MSB(1) = MSB(2) / 2 = 0.125 V
MSB(0) = MSB(1) / 2 = 0.0625 V

Step 3: Compare the trial reference voltage with the input voltage.
First bit: divide V_R by 2 and compare with V_in; if V_in > V_R the bit is 1, and if V_in < V_R the bit is 0.
Succeeding bits: compute the trial V_R as the sum of the weights of all bits already set to 1 plus the current MSB weight, then compare with V_in as before.

MSB(3): V_R = 1 V / 2 = 0.5 V;          V_in = 0.6 V > 0.5 V,     so MSB(3) = 1
MSB(2): V_R = 0.5 + 0.25 = 0.75 V;      V_in = 0.6 V < 0.75 V,    so MSB(2) = 0
MSB(1): V_R = 0.5 + 0.125 = 0.625 V;    V_in = 0.6 V < 0.625 V,   so MSB(1) = 0
MSB(0): V_R = 0.5 + 0.0625 = 0.5625 V;  V_in = 0.6 V > 0.5625 V,  so MSB(0) = 1

Step 4: Line up all the bits from most to least significant to get the digital value: 1001.

Step 5: Add up the weights of all bits equal to 1 to find the voltage value.
V_D = [MSB(3) × 1] + [MSB(2) × 0] + [MSB(1) × 0] + [MSB(0) × 1]
V_D = (0.5 + 0 + 0 + 0.0625) V
V_D = 0.5625 V
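The five steps above are exactly a successive-approximation loop. The Python sketch below is an illustration (not part of the handout) and reproduces the 1001 code and 0.5625 V result:

def sar_adc(v_in, v_ref, n_bits):
    """Successive approximation: test the weights V_R/2, V_R/4, ... from MSB to LSB,
    keeping a bit when the trial voltage is still below the input."""
    code, acc = 0, 0.0
    for k in range(n_bits - 1, -1, -1):
        trial = acc + v_ref / 2 ** (n_bits - k)
        if v_in > trial:             # comparator decision, as in Step 3
            code |= 1 << k
            acc = trial
    return code, acc

code, v_d = sar_adc(0.6, 1.0, 4)
print(format(code, "04b"), v_d)      # 1001 0.5625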


• Flash ADC
o It is a series of comparators in which each input
compares to a unique reference voltage.
o The comparator outputs connect to a priority encoder
circuit, which produces a binary output.
o As the analog input voltage exceeds the reference
voltage at each comparator, the comparator outputs
will sequentially saturate to a high state.
o The priority encoder generates a binary number
based on the highest-order active input, ignoring all
other active inputs.
• Single-Slope Integrating ADC
o An unknown input voltage is integrated, and the
value is compared against a known reference value.
o The time it takes for the integrator to trip the
comparator is proportional to the unknown voltage.
o In this case, the known reference voltage must be
stable and accurate to guarantee the accuracy of the
measurement.

References:
Boylestad, R. & Nashelsky, R. (2013). Electronic devices and circuit theory (11th ed.). Pearson.
Lee, J., Jeelani, K., & Beckwith, J. (n.d.). Digital to analog converter [Slides]. Georgia Institute of Technology.
http://ume.gatech.edu/mechatronics_course/DAC_S04.pdf
Nave, R. (2016). Digital-to-analog conversion [Lecture]. Georgia State University. http://hyperphysics.phy-
astr.gsu.edu/hbase/Electronic/dac.html
Pandey, H. (Nov. 25, 2019). Digital to analog conversion. Geeksforgeeks. https://www.geeksforgeeks.org/digital-to-
analog-conversion/?ref=lbp
Stephan, K. (2015). Analog and mixed-signal electronics. Wiley.
Sunny Classroom. (Nov. 17, 2018). Pcm - Analog to digital conversion [Video]. YouTube.
https://www.youtube.com/watch?v=HlGJ6xxbz8s


Data Measurement and Acquisition


Measurement involves assigning numeric values to objects or events in order to give meaning to, and build understanding of, a variable. It is a way of refining ordinary observations by assigning numerical values to them.
• In measurement, the most used units define quantities of length, area, volume, angular measurement,
temperature, pressure, electrical/electronic units, and many more.
• Properties of objects that can take on different values are referred to as variables. Variable responses to
individual items on these scales are combined to create a single score meant to measure variables or traits.
• The measurement process involves recording observations that are manifestations of the underlying element.
o Precision is the degree of consistency of a group of measurements, while accuracy is the absolute
nearness of measured quantities to their true values.

Scales of Measurement
A scale is formed when several individual measurement items are combined to create a single, composite instrument. Measurement scales are important because they allow us to transform or substitute precise numbers for imprecise words.
• Nominal: Categorical data and numbers simply used as identifiers or names represent a nominal scale of
measurement.
o Numbers on the back of a baseball jersey.
o Social security number.
• Ordinal: An ordinal scale of measurement represents an ordered series of relationships or rank order.
o Individuals competing in a contest may be fortunate to achieve first, second, or third place.
o Likert-type scales.
• Interval: A scale representing the quantity and has equal units but for which zero simply represents an
additional point of measurement is an interval scale. There is no ‘true’ zero, only an ‘arbitrary’ zero.
o The Fahrenheit scale.
o Measurement of Sea Level.
• Ratio: The ratio scale of measurement is like the interval scale in that it also represents the quantity and has
equality of units. This scale also has an absolute zero (no numbers exist below the zero).
o Physical measures will represent ratio data (for example, height and weight).
o The length of a piece of wood in centimeters.

Scale      Indicates Difference   Indicates Direction of Difference   Indicates Amount of Difference   Absolute Zero
Nominal    X
Ordinal    X                      X
Interval   X                      X                                   X
Ratio      X                      X                                   X                                X

Reliability and Validity


The goal of the measurement process is to ensure that the values assigned to variables are reliable and valid. The
validity and reliability of a test are established by evidence.
• Validity ensures that the assignment of values truly reflects the underlying construct or concept.
o Types of Validity
▪ Face Validity – Measurements appear to measure what is intended.
▪ Content Validity – Measurements are drawn from the course or program material.
▪ Concurrent Validity – Measurements agree with other established measurements.
▪ Construct Validity – A series of measurements supports a psychological concept by predicting operationally defined behavior.
▪ Predictive Validity – Measurements predict some target behavior.


• Reliability ensures that the assignment of values is consistent or reproducible. Essentially, the consistency of
scores produced by a given instrument. A measuring instrument is
reliable if measurements recorded at different times give similar
results.
o Types of Reliability
▪ Test-Retest Reliability – Administer the same test twice and correlate the scores.
▪ Alternate Reliability – Administer two forms of the test and correlate the scores.
▪ Split-Half Reliability – Split the test into halves and correlate the scores.
▪ Inter-Rater Reliability – Have two or more raters score at the same time, then correlate the scores via their agreement.
Both validity and reliability are important in the measurement process
because the reproducibility of a measure, as well as the trueness of a
measure, is critical in research.

Measurement errors
It can be classified as random errors and nonrandom errors. In any measurement that includes errors, the true value is impossible to find, but it can be estimated from the measured quantities.
• Gross Errors (Mistakes): Large amounts, easy to find, must be eliminated before adjustment.
• Systematic Errors: Follows a mathematical function, can usually be checked and adjusted, and tend to
maintain the same sign. A systematic error such as confounding variables and biasing artificially “trend” the
measurement in one direction or another.
• Random Errors: What remains after gross and systematic errors are eliminated. They are impossible to compute or eliminate exactly, but they follow probability laws, so they can be adjusted. Their signs are not constant, they are present in all surveying measurements, and more observations result in a better estimate of them. Random error is an uncontrolled "noise" that does not dramatically impact the accuracy of the measurement.

Data Acquisition Parameters


It is the sampling of continuous real-world information to generate data that can be manipulated by a computer. A PC
can be used to provide data acquisition of real-world information such as voltage, current, temperature, pressure, or
sound.
• Sample Rate
All modern data acquisition digitizes the data, producing a digital representation that changes in discrete steps in both amplitude and time; any step smaller than the resolution of the data acquisition device cannot be represented, and the sample rate sets how often samples are captured.
• Filters
A filter can be used to separate the wanted signal from noise. Since there is a possibility of frequencies higher
than half the sample rate, a filter is almost always used in vibration measurement applications.
• Buffer Blocks
To capture data rapidly and precisely timed intervals, the low-level data acquisition driver does the work,
putting one sample after another into a portion of the PC’s RAM referred to as a buffer. This is used to speed
up data acquisition.
o Acquisition Types
▪ Polled (or asynchronous) acquisition can be used, in which the application determines when
to sample data from the data acquisition device, one sample at-a-time.
▪ Interrupt driven (or synchronous or buffered) acquisition acquires data in blocks, acquiring
many samples at once. Interrupt acquisition can give sample rates 10 to 1000 times faster
than polled.


o Acquisition Modes
▪ Continuous acquisition of data at rates over 100,000 samples per second can be achieved in
a certain software or hardware decoder. At these rates, data can only be streamed to disk in
binary format.
▪ Burst acquisition is even faster when the data acquisition device has its own buffer. The rate
of acquisition is limited only by the speed of the device and the size of its buffer.
• Time Delays
There is an inherent delay between the reading of the data and the processing of it. There is a small delay due
to the processing in the application.
• Noise
Noise is unwanted interference that affects the signal and may distort the information.
o Radiated Noise
This noise travels through the air as radio waves. To couple into a circuit or pass through an enclosure
efficiently, the dimensions of the circuit or the hole in the enclosure must be close to the wavelength of the
noise or much larger.
o Conducted Noise
This noise gets into a circuit on wires. These can be the signal wires picking up the measured signal, or they
can be the power supply wires. The conducting noise is reduced by shielding or filtering.

References:
Boylestad, R. & Nashelsky, R. (2013). Electronic devices and circuit theory (11th ed.). Pearson
Fernandez-Canque, H. (2017). Analog electronics applications – Fundamentals of design and analysis. CRC Press.
Schuler, C. (2019). Electronics: Principles and applications (9th ed.). McGraw-Hill.
Stephan, K. (2015). Analog and mixed-signal electronics. Wiley.
Storey, N. (2017). Electronics: A systems approach (6th ed.). Pearson.


Karnaugh Map Definition


What is a Karnaugh Map? A Karnaugh Map, also known as the Veitch diagram or the K-map, was first proposed by
Edward Veitch and modified by Maurice Karnaugh (a telecommunications engineer at Bell Labs) while designing digital
logic-based telephone switching circuits in 1953. It is an arrangement of boxes or squares called cells, where each cell
corresponds to one line of a truth table (shown below). Also, it represents a different combination of the variables
(either in minterm or maxterm) of a Boolean function.

The accompanying figures show a two-variable truth table (inputs A and B, output F) being copied row by row into a 2-by-2 K-map, with A placed along the left side and B along the top.

The binary value (either in 0s or 1s) for each box is the binary value of the output terms in the corresponding table
row, while the input variables are the cells’ coordinates.

Below is an example of a truth table of the X-OR gate (taken from Logic Gates discussion) and its corresponding
Karnaugh map.

Row Number   A   B   F = A ⊕ B
0            0   0   0
1            0   1   1
2            1   0   1
3            1   1   0

Corresponding K-map (B along the top as the x-coordinate, A along the side as the y-coordinate; the literals A', B' label the 0 row/column and A, B label the 1 row/column):

             B = 0 (B')      B = 1 (B)
A = 0 (A')   0  (row #0)     1  (row #1)
A = 1 (A)    1  (row #2)     0  (row #3)

The values inside the squares are copied from the output column of the truth table. Therefore, there is one (1)
square in the map for every row in the truth table. Around the edge of the Karnaugh map are the values of the
two (2) input variables. B is along the top, and A is placed in the left side of the K-map.

In contrast to a truth table, in which the input values typically follow a standard binary sequence (00, 01, 10, 11),
the Karnaugh map's input values are ordered as 00, 01, 11, and 10 such that one bit changes from one cell to the
next. This ordering is known as a Gray code.

Note: Gray code will be discussed further on the Code Conversion topic.

Adjacent cells in the map are arranged so that only one variable changes when crossing a horizontal or vertical cell boundary; any adjacent cells that are grouped together can therefore eliminate terms of the form given by Postulate 6a, A • A' = 0.


With regards to the grouping together of adjacent cells containing both ones (for SOP) or zeros (for POS), the
Karnaugh map uses the following rules for the simplification of expressions:

• Groups should NOT include cells with different values, as shown below:
The accompanying K-maps illustrate groupings that correctly contain only cells with the same value, alongside groupings that wrongly mix 1s and 0s.

• Groups can be in horizontal or vertical directions, but NOT diagonal.


The accompanying K-maps show cells grouped along rows and columns; diagonal cells are never grouped.

• Groups should contain 2^n cells. This implies that if n = 1, a group will contain two 1's since 2^1 = 2 (as shown below). If n = 2, a group will contain four 1's since 2^2 = 4.
The accompanying K-maps contrast groups of 2 and 4 cells (allowed) with groups of 3 and 5 cells (not allowed).

• Groups may overlap, and each group should be as large as possible. The larger the number of 1’s or 0’s
grouped together, the simpler is the product term or the sum term that the group represents.
BC 𝐵𝐵�𝐶𝐶̅ 𝐵𝐵�𝐶𝐶 BC 𝐵𝐵𝐶𝐶̅ Groups not BC 𝐵𝐵�𝐶𝐶̅ 𝐵𝐵�𝐶𝐶 BC 𝐵𝐵𝐶𝐶̅ Groups
A 00 01 11 10 overlapping A 00 01 11 10 overlapping
𝐴𝐴̅ 0 1 1 1 1 𝐴𝐴̅ 0 1 1 1 1
A 1 1 1 A 1 1 1

• Groups may wrap around the table. The leftmost cell in a row may be grouped with the rightmost cell, and
the topmost cell in a column may be grouped with the bottommost cell. Cells occupying the four corners
of the map are also included.


The accompanying K-maps show groups that wrap around the edges of the map, including a group formed by the four corner cells of a four-variable map.

Two and Three-Variable Maps


Two-Variable Map. The number of cells in a Karnaugh map is equal to 2^n, where n is the number of input variables (so 2^n is the total number of possible input combinations). Thus, for the case of 2 variables, we form a map consisting of 2^2 = 4 cells (a 2-by-2 matrix), as shown below, where B is along the top and A is down the left-hand side.

For Minterm (m):
  Columns: B = 0 (B'), B = 1 (B); Rows: A = 0 (A'), A = 1 (A)
  Cell (A = 0, B = 0) = row #0 = A'B' (00)    Cell (A = 0, B = 1) = row #1 = A'B (01)
  Cell (A = 1, B = 0) = row #2 = AB' (10)     Cell (A = 1, B = 1) = row #3 = AB (11)

For Maxterm (M):
  Columns: B = 0 (B), B = 1 (B'); Rows: A = 0 (A), A = 1 (A')
  Cell (A = 0, B = 0) = row #0 = A + B (0+0)    Cell (A = 0, B = 1) = row #1 = A + B' (0+1)
  Cell (A = 1, B = 0) = row #2 = A' + B (1+0)   Cell (A = 1, B = 1) = row #3 = A' + B' (1+1)

Simplification rules for two-variable K-map

• One (1) square – 2 literals.
Sample Problem 1: Identify the function which generates the K-map shown. A single 1 in the minterm map gives the product of the literals labeling its cell, F = A'B'; a single 0 in the maxterm map gives the sum of the literals labeling its cell, F = A' + B'.

• Two (2) adjacent squares – 1 literal.
Sample Problem 2: Simplify the Boolean functions F = AB' + AB and F = (A + B)(A' + B). Grouping the two adjacent cells eliminates the variable that changes, giving F = A for the first function and F = B for the second.

• Four (4) adjacent squares – logic 1 (SOP); logic 0 (POS).
Sample Problem 3: Identify the function which generates the K-map shown. A map filled entirely with 1s gives F = 1, while a map filled entirely with 0s gives F = 0.
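Any of these simplifications can be confirmed by brute force over the truth table. The following Python check is an illustration only; it verifies Sample Problem 2 above:

from itertools import product

def equivalent(f, g, n):
    """Return True if two n-variable Boolean functions agree on every input row."""
    return all(f(*row) == g(*row) for row in product([0, 1], repeat=n))

# F = AB' + AB simplifies to A;  F = (A + B)(A' + B) simplifies to B
print(equivalent(lambda a, b: (a and not b) or (a and b), lambda a, b: bool(a), 2))        # True
print(equivalent(lambda a, b: (a or b) and ((not a) or b), lambda a, b: bool(b), 2))       # True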

Three-Variable Map. In the case of 3 variables, we form a map consisting of 2^3 = 8 cells (a 2-by-4 or 4-by-2 matrix), as shown below.

2-by-4 matrix (Minterm (m)) – the columns follow the Gray-code order BC = 00 (B'C'), 01 (B'C), 11 (BC), 10 (BC'):

A = 0 (A'):  row #0: A'B'C' (000)   row #1: A'B'C (001)   row #3: A'BC (011)   row #2: A'BC' (010)
A = 1 (A):   row #4: AB'C' (100)    row #5: AB'C (101)    row #7: ABC (111)    row #6: ABC' (110)

Note: The same table is applied for maxterm (M), except that the literals are in product of sum (POS) form – all
literals in “1’s” are complemented while all literals in “0’s” are not complemented.

Simplification rules for three-variable K-map

• One (1) square – 3 literals.
Sample Problem 1: Identify the function which generates the K-map shown. A single 1 in the cell A = 0, B = 1, C = 1 gives F = A'BC.

• Two (2) adjacent squares – 2 literals.
Sample Problem 2: Simplify the Boolean function F = AB'C + ABC + ABC'.
Plotting the three 1s in row A and grouping adjacent pairs gives F = AC + AB (two literals per group).
Simplifying further with Postulate 5a (Distributive), (A • B) + (A • C) = A • (B + C):
F = A(C + B)

• Four (4) adjacent squares – 1 literal.
Sample Problem 3: Simplify the Boolean function F = (A + B + C')(A' + B + C')(A + B' + C')(A' + B' + C').
The four 0s fall in the two columns where C = 1, so grouping them gives F = C'.

• Eight (8) adjacent squares – logic 1 (SOP); logic 0 (POS).
Sample Problem 4: Identify the function which generates the K-map shown. A map filled entirely with 1s gives F = 1.

Four-Variable Map
In the case of 4 variables, we form a map consisting of 2^4 = 16 cells (a 4-by-4 matrix), as shown below for minterms (m). The rows follow the Gray-code order AB = 00 (A'B'), 01 (A'B), 11 (AB), 10 (AB'), and the columns follow CD = 00 (C'D'), 01 (C'D), 11 (CD), 10 (CD').

AB = 00 (A'B'):  row #0: A'B'C'D' (0000)   row #1: A'B'C'D (0001)   row #3: A'B'CD (0011)   row #2: A'B'CD' (0010)
AB = 01 (A'B):   row #4: A'BC'D' (0100)    row #5: A'BC'D (0101)    row #7: A'BCD (0111)    row #6: A'BCD' (0110)
AB = 11 (AB):    row #12: ABC'D' (1100)    row #13: ABC'D (1101)    row #15: ABCD (1111)    row #14: ABCD' (1110)
AB = 10 (AB'):   row #8: AB'C'D' (1000)    row #9: AB'C'D (1001)    row #11: AB'CD (1011)   row #10: AB'CD' (1010)

Note: The same table is applied for maxterm (M), except that the literals are in product of sum (POS) form – all
literals in “1’s” are complemented while all literals in “0’s” are not complemented.

Simplification rules for four-variable K-map

• One (1) square – 4 literals.
Sample Problem 1: Identify the function which generates the K-map shown. A single 1 in the cell A'BCD' (row AB = 01, column CD = 10) gives F = A'BCD'.

• Two (2) adjacent squares – 3 literals.
Sample Problem 2: Simplify the Boolean function F = (A' + B' + C + D)(A' + B' + C' + D). Grouping the two adjacent 0s, which differ only in C, gives F = A' + B' + D.

• Four (4) squares – 2 literals.
Sample Problem 3: Identify the function which generates the K-map shown. Four 1s filling the column CD = 11 give F = CD.

• Eight (8) squares – 1 literal.
Sample Problem 4: Identify the function which generates the K-map shown. Eight 0s filling the two columns where D = 0 give F = D.

• Sixteen (16) squares – logic 1 (SOP); logic 0 (POS).
Sample Problem 5: Identify the function which generates the K-map shown. A map filled entirely with 1s gives F = 1.

Don’t Care Conditions


Sometimes, a situation arises in which some input variable combinations are not allowed. These unallowed states
are treated as the “don’t care” condition.

The “don’t care” condition (which is often represented with an “X” in a K-map) can either be 0 or 1. It does not
affect the result of the expression, since it is assumed that the combinations of the inputs leading to this condition
will never occur.

With don’t care conditions, further simplification of the K-map is often guaranteed.

Note: Don’t care conditions are further covered in the Code Conversion topic.


Sample Problem: Simplify the Boolean function F(A, B, C, D) = ΠM(6, 7, 10, 11, 12, 14, 15) with the don't-care conditions described by the function d(A, B, C, D) = ΠM(0, 3, 13).

Solution (POS map; 0 marks a required maxterm, X marks a don't-care cell, and the other cells are 1):

                 C + D (00)   C + D' (01)   C' + D' (11)   C' + D (10)
A + B   (00):    X (0)        (1)           X (3)          (2)
A + B'  (01):    (4)          (5)           0 (7)          0 (6)
A' + B' (11):    0 (12)       X (13)        0 (15)         0 (14)
A' + B  (10):    (8)          (9)           0 (11)         0 (10)

Grouping the 0s, and using the don't-cares where helpful, gives:

F = (A' + B')(B' + C')(A' + C')
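The result can be checked exhaustively: the simplified POS expression must be 0 at every required maxterm and may take either value at the don't-care cells. The Python check below is illustrative only:

from itertools import product

zeros = {6, 7, 10, 11, 12, 14, 15}     # required maxterms of F
dont_cares = {0, 3, 13}

def f(a, b, c, d):
    """Simplified POS result: F = (A' + B')(B' + C')(A' + C')."""
    return ((not a) or (not b)) and ((not b) or (not c)) and ((not a) or (not c))

ok = True
for a, b, c, d in product([0, 1], repeat=4):
    m = a * 8 + b * 4 + c * 2 + d
    if m in dont_cares:
        continue                       # don't-care cells may be either value
    ok &= (f(a, b, c, d) == (m not in zeros))
print(ok)                              # True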

References:
Karim, M., & Chen, X. (2017). Digital design: Basic concepts and principles. Boca Raton, FL: CRC Press/Taylor & Francis.
LaMeres, B. (2019). Introduction to logic circuits & logic design with VHDL (1st ed.). Springer International.
Ndjountche, T. (2016). Digital electronics 2: Sequential and arithmetic logic circuits. John Wiley & Sons.


NAND Implementation

NAND Gate as a NOT (Inverter) Gate – This is made by


connecting all the inputs and creating, in effect, a single
common input.

NAND Gate as an AND Gate – This is made by


connecting the output of one (1) NAND gate to the other
NAND gate inputs that are connected.

NAND Gate as an OR Gate – This is made by connecting


the two (2) NAND gates (each with all the inputs
connected) to another NAND gate’s inputs.

Example: Convert the following gate circuit diagram into


one built exclusively of NAND gates.

Solution:

NOR Implementation

NOR Gate as a NOT (Inverter) Gate – Just like the NAND


gate, this is made by connecting all the inputs and
creating, in effect, a single common input.

NOR Gate as an AND Gate – This is made by connecting


the two (2) NOR gates (each with all the inputs
connected) to another NOR gate’s inputs.


NOR Gate as an OR Gate – This is made by connecting


the output of one (1) NOR gate to the inputs of the other
NOR gate that are connected.

Example: Convert the following gate circuit diagram into one built exclusively of NOR gates.

Solution:
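These constructions are easy to verify by exhausting the truth table. The Python sketch below (an illustration, not part of the handout) builds NOT, AND, and OR from single nand and nor primitives and compares them with the ordinary operators:

def nand(a, b): return not (a and b)
def nor(a, b):  return not (a or b)

# NAND-only constructions
not_n = lambda a: nand(a, a)                        # inputs tied together
and_n = lambda a, b: nand(nand(a, b), nand(a, b))   # NAND followed by a NAND inverter
or_n  = lambda a, b: nand(nand(a, a), nand(b, b))   # invert each input, then NAND

# NOR-only constructions
not_r = lambda a: nor(a, a)
or_r  = lambda a, b: nor(nor(a, b), nor(a, b))      # NOR followed by a NOR inverter
and_r = lambda a, b: nor(nor(a, a), nor(b, b))      # invert each input, then NOR

rows = [(a, b) for a in (0, 1) for b in (0, 1)]
print(all(and_n(a, b) == bool(a and b) and or_n(a, b) == bool(a or b) and
          and_r(a, b) == bool(a and b) and or_r(a, b) == bool(a or b) and
          not_n(a) == (not a) and not_r(a) == (not a) for a, b in rows))   # True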

References:
Karim, M., & Chen, X. (2017). Digital design: Basic concepts and principles. Boca Raton, FL: CRC Press/Taylor & Francis
LaMeres, B. (2019). Introduction to logic circuits & logic design with VHDL (1st ed.). Springer International
Ndjountche, T. (2016). Digital electronics 2: Sequential and arithmetic logic circuits. John Wiley & Sons


PARTIAL FRACTION EXPANSION


The Laplace Transform is used to convert a time-domain signal to its frequency-domain counterpart. One of its advantages is that every solution becomes algebraic in nature, i.e., no differential equations are involved.

If a problem is solved using Laplace Transforms, then the final answer will still be in terms of the variable
“s” and not the time variable “t”. There is a need to find a way to reverse this operation to go back to
the time domain perspective.

This is where the Inverse Laplace Transform comes into play. It reverses the operation, thus transforming the frequency-domain signal into its time-domain representation. In order to do this, the Inverse Laplace Transform integral must be used, as given below.

f(t) = L^-1{F(s)} = (1 / (2πj)) ∫ from σ−j∞ to σ+j∞ of F(s) e^(st) ds

However, this integral is difficult to evaluate because it requires contour integration using complex-variables theory.

Instead, the complicated fraction is split up into simpler forms that appear in the Laplace Transform table.

The Laplace Transform table provides common engineering problem pairs. This implies that once the
polynomial is split into simpler fractions, all that is left is to look at the Laplace Transform table.

The Laplace Transform table is shown below.

f(t)                    F(s)               f(t)                       F(s)
u(t)                    1/s                t^n e^(-at) u(t)           n!/(s + a)^(n+1)
t^n u(t)                n!/s^(n+1)         cos(ωt) u(t)               s/(s² + ω²)
δ(t)                    1                  sin(ωt) u(t)               ω/(s² + ω²)
δ(t − a)                e^(-as)            e^(-at) cos(ωt) u(t)       (s + a)/((s + a)² + ω²)
K e^(-at) u(t)          K/(s + a)          e^(-at) sin(ωt) u(t)       ω/((s + a)² + ω²)

Most Laplace transform expressions are not in an immediately recognizable form but in most cases appear as a rational function, that is:

F(s) = N(s)/D(s) = (b_m s^m + b_(m−1) s^(m−1) + b_(m−2) s^(m−2) + ··· + b_1 s + b_0) / (a_n s^n + a_(n−1) s^(n−1) + a_(n−2) s^(n−2) + ··· + a_1 s + a_0)

where a_n and b_m are coefficients. If m < n, F(s) is a proper rational function; if m ≥ n, it is an improper rational function.

Remember that any rational function will always have a numerator and a denominator, and this time,
the numerator and denominator are functions of “s”.

From the rational function above, it is noteworthy to say that the coefficients bm and an can never be
equal to zero. If this happens, then the degree of the numerator and denominator will decrease by one.
Other coefficients other than these two can be equal to zero.


If the degree of the numerator is less than that of the degree of the denominator, then there exists a
proper rational function. Most engineering problems and scenarios are of this form.

If the degree of the numerator is greater than or equal that of the degree of the denominator, then there
exists an improper rational function.

One of the most important things to do in a partial fraction is getting the roots. Both the numerator and
the denominator have roots. Roots are values that will make the polynomial equal to zero.

If the numerator is equated to zero, then the roots are called zeros. If the denominator is equated to zero, then the roots are called poles. For partial fraction expansion, getting the zeros is not yet of utmost importance; getting the poles is the primary task, and these poles are determined through basic factoring techniques.

Once the denominator has been factored out to determine the poles, one would like to observe whether
these poles are distinct and real, repeated and real, complex conjugates, or a combination of real and
complex conjugate poles.

Complex conjugates mean that these poles always appear in pairs, thus, always have an even degree
in “s” polynomial. Recall also that the complex conjugate of a +bj is a – bj. As for another example,
the complex conjugate of 2 – 2j is 2 + 2j.

Because there is a slight difference in the attack of certain problems in determining the inverse Laplace
Transform, there is a need to explore different cases.

DISTINCT AND REAL POLES

The first step in proceeding with partial fraction for all cases would always be factoring. If all poles are
distinct and real, then for a given a proper rational polynomial in “s”, the rational function can be written
as below.

F(s) = N(s) / [ (s − p1)(s − p2)(s − p3) ··· (s − pn) ],   with p1 ≠ p2 ≠ p3 ≠ ··· ≠ pn

Notice that the poles are not equal to any of the other poles. They are totally unique from the rest of
the poles. If after examination, the poles are found unique, then the partial fraction expansion of the
given rational polynomial follows the form:

F(s) = A1/(s − p1) + A2/(s − p2) + A3/(s − p3) + ··· + An/(s − pn)

Observe that there will be “n” fractions in order to fully represent the given rational polynomial. For a
polynomial with degree “n”, there lies “n” number of values that will make the polynomial equal to zero.

One would also see that the above partial fraction expansion has denominators with degree equal to
one.

In partial fraction expansion, it is required to just evaluate the coefficients An. After all An’s are solved,
look at the Laplace Transform table, and convert it to the time domain representation.

The coefficients are evaluated as follows:

An = [ (s − pn) F(s) ] evaluated at s = pn

Looking at the Laplace transform table:

K/(s + a) ↔ K e^(-at) u(t)
The number raised in the exponential function represents the roots derived in the homogeneous
equation of the differential equation.

Example 1: Given that F(s) = (3s + 2) / (s² + 3s + 2).
The first step is always to factor the denominator. Leave the numerator as is.

F(s) = (3s + 2) / (s² + 3s + 2) = (3s + 2) / [ (s + 1)(s + 2) ]

The partial fraction expansion of the polynomial above is:

F(s) = A1/(s + 1) + A2/(s + 2)
To evaluate the coefficients A1 and A2, recall that:

A1 = [ (s − p1) F(s) ] at s = p1
A2 = [ (s − p2) F(s) ] at s = p2

Evaluate the coefficients:

A1 = [ (s + 1) · (3s + 2) / ((s + 1)(s + 2)) ] at s = −1
A1 = [ 3(−1) + 2 ] / (−1 + 2)
A1 = −1

and

A2 = [ (s + 2) · (3s + 2) / ((s + 1)(s + 2)) ] at s = −2
A2 = [ 3(−2) + 2 ] / (−2 + 1)
A2 = 4

Now that all coefficients are completely known, then the partial fraction expansion of the given rational
polynomial is:


F(s) = (3s + 2) / (s² + 3s + 2)
F(s) = (3s + 2) / [ (s + 1)(s + 2) ]
F(s) = −1/(s + 1) + 4/(s + 2)

Looking at the Laplace Transform table, the time-domain representation of the above partial fractions is:

f(t) = L^-1{F(s)} = ( −e^(-t) + 4e^(-2t) ) u(t)
Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.
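When a computer-algebra system is available, the expansion and the inverse transform can be cross-checked symbolically. The sketch below assumes SymPy is installed; it is an illustration, not part of the handout:

import sympy as sp

s, t = sp.symbols("s t")
F = (3 * s + 2) / (s ** 2 + 3 * s + 2)

print(sp.apart(F, s))                         # 4/(s + 2) - 1/(s + 1)
print(sp.inverse_laplace_transform(F, s, t))  # equivalent to (-e^(-t) + 4e^(-2t))u(t); printed form may differ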

Example 2: Given that F(s) = (3s² + 2s + 5) / (s³ + 12s² + 44s + 48).
The first step is always to factor the denominator. Leave the numerator as is.

F(s) = (3s² + 2s + 5) / (s³ + 12s² + 44s + 48) = (3s² + 2s + 5) / [ (s + 2)(s + 4)(s + 6) ]

The partial fraction expansion of the polynomial above is:

F(s) = A1/(s + 2) + A2/(s + 4) + A3/(s + 6)
To evaluate the coefficients A1 and A2, recall that:

A1 = (s − p1 )F (s ) s = p1

A2 = (s − p2 )F (s ) s = p
2

A3 = (s − p2 )F (s ) s = p
3

Evaluate the coefficients:

A1 = [ (s + 2) · (3s² + 2s + 5) / ((s + 2)(s + 4)(s + 6)) ] at s = −2
A1 = [ 3(−2)² + 2(−2) + 5 ] / [ (−2 + 4)(−2 + 6) ]
A1 = 13/8

A2 = [ (s + 4) · (3s² + 2s + 5) / ((s + 2)(s + 4)(s + 6)) ] at s = −4
A2 = [ 3(−4)² + 2(−4) + 5 ] / [ (−4 + 2)(−4 + 6) ]
A2 = −45/4
and

A3 = [ (s + 6) · (3s² + 2s + 5) / ((s + 2)(s + 4)(s + 6)) ] at s = −6
A3 = [ 3(−6)² + 2(−6) + 5 ] / [ (−6 + 2)(−6 + 4) ]
A3 = 101/8
Now that all coefficients are completely known, then the partial fraction expansion of the given rational
polynomial is:

F(s) = (3s² + 2s + 5) / (s³ + 12s² + 44s + 48)
F(s) = (3s² + 2s + 5) / [ (s + 2)(s + 4)(s + 6) ]
F(s) = (13/8)/(s + 2) − (45/4)/(s + 4) + (101/8)/(s + 6)

Looking at the Laplace Transform table, the time-domain representation of the above partial fractions is:

f(t) = L^-1{F(s)} = [ (13/8)e^(-2t) − (45/4)e^(-4t) + (101/8)e^(-6t) ] u(t)

Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.
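Because the arithmetic is easy to slip on, the residues can be double-checked numerically with the same cover-up rule; the short Python lines below are illustrative only and confirm 13/8, −45/4, and 101/8:

from fractions import Fraction as Fr

def numerator(s):                     # N(s) = 3s^2 + 2s + 5
    return 3 * s ** 2 + 2 * s + 5

poles = [Fr(-2), Fr(-4), Fr(-6)]      # roots of (s + 2)(s + 4)(s + 6)
for p in poles:
    others = [q for q in poles if q != p]
    residue = numerator(p) / ((p - others[0]) * (p - others[1]))
    print(p, residue)                 # -2 13/8, -4 -45/4, -6 101/8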

REPEATED AND REAL POLES

If repeated real poles with multiplicity m occur, then the proper rational function can be represented by:

F(s) = N(s) / [ (s − p1)^m (s − p2)(s − p3) ··· (s − pn) ]
The partial fraction expansion of such is:


F(s) = A11/(s − p1)^m + A12/(s − p1)^(m−1) + A13/(s − p1)^(m−2) + ··· + A1m/(s − p1)
       + A2/(s − p2) + A3/(s − p3) + ··· + An/(s − pn)

Notice that there occur “m” partial fractions for a pole with multiplicity “m”.

For the simple poles, the coefficients are evaluated just the same:

An = [ (s − pn) F(s) ] at s = pn

However, for poles with multiplicity “m”, the coefficients are evaluated by:

An = [ 1/(m − 1)! ] · d^(m−1)/ds^(m−1) [ (s − pn)^m F(s) ], evaluated at s = pn

In this way, the derivative operation becomes involved: a pole of multiplicity m requires up to m − 1 differentiations.

Example 1: Given F(s) = (s + 3) / [ (s + 1)²(s + 2) ].
The first step is always to factor the denominator. Leave the numerator as is. Since the given rational
polynomial is already factored, then the partial fraction expansion of the polynomial above is:

F(s) = A1/(s + 2) + A21/(s + 1)² + A22/(s + 1)
To evaluate the coefficients A1, A21, and A22, recall that:

A1 = [ (s − p1) F(s) ] at s = p1
A21 = [ (s − p2)² F(s) ] at s = p2
A22 = d/ds [ (s − p2)² F(s) ] at s = p2

Evaluate the coefficients:

A1 = [ (s + 2) · (s + 3) / ((s + 1)²(s + 2)) ] at s = −2
A1 = (−2 + 3) / (−2 + 1)² = 1

A21 = [ (s + 1)² · (s + 3) / ((s + 1)²(s + 2)) ] at s = −1
A21 = (−1 + 3) / (−1 + 2) = 2


and

A22 = d/ds [ (s + 1)² · (s + 3) / ((s + 1)²(s + 2)) ] at s = −1
A22 = d/ds [ (s + 3)/(s + 2) ] at s = −1 = [ (s + 2) − (s + 3) ] / (s + 2)² at s = −1
A22 = −1

Another way of getting the coefficient A22 is shown below. This method may be used in any case, though it is most often used here.

Since A1 and A21 are already known, substitute any value of s except the roots (in this case, −1 and −2). Let s = 0:

(s + 3) / [ (s + 1)²(s + 2) ] at s = 0  =  1/(s + 2) + 2/(s + 1)² + A22/(s + 1) at s = 0
3/2 = 1/2 + 2 + A22
A22 = 3/2 − 1/2 − 2 = −1
Notice that this answer is the same as the previous one but without the differentiation process.

Now that all coefficients are completely known, then the partial fraction expansion of the given rational
polynomial is:

F(s) = (s + 3) / [ (s + 1)²(s + 2) ]
F(s) = A1/(s + 2) + A21/(s + 1)² + A22/(s + 1)
F(s) = 1/(s + 2) + 2/(s + 1)² − 1/(s + 1)

Looking at the Laplace Transform table, the time domain representation of the above partial fractions
is:

f(t) = L^-1{F(s)} = ( e^(-2t) + 2te^(-t) − e^(-t) ) u(t)
Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.

Example 2: Given F(s) = (s² + 3s + 1) / [ (s + 1)³(s + 2)² ].


The first step is always to factor the denominator. Leave the numerator as is. Since the given rational
polynomial is already factored, then the partial fraction expansion of the polynomial above is:

F(s) = A11/(s + 1)³ + A12/(s + 1)² + A13/(s + 1) + A21/(s + 2)² + A22/(s + 2)

To evaluate the coefficients A11, A12, A13, A21, and A22, recall that:

A11 = [ (s + 1)³ F(s) ] at s = −1
A12 = d/ds [ (s + 1)³ F(s) ] at s = −1
A13 = (1/2) d²/ds² [ (s + 1)³ F(s) ] at s = −1
A21 = [ (s + 2)² F(s) ] at s = −2
A22 = d/ds [ (s + 2)² F(s) ] at s = −2

Evaluate the coefficients:

A11 = [ (s + 1)³ · (s² + 3s + 1) / ((s + 1)³(s + 2)²) ] at s = −1 = (1 − 3 + 1) / (−1 + 2)²
A11 = −1

A12 = d/ds [ (s² + 3s + 1) / (s + 2)² ] at s = −1
    = [ (s + 2)²(2s + 3) − 2(s + 2)(s² + 3s + 1) ] / (s + 2)^4, evaluated at s = −1
    = (s + 4) / (s + 2)³ at s = −1
A12 = 3


A13 = (1/2!) d²/ds² [ (s² + 3s + 1) / (s + 2)² ] at s = −1
    = (1/2) d/ds [ (s + 4) / (s + 2)³ ] at s = −1
    = (1/2) [ (s + 2)³ − 3(s + 2)²(s + 4) ] / (s + 2)^6, evaluated at s = −1
    = (−s − 5) / (s + 2)^4 at s = −1
A13 = −4

A21 = [ (s + 2)² · (s² + 3s + 1) / ((s + 1)³(s + 2)²) ] at s = −2 = (4 − 6 + 1) / (−2 + 1)³
A21 = 1

and

A22 = d/ds [ (s² + 3s + 1) / (s + 1)³ ] at s = −2
    = [ (s + 1)³(2s + 3) − 3(s + 1)²(s² + 3s + 1) ] / (s + 1)^6, evaluated at s = −2
    = (−s² − 4s) / (s + 1)^4 at s = −2
A22 = 4


Now that all coefficients are completely known, the partial fraction expansion of the given rational polynomial is:

F(s) = (s² + 3s + 1) / [ (s + 1)³(s + 2)² ]
F(s) = −1/(s + 1)³ + 3/(s + 1)² − 4/(s + 1) + 1/(s + 2)² + 4/(s + 2)

Looking at the Laplace Transform table, the time-domain representation of the above partial fractions is:

f(t) = L^-1{F(s)} = [ −(1/2)t²e^(-t) + 3te^(-t) − 4e^(-t) + te^(-2t) + 4e^(-2t) ] u(t)

Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.

COMPLEX POLES

In most cases, the poles in the denominator occur in complex conjugate pairs. These poles have both
the real and imaginary parts. Since these always occur in conjugate pairs, the number of poles is
always even.

One may think of these conjugate pairs as distinct complex poles. And indeed, it is true. However, the
difficulty of proceeding this way adds a little difficulty. One important note when it comes to complex
poles, their coefficients will also be the complex conjugate of the other.

Do not proceed in this manner and instead remember that complex poles will always result in a sinusoid
function in the time domain.

Use the frequency shifting property of the Laplace transform. This is stated below.

e−at f (t )  F (s + a )

Example 1: Given F(s) = (3s + 9) / (s² + 4s + 5).

The first step is always to factor the denominator. Leave the numerator as is. The partial fraction expansion of the polynomial above is:
The first step is always to factor the denominator. Leave the numerator as is. The partial fraction
expansion of the polynomial above is:

F(s) = (3s + 9) / (s² + 4s + 5)
F(s) = (Bs + C) / [ (s² + 4s + 4) + 1 ]
F(s) = (Bs + C) / [ (s + 2)² + 1 ]
Notice what was done in the denominator. This is a technique in factoring known as completing the
square. Such a technique is very useful in the analysis of linear systems using Laplace transforms.

Also, take note of the numerator. The numerator is just the derivative of the denominator with arbitrary
coefficients.


Evaluate the coefficients:

Let s = 0:
(3s + 9)/(s² + 4s + 5) at s = 0  =  (Bs + C)/((s + 2)² + 1) at s = 0
9/5 = C/5
C = 9

Let s = 1:
(3s + 9)/(s² + 4s + 5) at s = 1  =  (Bs + C)/((s + 2)² + 1) at s = 1
12/10 = (B + 9)/10
B = 3

Note that these coefficient values are simply the coefficients of the given numerator (B = 3, C = 9). Hence, when the denominator consists solely of the complex-pole quadratic, it is NO LONGER necessary to carry out such an evaluation of the arbitrary coefficients. However, if the poles are not solely complex, meaning there are other poles, then proceed as in the method above.

We now have:

F(s) = (3s + 9) / [ (s + 2)² + 1 ]
This can also be expressed as:

F(s) = 3(s + 2 − 2) / [ (s + 2)² + 1 ] + 9 · [ 1 / ((s + 2)² + 1) ]
The two fractions are separated, one with “s”, one without the “s”. From here on, there is a need to
“massage” the partial fraction.

The technique in going to this step is that one must remember the form of the Laplace transform using
the frequency shifting property, i.e.:

e−at f (t )  F (s + a )

Look at the denominator. Since the “s” is “s+2”, the numerator must also be “s+2”. Since “2” is just a
constant, add and subtract 2 without changing the expression above. Then, we have:

F(s) = 3(s + 2)/[ (s + 2)² + 1 ] − 3(2)/[ (s + 2)² + 1 ] + 9 · [ 1/((s + 2)² + 1) ]
F(s) = 3(s + 2)/[ (s + 2)² + 1 ] + 3/[ (s + 2)² + 1 ]
Using the frequency shifting property of the Laplace transforms, it can be seen that:


f(t) = L^-1{F(s)} = ( 3e^(-2t) cos t + 3e^(-2t) sin t ) u(t)
Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.

Remember that for complex poles, a decaying sinusoid will always occur. In this kind of problem, it is
handy to remember Euler’s identity, i.e.

e  j = cos  j sin 

COMBINATION

For rational functions with both real and complex roots, use any of the three cases above, which applies
to the factored denominator.

Example 1: Given F(s) = (s + 3) / (s³ + 5s² + 12s + 8).
3

The first step is always to factor the denominator. Leave the numerator as is.

F(s) = (s + 3) / [ (s + 1)(s² + 4s + 8) ]
The partial fraction expansion of the given rational polynomial is:

F(s) = A/(s + 1) + (Bs + C)/(s² + 4s + 8)
2

Evaluate the coefficients:

A = [ (s + 1) · (s + 3) / ((s + 1)(s² + 4s + 8)) ] at s = −1
A = (−1 + 3) / [ (−1)² + 4(−1) + 8 ]
A = 2/5
To solve for the constants B and C, assign any real pole except that s = -1.

Partial Fraction Expansion *Property of STI


Page 12 of 15
IT2001

 2 
s+3  Bs + C 
= 5 + 
( )
(s + 1) s 2 + 4s + 8 s = 0  s + 1 s 2 + 4s + 8 
  s =0
3 2 C
= + 
8 5 8 
3 2
C = 8 − 
8 5
 16 
= 3 − 
 5
1
C=−
5
and

 2 
s+3  Bs + C 
= 5 +
( )
(s + 1) s 2 + 4s + 8 s =1  s + 1 s 2 + 4s + 8 
  s =1
 2 1 
 B− 
1+ 3
= 5 + 5 
(1 + 1)(1 + 4 + 8) s =1  1 + 1 1 + 4 + 8 
 
  s =1
2 1 B 1 
= + − 
13  5 13 65 
2 1 1 
B = 13 − + 
 13 5 65 
 13 1 
= 2 − + 
 5 5
2
B=−
5


We now have:

F(s) = (2/5)/(s + 1) + [ −(2/5)s − 1/5 ] / (s² + 4s + 8)
Manipulate the equation so a Laplace transform pair in the table can be seen. Then do the following:

2 1  2  1 
F (s ) = 
s 1
 −  2  −  
5  s + 1  5  s + 4s + 8  5  s + 4s + 8 
2

2  1  2   1 
F (s ) = 
s −  1 
−  2
( ) (
5  s + 1  5  s + 4s + 4 + 4  5  s 2 + 4s + 4 + 4  )
2  1  2   1 
F (s ) = 
s −  1 
−  2
( ) (
5  s + 1  5  s + 4s + 4 + 4  5  s + 4s + 4 + 4 
  2
)
2  1  2   1 
F (s ) = 
s −  1 
−  2
( ) (
5  s + 1  5  s + 4s + 4 + 4  5  s + 4s + 4 + 4 
  2
)
2  1  2   1 
F (s ) = 
s −  1 
− 
5  s + 1  5  (s + 2) + 4  5  (s + 2) + 4 
2   2

2  1  2  s + 2 − 2  1  2  
F (s ) = 
1 
−  −  
5  s + 1  5  (s + 2) + 2  5  2  (s + 2) + 2 
2 2   2 2

2  1  2  s+2  2   1  
F (s ) =  +  2  −   2 
− 
5  s + 1  5  (s + 2)2 + 22  5  (s + 2)2 + 2 2   10  (s + 2)2 + 22 

2  1  2  s+2  3 
F (s ) =  +  2 
− 
5  s + 1  5  (s + 2)2 + 22  10  (s + 2)2 + 22 
2 
f (t ) = L−1F (s ) =  e −t − e − 2t cos 2t + e − 2t sin 2t u (t )
2 3
 5 5 10 

Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.

IMPROPER RATIONAL FUNCTIONS

Recall that improper rational functions exhibit this property: the degree of the numerator is greater than or equal to the degree of the denominator, m ≥ n. To solve such a problem, divide the numerator by the denominator to obtain an expression of the form:

F(s) = k0 + k1·s + k2·s² + ··· + k(m−n)·s^(m−n) + N(s)/D(s)

where N(s)/D(s) is a proper rational function.

Remember your long division method because that is the key to turning the improper rational function
into a proper rational function.


For the remaining proper rational function, use the appropriate case as studied

Example 1: Given F(s) = (s² + 2s + 2) / (s + 1).

F(s) = (s² + 2s + 2) / (s + 1) = 1 + s + 1/(s + 1)
The inverse Laplace of this is:

f(t) = L^-1{F(s)} = [ δ(t) + δ'(t) + e^(-t) ] u(t)

New Transform Pair


L{ d^n/dt^n δ(t) } = s^n

REFERENCES:

DiStefano, J., Stubberud, A. & Williams, I. (2012). Schaum's outline of feedback and control Systems (2nd
ed.). New York: McGraw Hill

Dorf, R., & Bishop, R. (2017). Modern control systems (13th ed.). Pearson.

Franklin, G., Powell, J., & Emami-Naeini, A. (2018). Feedback control of dynamic systems (8th ed.). Pearson.

Golnaraghi, F. & Kuo, B.C. (2009). Automatic control systems (9th ed.). New Jersey: John Wiley & Sons

Nise, N.S. (2010). Control systems engineering (6th ed.). New Jersey: John Wiley & Sons

Ogata, K. (2009). Modern control engineering (5th ed.). New Jersey: Prentice Hall


A transfer function is the relationship between the input and the output of a certain system. Given the
system below, we say that:

R(s) G(s) Y(s)

The output is Y(s), the input is R(s) and the plant is given by G(s). The relationship of these three is
given by:

Y(s) = R(s)G(s)
where Y(s) is the output, R(s) is the input, and G(s) is the plant.
From this equation, if we divide both sides by R(s), then we will get:

Y(s)/R(s) = G(s)
And in general,

G(s) = N(s)/D(s) = (polynomial of order m) / (polynomial of order n)
Therefore, the relationship between the input and the output is given by the plant response and is called
the transfer function.

We recall again that the zeros of the transfer function are derived by equating the numerator to zero.
We see that:

s = (z1, z2, ..., zm) such that N(s) = 0

And the poles of the system are derived from equating the denominator to zero, such that:

s = (p1, p2, ..., pn) such that D(s) = 0

We state that: Zeros and poles affect the open-loop and closed-loop stability of a system. We say that
a system is stable if for a bounded input, the output is also bounded.

There are two (2) predominant configurations in control systems: open loop and closed loop.

For an open loop shown below, we have already stated the transfer function.

R(s) G(s) Y(s)

Figure 4.1. Open loop system


Again, in general, the transfer function of such a system is governed by the system plant, given by:

G(s) = N(s) / [ (s − p1) ··· (s − pn) ]
Notice that the above transfer function has a denominator that has already been factored out. Here,
the assumption is placed that pn may still be complex or real. Even though we disregarded the
multiplicity of poles here, the point that we are after is that the general solution for such will be of the
form:

y(t) = A1·e^(p1·t) + ··· + An·e^(pn·t)

And we can see here that for this to be stable, the poles should be negative so that the response decays. If any pole is positive, then the system is unstable. We conclude that for open-loop systems, the poles alone determine system stability, and these poles should have negative real parts, i.e., they should be located in the left half of the s-plane.
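This open-loop test is easy to automate: compute the poles of the plant denominator and inspect the sign of their real parts. The Python sketch below is illustrative, with an arbitrary example denominator that is not taken from the handout:

import numpy as np

def is_open_loop_stable(den_coeffs):
    """Open-loop check: stable when every pole (root of D(s)) has a negative real part."""
    poles = np.roots(den_coeffs)
    return bool(np.all(poles.real < 0)), poles

# Hypothetical plant denominator D(s) = s^2 + 3s + 2 -> poles at -1 and -2 -> stable
print(is_open_loop_stable([1, 3, 2]))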

The system shown in Figure 4.2 is a closed loop negative feedback system. The circle acts as a
summing block. Note the signs. The output is multiplied by a certain “k” and is fed back to the summing
block. Its sign is negative. This is the reason such a diagram is called a negative feedback system. In
most systems, we would always want to have negative feedback because this can ensure system
stability.

Figure 4.2. Closed loop negative feedback system

The transfer function is given by:

Y(s)/R(s) = G(s) / [ 1 + kG(s) ] = N(s) / [ D(s) + kN(s) ]
Unlike the open loop system where stability relies on the system poles only, the stability of a closed
loop system is determined by the zeros and poles.

General Control System

A typical general control system is shown below.

Figure 4.3. General control system


We note the following conventions and representations before we start.

R : reference input
D : disturbance (known or unknown; random or deterministic)
N : sensor / measurement noise

For any control system, we always have the reference input R. This is what every control system would
want to achieve. The actual output is represented by Y.

The output Y is sampled or sensed by an appropriate transducer. Since transducers are electronic components, they inherently introduce noise; this may be due to device characteristics that depend on temperature, etc. The sampled output plus the sensor noise is fed back to the system, particularly to the summing block.

The error E is the difference between the reference input and the output, and its desired value is zero. A zero error denotes that the output has reached the desired input.

The error is then fed to a block K. This block K, in general, will be called the compensator or controller.
This block compensates or controls the plant G in order for the plant to reach the desired reference
input. The output of the compensator or controller drives the plant G.

However, there are certain disturbances at the output. These can be known or unknown, random or deterministic, or a combination of these. An example is an additional load on the propeller of an electric fan: if the load is heavy, the propeller slows down.

Let us now determine the relationships of these representations. We assume that for any block diagram
we have, we are using Laplace transforms as their transfer functions. We begin with the output Y.

From Figure 4.3:

Y = KGE + D
E = R − Y − N

If we substitute the second equation into the first equation, we will see that:

Y = KG(R − Y − N) + D
Y = KGR − KGY − KGN + D
Y + KGY = KGR − KGN + D
(1 + KG)Y = KGR − KGN + D
Y = [KG/(1 + KG)]R − [KG/(1 + KG)]N + [1/(1 + KG)]D
We notice from the final expression that the output Y depends on three inputs: the desired input, the disturbance, and the noise. However, the noise and disturbance terms carry multiplying factors dictated by the controller and the plant, which means we can minimize their effect by designing appropriate controllers/compensators.
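
For readers who want to double-check the algebra, the loop equations can be solved symbolically. The sketch below is only a verification aid; the SymPy symbols stand in for the block-diagram quantities above.

# A minimal sketch verifying Y = KG/(1+KG)*R - KG/(1+KG)*N + 1/(1+KG)*D.
import sympy as sp

Y, R, N, D, K, G = sp.symbols('Y R N D K G')

E = R - Y - N                                   # error equation from Figure 4.3
solution = sp.solve(sp.Eq(Y, K*G*E + D), Y)[0]  # output equation Y = KG*E + D

print(sp.simplify(solution))
# Expected (up to rearrangement): (D + G*K*R - G*K*N) / (G*K + 1)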

The tracking error E can also be derived using the same steps, except that the first equation is now substituted into the second equation.

Y = KGE + D
E = R − Y − N
Following the same derivation steps, we can see that:


E = R − KGE − D − N
E + KGE = R − D − N
(1 + KG)E = R − D − N
E = [1/(1 + KG)]R − [1/(1 + KG)]N − [1/(1 + KG)]D
Ideally, we want the tracking error to be zero. In practice, a small error in the range of 1–5% of the desired value is acceptable for most system designs. Mathematically, the error becomes exactly zero only if the magnitude of "KG" is infinite.
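
A small numeric sketch (the gain values are arbitrary, hypothetical choices) shows how the error factor 1/(1 + KG) shrinks as the loop gain grows and reaches zero only in the limit:

# Error factor 1/(1 + KG) for a few hypothetical DC loop gains.
for loop_gain in (9, 19, 99, 999):
    print(loop_gain, 1 / (1 + loop_gain))
# 9   -> 0.1    (10%  error)
# 19  -> 0.05   ( 5%  error)
# 99  -> 0.01   ( 1%  error)
# 999 -> 0.001  ( 0.1% error)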

For the actuator input U, we have the following:

U = KE
U = [K/(1 + KG)](R − D − N)
1 + KG
This is the command accepted by the system plant. We arrived at this equation by simply multiplying the tracking error signal by "K".

Control System Objectives and Design

So far, the equations that we have derived are:

Y = [KG/(1 + KG)]R − [KG/(1 + KG)]N + [1/(1 + KG)]D

E = [1/(1 + KG)]R − [1/(1 + KG)]N − [1/(1 + KG)]D

U = [K/(1 + KG)](R − D − N)
These equations will be our governing equations in meeting our objective of achieving a desirable
reference input.

We say again that we want to have a small error E. A small error E will tell the control engineer that
the actual output is near the desired reference input.

With a small tracking error, it follows that we also have a small actuator input. A small actuator input moves the plant toward the desired state in small increments, which helps keep it from being driven unstable.

Now, if we look at the tracking error E and the output Y, we see their dependence on the noise and disturbance. As much as possible, these two, N and D, should be kept small enough that they do not noticeably affect the system's response.

How do we minimize the disturbance D and the noise N? As a practical approach, the system must be well shielded from these two unwanted signals. For example, the sensors must be of high enough quality, precision, and accuracy to read the actual output, and they should have low internal noise. However, there is no general recipe for minimizing these two, because each control system has its own disturbances and noise sources. A good control engineer would at least consider all disturbances and noises that significantly affect the system's performance.

We now have some definitions.

From the three equations above, we see that their denominators are the same. We call this the return
difference, defined by:


J = 1 + KGH
The return difference is the measure of the difference between the actual error or output and the
sampled error or output. One can see this clearly from the derivation of the output and error signals
above.

We note that "H" represents the feedback transfer function. For the closed loop system above, H = 1. When this happens, we have a unity negative feedback system on our hands.

KGH is normally defined as the loop gain because if we take out the noise, disturbance, input, and
output, these three blocks form a loop, thus having the name loop gain. In the future, we will see that
this plays an important role in determining system stability.

Sensitivity is defined as:

S = 1/(1 + KGH) = 1/J
The sensitivity describes how strongly the system responds to the reference input, noise, and disturbance. As much as possible, we want the system to be highly responsive to the reference input and least sensitive to the noise and disturbance.

Complementary sensitivity is defined as:

T = 1 − S = KGH/(1 + KGH)
Note from the equations below that the complementary sensitivity T multiplies the desired reference input R (and the noise N), while the sensitivity S multiplies the disturbance D.

Substituting these three definitions, we now have:

output : Y = SD + T(R − N)
error  : E = S(R − D − N)
input  : U = KS(R − D − N)
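
As an illustration of how S and T trade off with frequency, the sketch below evaluates both along s = jω for an assumed unity-feedback loop gain KGH = 10/(s(s + 1)); the loop is a made-up example, not a plant from this handout.

# A minimal sketch: |S| and |T| versus frequency for a hypothetical loop gain.
import numpy as np

omega = np.logspace(-2, 2, 5)    # a few frequencies in rad/s
s = 1j * omega
L = 10 / (s * (s + 1))           # loop gain KGH evaluated at s = j*omega

S = 1 / (1 + L)                  # sensitivity
T = L / (1 + L)                  # complementary sensitivity

for w, s_mag, t_mag in zip(omega, abs(S), abs(T)):
    print(f"omega={w:8.2f}  |S|={s_mag:.4f}  |T|={t_mag:.4f}")
# |S| is small at low frequencies (good tracking and disturbance rejection),
# |T| is small at high frequencies (good noise immunity), and S + T = 1 always.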

To formulate our control system objectives, we now have the following observations.

• Disturbance rejection – We must reduce the effect of disturbance D by having a small S, i.e., being less
sensitive to such disturbances.

• Good tracking – We must have a small error, ideally equal to zero, thus requiring again a small S, i.e.,
being less sensitive to both disturbance and noise.

• Bounded actuator signals – From the two abovementioned objectives, we saw that S is small, and it
has to be kept that way. For a bounded actuator signal, we need a small U, thus requiring a small KS
or small K.

• Noise immunity – We must reduce the effect of noise at the output by having a small T and, consequently, a large S.

We see now that for some control objectives, we need a small S, and for some, we need a large S. In a control system, technically and practically speaking, we cannot have the best of both worlds. Part of control system design is therefore a compromise over what is best for the system as a whole. Normally, this is defined by the control specifications set by the designer or the employer.

In order to meet these objectives, the design must take into consideration the following.

• Large loop gain at low frequencies
  o Tracking and disturbance rejection

• Low loop gain at middle frequencies
  o System stability

• Low loop gain at high frequencies
  o Noise immunity

System Response and Standard Reference Inputs

System response is the system’s response to a certain input. So if we say “step response”, then this
is the system’s response to a step input.

In control systems, we follow standard reference inputs. One advantage of using such inputs is the
ease of computation, especially when using Laplace transforms.

So, given a standard reference input (impulse, step, ramp inputs), we classify the system according to
its response.

We recall from differential equations that:

y(t) = y_t(t) + y_ss(t)
y_t(t)  : transient response, with lim (t→∞) y_t(t) = 0
y_ss(t) : steady-state response

The transient response is determined by the plant’s transfer function, while the steady-state response
is determined by the forcing function or the reference input.

In the long run, the effect of the transient response dies out, leaving the steady-state response. This
steady-state response must be equal to the desired input.
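
A minimal sketch of this decomposition, assuming the hypothetical first-order plant G(s) = 1/(s + 1) driven by a unit step, is shown below using SymPy:

# Step response y(t) of G(s) = 1/(s+1): Y(s) = 1/(s(s+1)) -> y(t) = 1 - exp(-t).
import sympy as sp

t, s = sp.symbols('t s', positive=True)

Y = 1 / (s * (s + 1))
y = sp.inverse_laplace_transform(Y, s, t)

print(sp.simplify(y))   # 1 - exp(-t)  (possibly times Heaviside(t), depending on version)
# Transient part    : -exp(-t), which decays to zero as t -> infinity
# Steady-state part :  1, which equals the desired unit step input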

What are the standard reference inputs used in control systems? We have to take note that these
reference inputs should not only be mathematical in nature but also found in practical applications.

Standard inputs also allow the use of linearity and superposition in the analysis of any control system
project, plus the fact that they have simple Laplace transforms.

The standard reference inputs used are:

• Step input: u(t) ⇔ 1/s. The practical representation of this is the common ON and OFF switch. When the switch is OFF, the signal is equal to zero. At the instant the switch is turned ON, the function u(t) changes from zero to one. The discontinuity lies in how fast the switch was pressed. There are many different kinds of switches; the most common is the mechanical switch we use at home. Electronic switches are made of BJTs, diodes, FETs, etc.

• Ramp input: t·u(t) = r(t) ⇔ 1/s². A ramp is a monotonically increasing function whose slope determines how quickly it changes. A common practical example is a gradual change in temperature in a room or in a device.

• Parabolic input: (t²/2)·u(t) ⇔ 1/s³. This can be used in robot trajectories or any motion following a parabolic path.


• Sinusoidal input: cos ωt ⇔ s/(s² + ω²) and sin ωt ⇔ ω/(s² + ω²). The most common applications of these are AC machines or rotating machines.

Other reference inputs may arise from the superposition of these standard inputs. The analysis is
simply done by adding their Laplace transforms.
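
These transform pairs can be checked mechanically; the sketch below uses SymPy to recompute the Laplace transforms of the standard inputs listed above (the symbols and labels are the sketch's own choices).

# A minimal sketch verifying the standard input/Laplace-transform pairs.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
w = sp.symbols('omega', positive=True)

inputs = {
    "step     u(t)":          sp.Integer(1),
    "ramp     t*u(t)":        t,
    "parabola (t^2/2)*u(t)":  t**2 / 2,
    "sinusoid sin(omega*t)":  sp.sin(w * t),
}

for name, f in inputs.items():
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f"{name:24s} <-> {F}")
# Expected: 1/s, 1/s**2, 1/s**3, and omega/(s**2 + omega**2)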

Error Response and System Type

The classical control technique uses the unity gain negative feedback control system shown below.

Figure 4.4 Negative feedback control system

What makes this classical is that the feedback has a unity transfer function or simply a gain block, i.e.,
no rational polynomial transfer functions. Please note here that we used H as a controller.

We recall that the error of such a closed loop system is given by:

E = [1/(1 + GH)]R
For simplicity’s sake, we neglect disturbance and noise. This is also a rational polynomial function, and
in the long run, we want this to be equal to zero. The steady-state error is defined as:

e_ss = lim (t→∞) e(t)

Using the Final Value Theorem of the Laplace Transform, we will see that:

e_ss = lim (s→0) s·E(s) = lim (s→0) s·R(s) / (1 + GH(s))

Note that there is a factor “s” and the error is dependent on the reference input and the system plant
and controller. Whether in the time or frequency domain, we want this value to be equal to zero.

From this error response, we also can see the loop gain GH(s). This is represented by:

GH(s) = K · (s − z1)⋯(s − zm) / [s^j · (s − p1)⋯(s − pn)]
j = system type

The variable "j" in the denominator is defined as the system type. If j = 0, the system is said to be Type 0; if j = 1, the system is said to be Type 1; and so on. The system type therefore counts how many poles of the loop gain are located at the origin (a pole at the origin is a pole equal to zero).
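
A small sketch of this idea, counting the denominator roots at the origin for a few hypothetical loop-gain denominators, is given below; the tolerance and the example polynomials are assumptions of the sketch.

# A minimal sketch: system type = number of open-loop poles at the origin.
import numpy as np

def system_type(den_coeffs, tol=1e-9):
    """Count the roots of the denominator polynomial that sit at s = 0."""
    return sum(abs(r) < tol for r in np.roots(den_coeffs))

print(system_type([1, 1]))         # s + 1        -> Type 0
print(system_type([1, 1, 0]))      # s(s + 1)     -> Type 1
print(system_type([1, 1, 0, 0]))   # s^2(s + 1)   -> Type 2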


We now study the importance of the system type to the error response. Keep in mind that we want the error response to be zero as time approaches infinity or, equivalently, as the complex variable "s" approaches zero.

Consider a heating system with a model G(s) given by:

G(s) = 1/(s + 1)   – 1st order, Type 0

Clearly, it can be seen that it is a first-order system and a Type 0 system. Also, note that we will be
placing this in an open loop system, such as shown below.

Figure 4.5 Open loop system

We get the step response of the system above. Its step response is shown below.

[Step response plot: amplitude (0 to 1) vs. time in seconds (0 to 6)]

Figure 4.6 Open loop system step response

From the graph, we see that at approximately five seconds, the system approached its desired value
of one. From zero to less than five seconds is the transient response.

Now, using the same system, but this time, closing the loop with the controller equal to one and as
shown below,


Figure 4.7 Closed loop system

The step response is also determined and shown below. Note the following changes.

• The steady-state value is 0.5, thus having a steady-state error of 0.5 since the desired is one.

• Transient time decreases by almost half or is now approximately equal to 2.5 seconds.

[Step response plot: amplitude (0 to 0.5) vs. time in seconds (0 to 3)]

Figure 4.8 Closed loop system step response

The two superimposed plots are shown below to clearly see the changes brought about by closing the
loop.


[Superimposed step response plots of the closed loop and open loop systems: amplitude vs. time in seconds (0 to 6)]

Figure 4.9 Open loop and closed loop systems step responses

Though the open loop reached a final value of one, the closed loop is still the better way to control a system. It is only by chance that this particular open loop plant G(s) settles at a final value of one when the Final Value Theorem is applied.

Now let us analyze the system above mathematically, since the plots shown were generated using well-known software (MATLAB).
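
For readers without MATLAB, a minimal Python/SciPy sketch that reproduces the same two step responses is shown below; the closed-loop transfer function 1/(s + 2) follows from G/(1 + G) with the controller and feedback both equal to one, as in the figures above.

# Step responses of the open-loop plant 1/(s+1) and the unity-feedback closed loop 1/(s+2).
import numpy as np
from scipy import signal

open_loop = signal.TransferFunction([1], [1, 1])     # G(s)      = 1/(s + 1)
closed_loop = signal.TransferFunction([1], [1, 2])   # G/(1 + G) = 1/(s + 2)

t = np.linspace(0, 6, 601)
_, y_open = signal.step(open_loop, T=t)
_, y_closed = signal.step(closed_loop, T=t)

print(f"open-loop final value   : {y_open[-1]:.3f}")    # approximately 1.0
print(f"closed-loop final value : {y_closed[-1]:.3f}")  # approximately 0.5 (0.5 steady-state error)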

The steady-state error of the said system is again shown below.

E/R = (R − TR)/R = 1/(1 + G) = (s + 1)/(s + 2)
If the input is a unit step, then, using the final value theorem, we will see that:

r(t) = u(t)
e_ss = lim (s→0) s · [(s + 1)/(s + 2)] · (1/s) = 1/2

We observe that for a Type 0 system, a step input produces a finite steady-state error. Note also that the step input itself contributes a single factor of "s" in the denominator of its transform.

Now, assume that we change the system plant to:

G(s) = 1/[s(s + 1)]

which is Type 1 in nature. Doing the same mathematical analysis, we see that:


E/R = (R − TR)/R = 1/(1 + G) = s(s + 1)/(s² + s + 1)

r(t) = u(t)
e_ss = lim (s→0) s · [s(s + 1)/(s² + s + 1)] · (1/s) = 0

The error response now has a steady-state value of zero, meaning the desired input is reached. If the
input to this system is now a ramp input, we will have:

r(t) = t·u(t)
e_ss = lim (s→0) s · [s(s + 1)/(s² + s + 1)] · (1/s²) = 1

The error is now not equal to zero.
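
The three steady-state errors worked out above can be checked with a short SymPy sketch that applies the Final Value Theorem directly; the helper function below is the sketch's own construction, not standard notation.

# e_ss = lim_{s->0} s*R(s)/(1 + G(s)) for a unity feedback loop (K = H = 1).
import sympy as sp

s = sp.symbols('s')

def ss_error(G, R):
    return sp.limit(s * R / (1 + G), s, 0)

step, ramp = 1 / s, 1 / s**2

print(ss_error(1 / (s + 1), step))        # 1/2 : Type 0 plant, step input
print(ss_error(1 / (s * (s + 1)), step))  # 0   : Type 1 plant, step input
print(ss_error(1 / (s * (s + 1)), ramp))  # 1   : Type 1 plant, ramp input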

In summary, the system type determines the steady-state error value of any given system. It also determines up to which reference input the system achieves zero steady-state error. If a plant is Type 2, then both step and ramp inputs will produce zero steady-state error at the output, while a parabolic input will introduce a finite steady-state error, and so on.

