Lesson 0 - Flow Charts and Pseudocodes
1.2 Flowcharts
Flowcharts are a graphical means of representing an algorithm. As should be expected, they have
advantages and disadvantages compared to other ways of representing algorithms. One of their primary
advantages is that they permit the structure of a program to be easily visualized - even if all the text were
to be removed. The human brain is very good at picking out these patterns and keeping them "in the back
of the mind" as a reference frame for viewing the code as it develops.
Most programmers also find it easier to sketch flowcharts on a piece of paper, and to modify them by
crossing out connection arrows and drawing new ones, than they would find working with other representations
such as pseudocode. By the same token, most programmers do not like to develop flowcharts in an electronic
format because the overhead of creating and modifying them is generally more than they want to deal with.
The idea behind a flowchart is that it links together a series of blocks, each of which performs some specific
task. Each of these tasks is represented by a block that has exactly one arrow leading to it and, more
importantly, one arrow exiting from it. This is key to the concept of a "structured program".
The shape of the block may convey additional information about what is happening. For instance, a
rectangular block is frequently used to indicate that a computation is occurring, while a slanted
parallelogram is used to indicate some type of input or output operation. The diversity of shapes that can
be used, and what they mean, is staggering - for instance, a different shape can be used to indicate output
to a tape drive versus a hard disk, or to indicate output in text format versus binary format. By using
such highly specialized symbols, much of what is happening can be conveyed by the symbols themselves.
But the power of these distinctions is generally only useful to people who work with flowcharts
continuously and professionally, and who are describing very large and complex systems. At our level, it is
far better to restrict ourselves to a minimum number of shapes and explicitly indicate any information
that might otherwise have been implied by using a different shape.
The shapes we will use are the circle, the rectangle, the parallelogram, the diamond, and the arrows that
interconnect them.
Circle - Entry/Exit
The circle indicates the entry and exit point for the program - or for the current segment of the program.
The entry point has exactly one arrow leaving it and the exit point has exactly one arrow entering it.
Execution of the program - or of that segment of the program - always starts at the entry point and finishes
at the exit point.
Rectangle - Task
The rectangle represents a task that is to be performed. That task might be as simple as incrementing the
value of a single variable or as complex as you can imagine. The key point is that it also has a single
entry point and a single exit point.
Parallelogram - Input/Output
The parallelogram is used to indicate that some form of input/output operation is occurring. It must
also obey the single-entry, single-exit rule, which makes sense given that it is simply a task block
with a slightly different shape for the symbol. We could easily eliminate this symbol and use the basic
rectangle, but the points at which I/O occurs within our programs are extremely important, and being able
to identify them easily and quickly is valuable enough to warrant dealing with a special symbol.
Since a Task block can be arbitrarily complex, it can also contain I/O elements. Whether to use a rectangle
or a parallelogram is therefore a judgment call. One way to handle this is to decide whether a task's
primary purpose is to perform I/O. Again, that is a judgment call. Another option is to use a symbol that
is rectangular on one side and slanted on the other, indicating that it performs both I/O and non-I/O
tasks.
Diamond - Decision
The diamond represents a decision point within our program. A question is asked and, depending on the
resulting answer, different paths are taken. A diamond therefore has a single entry point but more than
one exit point. Usually there are two exit points - one that is taken if the answer to the question is "true"
and another that is taken if the answer is "false". This is sufficient to represent any type
of branching logic, including both the typical selection statements and the typical repetition statements.
Arrows
The arrows simply show which symbol gets executed next. The rule is that once an arrow leaves a symbol,
it must lead directly to exactly one other symbol - arrows can never fork and diverge. They can, however,
converge and join arrows coming from other blocks.
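As a rough sketch of how these shapes correspond to code (the class and method names below are ours, invented for illustration), a diamond becomes an if or a loop condition, a rectangle becomes a computation, and a parallelogram becomes an input/output statement:

```java
public class FlowchartShapes {
    // Diamond: one entry point, two exit paths ("true" and "false").
    static String parity(int x) {
        if (x % 2 == 0) {
            return "even";
        } else {
            return "odd";
        }
    }

    public static void main(String[] args) {   // circle: entry point
        int x = 7;                             // rectangle: a computation task
        System.out.println(parity(x));         // parallelogram: an output operation
        while (x > 0) {                        // diamond whose "true" arrow loops back
            x = x - 1;                         // rectangle inside the loop body
        }
        System.out.println(x);                 // parallelogram: output after the loop
    }                                          // circle: exit point
}
```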
1.3 Pseudocode
In computer science, algorithms are usually represented as pseudocode. Pseudocode is close enough to a
real programming language that it can represent the tasks the computer must perform in executing the
algorithm. Pseudocode is also independent of any particular language, and uncluttered by details of
syntax - characteristics that make it attractive for conveying to humans the essential operations of an
algorithm.
Furthermore, the pseudocode may already be in an electronic format that has been structured to lend itself to
translation into the final language. This can be a powerful advantage of pseudocode over flowcharts, where
the entire source code still has to be typed by hand - unless you are fortunate enough to have a tool that can take a
flowchart - typically developed using that same tool - and translate it directly to code. Such tools do
exist - and they tend to be rather expensive.
1.3.1 Example
GCD algorithm:
GCD ( a, b )             // function name and arguments
    while b != 0 {       // != means "not equal"; indentation shows what to do while b != 0
        r <-- a modulo b // set r = a modulo b ( = remainder of a/b)
        a <-- b          // set a = original b
        b <-- r          // set b = r (i.e., the remainder)
    }                    // border of the "while" repetition
    return a             // when b = 0, return the value of a as the GCD
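To see how directly such pseudocode translates to real code, here is one possible Java rendering of the algorithm above (the class and method names are ours):

```java
public class Gcd {
    // Euclid's algorithm, following the pseudocode line by line.
    static long gcd(long a, long b) {
        while (b != 0) {
            long r = a % b; // r = remainder of a/b
            a = b;          // a takes the original b
            b = r;          // b takes the remainder
        }
        return a;           // when b = 0, a holds the GCD
    }

    public static void main(String[] args) {
        System.out.println(gcd(252, 105)); // prints 21
    }
}
```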
Sequential_Search(list_of_names, name)
    length <-- length of list_of_names
    match_found <-- false
    index <-- 1
    while match_found = false AND index <= length {
        if list_of_names[index] = name then
            match_found <-- true
        index <-- index + 1
    }
    return match_found
end
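The same search can be written out in Java (a sketch; note that Java arrays are indexed from 0, not from 1 as in the pseudocode):

```java
public class SequentialSearch {
    // Follows the pseudocode above: scan until a match is found
    // or the list is exhausted.
    static boolean sequentialSearch(String[] listOfNames, String name) {
        boolean matchFound = false;
        int index = 0;                        // Java arrays start at 0
        while (!matchFound && index < listOfNames.length) {
            if (listOfNames[index].equals(name)) {
                matchFound = true;
            }
            index = index + 1;
        }
        return matchFound;
    }

    public static void main(String[] args) {
        String[] names = { "Ada", "Grace", "Alan" };
        System.out.println(sequentialSearch(names, "Grace"));  // prints true
        System.out.println(sequentialSearch(names, "Edsger")); // prints false
    }
}
```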
Iteration is the repetition of a process to produce a sequence of outcomes. Each repetition of this process
is a single iteration and the outcome of every iteration is the starting point of the next iteration.
In mathematics, one can use iteration to find approximate solutions to equations. Repeatedly solving an
equation, using the result from the previous calculation as the input to the next, is referred to as an iterative
process. Many root-finding algorithms make use of iterative methods, producing a sequence of numbers
that ideally converges toward the root with each iteration. Take a look at Newton's method
and Brent's method as example root-finding algorithms.
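As a small illustration of such an iterative process, Newton's method applied to f(x) = x*x - a yields the classic square-root iteration x <-- (x + a/x)/2. The sketch below is ours (class name, method name, and stopping rule are our own choices):

```java
public class NewtonSqrt {
    // Newton's method for f(x) = x*x - a, whose positive root is sqrt(a).
    // Each iteration uses the previous result as its starting point:
    //   x_next = x - f(x)/f'(x) = (x + a/x) / 2
    static double sqrtNewton(double a, double tolerance) {
        double x = a;                      // initial guess (assumes a > 0)
        while (Math.abs(x * x - a) > tolerance) {
            x = (x + a / x) / 2.0;         // one iteration
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(sqrtNewton(2.0, 1e-10)); // approx 1.41421356...
    }
}
```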
In programming we use iteration quite often. Many programming languages have loop constructs that
let you iterate over a given set, or we can write iterative functions that loop to repeat some piece of
code. We also have the concept of recursion, which is similar to iteration, but instead of repeating some
piece of code, a recursive function calls itself to repeat the code.
Using a simple for loop construct to loop through all the integers from 1 to n, multiplying as we go
along, would be an iterative process.
Writing a function fn(n) that calls itself, taking n as the argument and returning n * fn(n - 1) unless
n = 0 (in which case it returns 1), would be a recursive process.
Recursion is typically implemented using a call stack: each recursive call pushes a new
stack frame, which is popped when the call returns. Recursion therefore usually carries the overhead of method calls
(unless you're using tail recursion and a compiler that handles it), but it usually requires less code than
iteration. Infinite recursion leads to a stack overflow and a crash, while infinite iteration merely consumes CPU cycles.
The fundamental rules of recursion:
1. Always have at least one case that can be solved WITHOUT recursion ("base case").
2. Any recursive call must make progress towards a base case ("make progress").
3. Assume the recursive call works. ("You have to believe")
4. Never duplicate work by solving the same instance of a problem in separate recursive calls.
("Compound interest rule")
When should you use recursion? When a solution has elements that call themselves - when the definition is naturally expressed in terms of itself -
AND it doesn't use too much memory AND it is not too slow.
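Rule 4 above, the "compound interest rule", can be illustrated with the Fibonacci numbers: a naive recursive definition solves the same subproblem many times over, while caching each result avoids the duplicate work. The sketch below is ours (class name and cache are our own choices):

```java
import java.util.HashMap;
import java.util.Map;

public class Fibonacci {
    // Naive recursion: fib(n-1) and fib(n-2) recompute the same
    // subproblems over and over - this violates the compound interest rule.
    static long fibNaive(int n) {
        if (n < 2) {
            return n;                      // base cases: fib(0)=0, fib(1)=1
        }
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    // Memoized recursion: each subproblem is solved once and cached.
    static final Map<Integer, Long> cache = new HashMap<>();

    static long fibMemo(int n) {
        if (n < 2) {
            return n;
        }
        if (!cache.containsKey(n)) {
            cache.put(n, fibMemo(n - 1) + fibMemo(n - 2));
        }
        return cache.get(n);
    }

    public static void main(String[] args) {
        System.out.println(fibNaive(10)); // prints 55
        System.out.println(fibMemo(50));  // fast; the naive version would take ages
    }
}
```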
Iterative:

static long sumOfDigits(long x)
{
    long sum = 0;
    while (x > 0)
    {
        sum += x % 10;
        x = x / 10;
    }
    return sum;
}

Recursive (notice: no "sum" variable):

static long sumOfDigits(long x)
{
    if (x < 10)
    {
        return x;
    }
    return (x % 10) + sumOfDigits(x / 10);
}
Exercise:
What is the result if the function sumOfDigits(7353) is called? Use either of the functions above.