Assignment of Programming
Table of Contents

P1
    Introduction
    Algorithm
    Types of algorithm
    Characteristics of algorithm
    Qualities of good algorithm
    Advantages of algorithm
    Disadvantages of algorithm
    Examples of algorithm
    Programming algorithm
    Control structure
    Bubble sort
    Optimizing bubble sort
    Flowchart
    Advantages of flowchart
    Disadvantages of flowchart
    Application of algorithm
    Steps of developing an application
    Conclusion
M1
    Introduction
    Coding
    Works of coding
    Uses of coding
    Advantages of coding
    Code: where should one begin?
    Programming
D1
    Introduction
    Code generation
    Implementation of algorithm in suitable languages
    Roles of program
    Process of turning an algorithm into working program code
    Relationship between algorithm and code variant
    Conclusion
P2
    Procedural programming
    Programming paradigm
    Types of programming paradigm
    Object-oriented programming
    Characteristics of object-oriented programming
    Event-driven programming
    Characteristics of event-driven programming
    Difference between procedural and object-oriented programming
    Conclusion
M2
    Integrated Development Environment (IDE)
    Benefits of IDE
    Common features of IDE
    Conclusion
D2
    Introduction
    Components of program
    Elements of programming
    Conclusion
P3
    Introduction
    Algorithm understanding
    Process of algorithm
    Extensions of algorithm
    Limitations of algorithm
    Example projects
    Conclusion
M3
    Introduction
    Discussion
    Resolving errors
    Key terms
    Conclusion
D3
    Introduction
    Use of IDE for development of application
    Conclusion
P4
    Debugging
    Types of debugging
    Debugging process
    Debugging software
    Debugging techniques
    Conclusion
M4
    Debugging tools
    Use of debugging tools in my application
    Importance of debugging
    Conclusion
P5
    Coding standard
    Advantages of coding standard
    Common aspects of coding standard
    Types of naming conventions
    Conclusion
D4
    Coding standard used in my application
    Pros and cons of coding standard
    Conclusion
References
P [1]
Provide the definition of what an algorithm is and outline the process in
building an application.
Introduction:
An algorithm is a term that you will have heard in numerous areas, including computer programming, mathematics, and even our daily lives.
An algorithm can be described as a step-by-step process or formula for problem-solving, or as a set of instructions formulated to perform a particular task. A good everyday example is a recipe, since it explains what must be done, step by step. Algorithms are normally written in a language-independent way, which means they can be implemented in more than one programming language. Algorithms are used as specifications for data processing, doing mathematics, automated reasoning, and several other tasks like these.
Accordingly, I will introduce here the definition of an algorithm, types of algorithms, characteristics of algorithms, their advantages and disadvantages, applications of algorithms, programming algorithms, etc.
Algorithm:
An algorithm is a finite sequence of well-defined instructions for solving a problem or performing a computation. Starting from an initial state and initial input, the instructions describe a computation that proceeds through a series of well-defined states and eventually produces output. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
From the data structure point of view, the following are some important categories of algorithms −
Search − Algorithm to search an item in a data structure.
Sort − Algorithm to sort items in a certain order.
Insert − Algorithm to insert item in a data structure.
Update − Algorithm to update an existing item in a data structure.
Delete − Algorithm to delete an existing item from a data structure.
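For instance, the first category (search) can be sketched as a simple linear search in C; the array contents and target value below are only illustrative:

#include <stdio.h>

/* Linear search: return the index of key in arr, or -1 if it is not present */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int data[] = { 7, 3, 9, 1, 5 };            /* illustrative data */
    int pos = linear_search(data, 5, 9);
    printf("Found 9 at index %d\n", pos);      /* prints: Found 9 at index 2 */
    return 0;
}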
Types of Algorithm:
Brute Force Algorithm:
A brute force algorithm essentially tries all the possibilities until an acceptable result is found. This is the most fundamental and least complex type of algorithm. Such algorithms are also used to locate the ideal or best solution, since they check all the potential solutions. They can equally be used for finding a satisfactory solution (not necessarily the best), simply stopping as soon as an answer to the problem is found. It is the straightforward approach to a problem, the first approach that strikes our mind after observing the problem.
Dynamic Programming Algorithm:
This type of algorithm is also called the memoization technique, because the idea is to store the previously computed result in order to avoid calculating it over and over. In dynamic programming, we partition the complex problem into smaller overlapping sub-problems and store the results for later use. In simple language, we can say that it remembers the previous results and uses them to discover new results.
Divide and Conquer Algorithm:
In the divide and conquer algorithm, the idea is to tackle the problem in two sections: the first section divides the problem into sub-problems of a similar type, and the second section solves the smaller problems independently and then combines the results to produce the final answer to the problem.
Greedy Algorithm:
Greedy is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Problems where choosing the locally optimal option also leads to a globally optimal solution are the best fit for the greedy approach. In other words, the solution is created portion by portion, the decision about the next portion is made on the basis of the immediate benefit it provides, and the options that were considered previously are never revisited.
Backtracking Algorithm:
In this type of algorithm, the problem is solved incrementally: it is an algorithmic procedure for solving problems recursively by trying to build a solution step by step, one piece at a time, removing those candidate solutions that fail to satisfy the constraints of the problem at any point in time.
Randomized Algorithm:
In this type of algorithm, a random number is taken for deciding at least once during the
computations. An algorithm that uses random numbers to decide what to do next anywhere in its
logic is called Randomized Algorithm. For example, in Randomized Quick Sort, we use random
number to pick the next pivot (or we randomly shuffle the array). Typically, this randomness is
used to reduce time complexity or space complexity in other standard algorithms.
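As a small sketch of this idea (not taken from any particular library), the C snippet below uses rand() to choose a random pivot index, as a randomized quick sort might:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Pick a random pivot index in the range [low, high] */
int random_pivot(int low, int high)
{
    return low + rand() % (high - low + 1);
}

int main(void)
{
    srand((unsigned) time(NULL));            /* seed the random number generator */
    int p = random_pivot(0, 9);              /* e.g. choose a pivot among 10 items */
    printf("Randomly chosen pivot index: %d\n", p);
    return 0;
}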
Characteristics of an Algorithm:
There are some characteristics that every algorithm should follow and here is the list of some of
them which we will see one by one.
1. Input specified:
The input is the information to be transformed during the computation to produce the output. An algorithm should have zero or more well-defined inputs. Input precision requires that you know what sort of information is needed, how much, and what form the input should take.
2. Output specified:
The output is the information resulting from the computation. An algorithm should have one or more well-defined outputs and should match the desired output. Output precision likewise requires that you know what sort of information results, how much, and what form the output should take.
3. Unambiguous:
An algorithm must specify every step, and each of its steps should be clear in all respects and must lead to only one meaning. That is why the algorithm should be clear and unambiguous. The details of each step must likewise be explained (including how to handle errors).
4. Feasible:
The algorithm should be effective, which means that all the steps needed to reach the output must be feasible with the available resources. It should not contain any pointless or unnecessary steps.
5. Independent:
An algorithm should have step-by-step directions, which should be independent of any programming code. It should be written in such a way that it can be run in any programming language.
6. Finiteness:
The algorithm must eventually stop; stopping may mean that you get the expected output. Algorithms must terminate after a finite number of steps. An algorithm should not be endless and should always end after a finite number of steps, because there is no point in building an algorithm that never terminates.
7. Effective:
Among the many different ways to solve a problem, the algorithm chosen should be the most effective.
8. Language independent:
An algorithm should not include computer code. Instead, it should be written in such a way that it can be implemented in different programming languages.
Advantages of an Algorithm:
Disadvantages:
Examples:
Design an algorithm to add two numbers and display the result.
Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP
Algorithms tell the programmers how to code the program. Alternatively, the algorithm can be
written as −
Step 1 − START ADD
Step 2 − get values of a & b
Step 3 − c ← a + b
Step 4 − display c
Step 5 − STOP
In the design and analysis of algorithms, usually the second method is used to describe an algorithm. It makes it easy for the analyst to analyze the algorithm while ignoring all unwanted definitions; he can observe which operations are being used and how the process flows. Writing step numbers is optional. We design an algorithm to obtain a solution to a given problem, and a problem can be solved in more than one way.
Hence, many solution algorithms can be derived for a given problem. The next step is to analyze
those proposed solution algorithms and implement the best suitable solution.
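To show how such a description maps onto real code, the ADD algorithm above might be implemented in C roughly as follows; the concrete values of a and b are only examples:

#include <stdio.h>

int main(void)
{
    int a = 10, b = 20, c;   /* Steps 2-3: declare a, b, c and define values of a and b */
    c = a + b;               /* Steps 4-5: add a and b, store the result in c */
    printf("%d\n", c);       /* Step 6: print c */
    return 0;                /* Step 7: stop */
}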
Programming Algorithm:
A programming algorithm is a recipe that describes the specific steps required for the computer to solve a problem or reach a goal. We have all seen food recipes: they list the ingredients required and a set of steps for how to make the described meal.
Indeed, an algorithm is much the same as that. In computer language, the word for a recipe is a procedure, and the ingredients are called inputs. Your computer looks at your procedure, follows it precisely, and you will see the results, which are called outputs.
A programming algorithm describes how to accomplish something, and your computer will do it exactly that way every time. At least, it will once you convert your algorithm into a language the computer understands. Nevertheless, it is crucial to note that a programming algorithm is not computer code. It is written in straightforward English (or whatever language the programmer speaks). It does not beat around the bush: it has a beginning, a middle, and an end. Indeed, you will likely label the first step "start" and the last step "end".
Data Types:
Data Type Meaning Size (in Bytes)
int Integer 2 or 4
float Floating-point 4
char Character 1
bool Boolean 1
void Empty 0
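The exact sizes depend on the compiler and platform; a short C program can confirm them for a given system using the sizeof operator (bool here comes from <stdbool.h>):

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
    printf("int:   %zu bytes\n", sizeof(int));
    printf("float: %zu bytes\n", sizeof(float));
    printf("char:  %zu bytes\n", sizeof(char));
    printf("bool:  %zu bytes\n", sizeof(bool));
    return 0;
}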
Control Structure:
Control structures are simply a way to specify the flow of control in programs. Any algorithm or program can be clearer and easier to understand if it uses self-contained modules called logic or control structures. A control structure basically analyzes and chooses the direction in which a program flows based on certain parameters or conditions. There are three basic types of logic, or flow of control (a short C example follows the list below), known as:
1. Sequence logic, or sequential flow
2. Selection logic, or conditional flow
3. Iteration logic, or repetitive flow
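For illustration, the small C fragment below shows all three flows together; the marks value and messages are only examples:

#include <stdio.h>

int main(void)
{
    int marks = 72;                      /* sequence: statements run one after another */

    if (marks >= 40)                     /* selection: choose a branch based on a condition */
        printf("Pass\n");
    else
        printf("Fail\n");

    for (int i = 1; i <= 3; i++)         /* iteration: repeat a block of statements */
        printf("Attempt %d\n", i);

    return 0;
}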
Bubble Sort:
Bubble sort is a simple sorting algorithm. It is a comparison-based algorithm in which each pair of adjacent elements is compared and the elements are swapped if they are not in order. This algorithm is not suitable for large data sets, as its average and worst-case complexity are O(n²), where n is the number of items.
Optimizing Bubble Sort:
More generally, it can happen that more than one element is placed in its final position on a single pass. In particular, after every pass, all elements after the last swap are sorted and do not need to be checked again. This allows us to skip over many elements, resulting in about a 50% worst-case improvement in comparison count (though no improvement in swap counts), and adds very little complexity because the new code subsumes the "swapped" variable:
procedure bubbleSort(A : list of sortable items)
    n := length(A)
    repeat
        newn := 0
        for i := 1 to n - 1 inclusive do
            if A[i - 1] > A[i] then
                swap(A[i - 1], A[i])
                newn := i
            end if
        end for
        n := newn
    until n ≤ 1
end procedure
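A direct C translation of this optimized pseudocode could look like the following sketch (the array contents are illustrative):

#include <stdio.h>

/* Optimized bubble sort: after each pass, elements beyond the last swap are sorted */
void bubble_sort(int a[], int n)
{
    while (n > 1) {
        int newn = 0;
        for (int i = 1; i < n; i++) {
            if (a[i - 1] > a[i]) {
                int tmp = a[i - 1];      /* swap adjacent elements that are out of order */
                a[i - 1] = a[i];
                a[i] = tmp;
                newn = i;                /* remember the position of the last swap */
            }
        }
        n = newn;                        /* only re-examine the unsorted prefix next pass */
    }
}

int main(void)
{
    int a[] = { 5, 1, 4, 2, 8 };
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);             /* prints: 1 2 4 5 8 */
    printf("\n");
    return 0;
}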
Alternate modifications, such as the cocktail shaker sort attempt to improve on the bubble sort
performance while keeping the same idea of repeatedly comparing and swapping adjacent items.
Step-by-step Example:
Take an array of numbers "5 1 4 2 8", and sort the array from lowest number to greatest number
using bubble sort. In each step, elements written in bold are being compared. Three passes will
be required;
First Pass
( 5 1 4 2 8 ) → ( 1 5 4 2 8 ), Here, algorithm compares the first two elements, and swaps
since 5 > 1.
( 1 5 4 2 8 ) → ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) → ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) → ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5),
algorithm does not swap them.
Second Pass
( 1 4 2 5 8 ) → ( 1 4 2 5 8 )
( 1 4 2 5 8 ) → ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
Now, the array is already sorted, but the algorithm does not know if it is completed. The algorithm needs one additional whole pass without any swap to know it is sorted.
Third Pass
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
Flowchart:
A flowchart is the graphical or diagrammatic representation of a process or system which details the sequence of steps needed to get the desired output. A flowchart uses various symbols and structures to represent the flow of information: common symbols include the terminator (start/stop), process, decision, input/output, and flow-line arrows. A flowchart first takes the input, performs the processing tasks and then gives the final output.
Advantages of flowchart: The graphical representation helps the users to find where a decision or loop needs to be made. A flowchart also helps in the debugging process. These are some advantages of flowcharts.
Disadvantages of flowchart: The main disadvantage of a flowchart is handling complex logic. Drawing a flowchart is not an easy job and it is time-consuming as well. Another major problem is that if we need to modify anything in the flowchart, it is really difficult; we might need to redraw the entire flowchart. These are some major problems we may face with flowcharts.
Applications of the Algorithm:
Here we will see some of the practical applications of the algorithm:
First, we will start with the internet, which is very important for our daily life; we cannot even imagine life without it, and it is the outcome of clever and creative algorithms. Numerous sites on the internet can operate on and manage this huge amount of data only with the help of algorithms.
Everyday electronic commerce activities depend massively on our data, for example credit or debit card numbers, passwords, OTPs, and many more. The core technologies used incorporate public-key cryptography and digital signatures, which depend on mathematical algorithms.
Even an application that doesn't need algorithmic content at the application level depends heavily on algorithms, as the application relies upon hardware, GUI, networking, or object-oriented code, all of which make extensive use of algorithms.
There are some other vital use cases where algorithms are used: for example, if we watch a video on YouTube, then next time we will get similar suggestions as recommended videos.
Steps of developing an application:
Our developers use the spiral iterative methodology to build software applications in the shortest possible time and with minimal risk. In this process, an application will go through different stages, such as:
1. Requirement Definition
2. Analysis
3. Design
4. Development
5. Integration & Testing
6. Deployment & Acceptance
We start the project by gathering your requirements, conducting business analysis and creating a
feature list and cost estimate. The outsourcing contract will then be signed and the project will
start up. We then assign an experienced project manager to head a team of skilled software
developers.
Conclusion:
In conclusion, I can say that an algorithm is a step-by-step process for problem-solving. Above, I have discussed several applications and characteristics of algorithms, as well as a number of advantages and disadvantages. An algorithm is easy to understand, and in it the problem is broken down into smaller pieces or steps, which makes it easier for the programmer to convert it into an actual program. There are also some disadvantages: writing an algorithm takes a long time, so it is time-consuming, and branching and looping are hard to show in algorithms. I have also demonstrated the process and steps of developing and building an application. Algorithms and flowcharts are both necessary before developing an application. An algorithm follows a sequential pattern, whereas a flowchart follows the same sequential pattern but with diagrams. An algorithm helps break down complex structures into simpler ones, while a flowchart acts as a blueprint for a program. Both the algorithm and the flowchart are necessary for effective management and for building an application. Hence, algorithms and flowcharts are necessary.
M [1]
Determine the steps taken from writing code to execution.
Introduction:
We may not have reached into the future where we can travel in flying cars yet, but we have
advanced leaps and bounds into a high-tech society. Nowadays, everything is computer-aided-
from your alarm clock to your coffee machine to automated cars and even your home lights (Hi
Alexa! Ok, Google!). None of these would have been possible without computers, and the
language that runs them - is coding.
Coding:
Coding is basically the act of translating codes from human language to a machine-based
language. It can also be called a subset of programming since it is the foundation of
programming. A coder has to be multilingual and has to write codes in different programming
languages such as Java, C, Python, R based on the requirement. With the help of codes, you are
providing instructions and information to the computer.
Coding, in simpler terms, means feeding our commands in the computer in a language the
computer understands, so that the computer can carry out the said command, and perform the
task. It is, therefore, not an exaggeration to say that coding runs the future that we are living in
the present. Coding, in simpler terms, is the language used by computers to understand our
commands and, therefore, process our requests. Programming is a list of codes arranged in a
sequence that results in the completion of work.
Take, for example, the following analogy - you click on a video app on your smartphones, and it
plays a video. A program is what brings about the completion of the task 'playing the said video.'
The program is made up of a series of smaller tasks that direct your smartphone to do the above
task and bring it to completion. Each smaller task is written in code, i.e., the computer language,
and that is what coding is all about.
Works of coding:
Computers and artificial intelligence are built up of, mainly, transistors; and these transistors act
as the 'brain' of the computer. Hence, the computer only understands the language of 'on' and
'off,' guided by the transistor switches. The on and off are represented by 1 and 0, respectively, in
a binary system. Therefore, your computer and every other gadget run on long sequences of binary code.
These binary codes form the machine code, with each number directing the machine (your
computer) to change a sequence in its memory.
Programming languages make the binary code language of the computers more manageable by
translating our commands into binary code.
Coding means using the programming language to get the computer to behave as desired.
Each line of the code is a set of instructions for the computer. A set of codes form a script, and a
set or dozens of sets, form a program.
Uses of coding:
In a broader sense, coding is used to run the simplest of appliances and gadgets used in today's world.
Coding finds extensive use in popular devices such as applications on phones, tablets, computers, and other smart gadgets like smart watches and smart TVs.
Coding is used in automated cars to control every aspect, from clutches to air conditioning to fuel injectors.
Software systems are being employed to streamline procedures at a broader level, such as controlling sewage systems, electrical grids, traffic lights, etc.
Coding finds use in every phase of the current world.
Advantages of Coding:
1. Understanding technology:
Coding helps you understand the ABCs of how technology works; it helps one become adept at using the technology around them.
2. Problem-solving:
Programming is essentially about creating solutions to the problems faced while transmitting data. Hence, coding helps you become more apt and creative at problem-solving.
3. Career opportunities:
With the increasing demand for technology, software engineering and programming are among the fastest-growing job opportunities all over the world.
The field of coding is vast, with a great many different programming languages, each with its own benefits, uses, and advantages. It is easy for a beginner to get overwhelmed by the prospect.
Here are the three easiest programming languages that a complete beginner can learn:
1. HTML:
Every beginner's boot camp to coding begins with learning how to create an HTML page.
HTML was initially created to help writers present their documents to the readers in a simpler
way on the World Wide Web.
2. CSS:
Cascading Style Sheets are used to design the layout of the web browser page, and it includes
everything-including designing the font, background color, animations, hyperlinks, etc.
CSS defines the readability and ease of use of your web page.
3. JavaScript:
An amalgamation of CSS with HTML, JavaScript enables increased interactivity of your web
browser with the client.
Programming:
Programming is a bigger aspect than coding, which is one of the parts of it. It is the process of
developing an executable software program that is implemented without any errors. It is the
programmer’s job to analyze a problem in the code and provide solutions.
Application creation requires several necessary steps, including planning, designing, testing,
deployment, and maintenance. So, programming deals with not only coding but also analysis and
implementing algorithms, understanding data structures, and mitigating issues. Altogether, the
whole process is called programming.
A pseudocode is a good approach for explaining the algorithm to the coder. Coding is an
essential part of programming, but a programmer requires a lot more knowledge, experience, and
additional skills than coding.
A programmer creates complex programs that are read and executed by the machine, providing a complete set of instructions for the computer to perform. It takes years to become a professional programmer. If you can build a program and ensure that it doesn't have errors, you can consider that you have leveled up in your career as a successful programmer.
There is one simple example that can clearly explain programming. For instance, you can
program the clock to wake you up at 6 AM. Also, you can program the AC to work on the
temperature that you have chosen with the remote button that has codes at the backend to work
on the given set of instructions by the user.
Programming languages. Development environments
As algorithms gain their prominence in our daily lives and start to affect us to a degree still
unknown to many, we should start seeking to understand them a little bit better by drawing on the
history, social relevance, and definition of the algorithm itself.
First of all, algorithms are not code. They are two separated yet well acquainted entities in the
way that one completes the other. “Algorithms are a finite number of calculations or instructions
that, when implemented, will yield a result.” Code is the practical implementation of
algorithms, described as set of instruction for a computer delivered via the use of specific
programming languages.¹
It is important to remember that an algorithm does not live only within the environment of
Computer Science but is firstly a set of mathematical operations. One can think of algorithms as
step-by-step formal instructions to solve a problem that are only abstract until implemented on
the machine. As suggested by Andrew Goffey in the chapter "Algorithms" in M. Fuller's Software Studies, we can interpret algorithms as if they were sentences (a term first used by Michel Foucault) in a language — one which does not cohere with the idea that words can't do things. A
language that is self-sufficient and can transversally communicate with itself without the need of
actual “speakers” if engineered to do so.²
Languages base their existence on two basic delivery methods, speech and writing (memorization
being a result of the formers). However, we can see the power of speech diminishing when, in
society, “the volume of knowledge quickly increased to a point at which there was too much
[knowledge] to pass in the form of dialogue”. To reason about the nature of the very first
algorithm known in history — the instructions for factorization and finding square roots
developed by Babylonians in 1600 BC — we need to imagine dealing with the volume of
congested information required to be processed to get the job done. To tackle this issue of “big
data” it became crucial to develop written rules and instructions aimed at managing such “ill-
defined network of actions upon actions”.
Later in history, Turing’s abstract mechanical machine condensed the logic of virtually any
possible algorithm into bits on an infinite strip of tape. Here the algorithm, while occupying a
crucial role in the computing process, becomes nothing but a set of data fed into the machine to
yield a result. But even a cardinal concept like that of the Universal Turing Machine assumes a
degree of abstraction in that it supposes a precise set of inputs (1, 0, and Blank) and outputs
(move tape left, move tape right, go back to the beginning state, etc.). Without such structure of
data the algorithm is broken.
In a sense, we need algorithms to be broken, “machines to break down and systems to be hacked”⁵
because unintended side-effects are the foundation stone for most creative applications known to
us today. I believe the study of algorithms needs to escape the shadow of code, and the much
worshipped process of learning to code. This means taking a step back to find the patterns that
connect data, before even writing a single line of code.
In fact, this stage should really be called identifying the solution, because what you're really trying to do is to tie down exactly what it is that you're trying to achieve.
Requirements
Specification
Requirements:
The first step is to examine the problem carefully to try to identify what qualifies as a solution. A
single problem may have many different solutions, but they will all have something in common.
So here you're trying to work out exactly what your program will be required to do.
For example, if we were asked to write a calculator program, we could choose many different
ways for the user to enter calculations - from entering equations, pressing buttons or even
writing them on the screen - but if the software can't add up correctly then it won't have solved
the problem. Therefore our first few requirements must be that:
the user can enter sums (we don't care how they do this)
and that the program will then evaluate those sums correctly
and display the result for the user.
We also have to decide what sort of sums our calculator will be required to evaluate. Again,
we have a fair amount of choice - we could be ambitious and ask it to solve simultaneous
equations or complex expressions, however since this is our first program we should probably
make the requirements more simple. So the third requirement is that:
The calculator must be able to evaluate sums made up of two whole numbers (integer
operands) and one addition (+), subtraction (-), multiplication (*) or division (/) sign
(operator).
Note that computer scientists traditionally use * instead of × and / instead of ÷ to indicate multiplication and division respectively.
Thus our calculator must be able to deal with sums like 1 + 1, 10 - 6, 43 * 5 and 42 / 7.
However it won't have to handle 67.345 + 6¼, the cube root of PI or 15².
Specification:
The second step is to then look at the list of requirements and to decide exactly what your
solution should do to fulfil them. As we mentioned above, there are usually many different
solutions to a single problem; here, your aim is to decide on which of those solutions you want.
Therefore, you're trying to specify, in a fairly accurate manner, just what it is your final program
will do.
For example, for the calculator, we've already decided that the program must allow us to enter
simple sums and then must evaluate them correctly and display an answer. We must now tie
down exactly what this means.
Therefore, we have to decide which method of entering sums to use. We could specify any one
of a number of methods, but for now, we'll choose a simple method. We should also specify
what other behavior we're expecting the program to have:
When the program runs it will display a welcome message, followed by some simple
instructions.
The program will then display a prompt sign ([number]>) and the user can then type the first number of their sum at the keyboard, followed by the RETURN key.
The program will display a second prompt sign ([+-/*]>) and the user can then enter
the operator that they wish to use, followed by RETURN.
A third prompt sign will be displayed ([number]>) and the user will then enter the
second number, again followed by RETURN.
The calculator program will then display the mathematically correct answer to the sum
on the screen and end.
By the time you have worked out your specification, you should have a very clear idea of what
your final program will do: your goal.
Design a Solution:
Once you've identified the things required to solve your problem, and specified what form your
solution will take, the next step is to work out just how you're going to turn that specification into
a working program. This is usually the hardest task!
As mentioned before, a program is simply a list of steps describing to the computer what it
should do. A design is simply a higher-level description of those steps. In effect it's a program
written as if the computer was a person. So, it doesn't have to completely spell out every step -
because humans know how to do a lot of things already and have a lot of common sense,
meaning that they can work the simple steps out for themselves. It also doesn't have to be written
in any special programming language - English will do (although people often use special
notations like pseudocode or flow charts for the more complicated sections).
Another way of looking at it is that a programmer should be able to take a design and write the
program from it without having to think too hard. It's a bit like an architect's drawing: it contains
all the important structures without showing every bit of brick and mortar.
Working out a design to fulfil a particular specification can be difficult for several reasons:
1. You may need to learn a bit more about the capabilities of your computer and your
chosen programming language/environment to see what things it makes easy or difficult.
2. You may also need to learn some extra information about the problem or find a technique
to solve it before you can work out how to build the program.
3. Finally, you may be able to think of several ways to build the program, but they will all
have different strengths and weaknesses and so some choices will have to be made.
For our calculator, we have a fairly comprehensive specification, and since it is a fairly simple program we can turn it quite easily into a design:
1. BEGIN
2. PRINT welcome message
3. PRINT instructions
4. PRINT the first prompt ([number]>)
5. READ the first number
6. PRINT the operator prompt ([+-/*]>)
7. READ the operator
8. PRINT the second prompt ([number]>)
9. READ the second number
10. CALCULATE the result of applying the operator to the two numbers
11. PRINT the result
12. END
Here we assume that PRINT means 'put something on the screen' and READ means 'get
something typed on the keyboard' - both fairly standard programming operations.
Notice how step ten is actually hiding quite a complicated procedure. Although we (as
humans) could work out which operator was which and do the appropriate arithmetic, the
computer itself will need to be told exactly how to do this - but we'll leave that until the
programming stage.
Notice also how the design includes all of the important steps needed to fulfil our specification
- but that doesn't go into too much unnecessary detail. This is called abstraction.
When your design is completed you should have a very clear idea of how the computer is going
to fulfil your specification, which in turn meets your requirements, which in turn should solve
your original problem.
Program:
Programming is then the task of describing your design to the computer: teaching it your way of
solving the problem.
1. Coding.
2. Compiling.
3. Debugging.
Coding:
Coding is the act of translating the design into an actual program, written in some form of
programming language. This is the step where you actually have to sit down at the computer and
type!
Coding is a little bit like writing an essay (but don't let that put you off). In most cases you write
your program using something a bit like a word processor. And, like essays, there are certain
things that you always need to include in your program (a bit like titles, contents pages,
introductions, references etc.). But we'll come on to them later.
When you've finished translating your design into a program (usually filling in lots of details in
the process) you need to submit it to the computer to see what it makes of it.
As an example, we shall develop and present the code for the calculator later on.
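Ahead of that, a rough idea of how the design might translate into C is sketched below; the exact messages, variable names and lack of error handling (for example, division by zero) are simplifications of my own rather than part of the original design:

#include <stdio.h>

int main(void)
{
    int a, b;
    char op;

    printf("Welcome to the calculator.\n");
    printf("Enter a whole number, an operator (+ - * /), then a second number.\n");

    printf("[number]> ");
    scanf("%d", &a);                 /* read the first operand */
    printf("[+-/*]> ");
    scanf(" %c", &op);               /* read the operator (skip leading whitespace) */
    printf("[number]> ");
    scanf("%d", &b);                 /* read the second operand */

    /* step 10 of the design: work out which operator was entered and calculate */
    if (op == '+')      printf("%d\n", a + b);
    else if (op == '-') printf("%d\n", a - b);
    else if (op == '*') printf("%d\n", a * b);
    else if (op == '/') printf("%d\n", a / b);   /* note: no check for division by zero here */
    else                printf("Unknown operator\n");

    return 0;
}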
Compiling:
Compilation is actually the process of turning the program written in some programming
language into the instructions made up of 0's and 1's that the computer can actually follow. This
is necessary because the chip that makes your computer work only understands binary machine
code - something that most humans would have a great deal of trouble using since it looks
something like:
01110110
01101101
10101111
00110000
00010101
Early programmers actually used to write their programs in that sort of a style - but luckily they
soon learnt how to create programs that could take something written in a more understandable
language and translate it into this gobbledygook. These programs are called compilers and you
can think of them simply as translators that can read a programming language, translate it and
write out the corresponding machine code.
Compilers are notoriously pedantic though - if you don't write very correct programs, they will
complain. Think of them as the strictest sort of English teacher, who picks you up on every
single missing comma, misplaced apostrophe and grammatical error.
Debugging:
This is where debugging makes its first appearance, since once the compiler has looked at your
program it is likely to come back to you with a list of mistakes as long as your arm. Don't worry
though, as this is perfectly normal - even the most experienced programmers make blunders.
Debugging is simply the task of looking at the original program, identifying the mistakes,
correcting the code and recompiling it. This cycle of code -> compile -> debug will often be
repeated many times before the compiler is happy with it. Luckily, the compiler never ever gets
cross during this process - the programmer on the other hand...
It should also be said at this point that it isn't actually necessary to write the entire program
before you start to compile and debug it. In most cases it is better to write a small section of the
code first, get that to work, and then move on to the next stage. This reduces the amount of code
that needs to be debugged each time and generally creates a good feeling of "getting there" as
each section is completed.
Finally though, the compiler will present you with a program that the computer can run:
hopefully, your solution.
Testing:
The final step in the grand programming process is that of testing your creation to check that it
does what you wanted it to do. This step is unfortunately necessary because although the
compiler has checked that your program is correctly written, it can't check whether what you've
written actually solves your original problem.
This is because it is quite possible to write a sentence in any language that is perfectly formed
with regards to the language that it's written in (syntactically correct) but at the same time be utter
nonsense (semantically incorrect). For example, 'Fish trousers go sideways.' is a great sentence -
it's got a capital letter and a full stop - but it doesn't mean a lot. Similarly, 'Put the ice cube tray in
the oven.' has verbs and nouns and so on - but it's pretty useless if you wanted to make ice cubes.
So your program needs to be tested, and this is often initially done informally (or perhaps,
haphazardly) by running it and playing with it for a bit to see if it seems to be working correctly.
After this has been done, it should also be checked more thoroughly by subjecting it to a carefully
worked out set of tests that put it through its paces, and check that it meets the requirements and
specification - but we shall discuss this more later on in the course.
Where mistakes are identified, it is a case of donning a Sherlock Holmes hat and trying to figure
out where in the code the mistake is. Once identified, the problem should be fixed by changing
the code and recompiling. Care should be taken at this point that this fix doesn't break something
else, so careful retesting is important. This process is also known as debugging.
Once all the testing and debugging has been completed, you should be pretty certain that your
program works according to your requirements and your specification and so you should finally
have a solution to your problem!
1. Create / Edit:
First of all, we need to create a C program for execution. We use an editor to create or edit the source program (also known as source code). A C program file has the extension .c (for example: myprogram.c, hello.c, etc.).
2. Compile:
After creating or editing the source code, we need to compile it using a compiler. If the compiler does not detect any errors in the program, it produces object files. An object file has the extension .obj (for example: myprogram.obj, hello.obj, etc.). If the compiler detects errors in the program, we need to return to step 1 to correct the source code.
3. Link:
Object files are not executable files, so in order to make an executable file we use a linker. If no errors occur, the linker produces an executable file with the extension .exe (for example: myprogram.exe, hello.exe, etc.).
4. Execute:
After obtaining the executable file, we can run it just like other applications. We need to test it to determine whether it works properly or not. If the application does not work properly, we need to return to step 1 to make modifications.
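On a typical command-line toolchain such as gcc, these steps might look roughly like the following; the file names are illustrative and the object/executable extensions vary between compilers and operating systems:

gcc -c myprogram.c -o myprogram.o      (compile the source file into an object file)
gcc myprogram.o -o myprogram           (link the object file into an executable)
./myprogram                            (execute the resulting program)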
Conclusion:
In this task, I have demonstrated that coding is the act of translating instructions from human language into a machine-based language. I have also clarified the workings of coding, the components of coding and the relationship between an algorithm and its code. Likewise, I have discussed the steps taken from writing code to execution and the methods of generating executable code.
D [1]
Introduction:
Code Generation:
In computing, code generation is the process by which a compiler's code generator converts
some intermediate representation of source code into a form (e.g., machine code) that can be
readily executed by a machine. Sophisticated compilers typically perform multiple passes over
various intermediate forms. Code generation is a mechanism where a compiler takes the source
code as an input and converts it into machine code. This machine code is actually executed by
the system.
Code generation is generally considered the last phase of compilation, although there are
multiple intermediate steps performed before the final executable is produced. These
intermediate steps are used to perform optimization and other relevant processes.
The code generation process is performed by a component known as a code generator, part of the
compiler program. The original source code of any program passes through multiple phases
before the final executable is generated. This final executable code is actually the machine code,
which computer systems can execute readily.
In the intermediate phases of compilation, code optimization rules are applied one at a time.
Sometimes these optimization processes are dependent on each other, so they are applied one
after another based on the dependency hierarchy. After passing multiple phases, a parse tree or
an abstract syntax tree is generated and that is the input to the code generator. At this point, the
code generator converts it into linear sequential instructions. After this stage, there may be some
more steps depending upon the compiler. The final optimized code is the machine code for
execution and output generation.
Implementation of an algorithm in suitable languages:
1. Python:
Python's syntax is so easy that even a pure beginner can understand it without anyone teaching them. All the common data structures in such languages have abstractions; you can even build your own implementations and build data structures upon data structures. These languages are dynamically typed.
There is only one problem here: it can be easier for a programmer to start with, but when they run tests, they may see lots of errors that they did not see before runtime, unlike with lower-level languages.
2. C Language:
C is exactly the opposite of Python here. You may even get confused, because although C is a high-level language, some people consider it a low-level language due to its style of coding. C is also very good in terms of abstraction. If you are into algorithms, you may at some point need to learn a proper low-level language like assembly.
The point is, if you know C very well, it would be quite easy to migrate from C or any similar language to assembly language. Memory management is also very good in C, and this is very important for algorithms.
3. Java:
A lot of people dislike Java for being too verbose and strict. Some even say that it lacks many features that are available in modern, sophisticated languages, but this is not really a point of concern.
Java, unlike Python, is not a dynamically typed language. It is a statically typed language and has garbage collection. This means that Java will show many errors during compilation, before runtime. Compared to other high-level languages, Java has very few memory leaks (which can be fixed) and no segmentation faults.
4. C# and C++:
C# is very similar to Java; it is more like Java with the capabilities of a modern language. Some people like to use C++ as well, but it is unnecessarily complicated; some use it precisely because it is hard to understand, so that once you manage to crack it, others will have a tough time understanding your algorithms. C#, on the other hand, has garbage collection similar to that of Java.
There are also functional languages like Haskell and Scala (which runs on the Java virtual machine). Java and C# run on virtual machines, C and C++ compile to native machine code, whereas Ruby and Python are run by their interpreters.
Roles of the pre-processor, compiler, linker and interpreter:
Pre-processor:
A preprocessor (or pre-compiler) is a program that processes its input data to produce output that is used as input to another program. The output is said to be a preprocessed form of the input data, which is often used by subsequent programs such as compilers.
Compiler:
A compiler translates software written in a higher-level language into instructions that the computer can understand. It converts the text that a programmer writes into a format the CPU can understand. The process of compilation is relatively complicated, and it spends a lot of time analyzing and processing the program.
Linker:
A linker is a program which links the object modules of a program into a single executable file; it performs the process of linking. Linking is performed both at compile time, when the source code is translated into machine code, and at load time, when the program is loaded into memory by the loader.
Interpreter:
An interpreter is a program that reads source code and executes it directly, translating and running the program statement by statement rather than compiling the whole program into machine code first.
Process of turning an algorithm into working program code:
Consider Euclid's algorithm for finding the greatest common divisor (GCD) of two numbers:
1. Divide the smaller number into the larger one and get the remainder (i.e. 450 / 100 = 4 with a remainder of 50)
2. Repeat Step 1 with the smaller of the two numbers and the remainder (i.e. 100 / 50 = 2 with no remainder)
3. Repeat the first two steps until you get a remainder that divides into both of the original numbers without leaving a remainder of its own. That's the GCD. In this example, that's 50, which is the GCD for 450 and 100.
For example, for 960 and 500:
960 / 500 = 1 R 460
500 / 460 = 1 R 40
460 / 40 = 11 R 20
40 / 20 = 2 R 0
So the GCD of 960 and 500 is 20.
This is often demonstrated by assigning the two numbers to the sides of a rectangle and then repeatedly dividing that rectangle into squares with sides equal to the smaller number.
As you can guess, tiling a room would be one of the popular real-life applications for the GCD
but whatever you’re using it for, you now need a way to code it. Often, the best way to start out
is just to write the code in the same order of the algorithm’s steps and let it grow. Starting with
the initial calculation in C#:
Remainder = HighNumber % LowNumber;
if (Remainder != 0) {
    HighNumber = LowNumber;
    LowNumber = Remainder;
}
Leaving aside the need to declare variables and assign values to them for a moment, the lines
above show the remainder being calculated from the high and low numbers using the Mod
operator. Then the code needs to evaluate the remainder and decide how to proceed so we start
an IF statement and determine if the remainder equals zero (using the Not Equal To operator).
If the remainder IS greater than zero, the calculation will need to continue so we juggle some
figures around. The remainder is now going to be divided into the lower number so the high
number (960) is discarded, the low number (500) is moved into its place and the remainder (460)
becomes the low number.
Now we need two things – a way to loop this calculation and a way to stop it when it finishes.
do {
    Remainder = HighNumber % LowNumber;
    if (Remainder != 0) {
        HighNumber = LowNumber;
        LowNumber = Remainder;
    }
} while (Remainder != 0);
return LowNumber;
The DO … WHILE loop works pretty well here. The condition assigned to the WHILE
statement tests to see if the remainder is 0. If not, execution is sent back to the top and the
remainder is re-calculated with the next pair of values. Otherwise, the program exits the loop and
returns the LowNumber value which serves as the GCD.
The variable names imply that there is some way to assign the high and low numbers to the right
variables and you could do this manually in the program or by constraints in whatever user
interface you program. You should also test for or prevent 0 from being entered for either of the
values. Either way, this code can fit neatly in its own little function as shown below.
// Hypothetical wrapper; the function name GetGCD and its signature are assumed here
static int GetGCD(int HighNumber, int LowNumber) {
    int Remainder;
    do {
        Remainder = HighNumber % LowNumber;
        if (Remainder != 0) {
            HighNumber = LowNumber;
            LowNumber = Remainder;
        }
    } while (Remainder != 0);
    return LowNumber;
}
Relationship between algorithm and code variant:
Code variants represent alternative implementations of a computation, and are common in high-
performance libraries and applications to facilitate selecting the most appropriate implementation
for a specific execution context (target architecture and input dataset). Automating code variant
selection typically relies on machine learning to construct a model during an offline learning
phase that can be quickly queried at runtime once the execution context is known. In this paper,
we define a new approach called architecture-adaptive code variant tuning, where the variant
selection model is learned on a set of source architectures, and then used to predict variants on a
new target architecture without having to repeat the training process. We pose this as a multi-task
learning problem, where each source architecture corresponds to a task; we use device features in
the construction of the variant selection model. This work explores the effectiveness of multi-
task learning and the impact of different strategies for device feature selection. We evaluate our
approach on a set of benchmarks and a collection of six NVIDIA GPU architectures from three
distinct generations. We achieve performance results that are mostly comparable to the previous
approach of tuning for a single GPU architecture without having to repeat the learning phase.
Conclusion:
In this task, I have briefly described the implementation of an algorithm in a suitable language, along with the relationship between the written algorithm and the code variant. I have also clarified the process of turning an algorithm into working code, which helps in developing the application.
P [2]
Procedural Programming:
Procedural programming can be defined as a programming model, derived from structured programming, based upon the concept of the procedure call. Procedures, also known
as routines, subroutines or functions, simply consist of a series of computational steps to be
carried out. During a program’s execution, any given procedure might be called at any point,
including by other procedures or itself.
Languages used in Procedural Programming:
FORTRAN, ALGOL, COBOL, BASIC, Pascal and C.
Programming Paradigm:
A paradigm can also be termed a method to solve some problem or do some task. A programming paradigm is an approach to solving a problem using some programming language, or we can say it is a method to solve a problem using the tools and techniques that are available to us, following some approach. There are lots of programming languages, but all of them need to follow some strategy when they are implemented, and this methodology/strategy is the paradigm. Apart from the variety of programming languages, there are lots of paradigms to fulfil each and every demand.
Programming paradigms are a way to classify programming languages based on their features.
Languages can be classified into multiple paradigms.
Some paradigms are concerned mainly with implications for the execution model of the
language, such as allowing side effects, or whether the sequence of operations is defined by the
execution model. Other paradigms are concerned mainly with the way that code is organized,
such as grouping a code into units along with the state that is modified by the code. Yet others
are concerned mainly with the style of syntax and grammar.
Paradigm is a school of thought or model that has distinct features, frameworks, patterns, and
style which help you solve a particular problem. Paradigms are used in all fields such as
psychology, sociology, etymology, computer science and so on. In the field of computer science,
new programming languages emerge from existing languages and add, remove and combine
features in a new way. The languages may follow a particular paradigm or can be a combination
of many paradigms. There are hundreds of programming languages, and it is evident that many of them have evolved from one another with an amalgamation of various programming paradigms.
Programming languages are tools and not all tools are good for all jobs. Some tasks are easier to
solve functionally. Some are clearly suited for Objected Oriented programming. Others get
simpler when you use constraint solving or pattern matching.
Types of programming paradigm:
1. Procedural programming paradigm:
Advantages:
1. Very simple to implement
2. It contains loops, variables etc.
Disadvantages:
1. Complex problems cannot be easily solved
2. Less efficient and less productive
3. Parallel programming is not possible
Its code:
// average of five numbers in C
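A minimal procedural sketch of such a program, averaging five numbers in C, might look like this (the values are only examples):

#include <stdio.h>

int main(void)
{
    int numbers[5] = { 10, 20, 30, 40, 50 };   /* illustrative input values */
    int sum = 0;

    for (int i = 0; i < 5; i++)
        sum += numbers[i];                     /* accumulate the total */

    float average = sum / 5.0f;                /* divide by the count to get the mean */
    printf("Average = %.2f\n", average);       /* prints: Average = 30.00 */
    return 0;
}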
2. Object-oriented programming paradigm:
Its code (in Java):
import java.io.*;
class GFG {
public static void main(String[] args)
{
System.out.println("GfG!");
Signup s1 = new Signup();
s1.create(22, "riya", "[email protected]", 'F', 89002);
}
}
class Signup {
    int userid;
    String name;
    String emailid;
    char sex;
    long mob;
    // create() stores the user's details (method body assumed for completeness)
    public void create(int userid, String name, String emailid, char sex, long mob) {
        this.userid = userid; this.name = name; this.emailid = emailid;
        this.sex = sex; this.mob = mob;
    }
}
3. Logic programming paradigm:
Its code (in Prolog-style syntax):
predicates
    sum(integer, integer)
clauses
    sum(0, 0).
    sum(N, R) :-
        N1 = N - 1,
        sum(N1, R1),
        R = R1 + N.
Object-Oriented Programming:
The following are the main characteristics of object-oriented programming.
Class:
The building block of C++ that leads to Object-Oriented programming is a Class. It is a user-
defined data type, which holds its own data members and member functions, which can be
accessed and used by creating an instance of that class. A class is like a blueprint for an object.
For Example: Consider the Class of Cars. There may be many cars with different names and
brand but all of them will share some common properties like all of them will have 4 wheels,
Speed Limit, Mileage range etc. So here, Car is the class and wheels, speed limits, mileage are
their properties.
A Class is a user-defined data-type which has data members and member functions.
Data members are the data variables and member functions are the functions used to
manipulate these variables and together these data members and member functions define
the properties and behaviour of the objects in a Class.
In the above example of class Car, the data member will be speed limit, mileage etc and
member functions can apply brakes, increase speed etc.
We can say that a Class in C++ is a blue-print representing a group of objects which shares
some common properties and behaviours.
Object:
class person
{
    char name[20];
    int id;
public:
    void getdetails(){}
};

int main()
{
    person p1;         // p1 is an object of class person (name assumed for illustration)
    p1.getdetails();
    return 0;
}
Objects take up space in memory and have an associated address, like a record in Pascal or a structure or union in C.
When a program is executed the objects interact by sending messages to one another.
Each object contains data and code to manipulate the data. Objects can interact without having
to know details of each other’s data or code, it is sufficient to know the type of message
accepted and type of response returned by the objects.
Encapsulation:
Consider a real-life example of encapsulation, in a company, there are different sections like
the accounts section, finance section, sales section etc. The finance section handles all the
financial transactions and keeps records of all the data related to finance. Similarly, the sales
section handles all the sales-related activities and keeps records of all the sales. Now there may
arise a situation when for some reason an official from the finance section needs all the data
about sales in a particular month. In this case, he is not allowed to directly access the data of
the sales section. He will first have to contact some other officer in the sales section and then
request him to give the particular data. This is what encapsulation is. Here the data of the sales
section and the employees that can manipulate them are wrapped under a single name “sales
section”.
Encapsulation also leads to data abstraction, or data hiding, because wrapping the data up in this
way hides it. In the above example, the data of any section such as sales, finance or accounts is
hidden from every other section.
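As a hedged illustration of the company example above, the following C++ sketch wraps the sales data and the member functions that manipulate it inside a single class; the data itself is private and can only be reached through the public member functions (the names SalesSection, addSale and getMonthlySales are illustrative, not taken from any particular application).

#include <iostream>

// Encapsulation: the sales data and the functions that manipulate it are
// wrapped together under one name; other "sections" cannot touch the data directly.
class SalesSection {
private:
    double totalSales = 0.0;                 // hidden from the rest of the program
public:
    void addSale(double amount) { totalSales += amount; }
    double getMonthlySales() const { return totalSales; }   // controlled access
};

int main() {
    SalesSection sales;
    sales.addSale(1200.50);
    sales.addSale(830.25);
    // sales.totalSales = 0;                 // error: private member is not accessible
    std::cout << "Sales this month: " << sales.getMonthlySales() << std::endl;
    return 0;
}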
Abstraction:
Data abstraction is one of the most essential and important features of object-oriented
programming in C++. Abstraction means displaying only essential information and hiding the
details. Data abstraction refers to providing only essential information about the data to the
outside world, hiding the background details or implementation.
Consider a real-life example of a man driving a car. The man only knows that pressing the
accelerator will increase the speed of the car and applying the brakes will stop it, but he does
not know how pressing the accelerator actually increases the speed, and he does not know
about the inner mechanism of the car or the implementation of accelerator, brakes etc in the
car. This is what abstraction is.
Abstraction using Classes: We can implement Abstraction in C++ using classes. The class
helps us to group data members and member functions using available access specifiers. A
Class can decide which data member will be visible to the outside world and which is not.
Abstraction in Header files: One more type of abstraction in C++ can be header files. For
example, consider the pow() function present in the math.h header file. Whenever we need to
calculate the power of a number, we simply call the function pow() present in the math.h
header file and pass the numbers as arguments without knowing the underlying algorithm
according to which the function is actually calculating the power of numbers.
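For instance, the header-file abstraction described above can be seen in a few lines of C++ (the standard <cmath> header plays the role of math.h here): the caller states what it wants, and the implementation of the power computation stays hidden inside the library.

#include <cmath>       // the header hides the implementation of pow()
#include <iostream>

int main() {
    // We only say *what* we want (2 raised to the power 10); *how* the power
    // is computed is hidden behind the library function.
    double result = std::pow(2.0, 10.0);
    std::cout << "2^10 = " << result << std::endl;   // prints 1024
    return 0;
}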
Polymorphism:
The word polymorphism means having many forms. In simple words, we can define
polymorphism as the ability of a message to be displayed in more than one form.
A person can have different characteristics at the same time: a man is at once a father, a
husband and an employee. So the same person possesses different behaviour in different
situations. This is called polymorphism.
An operation may exhibit different behaviours in different instances. The behaviour depends
upon the types of data used in the operation.
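A minimal sketch of this idea in C++, using function overloading (one common form of polymorphism); the function name add and its two versions are illustrative only.

#include <iostream>

// The same name "add" behaves differently depending on the types of data
// used in the operation (compile-time polymorphism).
int add(int a, int b)          { return a + b; }
double add(double a, double b) { return a + b; }

int main() {
    std::cout << add(2, 3) << std::endl;       // calls the int version, prints 5
    std::cout << add(2.5, 3.5) << std::endl;   // calls the double version, prints 6
    return 0;
}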
Inheritance:
The capability of a class to derive properties and characteristics from another class is called
Inheritance. Inheritance is one of the most important features of Object-Oriented
Programming.
Sub Class: The class that inherits properties from another class is called Sub class or
Derived Class.
Super Class: The class whose properties are inherited by the sub class is called the Base Class or
Super Class.
Reusability: Inheritance supports the concept of “reusability”, i.e. when we want to create
a new class and there is already a class that includes some of the code that we want, we can
derive our new class from the existing class. By doing this, we are reusing the fields and
methods of the existing class.
Example: Dog, Cat, Cow can be Derived Class of Animal Base Class.
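Following the Animal/Dog example above, here is a minimal C++ sketch of inheritance; the class and member names are illustrative only.

#include <iostream>

// Base class (super class): common behaviour shared by all animals.
class Animal {
public:
    void eat() { std::cout << "This animal eats food." << std::endl; }
};

// Derived class (sub class): Dog reuses the members of Animal (reusability).
class Dog : public Animal {
public:
    void bark() { std::cout << "The dog barks." << std::endl; }
};

int main() {
    Dog d;
    d.eat();    // inherited from Animal
    d.bark();   // defined in Dog itself
    return 0;
}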
Dynamic Binding:
In dynamic binding, the code to be executed in response to function call is decided at runtime.
C++ has virtual functions to support this.
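A small sketch of dynamic binding with a C++ virtual function, continuing the Animal/Dog example; the names are illustrative only.

#include <iostream>

// Which speak() runs is decided at runtime from the actual object the
// pointer refers to, not from the pointer's declared type.
class Animal {
public:
    virtual void speak() { std::cout << "Some generic animal sound" << std::endl; }
    virtual ~Animal() = default;
};

class Dog : public Animal {
public:
    void speak() override { std::cout << "Woof" << std::endl; }
};

int main() {
    Animal* a = new Dog();
    a->speak();     // prints "Woof": the call is bound at runtime
    delete a;
    return 0;
}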
Message Passing:
Objects communicate with one another by sending and receiving information to each other. A
message for an object is a request for execution of a procedure and therefore will invoke a
function in the receiving object that generates the desired results. Message passing involves
specifying the name of the object, the name of the function and the information to be sent.
Event-driven programming is structured according to the Hollywood principle: “Don’t call us,
we’ll call you”. It is a paradigm of system architecture where the logic flow
within the program is driven by events such as user actions, messages from other programs, GPS
signals or hardware (sensor) inputs.
Events govern the overall flow of program execution, and the application runs and waits for
events to occur. When an event is triggered, the application code, which is listening to the events,
responds by running a specific handling function (callback function).
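A minimal, hypothetical C++ sketch of this structure: callbacks are registered for named events, and the program simply reacts when an event arrives (the event names and handlers are invented for illustration).

#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Handlers (callback functions) registered against event names.
    std::map<std::string, std::function<void()>> handlers;
    handlers["button_click"] = [] { std::cout << "Button was clicked" << std::endl; };
    handlers["key_press"]    = [] { std::cout << "A key was pressed" << std::endl; };

    // Simulated stream of incoming events; in a real program these would come
    // from the user, another program, a GPS signal or a hardware sensor.
    std::vector<std::string> incoming = {"key_press", "button_click", "key_press"};
    for (const std::string& event : incoming) {
        handlers[event]();    // the listening callback handles the event
    }
    return 0;
}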
Oftentimes, a program has to deal with external events through inputs/outputs (I/O). I/O and
event management are the foundations of any computer system: reading or writing from storage,
handling touch events, drawing on a screen, sending or receiving information on a network link,
and so on.
Most of us write imperative applications, where statements are executed in a specific order to
change the application state. The code is executed and we arrive at a final state. After the state is
calculated, the state does not change when the underlying factors do.
On the other hand, event-driven programming is about the propagation of change. It is also
referred to as declarative programming, where we express our intent but the application’s state is
dynamically determined by changes of underlying factors. Event-driven programs can be built
using imperative techniques, like callbacks. This may be fine for a program that has a single
event. However, for applications where hundreds of events are happening, this could easily lead
to callback hell. We could have numerous callbacks relying on one another, and it would be
really difficult to figure out which ones were being executed.
As a result, we require a new set of abstractions that enable us to seamlessly build asynchronous,
event-driven interactions across a network boundary. There are libraries in different languages,
like Java, that provide us with these abstractions. These libraries are referred to as Reactive
Extensions.
Reactive programs can be classified as push-based and pull-based. The pull-based system waits
for a request from the subscriber to push the data. This is the classic example where the data
source is actively polled for more information. At the code level, this employs the iterator
pattern, and Iterable<T> interface is specifically designed for such scenarios that are
synchronous in nature since the application can block while pulling data.
On the other hand, a push-based model aggregates events and pushes through a series of listeners
to achieve the computation. In this case, unlike the pull-based system, data and related updates
are handed to the subscriber from the source (Observable sequences in this case). This
asynchronous nature is achieved by not blocking the subscriber, but rather making it react to the
changes. As you can see, employing this push pattern is more beneficial in rich UI environments
where you wouldn’t want to block the main UI thread while waiting for some events. This
becomes ideal, thus making event-driven programs responsive.
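The document's own examples use Java's Iterable and Observable; as a hedged, language-neutral sketch, the same pull/push distinction can be shown in a few lines of C++ (the subscriber here is just a callback, standing in for a Reactive Extensions observer).

#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> data = {1, 2, 3};

    // Pull-based: the consumer actively iterates and pulls each value,
    // blocking until the next value is available.
    for (int value : data) {
        std::cout << "pulled " << value << std::endl;
    }

    // Push-based: subscribers register callbacks and the source pushes
    // each value to them; the subscriber just reacts to the change.
    std::vector<std::function<void(int)>> subscribers;
    subscribers.push_back([](int value) { std::cout << "pushed " << value << std::endl; });
    for (int value : data) {                    // the "observable" emitting values
        for (auto& subscriber : subscribers) {
            subscriber(value);
        }
    }
    return 0;
}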
Asynchronous means no waiting time. The caller function does not wait for a response from the
called service; it continues doing its next task.
Synchronous means waiting time. The caller should wait for a response from the invoked service
and it cannot continue doing its next task. The caller service should wait until the invoked
service finishes its job and returns results (success or failure).
Asynchronous and non-blocking I/O is about not blocking threads of execution and it’s often
more cost-efficient through more efficient use of resources. It helps minimize congestion on
shared resources in the system, which is one of the biggest impediments to scalability, low
latency, and high throughput.
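A small C++ sketch of the difference, using std::async as the asynchronous mechanism; slowService is an invented stand-in for a remote call.

#include <future>
#include <iostream>

// A task standing in for a slow, invoked service.
int slowService() { return 42; }

int main() {
    // Synchronous: the caller waits here until the service returns.
    int syncResult = slowService();
    std::cout << "sync result: " << syncResult << std::endl;

    // Asynchronous: the call is started, the caller continues with its next
    // task, and the result is collected later.
    std::future<int> pending = std::async(std::launch::async, slowService);
    std::cout << "caller continues with its next task..." << std::endl;
    std::cout << "async result: " << pending.get() << std::endl;   // collected when needed
    return 0;
}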
Difference between procedural and object-oriented programming:
Procedural programming: adding new data and functions is not easy. Object-oriented programming: adding new data and functions is easy.
Procedural programming: does not have any proper way of hiding data, so it is less secure. Object-oriented programming: provides data hiding, so it is more secure.
Procedural programming examples: C, FORTRAN, Pascal, Basic etc. Object-oriented programming examples: C++, Java, Python, C# etc.
Conclusion:
In this task, I have demonstrated the procedural, object-oriented and event-driven paradigms
along with their characteristics and the relationship between them. All of these are important for
developing an application, as mentioned above.
M2
On a more basic level, IDEs provide interfaces for users to write code, organize text groups, and
automate programming redundancies. But instead of a bare bones code editor, IDEs combine the
functionality of multiple programming processes into one. Some IDEs focus on a specific
programming language, such as Python or Java, but many have cross-language capabilities. In
terms of text editing capabilities, IDEs often possess or allow the insertion of frameworks and
element libraries to build upon base-level code.
Throughout the writing process, one or multiple users create hierarchies within the IDE and
assign groups of code to their designated region. From these, groupings can be strung together,
compiled, and built. Most IDEs come with built-in debuggers, which activate upon the build.
Visual debuggers are a substantial benefit of many IDEs. If any bugs or errors are spotted, users
are shown which parts of code have problems.
Benefits of IDE:
Serves as a single environment for most, if not all, of a developer’s needs such as version
control systems, debugging tools, and Platform-as-a-Service.
Code completion capabilities improve programming workflow.
Automatically checks for errors to ensure top quality code.
Refactoring capabilities allow developers to make comprehensive and mistake-free
renaming changes.
Maintain a smooth development cycle.
Increase developer efficiency and satisfaction.
Deliver top-quality software on schedule.
Text editor
Virtually every IDE will have a text editor designed to write and manipulate source code. Some
tools may have visual components to drag and drop front-end components, but most have a
simple interface with language-specific syntax highlighting.
Debugger
Debugging tools assist users in identifying and remedying errors within source code. They often
simulate real-world scenarios to test functionality and performance. Programmers and software
engineers can usually test the various segments of code and identify errors before the application
is released.
Compiler
Compilers are components that translate a programming language into a form machines can
process, such as binary machine code. The compiler first parses and analyzes the source code to
ensure its accuracy, then optimizes it for performance before generating machine code.
Code completion
Code completion features assist programmers by intelligently identifying and inserting common
code components. These features save developers time writing code and reduce the likelihood of
typos and bugs.
Conclusion:
Hence, I have explained the integrated development environment (IDE) along with its benefits. I
have also covered the common features of an IDE such as the text editor, debugger, compiler and
code completion.
D2
Critically evaluate the source code of an application which implements the programming
paradigms, in terms of code structure and characteristics.
Introduction:
Components of program:
1. Program Structure
Whatever the general type of a program, there is always a structure or format in which the
programmer develops programs. A well-organized program uses suitable information structures
and formats.
2. Variable Declaration
A variable is used to store values at run time or time of execution. Some programming languages
give flexibility to the programmer to avoid variable declaration, but a good programming
language must have a proper variable declaration method.
3. Looping structures
Looping in computer programming is used to repeat statements, under a given condition. There
are many types of loops with minor difference in different computer programming languages.
Loops will keep executing until the exit condition is met.
4. Control structures
A control structure is a block in computer programming that examines variables and chooses the
direction in which execution should go, based on given parameters or conditions.
5. Sentence structure
In computer programming, when programmers write a program they use programming notation
to express it. Thus, it is important that the notation has a sentence structure (syntax) in which
ideas can be easily expressed.
Elements of Programming:
Variables: variables in programming describe how data is represented, which can range from
very simple values to complex ones. The value they contain can change depending on
conditions. As we know, a program consists of instructions that tell the computer what to do and
data that the program uses when it is running. Data is either constant, with fixed values, or
variable. A variable can hold a very simple value such as a person's age, or something very
complex such as a student's track record of performance over a whole year.
Loops: a loop can be defined as a sequence of instructions that is repeated continuously until a
certain condition is satisfied. First a certain process is done, such as getting an item of data and
changing it, and then the condition attached to the loop is checked, for example whether a
counter has reached a prescribed number. Basically, a loop carries out the execution of a group
of instructions or commands a certain number of times. There is also the concept of an infinite
loop, also termed an endless loop: a piece of code that lacks a functional exit and repeats
indefinitely.
Subroutines and functions: this element of programming allows a programmer to gather a
snippet of code into one location which can be used over and over again. The primary purpose
of a function is to take a number of arguments, do some calculation on them and return a single
result. Functions are required where you need to do complicated calculations whose result may
or may not be used subsequently in an expression. Subroutines, in contrast, can return several
results. Calls to subroutines cannot be placed in an expression; in the main program a subroutine
is activated using a CALL statement which includes the list of inputs and outputs, enclosed in
parentheses, called the arguments of the subroutine. Both follow some rules for defining names,
such as being no more than six letters long and starting with a letter, and the name should be
different from those used for variables and functions. A short sketch after this list shows a
variable, a loop and a function working together.
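As promised above, here is a short, hypothetical C++ sketch that puts the three elements together: a variable, a loop and a function (the names age, total and average are invented for illustration).

#include <iostream>

// A function: takes arguments, performs a calculation and returns one result.
double average(double total, int count) {
    return total / count;
}

int main() {
    int age = 20;               // a variable holding a simple value
    double total = 0.0;

    // A loop: repeats its instructions until the exit condition is met.
    for (int mark = 1; mark <= 5; ++mark) {
        total += mark * 10;     // pretend these are five exam marks
    }

    std::cout << "Age: " << age << std::endl;
    std::cout << "Average mark: " << average(total, 5) << std::endl;
    return 0;
}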
Conclusion:
Hence, I have demonstrated the programming paradigms and evaluated the source code of an
application which implements a paradigm, in terms of code structure and characteristics. I have
also described the basic elements of programming which help in developing an application.
P3
Write a program that implements an algorithm using an IDE.
Introduction:
We can use the implementation of machine learning algorithms as a strategy for learning about
applied machine learning. We can also carve out a niche and skills in algorithm implementation.
Algorithm Understanding
Implementing a machine learning algorithm will give you a deep and practical appreciation for
how the algorithm works. This knowledge can also help you to internalize the mathematical
description of the algorithm by thinking of the vectors and matrices as arrays and the
computational intuitions for the transformations on those structures.
There are numerous micro-decisions required when implementing a machine learning algorithm
and these decisions are often missing from the formal algorithm descriptions. Learning and
parameterizing these decisions can quickly catapult you to intermediate and advanced level of
understanding of a given method, as relatively few people make the time to implement some of
the more complex algorithms as a learning exercise.
You are developing valuable skills when you implement machine learning algorithms by hand.
Skills such as mastery of the algorithm, skills that can help in the development of production
systems and skills that can be used for classical research in the field.
Mastery: Implementation of an algorithm is the first step towards mastering the algorithm. You
are forced to understand the algorithm intimately when you implement it. You are also creating
your own laboratory for tinkering to help you internalize the computation it performs over time,
such as by debugging and adding measures for assessing the running process.
Production Systems: Custom implementations of algorithms are typically required for
production systems because of the changes that need to be made to the algorithm for efficiency
and efficacy reasons. Better, faster, less resource intensive results ultimately can lead to lower
costs and greater revenue in business, and implementing algorithms by hand help you develop
the skills to deliver these solutions.
Literature Review: When implementing an algorithm you are performing research. You are
forced to locate and read multiple canonical and formal descriptions of the algorithm. You are
also likely to locate and code review other implementations of the algorithm to confirm your
understandings. You are performing targeted research, and learning how to read and make
practical use of research publications.
Process of algorithm
There is a process you can follow to accelerate your ability to learn and implement a machine
learning algorithm by hand from scratch. The more algorithms you implement, the faster and
more efficient you get at it and the more you will develop and customize your own process.
1. Select programming language: Select the programming language you want to use for the
implementation. This decision may influence the APIs and standard libraries you can use in your
implementation.
2. Select Algorithm: Select the algorithm that you want to implement from scratch. Be as specific
as possible. This means not only the class, and type of algorithm, but also go as far as selecting a
specific description or implementation that you want to implement.
3. Select Problem: Select a canonical problem or set of problems you can use to test and validate
your implementation of the algorithm. Machine learning algorithms do not exist in isolation.
4. Research Algorithm: Locate papers, books, websites, libraries and any other descriptions of the
algorithm you can read and learn from. Although, you ideally want to have one keystone
description of the algorithm from which to work, you will want to have multiple perspectives on
the algorithm. This is useful because the multiple perspectives will help you to internalize the
algorithm description faster and overcome roadblocks from any ambiguities or assumptions
made in the description (there are always ambiguities in algorithm descriptions).
5. Unit Test: Write unit tests for each function; even consider test-driven development from the
beginning of the project so that you are forced to understand the purpose and expectations of
each unit of code before you implement it. A minimal example of such a test is sketched below.
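As a hedged illustration of step 5, the following C++ fragment tests a small mean() function with a plain assert; in a real project a framework such as Google Test would normally be used, and the function and test names here are invented.

#include <cassert>
#include <iostream>

// Function under test: the mean of an array of values.
double mean(const double* values, int count) {
    double sum = 0.0;
    for (int i = 0; i < count; ++i) sum += values[i];
    return sum / count;
}

// A very small unit test: the expected behaviour is stated before trusting the code.
void testMean() {
    const double data[] = {1.0, 2.0, 3.0, 4.0};
    assert(mean(data, 4) == 2.5);
    std::cout << "testMean passed" << std::endl;
}

int main() {
    testMean();
    return 0;
}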
I strongly suggest porting algorithms from one language to another as a way of making rapid
progress along this path. You can find plenty of open source implementations of algorithms that
you can code review, diagram, internalize and re-implement in another language.
Consider open sourcing your code while you are developing it and after you have developed it.
Comment it well and ensure it provides instructions on how to build and use it. The project will
provide marketing for the skills you are developing and may just provide inspiration and help for
someone else looking to make their start in machine learning. You may even be lucky enough to
find a fellow programmer sufficiently interested to perform an audit or code review for you. Any
feedback you get will be invaluable (even as motivation), actively seek it.
Extensions
Once you have implemented an algorithm you can explore making improvements to the
implementation. Some examples of improvements you could explore include:
Experimentation: You can expose many of the micro-decisions you made in the algorithms
implementation as parameters and perform studies on variations of those parameters. This can
lead to new insights and disambiguation of algorithm implementations that you can share and
promote.
Optimization: You can explore opportunities to make the implementation more efficient by
using tools, libraries, different languages, different data structures, patterns and internal
algorithms. Knowledge you have of algorithms and data structures for classical computer science
can be very beneficial in this type of work.
Specialization: You may explore ways of making the algorithm more specific to a problem. This
can be required when creating production systems and is a valuable skill. Making an algorithm
more problem specific can also lead to increases in efficiency (such as running time) and
efficacy (such as accuracy or other performance measures).
Generalization: Opportunities can be created by making a specific algorithm more general.
Programmers (like mathematicians) are uniquely skilled in abstraction and you may be able to
see how the algorithm could be applied to more general cases of a class of problem or other
problems entirely.
Limitations
You can learn a lot by implementing machine learning algorithms by hand, but there are also
some downsides to keep in mind.
You may find it beneficial to start with a slower intuitive implementation of a complex algorithm
before considering how to change it to be programmatically less elegant, but computationally
more efficient.
Example Projects
Some algorithms are easier to understand than others. In this post I want to make some
suggestions for intuitive algorithms from which you might like to select your first machine
learning algorithm to implement from scratch.
Ordinary Least Squares Linear Regression: Use two-dimensional data sets and model y from
x. Print out the error for each iteration of the algorithm. Consider plotting the line of best fit and
the predictions for each iteration of the algorithm to see how the updates affect the model.
k-Nearest Neighbor: Consider using two dimensional data sets with 2 classes even ones that
you create with graph paper so that you can plot them. Once you can plot and make predictions,
you can plot the relationships created for each prediction decision the model makes.
Perceptron: Considered the simplest artificial neural network model and very similar to a
regression model. You can track and graph the performance of the model as it learns a dataset.
A minimal sketch of a perceptron is given after this list.
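As a hedged sketch of that last suggestion, the following C++ program trains a perceptron on the logical AND function; the data, learning rate and epoch limit are invented for illustration, and performance is tracked by printing the number of errors per epoch.

#include <iostream>
#include <vector>

// Prediction: a weighted sum passed through a step (threshold) function.
int predict(const std::vector<double>& w, double bias, double x1, double x2) {
    return (w[0] * x1 + w[1] * x2 + bias) >= 0.0 ? 1 : 0;
}

int main() {
    // Training set: two-dimensional inputs and the target output for AND.
    double inputs[4][2] = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    int targets[4] = {0, 0, 0, 1};

    std::vector<double> w = {0.0, 0.0};
    double bias = 0.0;
    const double learningRate = 0.1;

    for (int epoch = 0; epoch < 10; ++epoch) {
        int errors = 0;
        for (int i = 0; i < 4; ++i) {
            int output = predict(w, bias, inputs[i][0], inputs[i][1]);
            int error = targets[i] - output;           // -1, 0 or +1
            if (error != 0) {
                // Perceptron rule: adjust weights only on a wrong prediction.
                w[0] += learningRate * error * inputs[i][0];
                w[1] += learningRate * error * inputs[i][1];
                bias += learningRate * error;
                ++errors;
            }
        }
        std::cout << "epoch " << epoch << ": " << errors << " errors" << std::endl;
        if (errors == 0) break;                        // the model has converged
    }

    std::cout << "weights: " << w[0] << ", " << w[1] << "  bias: " << bias << std::endl;
    return 0;
}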
Conclusion:
In this task, I have explained implementing a machine learning algorithm using an IDE. I have
also demonstrated the process of implementation, its extensions and its limitations, and covered
example projects for a first algorithm. All of these are important for developing and
implementing an algorithm using an IDE.
M3
Discussion
High-level language programs are usually written (coded) as ASCII text into a source code file.
A unique file extension (examples: .asm, .c, .cpp, .java, .js, .py) is used to identify it as a source
code file: as you might guess, these correspond to Assembly, C, C++, Java, JavaScript, and
Python respectively. However, they are still just ASCII text files (other text files usually use the
extension .txt).
The source code produced by the programmer must be converted to an executable machine code
file specifically for the computer’s CPU (usually an Intel or Intel-compatible CPU within today’s
world of computers). There are several steps in getting a program from its source code stage to
running the program on your computer. Historically, we had to use several software programs (a
text editor, a compiler, a linker, and operating system commands) to make the conversion and
run our program. However, today all those software programs with their associated tasks have
been integrated into one program. However, this one program is really many software items that
create an environment used by programmers to develop software. Thus the name: Integrated
Development Environment or IDE.
Programs written in a high-level language are either directly executed by some kind of
interpreter or converted into machine code by a compiler (and assembler and linker) for the CPU
to execute. JavaScript, Perl, Python, and Ruby are examples of interpreted programming
languages. C, C++, C#, Java, and Swift are examples of compiled programming
languages.[2] The following figure shows the progression of activity in an IDE as a programmer
enters the source code and then directs the IDE to compile and run the program.
Upon starting the IDE software the programmer usually indicates the file he or she wants to open
for editing as source code. As they make changes they might either do a “save as” or “save”.
When they have finished entering the source code, they usually direct the IDE to “compile &
run” the program. The IDE does the following steps:
1. If there are any unsaved changes to the source code file, it has the text editor save the changes.
2. The compiler opens the source code file and does its first step which is executing the pre-
processor compiler directives and other steps needed to get the file ready for the second step.
The #include directives insert header files into the code at this point. If the pre-processor
encounters an error, it stops the process and returns the user to the source code file within the
text editor with an error message. If no problems are encountered, it saves the source code to a
temporary file called a translation unit.
3. The compiler opens the translation unit file and does its second step which is converting the
programming language code to machine instructions for the CPU, a data area, and a list of items
to be resolved by the linker. Any problem encountered (usually a syntax error or a violation of
the programming language rules) stops the process and returns the user to the source code file
within the text editor with an error message. If no problems are encountered, it saves the machine
instructions, data area, and linker resolution list as an object file.
4. The linker opens the program object file and links it with the library object files as needed.
Unless all linker items are resolved, the process stops and returns the user to the source code file
within the text editor with an error message. If no problems are encountered, it saves the linked
objects as an executable file.
5. The IDE directs the operating system’s program called the loader to load the executable file into
the computer’s memory and have the Central Processing Unit (CPU) start processing the
instructions. As the user interacts with the program, entering test data, he or she might discover
that the outputs are not correct. These types of errors are called logic errors and would require
the user to return to the source code to change the algorithm.
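As a minimal, hypothetical illustration of these steps, the following single C++ translation unit can be taken through the same stages by hand with a typical compiler such as g++ (the file names are invented):

// hello.cpp -- one source code file (a translation unit after pre-processing)
#include <iostream>          // step 2: the pre-processor inserts this header

int main() {
    std::cout << "Hello" << std::endl;   // step 3: converted to machine instructions
    return 0;
}

// Command-line equivalents of the IDE's "compile & run" button:
//   g++ -c hello.cpp -o hello.o    (pre-process and compile to an object file)
//   g++ hello.o -o hello           (link with the library object files into an executable)
//   ./hello                        (the loader places it in memory and the CPU runs it)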
Resolving Errors
Despite our best efforts at becoming perfect programmers, we will create errors. Solving these
errors is known as debugging your program. The three types of errors in the order that they
occur are:
1. Compiler
2. Linker
3. Logic
There are two types of compiler errors; pre-processor (1st step) and conversion (2nd step). A
review of Figure 1 above shows the four arrows returning to the source code so that the
programmer can correct the mistake.
During the conversion (2nd step) the compiler might give a warning message which in some
cases may not be a problem to worry about. For example: Data type demotion may be exactly
what you want your program to do, but most compilers give a warning message. Warnings don’t
stop the compiling process but as their name implies, they should be reviewed.
(Figures, not reproduced here: IDE monitor interaction for the Bloodshed Dev-C++ 5 compiler/IDE, showing a Linker Error (an error message describing a linking problem) and a Logic Error (seen from the output within the “Black Box” area).)
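The three categories can also be illustrated with a small, invented C++ example; the first two errors are shown only as comments because code containing them would not build.

// 1. Compiler error: a syntax mistake stops the conversion step, e.g.
//        int total = 0          <-- missing semicolon, reported by the compiler
// 2. Linker error: a function that is declared and called but never defined
//    in any object file makes the link step fail.
// 3. Logic error: the program builds and runs, but the output is wrong.
#include <iostream>

int main() {
    int a = 5, b = 3;
    int average = a + b / 2;             // logic error: should be (a + b) / 2
    std::cout << average << std::endl;   // prints 6 instead of the intended 4
    return 0;
}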
Key Terms
Compiler
Converts source code into machine instructions and data for the CPU, saved as an object file.
Debugging
The process of finding and resolving the errors in a program.
Linker
Combines the program's object file with the library object files to produce an executable file.
Loader
Part of the operating system that loads executable files into memory and directs the CPU
to start running the program.
Pre-processor
The first step the compiler does in converting source code to object code.
Text editor
The IDE component used to enter and modify source code.
Warning
A compiler message that flags a possible problem but does not stop the compiling process.
Conclusion:
In this task, I have managed the development process of the program by using an IDE. I have
also explained how errors are resolved, which helps to solve bugs in programming. Hence, I
have also covered the key terms of programming.
D3
Here are just five (the full list is longer) important reasons why editor users should consider an
IDE:
1) Debugging
Stop using `print()` or `console.log()` or even `echo` commands to debug! Coding and bugs go
hand-in-hand. Since your code will probably have a bug in it (especially if you’re new to
programming), the faster you find it, the sooner you can fix it. Printing variable state is an
exhausting, tedious means of figuring out program state.
The debugger is a tool for analyzing programs on a line-by-line basis, monitoring and altering
variables, and watching output as it is generated. The debugging features included in Komodo
IDE (such as breakpoint and spawnpoint control, remote debugging, stepping, watching
variables, viewing the call stack, etc.) make what is a tedious (and sometimes frustrating) part of
programming software a breeze.
2) Unit Testing
With the success and resurgence of test driven design (TDD) in software, and writing quality
code in mind, developers know it’s important to conduct proper unit testing. Your editor needs to
be able to support major frameworks for your language in order to do this (e.g. PHPUnit, Perl
TAP, Python unit test, and Ruby’s rake test.)
3) Code Refactoring and Profiling
Some editors include some basic code intel (such as calltips and auto-complete), but code
refactoring and profiling are more advanced features that you usually only find in an IDE. Code
refactoring makes it easier to perform global code changes, saving time compared to making
changes manually.
As well, IDEs can provide you with code profiling. Code profiling lets you analyze your code
performance on a function-by-function basis, allowing you to quickly detect bottlenecks. The
code profiler tracks which function calls are made, how many times those functions get called
and how long the calls take to complete.
4) Version Control
When performing most of the Version Control System (VCS) tasks, you don’t typically need to
be running complicated commands. An IDE should (and most do) facilitate most necessary
commands (push, pull/update, commit, history, etc.) that allow you to keep up to date with your
team and vice versa without having to run another tool. Komodo IDE supports Subversion,
Mercurial, Git, Perforce, Bazaar, CVS.
5) Integrations
The most important thing an IDE should do for you (and to be fair, this goes for editors too, and
some do it really well) is allow you to easily integrate tools that the software's creators didn't
cover, and then allow you to easily access them. Staying in the zone is of critical importance (as
you well know!), so the more existing integration your IDE can provide through add-ons, the
more productive you'll be.
IDEs differ in how easily they integrate with other systems. The integration should be so
seamless that you shouldn’t need to leave your IDE to perform tasks. (Those who use gulp,
Grunt, PhoneGap/Cordova, Docker or Vagrant can take advantage of these technologies without
ever leaving Komodo.)
A few other features that help with coding include spell-checking, tracked changes and database
integration with the database explorer. And, if you're part of a team, you may want some of the
team features that only an IDE can provide.
Conclusion:
In this task, I have evaluated the use of an IDE for the development process of an application,
contrasted with not using an IDE.
P4
Explain the debugging process and explain the debugging facilities available
in an IDE.
Debugging:
Debugging is the important technique of finding and removing errors, bugs or defects in a
program. It is a multi-step process in software development: it involves identifying the bug,
finding the source of the bug and correcting the problem to make the program error-free. In
software development, the developer can locate a code error in the program and remove it using
this process. Hence, it plays a vital role in the entire software development lifecycle.
Types of Debugging
Depending upon the type of code error, different types of debugging tools and plugins are used,
so it is necessary to understand what is happening and what type of tool is needed. These tools
help to solve general issues with a toolset plugin and provide technical information. In PHP, the
code can be debugged by attaching a debugger client using any one of these tools: debug utilities
such as Xdebug and Zend Debugger are used with PhpStorm, and Kint is used as a debugging
tool for PHP debugging.
For example, to enable PHP debugging in WordPress, edit the wp-config.php file and add the
code needed. An error file (error_log.txt) is then produced in the WordPress root directory; the
file can be created and made writable by the web server, or else created with an FTP program.
All the errors that occur in the front end and the back end are then logged into that error file.
JavaScript debugging uses the browser's debugger tool and the JavaScript console. A JavaScript
error can occur and stop the execution and functioning of operations in WordPress. When the
JavaScript console is open, the error messages appear there and can then be reviewed and
cleared. However, some console warnings that appear can point to an error that should be fixed.
There are different debuggers for different operating systems:
For Linux and Unix operating systems, GDB is used as the standard debugger.
For Windows, Visual Studio provides a powerful editor and debugger.
For macOS, LLDB is a high-level debugger.
The Intel Parallel Inspector is used for debugging memory errors in C/C++ programs.
Debugging Process
The process of finding bugs or errors and fixing them in any application or software is called
debugging. To make the software programs or products bug-free, this process should be done
before releasing them into the market. The steps involved in this process are,
Identifying the error – It saves time and avoids the errors at the user site. Identifying errors at
an earlier stage helps to minimize the number of errors and wastage of time.
Identifying the error location – The exact location of the error should be found to fix the bug
faster and execute the code.
Analyzing the error – To understand the type of bug or error and reduce the number of errors
we need to analyze the error. Solving one bug may lead to another bug that stops the application
process.
Prove the analysis – Once the error has been analyzed, we need to prove the analysis. It uses
a test automation process to write the test cases through the test framework.
Cover the lateral damage – The bugs can be resolved by making the appropriate changes, and
then we move on to the next stages of the code or program to fix the other errors.
Fix and Validate – This is the final stage to check all the new errors, changes in the software
or program and executes the application.
Debugging Software
This software plays a vital role in the software development process. Software developers use it to
find the bugs, analyze the bugs and enhance the quality and performance of the software. The
process of resolving the bugs using manual debugging is very tough and time-consuming. We need
to understand the program, it’s working, and the causes of errors by creating breakpoints.
As soon as the code is written, it is combined with the other stages of programming to form a
new software product. Several strategies such as unit tests, code reviews, and pair programming
are used to debug a large program (containing thousands of lines of code). The standard debugger
tool or the debug mode of the Integrated Development Environment (IDE) helps examine the
code's logging and error messages.
The bug is identified in a system and defect report is created. This report helps the developer to
analyze the error and find the solutions.
The debugging tool is used to know the cause of the bug and analyze it by step-by-step
execution process.
After identifying the bug, we need to make the appropriate changes to fix the issues.
The software is retested to ensure that no error is left and checks all the new errors in the
software during the debugging software process.
A sequence-based method used in this software process made it easier and more convenient for
the developer to find the bugs and fix them using the code sequences.
Debugging Techniques
To perform the debugging process easily and efficiently, it is necessary to follow some techniques.
The most commonly used debugging strategies are,
Induction strategy includes the Location of relevant data, the Organization of data, the Devising
hypothesis (provides possible causes of errors), and the Proving hypothesis.
The backtracking strategy is used to locate errors in small programs. When an error occurs, the
program is traced one step backward during the evaluation of values to find the cause of bug or
error.
Debugging by testing is used in conjunction with the debugging-by-induction and debugging-by-
deduction techniques. The test cases used in debugging are different from the test cases used in
the testing process.
Conclusion:
Hence, debugging is a multi-step process in software development. It involves identifying the bug,
finding the source of the bug and correcting the problem to make the program error-free. I have
also described the debugging process and debugging techniques, as well as debugging software,
and I have explained the types of debugging.
M4
Evaluate how the debugging process can be used to help develop more secure,
robust applications.
Debugging tools:
A debugging tool is a computer program that is used to test and debug other programs. There are
various debugging tools available; they may be separate programs, or they might come inside an
existing program such as an IDE. Some debugging tools are tracers, profilers, interpreters, the
step command etc.
Debugging in Visual Studio: The Visual Studio IDE also provides various debugging facilities.
Visual Studio's debugger can help find problems in code and suggest solutions. To debug any
errors in Visual Studio, we first need to start the app in debug mode, either by pressing F5 or by
choosing Debug > Start Debugging from the menu.
Once we are in debug mode we can use various debugging facilities. Some debugging facilities
that are available in visual studio are:
Break Point: Breakpoints are one of the most important debugging techniques for a developer. We
can set breakpoints wherever we want to pause debugger execution. For example, we may want
to see the state of code variables or look at the call stack at a certain breakpoint. Breakpoints are
a useful feature when we know the line of code or section of code that we want to examine in
detail.
To set a breakpoint in a program, we click in the far left margin next to a line of code, or we can
select the line and press F9. A red dot in the margin marks the breakpoint.
Once the program reaches this point while it is running
in debug mode, the execution of code stops at this point. Whenever a breakpoint is hit, the
application and the debugger are in break mode. While in this mode the following actions can be
executed:
1. Inspecting the values of local variables set in current block of code in a separate local window.
2. Terminate the execution of a single or multiple application.
3. Making adjustments to program by viewing and modifying values of variables.
4. Move the execution point so as to resume the application execution from that point.
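As a small, invented example, a breakpoint placed on the line inside the loop below would pause the program on every iteration so that the values of i and total can be inspected in break mode.

#include <iostream>

int main() {
    int total = 0;
    for (int i = 1; i <= 5; ++i) {
        total += i;     // <-- set the breakpoint here (click the margin or press F9);
                        //     each time it is hit, inspect i and total while paused
    }
    std::cout << "total = " << total << std::endl;   // prints 15
    return 0;
}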
Tracers: Tracing is a feature in Visual Studio that allows the programmer to put a log message
onto the main output window. Tracing can only be activated in debug mode. The TRACE macro
takes a format-specifier string argument that can reference any number of variables.
Error list: The error list is a window that displays information about specific error messages. It
displays the errors and warnings that are produced while writing code, and it helps to find where
exactly an error has occurred. If we double-click on an error, it shows the line in which the error
occurred.
Watch points: A watch lets you monitor the value of a variable or expression while the program is
paused, which shows where exactly the error is and what the error is. When an error shows up in
your code, you can add a watch on the suspect variable so that you know where the problem is
and can fix it later. You set up a watch by selecting the variable or expression and clicking Add
Watch; you can add as many watches as you want.
Error lists:
The error list is a small panel located at the bottom of the Visual Studio screen. It shows all the
errors that occur in the code. In my application as well, the error list helped me find where the
errors occurred and in which place each error was.
In the picture above, the error list is shown: there are various errors, and double-clicking on an
error showed where exactly the error was.
Importance of debugging:
Debugging is an important process that is required to solve errors and bugs in a program. While
writing a program, many errors may occur: the code might contain various bugs and errors that
cause problems in the program. A user will not be able to solve those errors manually because
doing so is hard and time-consuming. For that, there are various debugging tools that can be used
not only to identify where a bug is but also to remove it. Debugging is an important procedure
that can be used by anyone. Since most projects and application code bases contain a huge
number of lines of code, any new item is liable to contain some bugs. Buggy programs are very
annoying and hamper the program's structure. Debugging helps to solve those bugs and errors;
for these reasons, debugging is important.
Conclusion:
Debugging is required for all programmers because it helps them debug the programs they write.
A program may have many bugs and errors, and debugging is required to solve and remove them.
Various debugging tools are available that help to debug a program. For a program to be
effective, debugging is necessary.
P5
Coding standards:
Coding standards are a set of guidelines, best practices, programming styles and conventions that
developers adhere to when writing source code for an application or project. They are like a set
of rules and regulations that every developer must follow when writing code and applications.
Every individual developer and development team should use a coding standard. There are
different coding standards for different programming languages, and each standard may differ
from language to language.
Advantages of coding standards:
1. Safe: Code written to a coding standard can be used without causing any harm; it is a safe way
of writing code.
2. Secure: Following a coding standard helps produce secure code that is harder to attack, because
common vulnerabilities are avoided.
3. Reliable: Coding standards help ensure that your application is reliable and can function
properly without errors.
4. Portable: Coding standards are a portable way of writing code; the code works the same in
every environment for a given programming language.
Indentation: Indentation is one of the most important features in any programming language. In
programming, an indent style is a convention governing the indenting of blocks of code to
convey program structure. Indentation helps to convey a better structure of the program to
readers. It is used to clarify the link between control flow constructs, such as conditions or loops,
and the code contained within or outside them. In some languages, such as Python, indentation is
even meaningful to the interpreter.
A properly indented program looks neat and clean, while the same program without indentation
looks messy. Program code indentation will make a program:
1. Easy to read.
2. Easy to understand.
3. Easy to modify.
4. Easy to maintain.
Comments: comments are the parts of a program that do not get executed and do not affect the
program. Comments are used when we need to add a piece of information to the code but do not
want it to affect the code. There are various ways comments can be added:
1. Single-line comment: single-line comments start with // and continue until the end of the line.
If the last character in the comment is a backslash (\), the comment continues on the next line.
2. Multi-line comment: multi-line comments start with /* and end with */.
Variable Declaration: Variables are the items that store values. When declaring variables, we
should use consistent formatting. A variable can be declared in the following way:
<data type> <variable name> = <initial value>; For example:
int x = 5;
Naming conventions: Naming conventions are general rules applied when naming items in
software or code. They serve different purposes, such as adding clarity and uniformity to scripts
and making the code easier to maintain. A naming convention may include capitalizing an entire
word to denote a constant or variable, or capitalizing only the first letter of each word.
Types of naming conventions:
Pascal case: Pascal case combines words by capitalizing the first letter of every word and
removing the spaces. It is a variant of camel case and is typically used for class names and
constructors.
An example:
Pascal Case: “UserLoginCount”
Camel case: Camel case combines words by capitalizing every word following the first word and
removing the spaces.
An example:
Camel Case: “userLoginCount”
This type of naming convention is often used for variable declarations.
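A short, hypothetical C++ fragment pulling these aspects together (indentation, both comment styles, a consistent variable declaration, Pascal case for the type name and camel case for the members):

#include <iostream>

// Pascal case for the class (type) name.
class UserAccount {
public:
    // Camel case for variables and functions; this is a single-line comment.
    int userLoginCount = 0;

    /* Multi-line comment:
       increments the counter each time the user logs in. */
    void recordLogin() { userLoginCount = userLoginCount + 1; }
};

int main() {
    int maxAttempts = 3;            // <data type> <variable name> = <initial value>;
    UserAccount account;
    account.recordLogin();
    std::cout << account.userLoginCount << " login(s), limit " << maxAttempts << std::endl;
    return 0;
}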
Conclusion:
Hence, coding standards are a set of guidelines, best practices, programming styles and
conventions that developers adhere to when writing source code for an application or project,
and every individual developer and development team should use one. I have also demonstrated
the advantages of coding standards and the common aspects of a coding standard such as
indentation, comments, variable declaration and naming conventions.
D4
Conclusion:
It is vital to remember that coding standards are only as useful as the method which is used to
enforce them. Even if a company has a well-thought-out, comprehensive coding standard, if it is
not enforced then it is useless. Enforcement is, however, not a fully comprehensive solution, and
it is ultimately the responsibility of the programmer to comply with the standard. Coding
standards are necessary because they provide a medium for coders and development teams to
write and implement their code consistently.
References
https://www.analyticssteps.com/blogs/what-algorithm-types-applications-characteristics
http://way2benefits.com/advantages-disadvantages-algorithm/
https://www.outsource2india.com/software/process.asp
https://www.geeksforgeeks.org/control-structures-in-programming-languages/
https://en.wikipedia.org/wiki/Bubble_sort
https://hackr.io/blog/what-is-coding-used-for
https://francescoimola.medium.com/a-difficult-relationship-between-algorithms-and-code-bfb1da9bb856
https://www.cs.bham.ac.uk/~rxb/java/intro/2programming.html
https://www.comeausoftware.com/2016/04/from-algorithm-to-code/
https://www.geeksforgeeks.org/introduction-of-programming-paradigms/
https://www.geeksforgeeks.org/object-oriented-programming-in-cpp/
https://distributedsystemsauthority.com/characteristics-of-event-driven-programming/
https://www.g2.com/articles/ide
https://study.com/academy/answer/critically-evaluate-the-source-code-of-an-application-which-implements-the-programming-paradigms-in-terms-of-the-code-structure-and-characteristics.html
https://www.assignmenthelp.net/assignment_help/elements-of-programming
https://machinelearningmastery.com/how-to-implement-a-machine-learning-algorithm/
https://press.rebus.community/programmingfundamentals/chapter/integrated-development-environment/
https://www.activestate.com/blog/5-reasons-use-ide-instead-editor/
https://www.elprocus.com/what-is-debugging-types-techniques-in-embedded-systems/