
UNIT – II

Software development – waterfall model, Agile; Types of computer languages – programming,
markup, scripting; Program development – steps in program development, flowcharts,
algorithms; Data structures – definition, types of data structures

Software development

Software development refers to a set of computer science activities dedicated to the
process of creating, designing, deploying, and supporting software. Software itself is the
set of instructions or programs that tell a computer what to do. It is independent of hardware
and makes computers programmable.

Software Development Life Cycle (SDLC) is a process used by the software industry to
design, develop, and test high-quality software.
SDLC Models

Software Development Life Cycle (SDLC) is a conceptual model used in project management that
defines the stages included in an information system development project, from an initial
feasibility study to the maintenance of the completed application.

There are different software development life cycle models specified and designed, which are
followed during the software development process. These models are also called "Software
Development Process Models." Each process model follows a series of phases unique to its
type to ensure success in the process of software development.

Two widely used SDLC models, the Waterfall model and the Agile model, are described below.


Waterfall model

Winston Royce introduced the Waterfall Model in 1970. This model has five phases:
requirements analysis and specification; design; implementation and unit testing; integration
and system testing; and operation and maintenance. The phases always follow in this order and
do not overlap. The developer must complete every phase before the next phase begins. This
model is named the "Waterfall Model" because its diagrammatic representation resembles a
cascade of waterfalls.

1. Requirements analysis and specification phase: The aim of this phase is to understand
the exact requirements of the customer and to document them properly. Both the customer
and the software developer work together to document all the functional, performance,
and interfacing requirements of the software. It describes the "what" of the system to be
produced and not the "how." In this phase, a large document called the Software Requirement
Specification (SRS) document is created, which contains a detailed description of what the
system will do in common language.

2. Design Phase: This phase aims to transform the requirements gathered in the SRS into a
suitable form which permits further coding in a programming language. It defines the overall
software architecture together with high level and detailed design. All this work is documented
as a Software Design Document (SDD).

3. Implementation and unit testing: During this phase, the design is implemented. If the SDD is
complete, the implementation or coding phase proceeds smoothly, because all the information
needed by software developers is contained in the SDD.

During testing, the code is thoroughly examined and modified. Small modules are tested in
isolation initially. After that, these modules are tested by writing some overhead code to check
the interaction between these modules and the flow of intermediate output.
4. Integration and System Testing: This phase is highly crucial, as the quality of the end
product is determined by the effectiveness of the testing carried out. Better testing leads
to satisfied customers, lower maintenance costs, and accurate results. Unit testing determines
the efficiency of individual modules. However, in this phase, the modules are tested for their
interactions with each other and with the system.

5. Operation and maintenance phase: Maintenance is the task performed once the software
has been delivered to the customer, installed, and made operational.

When to use SDLC Waterfall Model?

Some circumstances where the use of the Waterfall model is best suited are:

 When the requirements are constant and do not change regularly.
 The project is short.
 The project environment is stable.
 The tools and technology used are consistent and not changing.
 When resources are well prepared and are available to use.

Advantages of Waterfall model


 This model is simple to implement, and the number of resources required for it
is minimal.
 The requirements are simple and explicitly declared; they remain unchanged
during the entire project development.
 The start and end points for each phase are fixed, which makes it easy to track
progress.
 The release date for the complete product, as well as its final cost, can be
determined before development.
 It gives ease of control and clarity for the customer due to a strict reporting system.

Disadvantages of Waterfall model


 In this model, the risk factor is higher, so this model is not suitable for large
and complex projects.
 This model cannot accommodate changes in requirements during development.
 It becomes tough to go back to a previous phase. For example, if the application has
moved to the coding phase and there is a change in requirements, it becomes tough
to go back and change it.
 Since testing is done at a later stage, it does not allow identifying the challenges
and risks in the earlier phases, so a risk reduction strategy is difficult to prepare.
Agile Model

The Agile software development life cycle is the structured series of stages that a product
goes through as it moves from beginning to end. It contains six phases: concept, inception,
iteration, release, maintenance, and retirement.

As mentioned, the Agile software development life cycle consists of six phases. Let’s examine
each of these Agile phases in more detail.

1. Concept

First up is the concept phase. Here, a product owner will determine the scope of their project.
If there are numerous projects, they will prioritize the most important ones. The product owner
will discuss key requirements with a client and prepare documentation to outline them,
including what features will be supported and the proposed end results. It is advisable to keep
the requirements to a minimum as they can be added to in later stages. In the concept stage,
the product owner will also estimate the time and cost of potential projects. This detailed
analysis will help them to decide whether or not a project is feasible before commencing work.

2. Inception

Once the concept is outlined, it is time to build the software development team. A product
owner will check their colleagues’ availability and pick the best people for the project while
also providing them with the necessary tools and resources. They can then start the design
process. The team will create a mock-up of the user interface and build the project
architecture. The inception stage involves further input from stakeholders to fully flesh out the
requirements on a diagram and determine the product functionality. Regular check-ins will
help to ensure that all requirements are built into the design process.
3. Iteration

Next up is the iteration phase, also referred to as construction. It tends to be the longest phase
as the bulk of the work is carried out here. The developers will work with UX designers to
combine all product requirements and customer feedback, turning the design into code. The
goal is to build the bare functionality of the product by the end of the first iteration or sprint.
Additional features and tweaks can be added in later iterations. This stage is a cornerstone of
Agile software development, enabling developers to create working software quickly and make
improvements to satisfy the client.

4. Release

The product is almost ready for release. But first, the quality assurance team needs to perform
some tests to ensure the software is fully functional. These Agile team members will test the
system to ensure the code is clean — if potential bugs or defects are detected, the developers
will address them swiftly. User training will also take place during this phase, which will require
more documentation. When all of this is complete, the product’s final iteration can then be
released into production.

5. Maintenance

The software will now be fully deployed and made available to customers. This action moves
it into the maintenance phase. During this phase, the software development team will provide
ongoing support to keep the system running smoothly and resolve any new bugs. They will
also be on hand to offer additional training to users and ensure they know how to use the
product. Over time, new iterations can take place to refresh the existing product with upgrades
and additional features.

6. Retirement

There are two reasons why a product will enter the retirement phase: either it is being replaced
with new software, or the system itself has become obsolete or incompatible with the
organization over time. The software development team will first notify users that the software
is being retired. If there is a replacement, the users will be migrated to the new system. Finally,
the developers will carry out any remaining end-of-life activities and remove support for the
existing software.

Each phase of the Agile life cycle contains numerous iterations to refine deliverables and
deliver great results. Let’s take a look at how this iteration workflow works within each phase:

The Agile iteration workflow

Agile iterations are usually between two and four weeks long, with a final completion date. The
workflow of an Agile iteration will typically consist of five steps:

 Plan requirements
 Develop product
 Test software
 Deliver iteration
 Incorporate feedback

Each Agile phase will contain numerous iterations as software developers repeat their
processes to refine their product and build the best software possible. In essence, these
iterations are smaller cycles within the overarching Agile life cycle.
The Agile life cycle is a key structural model for software development teams, enabling them
to stay on course as they move their product from conception to retirement. To support all
activities in the Agile cycle, team members need to have access to the appropriate resources
and tools, including an Agile project management platform.


Types of computer languages


Programming Languages:

A programming language is a way for programmers (developers) to communicate with
computers. Programming languages consist of a set of rules that allow strings of text to be
converted into machine code or, in the case of visual programming languages, into graphical
elements.

Generally speaking, a program is a set of instructions written in a particular language (C, C++,
Java, Python) to achieve a particular task.

What Are the Best Programming Languages to Learn in 2023?

Which coding and programming language should you learn? JavaScript and Python, two of the
most popular languages in the startup industry, are in high demand. Most startups use
backend frameworks such as Django (Python), Flask (Python), and NodeJS (JavaScript).
These languages are also considered to be among the best programming languages for
beginners to learn.

Below is a list of the most popular programming languages that will be in demand in
2023.

1. JavaScript
2. Python
3. Go
4. Java
5. Kotlin
6. PHP
7. C#
8. Swift
9. R
10. Ruby
11. C and C++
12. MATLAB
13. TypeScript
14. Scala

Markup Languages:

A markup language is a computer language used to annotate and define the contents of a
document. It was designed to process, define, and present computer text in a form that humans
can read. It specifies the code used to format text, including the style and layout in which the
programmer wants the document to appear, and it uses tags to define these elements.
A list of some of the most commonly used markup languages is given below.
HTML
HyperText Markup Language (HTML) is perhaps the most widely used markup language
today. It is mainly used to develop the web pages we see on the World Wide Web. Essentially,
every web page can be written using a version of HTML. That makes the code critical in
ensuring that the text and images on a website follow proper formatting. Without it, browsers
would have no clue how the content should be displayed.

HTML is also responsible for giving web pages their basic structure. It is often used with
Cascading Style Sheets (CSS) to improve page appearance.

XML
eXtensible Markup Language (XML) is highly similar to HTML. It allows browsers to display
and interpret information correctly. But as its name implies, XML is extensible: it permits users
to define their own tags that describe the content itself rather than just how it should be
displayed.

BBC
Bulletin Board Code (BBC or BBCode) is commonly used to format posts on message boards.
The tags are usually placed between square brackets ([ ]) and are then parsed by the message
board system, which translates them into a markup language that most web browsers can
interpret, usually HTML. Some message boards prefer BBC because it allows markup to be
done without triggering the security alerts that can occur when forum users include raw HTML
code in their posts.

SGML
Standard Generalized Markup Language (SGML) is an International Organization for
Standardization (ISO) standard that provides a general structure for all markup languages. It
outlines the rules used to validate and parse markups. Note, though, that not all markup
languages adhere to the SGML standard; HTML5, for example, does not.

Scripting Languages

Scripting languages are a specific kind of computer language that you can use to give
instructions to other software, such as a web browser, server, or standalone application. Many
of today’s most popular coding languages are scripting languages, such as JavaScript, PHP,
Ruby, Python, and several others.

As scripting languages make coding simpler and faster, it’s not surprising that they are widely
used in web development.

Best Scripting Languages


There are many great scripting languages that would deserve a mention here, but some of
them are no longer in active development. The following 13 scripting languages, however,
are regularly updated and are still used in production.
So if you are thinking about learning a new scripting language as a new professional path,
they are all worth a shot.
1. JavaScript/ECMAScript
2. PHP
3. Python
4. Ruby
5. Groovy
6. Perl
7. Lua
8. Bash
9. PowerShell
10. R
11. VBA
12. Emacs Lisp
13. GML

Program development

Programming is the process of creating a set of instructions that tell a computer how to
perform a task. Programming can be done using a variety of computer "languages," such as
SQL, Java, Python, and C++.
Syntax refers to the spelling and grammar of a programming language. Computers are
inflexible machines that understand what you type only if you type it in the exact form that the
computer expects. The expected form is called the syntax. A program with syntax errors cannot
execute.
A logic error (or logical error) is a mistake in a program's source code that results in incorrect
or unexpected behavior. It is a type of runtime error that may simply produce the wrong output
or may cause a program to crash while running. Many different types of programming
mistakes can cause logic errors.
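As a small illustration in Python (one of the languages mentioned above; the snippets are
hypothetical examples, not taken from any particular program), the first fragment contains a
syntax error and will not run at all, while the second runs but contains a logic error:

# Syntax error: the missing closing parenthesis breaks Python's grammar,
# so the interpreter rejects the program before it runs.
print("Hello, world"

# Logic error: the program runs, but operator precedence makes the formula wrong,
# so the average of 4 and 6 is reported as 7 instead of 5.
a = 4
b = 6
average = a + b / 2   # should be (a + b) / 2
print("Average:", average)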
Program development is the process of creating application programs. The program
development life cycle (PDLC) is the process containing the phases of program
development: analyzing, designing, coding, debugging and testing, and implementing and
maintaining application software.
The following are six steps in the Program Development Life Cycle:

1. Analyze the problem. The computer user must figure out the problem and then decide how
   to resolve it by choosing an appropriate program.
2. Design the program. A flowchart is important to use during this step of the PDLC. This
   is a visual diagram of the flow of the program. This step will help you break down
   the problem.
3. Code the program. This is using a programming language to write the lines of code.
   The written code is called the listing or the source code, and it is translated into object
   code for the computer to run.
4. Debug the program. The computer user must debug the program. This is the process of
   finding and fixing the "bugs," which are the errors in a program.
5. Formalize the solution. One must run the program to make sure there are no syntax and
   logic errors. Syntax errors are grammatical errors, and logic errors produce incorrect results.
6. Document and maintain the program. This step is the final step of gathering everything
   together. Internal documentation is involved in this step because it explains the reasoning
   behind any change one might have made in the program or how to write the program.

Writing Code

Computer code is a series of statements written in a higher-level language (typically referred
to as source code). Such a language is similar to English and is converted to machine
language using a type of program known as a compiler. Because code is used to instruct
computers to perform a wide array of tasks, there are many different kinds of languages and
programs available. One of the most important aspects of coding is deciding which jobs
(creating a web page, writing a game, etc.) a computer will do. Regardless of what is chosen,
the majority of code is written as plain text because of its compatibility. Though the actual
content is written this way, documents are each given a unique file extension that is indicative
of their type. One can write simple code with a basic word processor or text editor. However,
using a software application specifically designed for coding in a particular language is
significantly more effective and efficient. Just as word processing software helps detect
spelling errors and non-standard grammar in a document written in English, a code editor
provides comparable tools to ensure accuracy. A code editor is often part of an integrated
development environment (IDE), a software application that supports writing, formatting, and
checking code. Using a code editor decreases the chance of errors in code and the time spent
reading it. A large downside of working with IDEs is a lack of flexibility. While some IDEs work
with multiple programming languages, a sizable number are specific to only one language.

Flowcharts and Pseudocode

During the design process of the Program Development Life Cycle, it is important that
programmers (and non-programmers) are able to visualize the way in which the program will
work. Certain tools such as flowcharts and pseudocode are used to simplify the design
process and allow the developers to see the program before any actual coding is used. A
common type of design tool is the flowchart. A flowchart can be either handwritten or created
with software such as Visual Logic or Flowgorithm. Using software helps you save your work
digitally which can be more reliable. Many of these software programs have similar symbols
to represent certain actions such as input, output, assignments, and various types of loops.
For example, a rhombus represents inputs and outputs and a rectangle represents a process.
Flowcharts are also useful as educational tools because they focus more on the concept of
programming rather than on the syntax of languages. Another type of design tool is
pseudocode. Pseudocode is very similar to a programming language except that it uses non-
syntactical words to summarize the processes of a program. Pseudocode cannot be compiled
or executed but it does serve as a good starting point for programmers. Here is an example
of pseudocode:
If user’s age is greater than or equal to 18:
    Print “You can vote”
Else:
    Print “You cannot vote”
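The same logic, expressed in actual Python syntax (a minimal sketch; the variable name age
and the prompt text are illustrative choices), would look like this:

age = int(input("Enter your age: "))   # read the user's age as a number
if age >= 18:
    print("You can vote")
else:
    print("You cannot vote")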

Compiler

A compiler is a special program that processes statements written in a particular
programming language and turns them into machine language, or "code," that a computer's
processor uses. When executing (running), the compiler first parses (or analyzes) all of
the language statements syntactically, one after the other, and then, in one or more
successive stages or "passes," builds the output code, making sure that statements that
refer to other statements are referred to correctly in the final code. A compiler works with
what are sometimes called 3GLs (FORTRAN, BASIC, COBOL, C, etc.) and higher-level
languages. There are one-pass and multi-pass compilers, as well as just-in-time compilers,
stage compilers, and source-to-source compilers. The compiler front end analyzes the source
code to build an internal representation of the program, called the intermediate representation.
The compiler back end includes three main phases: analysis, optimization, and
code generation. Because compilers translate source code into object code, which is
unique for each type of computer, many compilers are available for the same language.
For example, there is a FORTRAN compiler for PCs and another for Apple Macintosh
computers. In addition, the compiler industry is quite competitive, so there are actually
many compilers for each language on each type of computer. More than a dozen
companies develop and sell compilers for the PC. There is also something called a
decompiler, which translates from a low-level language to a high-level language.

Control Structures

A control structure specifies how functions, statements, and instructions
are performed in a program or module. It determines exactly when an instruction
is performed and how it is performed. Most importantly, a control structure shows the
order of the instructions. There are three basic types of control structures: sequence,
selection, and repetition. Choosing a specific control structure depends on what you want
the program or module to accomplish. A sequence control structure is the simplest and
least complex control structure. Sequence control structures are instructions that are
executed one after another. The structure could be compared to following a recipe. A
more complex control structure might be a selection control structure, a structure that
involves conditions or decisions. This means that the structure can allow different sets of
instructions to be executed depending on whether a condition is true or false. The last
basic control structure is a repetition control structure, which is sometimes called an
iteration control structure. This control structure is used when repeating a group of code
is necessary. The code will be repeated until a condition is reached. Repetition control
structures are used when looping is needed to reach a specific outcome.
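As a brief illustration (a Python sketch written for this explanation, not tied to any particular
program), the fragment below uses all three basic structures: the statements run in sequence,
the if/else performs a selection, and the while loop provides repetition:

# Sequence: these statements execute one after another
total = 0
count = 0

# Repetition (iteration): the loop body repeats until the condition becomes false
while count < 5:
    total = total + count
    count = count + 1

# Selection: one of two branches runs, depending on the condition
if total > 5:
    print("Total is large:", total)
else:
    print("Total is small:", total)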

Testing Program Design

Good program design needs to be specific. The program design is very important,
especially because it involves the overall step-by-step directions regarding the program.
A programmer must test the program design to ensure that it runs correctly and that there
are no mistakes. The operation a programmer must do to complete this task is called
desk checking. Desk checking allows the programmer to run through the program design
step-by-step. Essentially, the programmer runs through lines of code to identify potential
errors and to check the logic. The programmer uses tracing tables to keep track of any
loop counters. The goal of checking the program design is to avoid running into mistakes
further on in the program development cycle. The sooner the mistake is caught in the
development cycle the better. If the error is not found until later in the developmental
cycle, it may delay a project. Therefore, a programmer must make sure they pay strict
attention while desk checking. Advantages to desk checking include the convenience of
hands-on "proof-reading" of the programmer’s own code. The programmers wrote the
code themselves, so it is an advantage that they can work immediately with familiar code.
A disadvantage to the desk checking system includes potential human error. Since a
computer is not checking the design code, it is prone to human error.

Debugging

Debugging is basically making sure that a program does not have any bugs (errors) so
that it can run properly without any problems. Debugging is a large part of what a
programmer does. The first step to debugging is done before you can actually debug the
program; the program needs to be changed into machine language so that the computer
can read it. It is converted using a language translator. The first goal of debugging is to
get rid of syntax errors and any errors that prevent the program from running. Errors that
prevent the program from running are compiler errors. These need to be removed right
away because otherwise you cannot test any of the other parts of the program. Syntax
errors occur when the programmer has not followed the correct rules of the programming
language. Another kind of error is a runtime error, which occurs while the program is
running and is not noticed until after all syntax errors are corrected. Many runtime errors
are caused by logic errors, which are errors in the logic of the program. A logic error can occur
when a formula is written incorrectly or when a wrong variable name is used.
There are different types of debugging techniques that can be used. One technique, called
print debugging (also known as the printf method), finds errors by watching print (or
trace) statements, live or recorded, to see the execution flow of the process. This method
originated in early versions of the BASIC programming language. Remote debugging
is the method of finding errors using a remote system or network, and using that different
system to run the program and collect information to find the error in the code. If the
program has already crashed, then post-mortem debugging can be used through various
tracing techniques and by analyzing the memory dump of the program. Another technique,
created by Edward Gauss, is called wolf-fence debugging. Basically, this method finds
the error by zeroing in on the problem through continuous divisions or sectioning until the bug
is found. Similar to this is the Saff squeeze technique, which uses progressive inlining of
a failing test to isolate the problem.
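As a simple illustration of print debugging (a hypothetical Python function written for this
example), trace statements are added so the programmer can watch the execution flow and
the intermediate values:

def average(values):
    print("average() called with:", values)   # trace statement
    total = sum(values)
    print("intermediate total =", total)      # trace statement
    result = total / len(values)
    print("returning", result)                # trace statement
    return result

average([2, 4, 6])   # the printed trace shows where a wrong value first appears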
Debugging a program can be done by using the tools provided in debugging software.
Typically, and especially with high-level programming languages, specific debugging
tools are already included in the development environment. Having language-specific
debugging tools makes it easier to detect the errors in code, because they can look for known
errors as opposed to tediously “walking through” the code manually. It is also good to note
that fixing one bug manually may lead to another bug; this is another reason why
language-specific debugging tools are helpful. There is also debugging software for embedded
systems.

Testing/Implementation and Maintenance

Many things need to happen before a program is up and running and can
be used. One step is to test the program. After the debugging process occurs, another
programmer needs to test the program for any additional errors that could be involved in
the background of the program. This person needs to perform all of the tasks that an
actual user of the program would perform. To protect privacy, test data is used in the testing
process; however, this data still has the same structure and feel as the actual data. The tester
needs to check for possible input errors as well, as these would create many problems and
issues in the future if they were not checked. Companies
usually implement different types of tests. An Alpha test is first conducted, which is on-
site at the company, and Beta tests are sent out to different states or countries to ensure
the program is 100% ready for use. The Alpha test occurs before the Beta test. Once the
debugging and testing are finished, the program is in place and the implementation phase is
complete; the maintenance phase then begins. Program maintenance still needs
to be kept up, in case of future errors. Maintenance is often the most costly phase for
organizations because the programmers need to keep improving and fixing issues within the
program.
As stated earlier, a program goes through extensive testing before it is released to the
public for use. The two types of testing are called Alpha and Beta testing. First, it is
important to know what each test does. Alpha testing is done “in house” so to speak. It is
done within a company prior to sending it to Beta testing and its intention in this early
stage is to improve the product as much as possible to get it Beta ready. Beta testing is
done “out of house” and gives real customers a chance to try the program with the set
intention of catching any bugs or errors prior to it being fully released. Alpha testing is the
phase that takes the longest and can sometimes last three to five times longer than Beta.
However, Beta testing can be completed in just a few weeks to a month, assuming no
major bugs are detected. Alpha testing is typically performed by engineers or other
employees of the company while Beta testing occurs in the “real world”, temporarily being
released to the public to get the widest range of feedback possible. During Alpha testing,
it is common for a good number of bugs to be detected, as well as missing features.
During Beta testing, there should be a big decrease in the number of these problems.
When testing in the Alpha phase is over, companies have a good sense of how the
product performs. After Beta testing is complete, the company has a good idea of what
the customer thinks and what they experienced while testing. If all goes well in both
phases, the product is ready to be released and enjoyed by the public. The length of time
and effort that is put forth in order for the world to enjoy and utilize the many programs on
computers today is often overlooked. Information such as this gives the user a new
appreciation for computers and computer programs.
Flowcharts and Algorithms

The algorithm and the flowchart are two types of tools used to explain the process of a program.
This section discusses the differences between an algorithm and a flowchart and how to create
a flowchart to illustrate an algorithm visually.

Algorithms and flowcharts are two different tools that are helpful for creating new programs,
especially in computer programming. An algorithm is a step-by-step analysis of the process,
while a flowchart explains the steps of a program in a graphical way.

Part 1: Definition of Algorithm

Writing a logical step-by-step method to solve the problem is called the algorithm. In other
words, an algorithm is a procedure for solving problems. In order to solve a mathematical or
computer problem, this is the first step in the process.

An algorithm includes calculations, reasoning, and data processing. Algorithms can be
presented using natural language, pseudocode, flowcharts, etc.
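For example, a simple algorithm to find the larger of two numbers can be written as a series
of natural-language steps (an illustrative example written for these notes):

Step 1: Start.
Step 2: Read two numbers, A and B.
Step 3: If A is greater than B, the larger number is A; otherwise, the larger number is B.
Step 4: Display the larger number.
Step 5: Stop.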

Part 2: Definition of Flowchart


A flowchart is a graphical or pictorial representation of an algorithm that uses different
symbols, shapes, and arrows to demonstrate a process or a program. With a flowchart, we can
easily understand a program. The main purpose of using a flowchart is to analyze different
methods of solving a problem. Several standard symbols are applied in a flowchart:

Terminal Box - Start / End

Input / Output

Process / Instruction

Decision

Connector / Arrow

The symbols above represent different parts of a flowchart. The process in a flowchart can be
expressed through boxes and arrows with different sizes and colors. In a flowchart, we can
easily highlight certain elements and the relationships between each part.

Part 3: Difference between Algorithm and Flowchart

If you compare a flowchart to a movie, then an algorithm is the story of that movie. In other
words, an algorithm is the core of a flowchart. Actually, in the field of computer
programming, there are many differences between algorithm and flowchart regarding various
aspects, such as the accuracy, the way they display, and the way people feel about them.
In brief, an algorithm describes the steps precisely in words, while a flowchart presents the
same steps graphically, which many people find easier to follow.
Data structures – definition

A data structure is a specialized format for organizing, processing, retrieving and storing
data.

There are several basic and advanced types of data structures, all designed to arrange data
to suit a specific purpose.

Data structures make it easy for users to access and work with the data they need in
appropriate ways.

Five factors to consider when picking a data structure include the following:

1. What kind of information will be stored?
2. How will that information be used?
3. Where should data persist, or be kept, after it is created?
4. What is the best way to organize the data?
5. What aspects of memory and storage reservation management should be considered?

Some examples of how data structures are used include the following:

 Storing data. Data structures are used for efficient data persistence, such as specifying
the collection of attributes and corresponding structures used to store records in a
database management system.

 Managing resources and services. Core operating system (OS) resources and
services are enabled through the use of data structures such as linked lists for memory
allocation, file directory management and file structure trees, as well as process
scheduling queues.

 Data exchange. Data structures define the organization of information shared between
applications, such as TCP/IP packets.

 Ordering and sorting. Data structures such as binary search trees -- also known as an
ordered or sorted binary tree -- provide efficient methods of sorting objects, such as
character strings used as tags. With data structures such as priority queues,
programmers can manage items organized according to a specific priority.

 Indexing. Even more sophisticated data structures such as B-trees are used to index
objects, such as those stored in a database.
 Searching. Indexes created using binary search trees, B-trees or hash tables speed the
ability to find a specific sought-after item.

 Scalability. Big data applications use data structures for allocating and managing data
storage across distributed storage locations, ensuring scalability and performance.

Characteristics of data structures

Data structures are often classified by their characteristics. The following three
characteristics are examples:

1. Linear or non-linear. This characteristic describes whether the data items are arranged
in sequential order, such as with an array, or in an unordered sequence, such as with a
graph.

2. Homogeneous or heterogeneous. This characteristic describes whether all data items
in a given repository are of the same type, as in a collection of elements in an array, or of
various types, such as an abstract data type defined as a structure in C or a class
specification in Java.

3. Static or dynamic. This characteristic describes how the data structures are compiled.
Static data structures have fixed sizes, structures and memory locations at compile time.
Dynamic data structures have sizes, structures and memory locations that can shrink or
expand, depending on the use.
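As a small illustration (using Python, whose built-in list is a dynamic structure; a fixed-size
array in a language such as C would be a static structure whose size is set at compile time):

items = []            # an empty dynamic structure
items.append(10)      # the structure grows as elements are added
items.append(20)
print(len(items))     # prints 2; the size changed at run time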

Data types

If data structures are the building blocks of algorithms and computer programs, the primitive
-- or base -- data types are the building blocks of data structures. The typical base data
types include the following:

 Boolean, which stores logical values that are either true or false.
 Integer, which stores a range of mathematical integers -- or counting numbers. Different-
sized integers hold different ranges of values -- e.g., a signed 8-bit integer holds values
from -128 to 127, and an unsigned 32-bit integer holds values from 0 to
4,294,967,295.
 Floating-point numbers, which store a formulaic representation of real numbers.
 Fixed-point numbers, which are used in some programming languages and hold real
values but are managed as digits to the left and the right of the decimal point.
 Character, which uses symbols from a defined mapping of integer values to symbols.
 Pointers, which are reference values that point to other values.
 String, which is an array of characters followed by a stop code -- usually a "0" value -- or
is managed using a length field that is an integer value.
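As a rough illustration in Python (the mapping is approximate, since Python manages integer
sizes, characters, and references automatically):

flag = True                # Boolean
count = -42                # integer
price = 3.14               # floating-point number
letter = "A"               # character (represented as a one-character string in Python)
name = "data structures"   # string (a sequence of characters)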

Types of data structures

The data structure type used in a particular situation is determined by the type of operations
that will be required or the kinds of algorithms that will be applied. The various data structure
types include the following:

 Array. An array stores a collection of items at adjoining memory locations. Items that are
the same type are stored together so the position of each element can be calculated or
retrieved easily by an index. Arrays can be fixed or flexible in length.

An array can hold a collection of integers, floating-point numbers, strings, or even other arrays.

 Stack. A stack stores a collection of items in the linear order in which operations are applied,
following a last in, first out (LIFO) order.

 Queue. A queue stores a collection of items like a stack; however, the operation order can
only be first in, first out.

 Linked list. A linked list stores a collection of items in a linear order. Each element, or
node, in a linked list contains a data item, as well as a reference, or link, to the next item
in the list.
Linked list data structures are a set of nodes that contain data and the address or a pointer to
the next node.

 Tree. A tree stores a collection of items in an abstract, hierarchical way. Each node is
associated with a key value, with parent nodes linked to child nodes -- or subnodes. There
is one root node that is the ancestor of all the nodes in the tree.

A binary search tree is a set of nodes where each node has a value and can point to two child
nodes.

 Heap. A heap is a tree-based structure in which each parent node's associated key value
is greater than or equal to the key values of any of its children (in a max heap; in a min
heap the relation is reversed).

 Graph. A graph stores a collection of items in a nonlinear fashion. Graphs are made up of
a finite set of nodes, also known as vertices, and lines that connect them, also known as
edges. These are useful for representing real-world systems such as computer networks.

 Trie. A trie, also known as a keyword tree, is a data structure that stores strings as data
items that can be organized in a visual graph.
 Hash table. A hash table -- also known as a hash map -- stores a collection of items in an
associative array that maps keys to values. A hash table uses a hash function to convert
a key into an index into an array of buckets that contain the desired data item.

Hashing is a data structure technique where key values are converted into indexes of an array
where the data is stored.

These are considered complex data structures as they can store large amounts of
interconnected data.
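To make a few of these structures concrete, here is a short Python sketch (an illustration using
the standard library, not a full implementation of the structures described above):

from collections import deque

# Stack: last in, first out (LIFO), using a Python list
stack = []
stack.append(1)
stack.append(2)
print(stack.pop())        # prints 2, the most recently added item

# Queue: first in, first out (FIFO), using a deque
queue = deque()
queue.append("a")
queue.append("b")
print(queue.popleft())    # prints "a", the earliest added item

# Hash table: Python's dict maps keys to values via hashing
table = {"apple": 3, "pear": 5}
print(table["apple"])     # prints 3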
