
PROJECT REPORT

ON

Knapsack
Submitted to Panjab University, Chandigarh
in partial fulfillment of the requirement for the degree of
M.Sc. (IT) 1st Semester

SUBMITTED TO:                          SUBMITTED BY:

Mr. Varun Jain                         Manish Kumar (6522)

Department Of Computer Science And Applications


S.C.D Govt. College
Ludhiana.
CERTIFICATE

This is to certify that the project entitled "Knapsack" has been submitted in fulfillment of the requirement for the degree of Masters in Information Technology of Panjab University, Chandigarh. This project is the bona fide work of Manish Kumar, and no part of it has been submitted for any other degree.

Principal
MR. DHARAM SINGH

Head of Department Project Guide


_________________
MR. VARUN JAIN

STUDENT DECLARATION

I hereby declare that the project entitled "Knapsack", submitted by Manish Kumar in partial fulfillment of the requirement for the degree of Master of Science in Information Technology (M.Sc. (IT)), session 2014-2015, to Panjab University, Chandigarh, is an original work and has not been submitted for the award of any other degree, diploma, scholarship or any other similar title or prize.

Place: Ludhiana

Date:

MANISH KUMAR

Project Guide                          Head of Department

Mr. Varun Jain                         ________________

ACKNOWLEDGEMENT

We owe a great many thanks to the many people who helped and supported us in the development of this project report.
Our deepest thanks go to our lecturer, the guide of the project, for guiding us and correcting our various documents with attention and care. He has taken pains to go through the project and make necessary corrections as and when needed.
We express our thanks to the Principal, Mr. Dharam Singh, of S.C.D Govt. College, for extending his support.
We would also like to thank our institution and our faculty members, without whom this project would have remained a distant dream. We also extend our heartfelt thanks to our families and well-wishers.

MANISH KUMAR

TABLE OF CONTENTS
Contents                                               Page No.   Signature
1. Introduction To ADA                                 6
   1.1 Algorithm                                       7
   1.2 Pseudocode                                      7-8
   1.3 ADA                                             9
   1.4 Algorithm Specifications                        10
2. Performance Analysis                                12
   2.1 Algorithm Complexity                            13
       2.1.1 Space Complexity                          13-14
       2.1.2 Time Complexity
   2.2 Asymptotic Notations                            15
   2.3 Three Cases To Analyze An Algorithm             16-17
3. Methods                                             18
   3.1 Divide And Conquer                              19
   3.2 Greedy Method                                   19
   3.3 Dynamic Programming                             20
   3.4 Backtracking                                    21-23
   3.5 Branch And Bound                                24
4. Introduction To Project                             25
   4.1 Abstract                                        26
   4.2 Basic Introduction                              27-30
       4.2.1 About Intro
   4.3 Working Of Project                              31
   4.4 0-1 Knapsack Problem                            33
   4.5 Example                                         39
   4.6 Hardware And Software Requirement &             40-44
       Software Development Life Cycle
   4.7 Dynamic Programming                             45
   4.8 Source Code                                     55
   4.9 Screenshots & Bibliography                      59-60
5. Thank You                                           65
1.1 Algorithm
An algorithm is a set of rules for carrying out a calculation, either by hand or on a machine.
An algorithm is a well-defined computational procedure that takes input and produces output.
An algorithm is a finite sequence of instructions or steps to achieve some particular output.
Any algorithm must satisfy the following criteria (or properties):
1. Input: It generally requires a finite number of inputs.
2. Output: It must produce at least one output.
3. Uniqueness: Each instruction should be clear and unambiguous.
4. Finiteness: It must terminate after a finite number of steps.
5. Effectiveness: Every instruction must be so basic that it can be carried out, in principle, by a person using only pencil and paper.

Expressing algorithms
Algorithms can be expressed in many kinds of notation, including flowcharts, natural language, pseudocode and control tables. Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, and control tables are structured ways to express algorithms that avoid many of the ambiguities common in natural language statements. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are also often used as a way to define or document algorithms.

The main issues in the analysis of an algorithm are:

I. What data structures to use (lists, queues, stacks, heaps, trees, etc.)?
II. Is it correct (all of the time, or only most of the time)?
III. How efficient is it (asymptotically fixed, or dependent on the inputs)?
IV. Does an efficient algorithm exist at all (i.e., is P = NP or not)?

1.2 Pseudocode
Pseudocode is an informal high-level description of the operating principle of a computer program or other algorithm.

It uses the structural conventions of a programming language, but is intended for human reading rather than machine reading. Pseudocode typically omits details that are not essential for human understanding of the algorithm, such as variable declarations, system-specific code and some subroutines. The programming language is augmented with natural language description details, where convenient, or with compact mathematical notation. The purpose of using pseudocode is that it is easier for people to understand than conventional programming language code, and that it is an efficient and environment-independent description of the key principles of an algorithm. It is commonly used in textbooks and scientific publications that document various algorithms, and also in the planning of computer program development, for sketching out the structure of the program before the actual coding takes place.
No standard for pseudocode syntax exists, as a program in pseudocode is not an executable program. Pseudocode resembles, but should not be confused with, skeleton programs, including dummy code, which can be compiled without errors. Flowcharts and Unified Modeling Language (UML) charts can be thought of as graphical alternatives to pseudocode, but take up more space on paper.
Syntax
As the name suggests, pseudocode generally does not obey the syntax rules of any particular language; there is no systematic standard form, although any particular writer will generally borrow style and syntax from some conventional language. Variable declarations are typically omitted. Function calls and blocks of code, such as code contained within a loop, are often replaced by a one-line natural language sentence.
C-style pseudocode:

void function fizzbuzz
{
    for (i = 1; i <= 100; i++) {
        set print_number to true;
        if i is divisible by 3 {
            print "Fizz";
            set print_number to false; }
        if i is divisible by 5 {
            print "Buzz";
            set print_number to false; }
        if print_number, print i;
        print a newline;
    }
}
1.3 ADA (Algorithm Design and Analysis):

Ada is an internationally standardized, general-purpose programming language used in a wide variety of applications, from missile control to payroll processing to air traffic control.
Ada contains features commonly found in other programming languages and provides additional support for modern programming practices, for controlling special-purpose hardware to meet real-time deadlines, and for the creation and enhancement of large and complex programs by groups of programmers over long periods of time.
Ada encourages good programming practices by incorporating software engineering principles with strong typing, modularity, portability, reusability and readability. These features reduce the costs of software development, verification, debugging, and maintenance that typically put strain on an organization's resources over the life of the software.
Portability
Ada code developed for one system can easily be recompiled and ported to other systems, since all Ada compilers are validated up-front and Ada is an internationally standardized language (MIL-STD-1815A, ANSI, and ISO).
Modularity
Ada organizes code into self-contained units that can be planned, written, compiled, and tested
separately; this feature allows programs to be written in portions by teams working in parallel
before being integrated into the final product.
Reusability
Ada's package concept allows users to develop software components that may be retrieved, used,
and/or changed without affecting the rest of the program. Ada's Generic program units also allow
programmers to perform the same logical function on more than one type of data. Packages and
Generics also support data abstraction and object-oriented design.
Reliability
Ada's strong typing helps detect errors in both initial and separate unit compilations. Ada's
exception handling mechanism supports fault-tolerant applications by providing a complete and
portable way of detecting and gracefully responding to error conditions. Ada's tasking features
support parallelism using high-level constructs instead of ad-hoc, error-prone calls to operating
system primitives.

Maintainability

Ada's program structuring, based on modularity and a high level of readability, makes it easier for one programmer to modify or enhance software written by another. Modularity also allows package modification without affecting other program modules.
1.4 Algorithm Specifications:

1. Comments begin with // and continue till the end of the line.

2. Blocks are indicated with matching braces { and }. A compound statement can be represented as a block.

3. An identifier begins with a letter. Data types of variables are not explicitly declared; the type of a variable will be clear from the context, as will whether it is local or global.

4. Assignment of values to variables is done using the assignment operator:
(variable) := (expression);

5. There are two boolean values, true and false. To produce these values, the logical operators and, or, and not and the relational operators <, <=, =, != and > are provided.

6. Elements of multidimensional arrays are accessed using [ and ].

7. A conditional statement has one of the following forms:
(i) if (condition) then (statement)
(ii) if (condition) then (statement 1) else (statement 2)
Here (condition) is a boolean expression and (statement), (statement 1) and (statement 2) are arbitrary statements (simple or compound).
(iii) We can also employ the following case statement:
case
{
    :(condition 1): (statement 1)
    :else: (statement n+1)
}

8. Input and output are done using the instructions read and write.

9. There is only one type of procedure: Algorithm. An algorithm consists of a heading and a body. The heading takes the following form:

Algorithm Name((parameter list))
10. The following looping statements are employed:

(i) The general form of the while loop is:

while (condition) do
{
    (statement 1)
    ...
    (statement n)
}

(ii) The general form of the for loop is:

for variable := value1 to value2 do
{
    (statement 1)
    ...
    (statement n)
}

(iii) The repeat-until statement is constructed as follows:

repeat
    (statement 1)
    ...
    (statement n)
until (condition)
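To illustrate the notation, here is a small example algorithm written using the conventions above (our illustration; it does not appear in the original specification). It finds the largest value stored in an array a[1 : n]:

```
Algorithm Max(a, n)
// a is an array of size n; returns the value of its largest element.
{
    result := a[1];
    for i := 2 to n do
        if (a[i] > result) then result := a[i];
    return result;
}
```

Note how the conventions are used: no declarations for a, n, i or result, := for assignment, a one-line if-then conditional, and a for loop over an index range.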

2.1 Algorithm Complexity

Algorithmic complexity is concerned with how fast or slow a particular algorithm performs. We define complexity as a numerical function T(n): time versus the input size n. We want to define the time taken by an algorithm without depending on the implementation details, but T(n) does depend on the implementation! A given algorithm will take different amounts of time on the same inputs depending on such factors as processor speed, instruction set, disk speed, and brand of compiler. The way around this is to estimate the efficiency of each algorithm asymptotically. We will measure time T(n) as the number of elementary "steps" (defined in any way), provided each such step takes constant time.

Let us consider a classical example: addition of two integers. We will add two integers digit by digit (or bit by bit), and this will define a "step" in our computational model. Therefore, we say that addition of two n-bit integers takes n steps. Consequently, the total computational time is T(n) = c * n, where c is the time taken by the addition of two bits. On different computers, the addition of two bits might take different times, say c1 and c2; thus the addition of two n-bit integers takes T(n) = c1 * n and T(n) = c2 * n respectively. This shows that different machines result in different slopes, but time T(n) grows linearly as the input size increases.

The process of abstracting away details and determining the rate of resource usage in terms of
the input size is one of the fundamental ideas in computer science.
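The digit-by-digit model can be made concrete. The C++ sketch below (our illustration, not from the report; the function name and representation are assumptions) adds two base-10 numbers stored as digit arrays and counts one "step" per digit position, so the step count grows linearly with the number of digits n:

```cpp
#include <vector>
#include <cstddef>

// Add two base-10 numbers stored least-significant-digit first.
// Each digit position processed counts as one elementary "step",
// so adding two n-digit numbers takes n steps: T(n) = c * n.
std::vector<int> add_digits(const std::vector<int>& a,
                            const std::vector<int>& b,
                            std::size_t& steps) {
    std::vector<int> sum;
    int carry = 0;
    std::size_t n = a.size() > b.size() ? a.size() : b.size();
    for (std::size_t i = 0; i < n; ++i) {
        int da = i < a.size() ? a[i] : 0;
        int db = i < b.size() ? b[i] : 0;
        int s = da + db + carry;
        sum.push_back(s % 10);   // digit of the result at position i
        carry = s / 10;
        ++steps;                 // one step per digit position
    }
    if (carry) sum.push_back(carry);
    return sum;
}
```

Whatever constant time one machine needs per digit, the count of steps itself depends only on n.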

2.2 Space complexity


It is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm. We often speak of "extra" memory needed, not counting the memory needed to store the input itself. Again, we use natural (but fixed-length) units to measure this. We can use bytes, but it's easier to use, say, the number of integers used, the number of fixed-sized structures, etc. In the end, the function we come up with will be independent of the actual number of bytes needed to represent the unit. Space complexity is sometimes ignored because the space used is minimal and/or obvious, but sometimes it becomes as important an issue as time.
Algorithm abc(a, b, c)
{
    return a+b+b*c+(a+b-c)/(a+b)+4.0;
}
Here the space needed is independent of the instance, so the algorithm's (extra) space requirement is constant. In general, we might say "this algorithm takes n^2 time," where n is the number of items in the input. Or we might say "this algorithm takes constant extra space," because the amount of extra memory needed doesn't vary with the number of items processed.

2.3 Time complexity


It is a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm. "Time" can mean the number of memory accesses performed, the number of comparisons between integers, the number of times some inner loop is executed, or some other natural unit related to the amount of real time the algorithm will take. We try to keep this idea of time separate from "wall clock" time, since many factors unrelated to the algorithm itself can affect the real time (like the language used, the type of computing hardware, the proficiency of the programmer, optimization in the compiler, etc.). It turns out that, if we choose the units wisely, all of the other stuff doesn't matter and we can get an independent measure of the efficiency of the algorithm. The time T(P) taken by a program P is the sum of its compile time and its run time.
Algorithm Sum(a, n)
{
    for i := 1 to n do
        count := count + 2;
    count := count + 3;
}
Here count is incremented by 2 on each of the n loop iterations and by 3 afterwards, so on termination count = 2n + 3; the step count of Sum is therefore linear in n.
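The count-based analysis above can be checked directly. This C++ sketch (ours, not from the report) mirrors Algorithm Sum: each iteration contributes 2 steps and the remaining statements contribute 3, giving 2n + 3 steps in total:

```cpp
// Mirror of Algorithm Sum(a, n): count the elementary steps executed.
// Each loop iteration contributes 2 steps (loop control plus the body);
// initialization, the final loop test, and the return contribute 3 more.
int sum_step_count(int n) {
    int count = 0;
    for (int i = 1; i <= n; ++i)
        count += 2;     // per-iteration steps
    count += 3;         // steps outside the loop
    return count;       // equals 2n + 3
}
```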

2.4 Asymptotic Notations

Let f be a nonnegative function. Then we define the three most common asymptotic bounds as follows:
Big-O
We say that f(n) is Big-O of g(n), written as f(n) = O(g(n)), iff there are positive constants c and n0 such that
f(n) <= c*g(n) for all n >= n0.
If f(n) = O(g(n)), we say that g(n) is an upper bound on f(n). For example, 3n + 2 = O(n), since 3n + 2 <= 4n for all n >= 2.
Big-Omega
We say that f(n) is Big-Omega of g(n), written as f(n) = Ω(g(n)), iff there are positive constants c and n0 such that
c*g(n) <= f(n) for all n >= n0.
If f(n) = Ω(g(n)), we say that g(n) is a lower bound on f(n).
Big-Theta
We say that f(n) is Big-Theta of g(n), written as f(n) = Θ(g(n)), iff there are positive constants c1, c2 and n0 such that
c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
Equivalently, f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)). If f(n) = Θ(g(n)), we say that g(n) is a tight bound on f(n).
Little-o
The function f(n) = o(g(n)) (read as "f of n is little-o of g of n") iff
lim f(n)/g(n) = 0
n→∞
Little-omega
The function f(n) = ω(g(n)) (read as "f of n is little-omega of g of n") iff
lim g(n)/f(n) = 0
n→∞

2.5 Three Cases to Analyze an Algorithm
Average Case
Average-case complexity is a subfield of computational complexity theory that studies the complexity of algorithms on random inputs. Average-case analysis often seems more relevant than the worst case. Indeed, although NP-complete problems are generally thought of as being computationally intractable, some are easy on average, while others are complete even in the average case, indicating that they remain difficult on randomly generated instances. The desire to distinguish (standard, worst-case) NP-complete problems that are "easy on average" from those that are "difficult on average" motivated the study of average-case NP-completeness, which opened a new front in complexity theory.
The average-case complexity of an algorithm is the function defined by the average number of steps taken on any instance of size n. Determining what an average input means is difficult, and often that average input has properties which make it difficult to characterise mathematically (consider, for instance, algorithms that are designed to operate on strings of text). Similarly, even when a sensible description of a particular "average case" (which will probably only be applicable for some uses of the algorithm) is possible, it tends to result in a more difficult analysis.
Best Case
The term best-case performance is used in computer science to describe an algorithm's behavior
under optimal conditions. For example, the best case for a simple linear search on a list occurs
when the desired element is the first element of the list.
The best-case complexity of the algorithm is the function defined by the minimum number of
steps taken on any instance of size n. It represents the curve passing through the lowest point of
each column.
Development and choice of algorithms is rarely based on best-case performance: most academic and commercial enterprises are more interested in improving average-case and worst-case performance.

Worst Case
In computer science, the worst-case complexity (usually denoted in asymptotic notation) measures the resources (e.g. running time, memory) an algorithm requires in the worst case. It gives an upper bound on the resources required by the algorithm.
In the case of running time, the worst-case time complexity indicates the longest running time of an algorithm given any input of size n, and thus gives a guarantee that the algorithm will finish within that time. Moreover, the order of growth of the worst-case complexity is used to compare the efficiency of two algorithms.
The worst-case complexity of an algorithm should be contrasted with its average-case complexity, which is an average measure of the amount of resources the algorithm uses on a random input.
Worst-case performance analysis and average case performance analysis have some similarities,
but in practice usually require different tools and approaches.

3.1 Divide and Conquer
In computer science, divide and conquer is an important algorithm design paradigm. It works by recursively breaking down a problem into two or more sub-problems of the same type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. A divide and conquer algorithm is closely tied to a type of recurrence relation between functions of the data in question: data is "divided" into smaller portions and the result calculated thence. A divide and conquer algorithm consists of three steps:

1. Breaking the problem into several sub-problems that are similar to the original problem but smaller in size.
2. Solving the sub-problems recursively (successively and independently).
3. Combining these solutions to the sub-problems to create a solution to the original problem.

The technique is named "divide and conquer" because a problem is conquered by dividing it into several smaller problems. This technique yields elegant, simple and quite often very efficient algorithms. For example, if the work of splitting the problem and combining the partial solutions is proportional to the problem's size n, there are a bounded number b of sub-problems of size n/b at each stage, and the base cases require O(1) (constant-bounded) time, then the divide and conquer algorithm will have O(n log n) complexity. This is used for problems such as sorting to reduce the complexity from O(n*n), although in general there may also be other approaches to designing efficient algorithms.
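As a concrete instance of the three steps, here is a minimal merge sort in C++ (our illustration; the report does not give code for this). The array is divided into two halves (b = 2 sub-problems of size n/2), each half is sorted recursively, and the merge step combines the solutions in time proportional to n, which yields the O(n log n) bound mentioned above:

```cpp
#include <vector>

// Combine step: merge two sorted halves a[lo..mid) and a[mid..hi).
static void merge(std::vector<int>& a, int lo, int mid, int hi) {
    std::vector<int> tmp;
    int i = lo, j = mid;
    while (i < mid && j < hi)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < mid) tmp.push_back(a[i++]);
    while (j < hi)  tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); ++k) a[lo + k] = tmp[k];
}

// Sort a[lo..hi): divide in half, conquer each half recursively, combine.
void merge_sort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo <= 1) return;          // base case: O(1)
    int mid = lo + (hi - lo) / 2;      // divide
    merge_sort(a, lo, mid);            // conquer left half
    merge_sort(a, mid, hi);            // conquer right half
    merge(a, lo, mid, hi);             // combine: O(n) work per level
}
```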

3.2 Greedy Method


The greedy method is one of the simplest approaches to solving optimization problems, in which we want to determine the global optimum of a given function by a sequence of steps, where at each stage we can make a choice among a class of possible decisions. In the greedy method, the choice of optimal decision is made on the information at hand, without worrying about the effect these decisions may have in the future. The "greedy choice property" and "optimal substructure" are the two ingredients in a problem that lend it to a greedy strategy: a globally optimal solution can be arrived at by making a locally optimal choice.
 Often we are looking at optimization problems whose exhaustive solution is exponential.
 For an optimization problem, we are given a set of constraints and an optimization function.
 Solutions that satisfy the constraints are called feasible solutions.
 A feasible solution for which the optimization function has the best possible value is called an optimal solution.
In a greedy method we attempt to construct an optimal solution in stages.
 At each stage we make the decision that appears to be the best (under some criterion) at the time.
 A decision made at one stage is not changed in a later stage, so each decision should assure feasibility.
Consider getting the best major: what is best now may be worst later.
 A greedy criterion for making change could be: at each stage, increase the total amount of change constructed as much as possible.
 A greedy solution is optimal for some change (coin) systems.

Machine scheduling:
 We have a number of jobs to be done on a minimum number of machines. Each job has a start and an end time.
 Order the jobs by start time.
 If an old machine becomes available by the start time of the task to be assigned, assign the task to this machine; if not, assign it to a new machine.
 This is an optimal solution.
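The scheduling rule above can be sketched in C++ (our illustration, with an assumed representation of jobs as start/end pairs): sort the jobs by start time, and reuse a machine whose current job has ended by the new job's start, opening a new machine otherwise. A min-heap of machine finish times makes the reuse check efficient:

```cpp
#include <vector>
#include <queue>
#include <algorithm>
#include <utility>

// Greedy machine scheduling: return the minimum number of machines
// needed so that no machine runs two overlapping jobs.
// Jobs are (start, end) pairs; end times are exclusive.
int min_machines(std::vector<std::pair<int,int>> jobs) {
    // Greedy criterion: process jobs in order of start time.
    std::sort(jobs.begin(), jobs.end());
    // Min-heap of finish times of the machines currently in use.
    std::priority_queue<int, std::vector<int>, std::greater<int>> free_at;
    int machines = 0;
    for (const auto& job : jobs) {
        if (!free_at.empty() && free_at.top() <= job.first) {
            free_at.pop();            // reuse the machine that frees up earliest
        } else {
            ++machines;               // no machine available: open a new one
        }
        free_at.push(job.second);
    }
    return machines;
}
```

Each assignment is made once and never revised, which is exactly the greedy property described above.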

3.3 Dynamic Programming

Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure (described below). When applicable, the method takes far less time than naive methods that don't take advantage of the subproblem overlap (like depth-first search).

The idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often, when using a more naive method, many of the subproblems are generated and solved many times. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored; the next time the same solution is needed, it is simply looked up. This approach is especially useful when the number of repeating subproblems grows exponentially as a function of the size of the input.

Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points, or the fastest way to multiply many matrices). A dynamic programming algorithm will examine all possible ways to solve the problem and will pick the best solution. Therefore, dynamic programming enables us to go through all possible solutions to pick the best one. If the scope of the problem is such that going through all possible solutions is possible and fast enough, dynamic programming guarantees finding the optimal solution. The alternatives are many, such as using a greedy algorithm, which picks the best possible choice "at any possible branch in the road". While a greedy algorithm does not guarantee the optimal solution, it is faster. Fortunately, some greedy algorithms (such as minimum spanning trees) are proven to lead to the optimal solution.

For example, let's say that you have to get from point A to point B as fast as possible, in a given
city, during rush hour. A dynamic programming algorithm will look into the entire traffic report,
looking into all possible combinations of roads you might take, and will only then tell you which
way is the fastest. Of course, you might have to wait for a while until the algorithm finishes, and
only then can you start driving. The path you will take will be the fastest one (assuming that
nothing changed in the external environment). On the other hand, a greedy algorithm will start
you driving immediately and will pick the road that looks the fastest at every intersection. As
you can imagine, this strategy might not lead to the fastest arrival time, since you might take
some "easy" streets and then find yourself hopelessly stuck in a traffic jam.
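A minimal illustration of "solve each subproblem only once" (our example, not from the report) is computing Fibonacci numbers with memoization: the naive recursion solves the same subproblems exponentially many times, while storing each result once makes the computation linear:

```cpp
#include <vector>

// Fibonacci with memoization: each subproblem F(i) is computed once,
// stored in 'memo', and looked up on every later request
// (top-down dynamic programming).
long long fib(int n, std::vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];   // already solved: just look it up
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

long long fib(int n) {
    std::vector<long long> memo(n + 1, -1);  // -1 marks "not yet computed"
    return fib(n, memo);
}
```

Without the memo table the call tree for fib(50) would have billions of nodes; with it, only 51 distinct subproblems are ever solved.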

3.4 Backtracking

Backtracking is a general algorithm for finding all (or some) solutions to a computational problem, that incrementally builds candidates to the solutions, and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.

 The classic textbook example of the use of backtracking is the eight queens puzzle, which asks for all arrangements of eight chess queens on a standard chessboard so that no queen attacks any other. In the common backtracking approach, the partial candidates are arrangements of k queens in the first k rows of the board, all in different rows and columns. Any partial solution that contains two mutually attacking queens can be abandoned, since it cannot possibly be completed to a valid solution.
 Backtracking is a useful technique for optimizing search under some constraints.
 Express the desired solution as an n-tuple (x1, ..., xn) where each xi ∈ Si, Si being a finite set.
 The solution is based on finding one or more vectors that maximize, minimize, or satisfy a criterion function P(x1, ..., xn).
 Example: sorting an array a[n].
 Find an n-tuple where the element xi is the index of the ith smallest element in a.
 The criterion function is given by a[xi] <= a[xi+1] for 1 <= i < n.
 The set Si is a finite set of integers in the range [1, n].
Brute force approach:
 Let the size of set Si be mi.
 There are m = m1*m2*...*mn n-tuples that satisfy the criterion function P.
 In a brute force algorithm, you have to form all m n-tuples to determine the optimal solutions by evaluating each against P.
Backtracking approach:
 Requires fewer than m trials to determine the solution.
 Form a solution (partial vector) one component at a time, and check at every step if it has any chance of success.
 If the solution at any point seems unpromising, ignore it.
 If the partial vector (x1, x2, ..., xi) cannot yield an optimal solution, ignore the mi+1*...*mn possible test vectors that extend it, without even looking at them.
 Effectively, backtracking finds solutions to a problem by incrementally building candidates to the solutions, and abandons each partial candidate that cannot possibly be completed to a valid solution.
 Backtracking is only applicable to problems which admit the concept of a partial candidate solution and a relatively quick test of whether the partial solution can grow into a complete solution.

 If a problem does not satisfy the above constraint, backtracking is not applicable.
 Backtracking is not very efficient for finding a given value in an unordered list.
 All the solutions require a set of constraints divided into two categories: explicit and implicit constraints.
 Determine the problem solution by systematically searching the solution space for the given problem instance.
 Use a tree organization for the solution space.
 8-queens problem:
 Place eight queens on an 8 by 8 chessboard so that no queen attacks another queen.
 A queen attacks another queen if the two are in the same row, column, or diagonal.

[Board diagram in the original: an 8 x 8 chessboard with one queen placed in each row and each column so that no two queens attack each other.]

Fig: Solution to the 8-Queens Problem
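The eight queens strategy described above can be sketched as a backtracking routine in C++ (our illustration; function names are ours). Queens are placed row by row, a partial arrangement is extended only if the new queen attacks none already placed, and an unpromising partial candidate is abandoned by backtracking to the previous row:

```cpp
#include <vector>
#include <cstdlib>

// Can a queen be placed in 'row' at column 'col', given queens already
// placed in rows 0..row-1?  cols[r] is the column of the queen in row r.
static bool safe(const std::vector<int>& cols, int row, int col) {
    for (int r = 0; r < row; ++r) {
        if (cols[r] == col) return false;                     // same column
        if (std::abs(cols[r] - col) == row - r) return false; // same diagonal
    }
    return true;
}

// Count all arrangements of n non-attacking queens by backtracking.
static int place(std::vector<int>& cols, int row, int n) {
    if (row == n) return 1;               // all rows filled: one full solution
    int count = 0;
    for (int col = 0; col < n; ++col) {
        if (safe(cols, row, col)) {       // extend the partial candidate
            cols[row] = col;
            count += place(cols, row + 1, n);
        }                                 // otherwise: abandon and backtrack
    }
    return count;
}

int count_queens(int n) {
    std::vector<int> cols(n, -1);
    return place(cols, 0, n);
}
```

For n = 8 this finds the well-known 92 solutions while examining only a tiny fraction of the 8^8 candidate tuples.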

3.5 Branch and bound

Branch and bound (BB or B&B) is a general algorithm for finding optimal solutions of various
optimization problems, especially in discrete and combinatorial optimization. A branch-and-
bound algorithm consists of a systematic enumeration of all candidate solutions, where large
subsets of fruitless candidates are discarded en masse, by using upper and lower estimated
bounds of the quantity being optimized.

Terminology:

Live node: a node that has been generated but whose children have not yet been generated.
E-node: a live node whose children are currently being explored. In other words, an E-node is a node currently being expanded.
Dead node: a generated node that is not to be expanded or explored any further.

The term branch and bound refers to all state space search methods in which all children of the E-node are generated before any other live node can become the E-node. In the graph search strategies BFS and D-search, the exploration of a new node cannot begin until the node currently being explored is fully explored; both of these therefore generalize to branch and bound strategies.

In branch and bound terminology, a BFS-like state space search is called FIFO (First-In-First-Out) search, as the list of live nodes is a first-in-first-out list (or queue). A D-search-like state space search is called LIFO (Last-In-First-Out) search, as the list of live nodes is a last-in-first-out list (or stack). As in the case of backtracking, bounding functions are used to help avoid the generation of sub-trees that do not contain an answer node.
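A minimal LIFO-style (stack-based, D-search-like) branch and bound for 0-1 knapsack (our sketch, anticipating the project's problem; names and the simple bound are our assumptions). Each node decides whether to include the next item, and a node becomes a dead node when an optimistic upper bound on its subtree, value so far plus all remaining item values, cannot beat the best complete solution found so far:

```cpp
#include <vector>

struct Item { int weight, value; };

// Explore the node (item index i, remaining capacity, value so far).
// 'remaining' is the total value of items i..end; value + remaining is an
// optimistic bound, so a node whose bound cannot beat 'best' is killed.
static void search(const std::vector<Item>& items, int i, int cap,
                   int value, int remaining, int& best) {
    if (value > best) best = value;
    if (i == (int)items.size()) return;
    if (value + remaining <= best) return;   // bound: prune a fruitless subtree
    remaining -= items[i].value;             // now the value of items i+1..end
    if (items[i].weight <= cap)              // branch 1: take item i
        search(items, i + 1, cap - items[i].weight,
               value + items[i].value, remaining, best);
    search(items, i + 1, cap, value, remaining, best);  // branch 2: skip it
}

int knapsack_bb(const std::vector<Item>& items, int capacity) {
    int remaining = 0;
    for (const Item& it : items) remaining += it.value;
    int best = 0;
    search(items, 0, capacity, 0, remaining, best);
    return best;
}
```

A tighter bound (e.g. the fractional-knapsack relaxation) would prune more nodes; the structure of the search stays the same.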

Abstract
This paper describes a research project on using Genetic Algorithms (GAs) to solve the 0-1 Knapsack Problem (KP). The Knapsack Problem is an example of a combinatorial optimization problem, which seeks to maximize the benefit of objects in a knapsack without exceeding its capacity. The paper contains three sections: a brief description of the basic idea and elements of GAs, a definition of the Knapsack Problem, and an implementation of the 0-1 Knapsack Problem using GAs. The main focus of the paper is on the implementation of the algorithm for solving the problem. In the program, we implemented two selection functions, roulette-wheel and group selection. The results from both of them differed depending on whether we used elitism or not. Elitism significantly improved the performance of the roulette-wheel function. Moreover, we tested the program with different crossover ratios and single and double crossover points, but the results were not that different.

Basic Introduction

In this project we use Genetic Algorithms to solve the 0-1 Knapsack problem, where one has to maximize the benefit of objects in a knapsack without exceeding its capacity.
Since the Knapsack problem is an NP-hard problem, exact approaches such as dynamic programming, backtracking, branch and bound, etc. can become impractical as instances grow large. Genetic Algorithms prove to be a very effective approach for obtaining good solutions to problems traditionally thought of as computationally infeasible, such as the Knapsack problem.

The Knapsack Problem (KP)
Definition
The KP problem is an example of a combinatorial optimization problem, which seeks a best solution from among many candidate solutions. It is concerned with a knapsack that has positive integer volume (or capacity) V. There are n distinct items that may potentially be placed in the knapsack. Item i has a positive integer volume Vi and a positive integer benefit Bi. In addition, there are Qi copies of item i available, where quantity Qi is a positive integer satisfying 1 ≤ Qi ≤ ∞.
Let Xi determine how many copies of item i are to be placed into the knapsack. The goal is to:

Maximize        Σ (i = 1 to N) Bi * Xi

subject to the constraints

                Σ (i = 1 to N) Vi * Xi ≤ V

and

                0 ≤ Xi ≤ Qi.

If one or more of the Qi is infinite, the KP is unbounded;


otherwise, the KP is bounded [3]. The bounded KP can be ei-
ther 0-1 KP or Multiconstraint KP. If Qi = 1 for i = 1, 2, …,
N, the problem is a 0-1 knapsack problem In the current pa-
per, we have worked on

27
the bounded 0-1 KP, where we cannot have more than one
copy of an item in the knapsack.

0-1 Knapsack Algorithm

for w = 0 to W
    V[0,w] = 0
for i = 1 to n
    V[i,0] = 0
for i = 1 to n
    for w = 0 to W
        if wi <= w                          // item i can be part of the solution
            if bi + V[i-1, w-wi] > V[i-1, w]
                V[i,w] = bi + V[i-1, w-wi]
            else
                V[i,w] = V[i-1, w]
        else
            V[i,w] = V[i-1, w]              // wi > w

Running time

for w = 0 to W
    V[0,w] = 0                  // O(W)
for i = 1 to n
    V[i,0] = 0                  // O(n)
for i = 1 to n
    for w = 0 to W
        < the rest of the code >    // O(nW)

What is the running time of this algorithm? It is O(nW).
Remember that the brute-force algorithm takes O(2^n).

Working of Project


What Does the Problem Actually Say?


1. Given a set of items, each with a weight and a value.
2. Determine the number of each item to include in a
   collection so that the total weight is less than a
   given limit and the total value is as large as possible.
3. The problem derives its name from the one faced by someone
   who is constrained by a fixed-size knapsack and must
   fill it with the most useful items.

The 0-1 Knapsack Problem

The difference between this problem and the fractional one is that
you can't take a fraction of an item. You either take the whole thing
or none of it. So here is the problem, formally described:

Your goal is to maximize the value of a knapsack that can hold at
most W units worth of goods from a list of items I0, I1, ... In-1. Each
item has two attributes:

1) Value - let this be vi for item Ii.

2) Weight - let this be wi for item Ii.

Now, instead of being able to take a certain weight of an item, you
can only either take the item or not take the item.

The naive way to solve this problem is to cycle through all 2^n subsets
of the n items and pick the subset with a legal weight that maximizes
the value of the knapsack. But we can find a dynamic programming
algorithm that will USUALLY do better than this brute-force technique.

Our first attempt might be to characterize a sub-problem as follows:

Let Sk be the optimal subset of elements from {I0, I1,... Ik}. But what
we find is that the optimal subset from the elements {I0, I1,... Ik+1}
may not correspond to the optimal subset of elements from {I0, I1,...
Ik} in any regular pattern. Basically, the solution to the optimization
problem for Sk+1 might NOT contain the optimal solution from
problem Sk.

To illustrate this, consider the following example:

Item Weight Value


I0 3 10
I1 8 4
I2 9 9
I3 8 11

The maximum weight the knapsack can hold is 20.

The best set of items from {I0, I1, I2} is {I0, I1, I2}, but the best set of
items from {I0, I1, I2, I3} is {I0, I2, I3}. In this example, note that this
optimal solution, {I0, I2, I3}, does NOT build upon the previous op-
timal solution, {I0, I1, I2}. (Instead it builds upon the solution {I0,
I2}, which is really the optimal subset of {I0, I1, I2} with weight 12
or less.)

So, now, we must rework our idea. In particular, after trial and
error we may come up with the following:

Let B[k, w] represent the maximum total value of a subset of
{I0, I1, ... Ik} with weight at most w. Our goal is to find B[n-1, W],
where n is the total number of items and W is the maximal weight
the knapsack can carry.

Using this definition, we have:

B[0, w] = v0, if w >= w0
        = 0, otherwise

Now, we can derive the following relationship that B[k, w] obeys:

B[k, w] = B[k-1, w], if wk > w
        = max { B[k-1, w], B[k-1, w-wk] + vk }, otherwise

In English, here is what this is saying:

1) The maximum value of a knapsack with a subset of items from
{I0, I1, ... Ik} with weight w is the same as the maximum value of a
knapsack with a subset of items from {I0, I1, ... Ik-1} with weight w, if
item k weighs more than w.

Basically, you can NOT increase the value of your knapsack with
weight w if the new item you are considering weighs more than w,
because it WON'T fit!

2) The maximum value of a knapsack with a subset of items from
{I0, I1, ... Ik} with weight w could be the same as the maximum value
of a knapsack with a subset of items from {I0, I1, ... Ik-1} with weight
w, if item k should not be added into the knapsack.

OR

3) The maximum value of a knapsack with a subset of items from
{I0, I1, ... Ik} with weight w could be the same as the maximum value
of a knapsack with a subset of items from {I0, I1, ... Ik-1} with weight
w-wk, plus item k.

You need to compare the values of the knapsacks in cases 2 and 3
and take the maximal one.

Recursively, we will STILL have an O(2^n) algorithm. But, using dy-
namic programming, we simply have to do a double loop: one loop
running n times and the other loop running W times.

Here is a dynamic programming algorithm to solve the 0-1 Knap-
sack problem:

Input: S, a set of n items as described earlier, W the total weight of
the knapsack. (Assume that the weights and values are stored in
separate arrays named w and v, respectively.)

Output: The maximal value of items in a valid knapsack.

int cap, k;                     // loop variable renamed to avoid
for (cap = 0; cap <= W; cap++)  // clashing with the array named w
    B[cap] = 0;

for (k = 0; k < n; k++) {
    for (cap = W; cap >= w[k]; cap--) {
        if (B[cap - w[k]] + v[k] > B[cap])
            B[cap] = B[cap - w[k]] + v[k];
    }
}

Note on run time: Clearly the run time of this algorithm is O(nW),
based on the nested loop structure and the simple operation inside
both loops. When comparing this with the previous O(2^n), we find
that depending on W, either the dynamic programming algorithm
or the brute-force algorithm may be more efficient. (For example,
for n=5, W=100000, brute force is preferable, but for n=30 and
W=1000, the dynamic programming solution is preferable.)

Let's run through an example:

i   Item  wi  vi
0   I0    4   6
1   I1    2   4
2   I2    3   5
3   I3    1   3
4   I4    6   9
5   I5    4   7

W = 10

B[item][w], for w = 0 .. 10:

Item   0   1   2   3   4   5   6   7   8   9  10

 0     0   0   0   0   6   6   6   6   6   6   6

 1     0   0   4   4   6   6  10  10  10  10  10

 2     0   0   4   5   6   9  10  11  11  15  15

 3     0   3   4   7   8   9  12  13  14  15  18

 4     0   3   4   7   8   9  12  13  14  16  18

 5     0   3   4   7   8  10  12  14  15  16  19

Minimum Hardware and Software Requirements
Hardware Requirements
- RAM ----------------------------- 60 MB

- Hard Disk ----------------------- 20 MB or more

- Processor ----------------------- Core 2 Duo

- I/O devices --------------------- Keyboard, mouse, monitor

Software Requirements
1. Windows 3.0, Windows XP, or a higher operating system.

2. TURBO C

The above system requirements are the minimum requirements needed to run
this system.

Software Development Life Cycle

A development process consists of different phases, each phase having a
well-defined role and a definite output. These phases, also called phases of software evolution,
are performed in the order specified by the process model being followed. The tools
include compilers, debuggers, environment and change management, source control, project manage-
ment, etc. The documents produced include requirements that define the problem, customer manuals,
test plans and implementation plans. Following are the phases of the system development life cycle:

1. Feasibility study
2. Requirement Analysis
3. Design
4. Coding
5. Testing
6. Implementation
7. Maintenance

In general, the SDLC (Software Development Life Cycle) is the process of developing
software through business needs, analysis, design, implementation and maintenance.

Objectives: The main objectives of the SDLC are:

1. Develop the system using an identical, measurable and repeatable process.

2. Identify and assign the roles and responsibilities to the affected processes,
including the technical managers.

3. Ensure project management accountability.

4. Identify errors and correct them before they become large.

SDLC Phases

Project Planning → Requirements Definition → Design → Coding → Testing → Implementation → Maintenance

SDLC With Respect To Project

1. Feasibility Phase
The feasibility study is an important phase in any software development
process, because it analyzes different aspects such as the cost of developing and executing the
system, the time required for each phase of the system, and many other things related to effort
and people.

The feasibility study is an analysis of a proposed project with emphasis on the attainable income and
the most advantageous design and use. The main things that need to be studied in this phase include:

- The skills required for maintenance in later stages of the project.
- Whether the project can be completed within the estimated budget.
- The scope for future expansion of the project.

2. Requirement Analysis Phase:

The analysis phase defines the requirements of
the system, independent of how these requirements will be accomplished. This phase defines
the problem that the customer is trying to solve.
Requirement analysis encompasses those tasks that go into determining the needs to be
met for a new or altered product, taking account of the possibly conflicting requirements
of the various stakeholders, such as beneficiaries or users.

3. The Design Phase:

In the design phase the architecture is established. This phase starts with
the requirement document delivered by the requirement phase and maps the requirements onto
the architecture. The architecture defines the components, their interfaces and behaviors. The
deliverable of this phase is the design document, which describes the architecture and a plan
to implement the requirements.
System or software design is the activity where software requirements
are analyzed in order to produce a description of the internal structure and organization of the
system that will serve as the basis for its construction.

DYNAMIC PROGRAM ON KNAPSACK
With screen shots:

#include <stdio.h>
#define max(a,b) ((a) > (b) ? (a) : (b))

int matrix[100][100] = {0};   /* memo table: matrix[index][size] */

int knapsack(int index, int size, int weights[], int values[]) {
    int take, dontTake;
    take = dontTake = 0;

    if (matrix[index][size] != 0)          /* already computed? */
        return matrix[index][size];

    if (index == 0) {                      /* base case: first item only */
        if (weights[0] <= size) {
            matrix[index][size] = values[0];
            return values[0];
        } else {
            matrix[index][size] = 0;
            return 0;
        }
    }

    if (weights[index] <= size)            /* option 1: take item */
        take = values[index] + knapsack(index-1, size-weights[index],
                                        weights, values);

    dontTake = knapsack(index-1, size, weights, values);  /* option 2: skip */

    matrix[index][size] = max(take, dontTake);
    return matrix[index][size];
}

int main() {
    int nItems = 4;
    int knapsackSize = 10;
    int weights[4] = {5,4,6,3};
    int values[4] = {10,40,30,50};

    printf("Max value = %d\n",
           knapsack(nItems-1, knapsackSize, weights, values));
    return 0;
}

#include <stdio.h>
#define max(a,b) ((a) > (b) ? (a) : (b))

int matrix[100][100] = {0};   /* memo table: matrix[index][size] */
int picks[100][100] = {0};    /* 1 = item taken here, -1 = not taken */

int knapsack(int index, int size, int weights[], int values[]) {
    int take, dontTake;
    take = dontTake = 0;

    if (matrix[index][size] != 0)
        return matrix[index][size];

    if (index == 0) {
        if (weights[0] <= size) {
            picks[index][size] = 1;
            matrix[index][size] = values[0];
            return values[0];
        } else {
            picks[index][size] = -1;
            matrix[index][size] = 0;
            return 0;
        }
    }

    if (weights[index] <= size)
        take = values[index] + knapsack(index-1, size-weights[index],
                                        weights, values);

    dontTake = knapsack(index-1, size, weights, values);

    matrix[index][size] = max(take, dontTake);

    if (take > dontTake)
        picks[index][size] = 1;
    else
        picks[index][size] = -1;

    return matrix[index][size];
}

void printPicks(int item, int size, int weights[]) {
    while (item >= 0) {
        if (picks[item][size] == 1) {
            printf("%d ", item);
            size -= weights[item];   /* subtract this item's weight first */
            item--;
        } else {
            item--;
        }
    }
    printf("\n");
}

int main() {
    int nItems = 4;
    int knapsackSize = 10;
    int weights[4] = {5,4,6,3};
    int values[4] = {10,40,30,50};

    printf("Max value = %d\n\n",
           knapsack(nItems-1, knapsackSize, weights, values));

    printf("Picks were: ");
    printPicks(nItems-1, knapsackSize, weights);

    return 0;
}

#include <stdio.h>

void knapsack(int n, float weight[], float profit[], float capacity) {
    float x[20], tp = 0, u;   /* u must be float: it holds leftover capacity */
    int i;

    u = capacity;

    for (i = 0; i < n; i++)
        x[i] = 0.0;

    for (i = 0; i < n; i++) {         /* take whole items while they fit */
        if (weight[i] > u)
            break;
        else {
            x[i] = 1.0;
            tp = tp + profit[i];
            u = u - weight[i];
        }
    }

    if (i < n) {                      /* take a fraction of the next item */
        x[i] = u / weight[i];
        tp = tp + (x[i] * profit[i]);
    }

    printf("\nThe result vector is:- ");
    for (i = 0; i < n; i++)
        printf("%f\t", x[i]);

    printf("\nMaximum profit is:- %f", tp);
}

int main() {
    float weight[20], profit[20], capacity;
    int num, i, j;
    float ratio[20], temp;

    printf("\nEnter the no. of objects:- ");
    scanf("%d", &num);

    printf("\nEnter the weights and profits of each object:- ");
    for (i = 0; i < num; i++) {
        scanf("%f %f", &weight[i], &profit[i]);
    }

    printf("\nEnter the capacity of knapsack:- ");
    scanf("%f", &capacity);

    for (i = 0; i < num; i++) {
        ratio[i] = profit[i] / weight[i];
    }

    /* sort items by profit/weight ratio in decreasing order */
    for (i = 0; i < num; i++) {
        for (j = i + 1; j < num; j++) {
            if (ratio[i] < ratio[j]) {
                temp = ratio[j];  ratio[j] = ratio[i];  ratio[i] = temp;

                temp = weight[j]; weight[j] = weight[i]; weight[i] = temp;

                temp = profit[j]; profit[j] = profit[i]; profit[i] = temp;
            }
        }
    }

    knapsack(num, weight, profit, capacity);

    return 0;
}

Screen Shots

Bibliography:-

Beyond the guidance of the staff members, we have consulted
some books and websites while developing the project. Some
of these books and websites are:

- Fundamentals of Computer Algorithms
- Theory and Programs of Computer Graphics
- www.geeksforgeeks.org
- www.wikipedia.org
- www.thenewboston.org

THANK YOU