ADA
ON
KNAPSACK
Submitted to Panjab University, Chandigarh
in fulfillment of the requirement for the degree of
M.Sc. (IT), 1st Semester
This is to certify that the project entitled "Knapsack" has been submitted in
fulfillment of the requirement for the degree of Master in Information Technology
of Panjab University, Chandigarh. This project is the bonafide work of Manish Kumar
and no part of it has been submitted for any other degree.
Principal
Mr. Dharam Singh
STUDENT DECLARATION
Place: Ludhiana
Date:
MANISH KUMAR
ACKNOWLEDGEMENT
We owe a great many thanks to the many people who helped and supported us
in the development of this project report.
Our deepest thanks to our lecturer, the guide of the project, for guiding us and
correcting our various documents with attention and care. She has taken pains to go
through the project and make the necessary corrections as and when needed.
We express our thanks to the Principal, Mr. Dharam Singh of S.C.D. Govt.
College, for extending his support.
We would also like to thank our institution and our faculty members, without
whom this project would have remained a distant reality. We also extend our
heartfelt thanks to our families and well-wishers.
MANISH KUMAR
TABLE OF CONTENTS
Contents
1. Introduction To ADA
1.1. Algorithm
1.2. Pseudocode
1.3. ADA
1.4. Algorithm Specifications
2. Performance Analysis
2.1. Algorithm Complexity
2.1.1. Space Complexity
2.1.2. Time Complexity
2.4. Asymptotic Notations
2.5. Three Cases to Analyze an Algorithm
3. Algorithm Design Techniques
3.1. Divide and Conquer
3.2. The Greedy Method
3.3. Dynamic Programming
3.4. Backtracking
3.5. Branch and Bound
4. Introduction To Project
4.1. Abstract
4.2. Basic Introduction
4.2.1. About Intro
4.3. Working of Project
4.4. 0-1 Knapsack Problem
4.5. Example
4.6. Hardware and Software Requirements & Software Development Life Cycle
4.7. Dynamic Programming
4.8. Source Code
4.9. Screenshots & Bibliography
5. Thank You
1.1 Algorithm
An algorithm is a set of rules for carrying out a calculation, either by hand or on a machine.
An algorithm is a well-defined computational procedure that takes input and produces output.
Equivalently, an algorithm is a finite sequence of instructions or steps designed to achieve some
particular output.
Any algorithm must satisfy the following criteria (or properties):
1. Input: It generally requires a finite number of inputs.
2. Output: It must produce at least one output.
3. Uniqueness: Each instruction should be clear and unambiguous.
4. Finiteness: It must terminate after a finite number of steps.
5. Effectiveness: Every instruction must be basic enough that it can be carried out, in
principle, by a person using only pencil and paper.
Expressing algorithms
Algorithms can be expressed in many kinds of notation, including flowcharts, natural language,
pseudocode, and control tables. Natural language expressions of algorithms tend to be verbose
and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode,
flowcharts, and control tables are structured ways to express algorithms that avoid many of the
ambiguities common in natural language statements. Programming languages are primarily
intended for expressing algorithms in a form that can be executed by a computer, but are often
used as a way to define or document algorithms.
1.2 Pseudocode
Pseudocode is an informal high-level description of the operating principle of a computer pro-
gram or other algorithm.
It uses the structural conventions of a programming language, but is intended for human reading
rather than machine reading. Pseudocode typically omits details that are not essential for human
understanding of the algorithm, such as variable declarations, system-specific code and some
subroutines. The programming language is augmented with natural language description details,
where convenient, or with compact mathematical notation. The purpose of using pseudocode is
that it is easier for people to understand than conventional programming language code, and that
it is an efficient and environment-independent description of the key principles of an algorithm.
It is commonly used in textbooks and scientific publications that are documenting various algo-
rithms, and also in planning of computer program development, for sketching out the structure of
the program before the actual coding takes place.
No standard for pseudocode syntax exists, as a program in pseudocode is not an executable
program. Pseudocode resembles, but should not be confused with, skeleton programs and
dummy code, which can be compiled without errors. Flowcharts and Unified Modeling
Language (UML) charts can be thought of as graphical alternatives to pseudocode, but take
more space on paper.
Syntax
As the name suggests, pseudocode generally does not actually obey the syntax rules of any
particular language; there is no systematic standard form, although any particular writer will
generally borrow style and syntax from some conventional language. Variable declarations are
typically omitted. Function calls and blocks of code, such as code contained within a loop, are
often replaced by a one-line natural language sentence.
C style pseudocode:

void function fizzbuzz {
    for (i = 1; i <= 100; i++) {
        set print_number to true;
        if i is divisible by 3
            print "Fizz";
            set print_number to false;
        if i is divisible by 5
            print "Buzz";
            set print_number to false;
        if print_number, print i;
        print a newline;
    }
}
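
For comparison, the same logic as real, compilable C might look like the following sketch; it
follows the pseudocode line by line and assumes nothing beyond the standard library:

#include <stdio.h>
#include <stdbool.h>

/* A direct C translation of the fizzbuzz pseudocode above. */
int main(void) {
    for (int i = 1; i <= 100; i++) {
        bool print_number = true;
        if (i % 3 == 0) {          /* "i is divisible by 3" */
            printf("Fizz");
            print_number = false;
        }
        if (i % 5 == 0) {          /* "i is divisible by 5" */
            printf("Buzz");
            print_number = false;
        }
        if (print_number)
            printf("%d", i);
        printf("\n");              /* "print a newline" */
    }
    return 0;
}

Notice how each natural-language line of the pseudocode became a precise C statement, which
is exactly the detail pseudocode deliberately leaves out.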
1.3 ADA (Algorithm Design and Analysis):
Maintainability
Ada's program structuring, based on modularity and a high level of readability, makes it easier
for one programmer to modify or enhance software written by another. Modularity also allows a
package to be modified without affecting other program modules.
1.4 Algorithm Specifications:
Algorithm Name(<parameter list>)
10. The following looping statements are employed:
(i) The general form of while is:
    while <condition> do
    {
        <statement 1>
        ...
        <statement n>
    }
2.1 Algorithm Complexity
Algorithmic complexity is concerned with how fast or slow a particular algorithm performs. We
define complexity as a numerical function T(n) - time versus the input size n. We want to define
the time taken by an algorithm without depending on the implementation details. But you could
object that T(n) does depend on the implementation! A given algorithm will take different
amounts of time on the same inputs depending on such factors as processor speed, instruction
set, disk speed, and brand of compiler. The way around this is to estimate the efficiency of each
algorithm asymptotically. We measure time T(n) as the number of elementary "steps" (defined
in any way), provided each such step takes constant time.
Let us consider a classical example: addition of two integers. We will add the two integers digit
by digit (or bit by bit), and this will define a "step" in our computational model. Therefore, we
say that addition of two n-bit integers takes n steps. Consequently, the total computational time
is T(n) = c * n, where c is the time taken by the addition of two bits. On different computers,
the addition of two bits might take different times, say c1 and c2; thus the addition of two n-bit
integers takes T(n) = c1 * n and T(n) = c2 * n respectively. This shows that different machines
result in different slopes, but the time T(n) grows linearly as the input size increases.
The process of abstracting away details and determining the rate of resource usage in terms of
the input size is one of the fundamental ideas in computer science.
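
As a small illustration of this counting model, the sketch below adds two n-digit numbers
(stored as digit arrays, a representation chosen here purely for illustration) and counts one
"step" per digit position, so the step count comes out to exactly n:

#include <stdio.h>

/* Add two n-digit numbers stored least-significant-digit first.
   Each loop iteration is one elementary "step", so T(n) = n steps. */
int add_digits(const int a[], const int b[], int sum[], int n) {
    int carry = 0, steps = 0;
    for (int i = 0; i < n; i++) {
        int s = a[i] + b[i] + carry;
        sum[i] = s % 10;
        carry = s / 10;
        steps++;              /* one constant-time step per digit */
    }
    sum[n] = carry;           /* possible final carry digit */
    return steps;
}

int main(void) {
    int a[] = {9, 4, 3};      /* 349, least significant digit first */
    int b[] = {5, 7, 1};      /* 175 */
    int sum[4];
    int steps = add_digits(a, b, sum, 3);
    printf("steps = %d\n", steps);   /* prints: steps = 3 */
    return 0;
}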
The same abstraction applies to space. For example, consider a function that performs only a
fixed computation on its arguments:

float abc(float a, float b, float c) {
    return a+b+b*c+(a+b-c)/(a+b)+4.0;
}

Such a function uses a constant amount of space regardless of its input values. In this spirit, we
might say "this algorithm takes n^2 time," where n is the number of items in the input. Or we
might say "this algorithm takes constant extra space," because the amount of extra memory
needed doesn't vary with the number of items processed.
2.4 Asymptotic Notations
Let f be a nonnegative function. Then we define the three most common asymptotic bounds as
follows:
Big-O
We say that f(n) is Big-O of g(n), written as f(n) = O(g(n)), iff there are positive constants c and
n0 such that
f(n) <= c*g(n) for all n >= n0
If f(n) = O(g(n)), we say that g(n) is an upper bound on f(n).
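As a quick worked instance: f(n) = 3n + 2 is O(n), because 3n + 2 <= 4n for all n >= 2, so the
definition is satisfied with c = 4 and n0 = 2.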
Big-Omega
We say that f(n) is Big-Omega of g(n), written as f(n) = Ω(g(n)), iff there are positive constants
c and n0 such that
c*g(n) <= f(n) for all n >= n0.
If f(n) = Ω(g(n)), we say that g(n) is a lower bound on f(n).
Big-Theta
We say that f(n) is Big-Theta of g(n), written as f(n) = Θ(g(n)), iff there are positive constants
c1, c2 and n0 such that
c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
Equivalently, f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)). If f(n) = Θ(g(n)),
we say that g(n) is a tight bound on f(n).
Little-o
The function f(n) = o(g(n)) (read as "f of n is little-o of g of n") iff
lim (n→∞) f(n)/g(n) = 0.
Little-omega
The function f(n) = ω(g(n)) (read as "f of n is little-omega of g of n") iff
lim (n→∞) g(n)/f(n) = 0.
2.5 Three Cases to Analyze an Algorithm
Average Case
Average-case complexity is a subfield of computational complexity theory that studies the
complexity of algorithms on random inputs. The average-case complexity of an algorithm is the
function defined by the average number of steps taken on any instance of size n. Average-case
analysis often seems more relevant than the worst case: although NP-complete problems are
generally thought of as computationally intractable, some are easy on average, while some are
complete in the average case, indicating that they remain difficult even on randomly generated
instances. The study of average-case NP-completeness aims to distinguish (standard,
worst-case) NP-complete problems that are "easy on average" from those that are "difficult on
average."
Determining what an average input means is difficult, and often that average input has
properties which make it difficult to characterise mathematically (consider, for instance,
algorithms that are designed to operate on strings of text). Similarly, even when a sensible
description of a particular "average case" (which will probably only be applicable for some
uses of the algorithm) is possible, it tends to result in a more difficult analysis.
Best Case
The term best-case performance is used in computer science to describe an algorithm's behavior
under optimal conditions. For example, the best case for a simple linear search on a list occurs
when the desired element is the first element of the list.
The best-case complexity of an algorithm is the function defined by the minimum number of
steps taken on any instance of size n. It represents the curve passing through the lowest point of
each column.
Development and choice of algorithms is rarely based on best-case performance: most
academic and commercial enterprises are more interested in improving average-case and
worst-case performance.
Worst Case
In computer science, the worst-case complexity (usually denoted in asymptotic notation)
measures the resources (e.g. running time, memory) an algorithm requires in the worst case. It
gives an upper bound on the resources required by the algorithm.
In the case of running time, the worst-case time complexity indicates the longest running time
of an algorithm over all inputs of size n, and thus guarantees that the algorithm finishes within
that time. Moreover, the order of growth of the worst-case complexity is used to compare the
efficiency of two algorithms.
The worst-case complexity of an algorithm should be contrasted with its average-case
complexity, which is an average measure of the amount of resources the algorithm uses on a
random input.
Worst-case performance analysis and average-case performance analysis have some
similarities, but in practice they usually require different tools and approaches.
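
The linear search mentioned above makes the three cases concrete. This small sketch (the
sample data is chosen here for illustration) takes 1 comparison in the best case, about n/2 on
average over random positions, and n in the worst case:

#include <stdio.h>

/* Returns the index of key in a[0..n-1], or -1 if absent.
   Best case: key is a[0] (1 comparison). Worst case: key is last
   or absent (n comparisons). Average case: about n/2 comparisons. */
int linear_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void) {
    int a[] = {7, 3, 9, 1, 5};
    printf("%d\n", linear_search(a, 5, 7));  /* best case: found at index 0 */
    printf("%d\n", linear_search(a, 5, 5));  /* worst case: found at index 4 */
    printf("%d\n", linear_search(a, 5, 8));  /* worst case: absent, -1 */
    return 0;
}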
3.1 Divide and Conquer
In computer science, Divide and Conquer is an important algorithm design paradigm. It works
by recursively breaking a problem down into two or more sub-problems of the same type, until
these become simple enough to be solved directly. The solutions to the sub-problems are then
combined to give a solution to the original problem. A Divide and Conquer algorithm is closely
tied to a type of recurrence relation between functions of the data in question: data is "divided"
into smaller portions and the result calculated from them. The Divide and Conquer approach
consists of three steps:
1. Divide the problem into several sub-problems that are similar to the original problem but
smaller in size.
2. Solve the sub-problems recursively (successively and independently).
3. Combine the solutions to the sub-problems to create a solution to the original problem.
The technique is named "Divide and Conquer" because a problem is conquered by dividing it
into several smaller problems. This technique yields elegant, simple and quite often very
efficient algorithms. For example, if the work of splitting the problem and combining the partial
solutions is proportional to the problem's size n, there are a bounded number b of sub-problems
of size n/b at each stage, and the base cases require O(1) (constant-bounded) time, then the
Divide and Conquer algorithm will have O(n log n) complexity. This is used for problems such
as sorting to reduce the complexity from O(n^2), although in general there may also be other
approaches to designing efficient algorithms. A merge sort sketch illustrating the three steps
follows.
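
Here is a minimal merge sort in C (the array contents and the scratch-buffer size are
illustrative assumptions): splitting in half is the divide step, the recursive calls conquer, and
merge combines, giving the O(n log n) behavior described above.

#include <stdio.h>
#include <string.h>

/* Combine step: merge two sorted halves a[lo..mid] and a[mid+1..hi]. */
static void merge(int a[], int lo, int mid, int hi) {
    int tmp[64];                       /* scratch buffer; assumes hi-lo+1 <= 64 */
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp, k * sizeof(int));
}

/* Divide: split in half. Conquer: sort each half recursively. */
static void merge_sort(int a[], int lo, int hi) {
    if (lo >= hi) return;              /* base case: 0 or 1 element, O(1) */
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, lo, mid);
    merge_sort(a, mid + 1, hi);
    merge(a, lo, mid, hi);             /* combine in O(n) work per level */
}

int main(void) {
    int a[] = {5, 2, 9, 1, 7, 3};
    merge_sort(a, 0, 5);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);   /* 1 2 3 5 7 9 */
    printf("\n");
    return 0;
}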
3.2 The Greedy Method
For an optimization problem, we are given a set of constraints and an optimization function.
Solutions that satisfy the constraints are called feasible solutions. A feasible solution for which
the optimization function has the best possible value is called an optimal solution.
In a greedy method we attempt to construct an optimal solution in stages. At each stage we
make the decision that appears to be the best (under some criterion) at the time. A decision
made at one stage is not changed in a later stage, so each decision should assure feasibility.
Consider choosing the best major: what is best now may be worst later.
For example, when constructing change from coins, a greedy criterion could be: at each stage,
increase the total amount of change constructed as much as possible. A sketch of this idea
follows.
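
The following minimal sketch applies that greedy criterion to coin change (the denominations
and amount are illustrative; the greedy choice happens to be optimal for these denominations,
but not for every coin system):

#include <stdio.h>

int main(void) {
    int coins[] = {25, 10, 5, 1};   /* denominations, largest first */
    int n = 4;
    int amount = 67;                /* amount of change to construct */

    /* Greedy: at each stage take as many of the largest remaining coin
       as fit, i.e. increase the total constructed as much as possible. */
    for (int i = 0; i < n; i++) {
        int count = amount / coins[i];
        amount -= count * coins[i];
        if (count > 0)
            printf("%d coin(s) of %d\n", count, coins[i]);
    }
    return 0;
}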
3.3 Dynamic Programming
Dynamic programming is a method for solving complex problems by breaking them down into
simpler subproblems. It is applicable to problems exhibiting the properties of overlapping
subproblems and optimal substructure. When applicable, the method takes far less time than
naive methods that don't take advantage of the subproblem overlap (like depth-first search).
The idea behind dynamic programming is quite simple. In general, to solve a given problem,
we need to solve different parts of the problem (subproblems), then combine the solutions of
the subproblems to reach an overall solution. Often, when using a more naive method, many of
the subproblems are generated and solved many times. The dynamic programming approach
seeks to solve each subproblem only once, thus reducing the number of computations: once the
solution to a given subproblem has been computed, it is stored; the next time the same solution
is needed, it is simply looked up. This approach is especially useful when the number of
repeating subproblems grows exponentially as a function of the size of the input. The Fibonacci
sketch below shows this store-and-look-up idea in miniature.
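
As a tiny illustration of solving each subproblem once (Fibonacci is chosen here purely for
illustration):

#include <stdio.h>

long long memo[64];   /* memo[n] stores fib(n) once computed; 0 = unknown */

/* Naive recursion recomputes fib(n-1) and fib(n-2) exponentially often.
   Storing each answer the first time makes every later call a lookup. */
long long fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];   /* already solved: look it up */
    memo[n] = fib(n - 1) + fib(n - 2);  /* solve once and store */
    return memo[n];
}

int main(void) {
    printf("fib(50) = %lld\n", fib(50));   /* instant with memoization */
    return 0;
}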
Dynamic programming algorithms are used for optimization (for example, finding the shortest
path between two points, or the fastest way to multiply many matrices). A dynamic
programming algorithm will examine all possible ways to solve the problem and will pick the
best solution. Therefore, dynamic programming enables us to go through all possible solutions
to pick the best one. If the scope of the problem is such that going through all possible solutions
is possible and fast enough, dynamic programming guarantees finding the optimal solution. One
alternative is a greedy algorithm, which picks the best possible choice "at any possible branch
in the road". While a greedy algorithm does not guarantee the optimal solution, it is faster.
Fortunately, some greedy algorithms (such as those for minimum spanning trees) are proven to
lead to the optimal solution.
For example, let's say that you have to get from point A to point B as fast as possible, in a given
city, during rush hour. A dynamic programming algorithm will look into the entire traffic report,
looking into all possible combinations of roads you might take, and will only then tell you which
way is the fastest. Of course, you might have to wait for a while until the algorithm finishes, and
only then can you start driving. The path you will take will be the fastest one (assuming that
nothing changed in the external environment). On the other hand, a greedy algorithm will start
you driving immediately and will pick the road that looks the fastest at every intersection. As
you can imagine, this strategy might not lead to the fastest arrival time, since you might take
some "easy" streets and then find yourself hopelessly stuck in a traffic jam.
3.4 Backtracking
Backtracking is a general algorithm for finding all (or some) solutions to a computational
problem. It incrementally builds candidates for the solutions, and abandons each partial
candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a
valid solution.
The classic textbook example of the use of backtracking is the eight queens puzzle, which
asks for all arrangements of eight chess queens on a standard chessboard so that no queen
attacks any other. In the common backtracking approach, the partial candidates are
arrangements of k queens in the first k rows of the board, all in different rows and columns.
Any partial solution that contains two mutually attacking queens can be abandoned, since
it cannot possibly be completed to a valid solution.
Backtracking is a useful technique for optimizing search under some constraints:
- Express the desired solution as an n-tuple (x1, ..., xn) where each xi ∈ Si, Si being a
finite set.
- The solution is based on finding one or more vectors that maximize, minimize, or satisfy
a criterion function P(x1, ..., xn).
Example: sorting an array a[n]
- Find an n-tuple where the element xi is the index of the i-th smallest element in a.
- The criterion function is a[xi] <= a[xi+1] for 1 <= i < n.
- The set Si is the finite set of integers in the range [1, n].
Brute force approach:
- Let the size of set Si be mi.
- There are m = m1 * m2 * ... * mn n-tuples that are candidates for satisfying the
criterion function P.
- A brute force algorithm forms all m n-tuples and evaluates each against P to determine
the optimal solutions.
Backtracking approach:
- Requires fewer than m trials to determine the solution.
- Form a solution (partial vector) one component at a time, and check at every step
whether the partial vector has any chance of success.
- If the partial vector seems unpromising, ignore it.
- If the partial vector (x1, x2, ..., xi) cannot yield an optimal solution, ignore the
mi+1 * ... * mn possible test vectors that extend it, without even looking at them.
- Effectively, find solutions by incrementally building candidates and abandoning each
partial candidate that cannot possibly be completed to a valid solution.
Backtracking is only applicable to problems which admit the concept of a partial candidate
solution and a relatively quick test of whether the partial solution can grow into a complete
solution. If a problem does not satisfy this constraint, backtracking is not applicable; for
example, backtracking is not very efficient for finding a given value in an unordered list.
All solutions require a set of constraints divided into two categories: explicit and implicit
constraints. We determine the problem solution by systematically searching the solution space
for the given problem instance, using a tree organization for the solution space.
8-queens problem
Place eight queens on an 8 x 8 chessboard so that no queen attacks another queen.
A queen attacks another queen if the two are in the same row, column, or diagonal.
One valid arrangement places queens at columns 1, 5, 8, 6, 3, 7, 2, 4 of rows 1 through 8:

      1   2   3   4   5   6   7   8
  1   Q   .   .   .   .   .   .   .
  2   .   .   .   .   Q   .   .   .
  3   .   .   .   .   .   .   .   Q
  4   .   .   .   .   .   Q   .   .
  5   .   .   Q   .   .   .   .   .
  6   .   .   .   .   .   .   Q   .
  7   .   Q   .   .   .   .   .   .
  8   .   .   .   Q   .   .   .   .
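
A compact backtracking solver for this puzzle might look like the following sketch; it places
queens row by row and prints the first solution found (rows counted from 0 internally):

#include <stdio.h>
#include <stdlib.h>

int col[8];   /* col[r] = column of the queen placed in row r */

/* Can a queen go at (row, c) given queens already in rows 0..row-1? */
int safe(int row, int c) {
    for (int r = 0; r < row; r++) {
        if (col[r] == c) return 0;                  /* same column   */
        if (abs(col[r] - c) == row - r) return 0;   /* same diagonal */
    }
    return 1;
}

/* Place queens row by row; abandon any partial placement that fails. */
int place(int row) {
    if (row == 8) return 1;                 /* all eight queens placed */
    for (int c = 0; c < 8; c++) {
        if (safe(row, c)) {
            col[row] = c;
            if (place(row + 1)) return 1;   /* extend the partial solution */
            /* otherwise: backtrack and try the next column */
        }
    }
    return 0;
}

int main(void) {
    if (place(0))
        for (int r = 0; r < 8; r++)
            printf("row %d -> column %d\n", r + 1, col[r] + 1);
    return 0;
}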
3.5 Branch and Bound
Branch and bound (BB or B&B) is a general algorithm for finding optimal solutions of various
optimization problems, especially in discrete and combinatorial optimization. A
branch-and-bound algorithm consists of a systematic enumeration of all candidate solutions,
where large subsets of fruitless candidates are discarded en masse, by using upper and lower
estimated bounds of the quantity being optimized.
Terminology:
Live node: a node that has been generated but whose children have not yet been generated.
E-node: a live node whose children are currently being explored. In other words, an E-node is
a node currently being expanded.
Dead node: a generated node that is not to be expanded or explored any further.
The term branch and bound refers to all state space search methods in which all children of the
E-node are generated before any other live node can become the E-node. This covers the graph
search strategies BFS and D-search, in which the exploration of a new node cannot begin until
the node currently being explored is fully explored; both of these generalize to branch and
bound strategies.
In branch and bound terminology, a BFS-like state space search is called FIFO (First In First
Out) search, as the list of live nodes is a first-in-first-out list (queue). A D-search-like state
space search is called LIFO (Last In First Out) search, as the list of live nodes is a
last-in-first-out list (stack). As in the case of backtracking, bounding functions are used to help
avoid the generation of sub-trees that do not contain an answer node.
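
A minimal LIFO-style (depth-first) branch and bound for the 0-1 knapsack might look like the
sketch below. The instance data is illustrative, and the bounding function is deliberately crude
(current value plus all remaining values) just to show how fruitless subtrees get discarded:

#include <stdio.h>

int n = 4;
int wt[]  = {5, 4, 6, 3};       /* illustrative item weights */
int val[] = {10, 40, 30, 50};   /* illustrative item values  */
int W = 10;                     /* knapsack capacity */
int best = 0;                   /* best complete solution found so far */

/* Optimistic upper bound: current value plus every remaining value. */
int bound(int i, int value) {
    int b = value;
    for (; i < n; i++) b += val[i];
    return b;
}

void bb(int i, int weight, int value) {
    if (weight > W) return;                  /* infeasible: dead node */
    if (value > best) best = value;          /* record the incumbent */
    if (i == n) return;
    if (bound(i, value) <= best) return;     /* bound: prune fruitless subtree */
    bb(i + 1, weight + wt[i], value + val[i]);   /* branch: take item i */
    bb(i + 1, weight, value);                    /* branch: skip item i */
}

int main(void) {
    bb(0, 0, 0);
    printf("Best value = %d\n", best);   /* 90 for this data */
    return 0;
}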
Abstract
This paper describes a research project on using Genetic Algorithms (GAs) to solve the 0-1
Knapsack Problem (KP). The Knapsack Problem is an example of a combinatorial optimization
problem, which seeks to maximize the benefit of objects in a knapsack without exceeding its
capacity. The paper contains three sections: a brief description of the basic idea and elements of
GAs, a definition of the Knapsack Problem, and an implementation of the 0-1 Knapsack
Problem using GAs. The main focus of the paper is on the implementation of the algorithm for
solving the problem. In the program, we implemented two selection functions, roulette-wheel
and group selection. The results from both of them differed depending on whether we used
elitism or not. Elitism significantly improved the performance of the roulette-wheel function.
Moreover, we tested the program with different crossover ratios and single and double
crossover points, but the results were not that different.
Basic Introduction
The Knapsack Problem (KP)
Definition
The KP is an example of a combinatorial optimization problem, which seeks a best solution
from among many other solutions. It is concerned with a knapsack that has positive integer
volume (or capacity) V. There are n distinct items that may potentially be placed in the
knapsack. Item i has a positive integer volume Vi and positive integer benefit Bi. In addition,
there are Qi copies of item i available, where quantity Qi is a positive integer satisfying
1 <= Qi <= ∞.
Let Xi determine how many copies of item i are to be placed into the knapsack. The goal is to

    maximize   Σ (i = 1 to n) Bi * Xi

subject to the constraints

    Σ (i = 1 to n) Vi * Xi <= V

and

    0 <= Xi <= Qi.
An important special case is the bounded 0-1 KP, where we cannot have more than one copy of
an item in the knapsack: each Qi = 1, so every Xi is either 0 or 1.
A dynamic programming solution builds a table V[i,w], the maximum benefit achievable using
the first i items with capacity w:

for w = 0 to W
    V[0,w] = 0
for i = 1 to n
    V[i,0] = 0
for i = 1 to n
    for w = 0 to W
        if wi <= w                        // item i can fit
            if bi + V[i-1, w-wi] > V[i-1, w]
                V[i,w] = bi + V[i-1, w-wi]
            else
                V[i,w] = V[i-1, w]
        else V[i,w] = V[i-1, w]           // wi > w

Running time: the two initialization loops take O(W) and O(n) time, and the nested loops over
i and w do constant work per table cell, so the total running time is O(nW).
Working of Project
The 0-1 Knapsack Problem
The difference between this problem and the fractional one is that you can't take a fraction of
an item. You either take the whole thing or none of it. Formally: given n items with weights wi
and values vi and a knapsack of capacity W, choose xi ∈ {0, 1} to maximize Σ vi * xi subject
to Σ wi * xi <= W.
The naive way to solve this problem is to cycle through all 2^n subsets of the n items and pick
the subset with a legal weight that maximizes the value of the knapsack. But we can find a
dynamic programming algorithm that will usually do better than this brute force technique.
Let Sk be the optimal subset of elements from {I0, I1, ..., Ik}. What we find is that the optimal
subset from the elements {I0, I1, ..., Ik+1} may not correspond to the optimal subset of
elements from {I0, I1, ..., Ik} in any regular pattern. Basically, the solution to the optimization
problem for Sk+1 might NOT contain the optimal solution from problem Sk.
For instance, suppose the best set of items from {I0, I1, I2} is {I0, I1, I2} but the best set of
items from {I0, I1, I2, I3} is {I0, I2, I3}. In this example, note that this optimal solution,
{I0, I2, I3}, does NOT build upon the previous optimal solution, {I0, I1, I2}. (Instead it builds
upon the solution, {I0, I2}, which is really the optimal subset of {I0, I1, I2} with weight 12 or
less.)
So, now, we must rework our example. In particular, after trial and
error we may come up with the following idea:
Basically, you can NOT increase the value of your knapsack with weight w if the new item you
are considering weighs more than w - because it won't fit!
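
This idea can be written as a recurrence. Using B[k][w] for the best value achievable from the
first k+1 items with capacity w (this is the standard formulation, assumed here since the
original notation did not carry over):

    B[k][w] = B[k-1][w]                               if wk > w
    B[k][w] = max(B[k-1][w], B[k-1][w-wk] + vk)       otherwise

The first case is exactly the "it won't fit" observation above; the second chooses the better of
skipping item k or taking it.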
Here is a dynamic programming algorithm to solve the 0-1 Knapsack problem; the
initialization is shown first, and a completed sketch follows:

int w, k;
for (w = 0; w <= W; w++)
    B[w] = 0;
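
A complete, compilable version of this loop structure might look like the following sketch. The
item data reuses the example below, and the one-dimensional B[w] table is updated with w
descending so that each item is used at most once:

#include <stdio.h>

int main(void) {
    int wt[]  = {4, 2, 3, 1, 6, 4};    /* weights of items I0..I5 */
    int val[] = {6, 4, 5, 3, 9, 7};    /* values of items I0..I5  */
    int n = 6, W = 10;
    int B[11] = {0};                   /* B[w] = best value with capacity w */
    int w, k;

    for (k = 0; k < n; k++)
        for (w = W; w >= wt[k]; w--)   /* descending: item k used at most once */
            if (B[w - wt[k]] + val[k] > B[w])
                B[w] = B[w - wt[k]] + val[k];

    printf("Best value = %d\n", B[W]); /* prints 19 for this data */
    return 0;
}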
Note on run time: clearly the run time of this algorithm is O(nW), based on the nested loop
structure and the simple operation inside both loops. When comparing this with the previous
O(2^n), we find that depending on W, either the dynamic programming algorithm or the brute
force algorithm could be more efficient. (For example, for n = 5 and W = 100000, brute force
is preferable, but for n = 30 and W = 1000, the dynamic programming solution is preferable.)
Let's run through an example with W = 10:

    i   Item   wi   vi
    0   I0     4    6
    1   I1     2    4
    2   I2     3    5
    3   I3     1    3
    4   I4     6    9
    5   I5     4    7

The table below gives, for each item row i and capacity column w, the best value achievable
using items I0..Ii:

    Item\w   0   1   2   3   4   5   6   7   8   9   10
    0        0   0   0   0   6   6   6   6   6   6   6
    1        0   0   4   4   6   6   10  10  10  10  10
    2        0   0   4   5   6   9   10  11  11  15  15
    3        0   3   4   7   8   9   12  13  14  15  18
    4        0   3   4   7   8   9   12  13  14  16  18
    5        0   3   4   7   8   10  12  14  15  16  19

The answer is the bottom-right entry: with all six items available and capacity 10, the best
achievable value is 19.
Minimum Hardware and Software Requirements
Hardware Requirements
RAM ------------------------------ 60 MB
Processor ------------------------ Core 2 Duo
Software Requirements
1. Windows 3.0, Windows XP, or a higher operating system.
2. TURBO C
The above system requirements are the minimum needed to run this system.
Software Development Life Cycle
1. Feasibility Study
2. Requirement Analysis
3. Design
4. Coding
5. Testing
6. Implementation
7. Maintenance
2. Identify and assign the roles and responsibilities to the affected processes, including the
technical managers.
4. Identify errors and correct them before they become large.
SDLC Phases
1. Project Planning
2. Requirements Definition
3. Design
4. Coding
5. Testing
6. Implementation
7. Maintenance
SDLC With Respect To Project
1. Feasibility Phase
The feasibility study is an important phase in any software development process, because it
analyses different aspects such as the cost of developing and executing the system, the time
required for each phase of the system, and many other things related to effort and people.
The feasibility study is an analysis of a proposed project with emphasis on the attainable
income and the most advantageous design and use. The main things that need to be studied in
this phase include:
- The skills required for maintenance in a later stage of the project.
- Whether the project can be completed within the estimated budget.
- The scope of future expansion of the project.
2. Design Phase
In the design phase the architecture is established. This phase starts with the requirement
document delivered by the requirement phase and maps the requirements onto the architecture.
The architecture defines the components, their interfaces and their behaviors. The deliverable
of this phase is the design document, which describes the architecture and a plan to implement
the requirements.
System or software design is the activity where software requirements are analyzed in order to
produce a description of the internal structure and organization of the system that will serve as
the basis for its construction.
DYNAMIC PROGRAM ON KNAPSACK
(with screenshots)

#include <stdio.h>
#define max(a,b) ((a) > (b) ? (a) : (b))

int matrix[100][100];   /* memo table; its declaration did not carry over,
                           so the size here is an assumption */

/* Recursive, memoized 0-1 knapsack: best value using items 0..index
   with remaining capacity `size`. */
int knapsack(int index, int size, int weights[], int values[]) {
    int take, dontTake;
    take = dontTake = 0;
    if (matrix[index][size] != 0)     /* already computed: look it up */
        return matrix[index][size];
    if (index == 0) {                 /* base case: only item 0 */
        if (weights[0] <= size) {
            matrix[index][size] = values[0];
            return values[0];
        }
        else {
            matrix[index][size] = 0;
            return 0;
        }
    }
    if (weights[index] <= size)       /* option 1: take item `index` */
        take = values[index] + knapsack(index - 1, size - weights[index],
                                        weights, values);
    dontTake = knapsack(index - 1, size, weights, values); /* option 2: skip it */
    matrix[index][size] = max(take, dontTake);   /* assumed completion:
                                                    memoize the better choice */
    return matrix[index][size];
}
int main() {
    int nItems = 4;
    int knapsackSize = 10;
    int weights[4] = {5, 4, 6, 3};
    int values[4] = {10, 40, 30, 50};
    printf("Max value = %d\n",
           knapsack(nItems - 1, knapsackSize, weights, values));
    return 0;
}
The second version below also records which items were picked; the table declarations shown
here are assumptions, sized for this example:

#include <stdio.h>
#define max(a,b) ((a) > (b) ? (a) : (b))

int matrix[100][100];   /* memo table (assumed size) */
int picks[100][100];    /* picks[i][w] = 1 if item i is taken at capacity w, else -1 */

int knapsack(int index, int size, int weights[], int values[]) {
    int take, dontTake;
    take = dontTake = 0;
    if (matrix[index][size] != 0)
        return matrix[index][size];
    if (index == 0) {
        if (weights[0] <= size) {
            picks[index][size] = 1;
            matrix[index][size] = values[0];
            return values[0];
        }
        else {
            picks[index][size] = -1;
            matrix[index][size] = 0;
            return 0;
        }
    }
    if (weights[index] <= size)       /* option 1: take item `index` */
        take = values[index] + knapsack(index - 1, size - weights[index],
                                        weights, values);
    dontTake = knapsack(index - 1, size, weights, values); /* option 2: skip it */
    matrix[index][size] = max(take, dontTake);   /* assumed completion */
    if (take > dontTake)
        picks[index][size] = 1;
    else
        picks[index][size] = -1;
    return matrix[index][size];
}
/* Print the indices of the picked items by walking the picks table; this
   signature is an assumption consistent with the loop body and the call
   in main. */
void printPicks(int item, int size, int weights[]) {
    while (item >= 0) {
        if (picks[item][size] == 1) {    /* item was taken */
            printf("%d ", item);
            size -= weights[item];       /* subtract this item's weight first */
            item--;                      /* then move to the previous item */
        }
        else {
            item--;
        }
    }
printf("n");
return;
}
int main() {
    int nItems = 4;
    int knapsackSize = 10;
    int weights[4] = {5, 4, 6, 3};
    int values[4] = {10, 40, 30, 50};
    printf("Max value = %d\n\n",
           knapsack(nItems - 1, knapsackSize, weights, values));
    printPicks(nItems - 1, knapsackSize, weights);  /* assumed: print which
                                                       items achieve it */
    return 0;
}
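
With this sample data the program should report Max value = 90 and then list the picked items
3 1, i.e. the items of weight 3 and 4 with values 50 and 40.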
GREEDY PROGRAM ON KNAPSACK
(fractional knapsack; the initialization and whole-item loops shown here are a
minimal completion, assuming the standard greedy pattern)

#include <stdio.h>
#include <graphics.h>   /* Turbo C header kept from the original; unused here */

void knapsack(int n, float weight[], float profit[], float capacity) {
    float x[20], tp = 0;   /* x[i] = fraction of item i taken; tp = total profit */
    int i;
    float u;               /* remaining capacity */
    u = capacity;

    for (i = 0; i < n; i++)          /* assumed: take nothing initially */
        x[i] = 0.0;

    for (i = 0; i < n; i++) {        /* assumed: items arrive sorted by ratio */
        if (weight[i] > u)
            break;
        x[i] = 1.0;                  /* take the whole item */
        tp = tp + profit[i];
        u = u - weight[i];
    }

    if (i < n) {                     /* take a fraction of the next item */
        x[i] = u / weight[i];
        tp = tp + (x[i] * profit[i]);
    }
printf("\nMaximum profit is:- %f", tp);
int main() {
float weight[20], profit[20], capacity;
int num, i, j;
float ratio[20], temp;
                temp = weight[j];
                weight[j] = weight[i];
                weight[i] = temp;
                temp = profit[j];
                profit[j] = profit[i];
                profit[i] = temp;
            }
        }
    }

    knapsack(num, weight, profit, capacity);   /* assumed final call */
    return 0;
}
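
A brief design note: this greedy method is exact for the fractional knapsack, where items can
be split, but for the 0-1 problem it can miss the optimum, which is why the dynamic
programming versions above are used for the 0-1 case.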
Screenshots
Bibliography
THANK YOU