
Constrained and Unconstrained Optimization

Prof. Adrijit Goswami


Department of Mathematics
Indian Institute of Technology, Kharagpur

Lecture – 02
Assumptions & Mathematical Modeling of LPP

Now, let us start the next topic, the convex set. Earlier we talked about the basis,
dimension and the spanning set.

(Refer Slide Time: 00:25)

You have a set $S$; we say that $S$ is a convex set if, for any 2 points $x_1, x_2$
which belong to $S$, the line segment joining these 2 points also lies in the set.
That means any point you take on the line segment should also belong to $S$.
Mathematically, if I have to say it, I will tell that if $x_1, x_2$ belong to $S$, then every point
$x = \lambda x_1 + (1 - \lambda) x_2$ where $0 \le \lambda \le 1$ must be a member of the set $S$.
So, the set is convex if this is satisfied for any 2 points $x_1, x_2 \in S$: whenever
$x = \lambda x_1 + (1 - \lambda) x_2$ with $0 \le \lambda \le 1$, the point $x$ must be a
member of the set $S$.

For example, it is easy if I take a triangular set. This is a convex set. You take
any 2 points in it; suppose I am taking this one and this one. If I join them, all the points
on the segment lie inside the convex set only. Similarly, if I take a rectangle like this,
then for any 2 points you take here and join them, all the intermediate points will
lie inside the rectangle only. So, this will also be a convex set. But if I take something
like this, suppose I am taking this non-convex set: if I take a point here and a point there and join
them, you will see that some of the points on the segment are not in the set itself. So, this one is not a convex set.
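
As a small numerical illustration of this definition, here is a minimal Python sketch; the rectangle $[0, 2] \times [0, 1]$, the membership test and the sampled values of $\lambda$ are assumptions made only for illustration.

```python
import numpy as np

def in_rectangle(p, xmax=2.0, ymax=1.0):
    """Membership test for the (convex) rectangle [0, xmax] x [0, ymax]."""
    return 0.0 <= p[0] <= xmax and 0.0 <= p[1] <= ymax

def segment_stays_inside(p1, p2, member, samples=100):
    """Check lambda*p1 + (1 - lambda)*p2 for lambda values in [0, 1]."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return all(member(lam * p1 + (1 - lam) * p2)
               for lam in np.linspace(0.0, 1.0, samples))

# Any two points of the rectangle: the whole segment stays inside.
print(segment_stays_inside((0.5, 0.2), (1.8, 0.9), in_rectangle))  # True
```

Sampling $\lambda$ only checks the definition numerically along one segment; it is a sanity check, not a proof of convexity.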

So, I hope it is clear: the mathematical definition is this
one, and geometrically I would describe it like that. From here one more concept
comes, that is the extreme point. A point $x$ is said to be an extreme point of a convex set $S$ if and
only if there do not exist any 2 points $x_1, x_2$ in the set $S$ such that
$x = \lambda x_1 + (1 - \lambda) x_2$, where $0 < \lambda < 1$.

So, a point $x$ is an extreme point of a convex set if and only if there do not exist
any 2 points $x_1, x_2 \in S$ satisfying $x = \lambda x_1 + (1 - \lambda) x_2$ with
$0 < \lambda < 1$. That is, $x$ is not a convex combination of two other points of the set.
So, if you see this rectangle here: if I take a corner point of the rectangle,
that point is an extreme point, because I cannot find any 2 points of the rectangle
such that, if I join them, the corner lies strictly between them. But now take a point on the
interior of an edge; if I denote it by $x$, then $x$ is not an extreme point, because $x$ lies
on the segment whenever I join the two corner points $x_1$ and $x_2$; $x$ is a convex combination of these two. So, such a point is not
an extreme point, whereas the corner points are the extreme points.

(Refer Slide Time: 05:46)


Already you have done this; I am just writing it to talk about linear simultaneous
equations. Let us think about a set of equations
$a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n = b_1$;
$a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n = b_2$; $\dots$;
$a_{m1} x_1 + a_{m2} x_2 + \dots + a_{mn} x_n = b_m$; that
means you are having $n$ variables $x_1, x_2, \dots, x_n$ and $m$ equations here.
In matrix notation I can write it in the form $Ax = b$, where
$A = (a_{ij})$ is the $m \times n$ coefficient matrix. Your $x$ will be nothing but
$x = (x_1, x_2, \dots, x_n)^T$, and similarly your $b = (b_1, b_2, \dots, b_m)^T$.

So, in other words, whenever you have some linear equations, in matrix notation we can always
write them in the form $Ax = b$.
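
As a quick check of this notation, here is a minimal sketch with a made-up $2 \times 2$ system (the coefficients are assumptions, not taken from the lecture): the coefficients are stored as $A$ and $b$, and the solution $x$ satisfies $Ax = b$.

```python
import numpy as np

# Hypothetical system (illustrative numbers only):
#   2*x1 + 1*x2 = 5
#   1*x1 + 3*x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])    # coefficient matrix
b = np.array([5.0, 10.0])     # right-hand side

x = np.linalg.solve(A, b)     # solve A x = b
print(x)                      # [1. 3.]
print(np.allclose(A @ x, b))  # True: x satisfies the system
```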

(Refer Slide Time: 07:49)

Because from now on, whatever we are going to discuss, this matrix notation will be very useful.
Let us come to the basic solution of that system of equations. You have a system of $m$
equations with $n$ variables, as I discussed earlier, and we are assuming that $m < n$,
that is, the number of equations is less than the number of variables, and the rank of
the matrix is $\operatorname{rank}(A) = m$.

So, since the rank of the matrix is equal to $m$, there will exist one submatrix $B$ of
order $m \times m$ which will be nonsingular (you may recall the definition
of the rank). So, I am getting a nonsingular matrix $B$ of order $m \times m$, since
I am assuming the rank of the matrix is $m$. So, let $\beta_1, \beta_2, \dots, \beta_m$ be the linearly independent
columns; since the rank is $m$, I must get $m$ linearly independent columns of $A$.
So, your system $Ax = b$ can be transformed
into a system with $m$ equations and $m$ unknowns, which I am representing by $B x_B = b$. So, this
is equivalent to a system with $m$ equations and $m$ unknowns, and the remaining $n - m$
variables that you are having will be 0.

So, $m$ linearly independent columns are there. So, I have $m$ decision variables in the
matrix $B$, from which I can obtain the solution of those decision variables. And the
remaining $n - m$ variables we will set to 0. So, this matrix $B$ we call the
basis matrix; this matrix $B$ is known as the basis matrix. Please note how
you are forming the matrix $B$: since the rank of the matrix $A$ is $m$, I must get $m$ linearly
independent column vectors, and using those I am forming one matrix $B$ which will be
automatically nonsingular, and this is known as the basis matrix.

Now, these $m$ linearly independent columns are there, and the corresponding decision
variables are known as basic variables. So, in one sense, the variables attached to the linearly
independent columns are known as basic variables, and the remaining $n - m$ variables
we call non-basic variables. So, I have some basic variables and some non-basic
variables: the basic variables are the variables attached to the
linearly independent columns, and the remaining $n - m$ variables are known as non-
basic variables.

So, basically I can say that $x_B = (x_{B_1}, x_{B_2}, \dots, x_{B_m})$; these are the basic variables of the
system $B x_B = b$. And the solution can be obtained from here: the solution
you obtain will be $x_B = B^{-1} b$. So, $x_B = B^{-1} b$ is known as the basic solution of the
system of equations $Ax = b$. I am not saying that this is the only solution; I am
saying this is a basic solution. How are you obtaining the basic solution? The basic
solution we are obtaining by taking the rank, that is, $m$ linearly independent columns,
and from those I am forming a matrix $B$. The variables which are attached to
these linearly independent columns I am calling the basic variables, and all
other variables will be the non-basic variables. And this gives a basic solution.

So, if you have $n$ variables and $m$ equations, you will have at most
$\binom{n}{m} = \frac{n!}{m!\,(n - m)!}$ basic solutions. So, the maximum you can get, by making
all the possible combinations, is $\binom{n}{m}$ basic solutions. Just let us take an
example, because we are just giving the solution of a system of equations.

(Refer Slide Time: 14:06)

Let us take a system of equations like this, with 2 equations in the 3 variables $x_1, x_2, x_3$. So, in
matrix notation, if I have to write it down, your $A = (a_1 \; a_2 \; a_3)$ is a $2 \times 3$ matrix
with columns $a_1, a_2, a_3$, and $b$ is the right-hand side. If you find
the rank of this matrix $A$, your rank of $A$ will be 2; that I am leaving for you, you just
check it, it is very easy. So $\operatorname{rank}(A) = 2$.

So, the number of basic solutions here will be $\binom{3}{2} = 3$. So,
I can write down the possible basis matrices like this. I am denoting the columns of $A$
by $a_1, a_2, a_3$. So, one basis will be $B_1 = (a_1 \; a_2)$; since the rank is 2,
I have to take 2 linearly independent columns. Another, $B_2$, can be $(a_1 \; a_3)$. And
the third one which you can obtain can be $B_3 = (a_2 \; a_3)$.

So, if you see, I am obtaining these basis matrices like this, since the rank was
2. So, the maximum number of basic solutions I can obtain, using the formula $\binom{n}{m}$,
is equal to 3. So, I obtain $B_1, B_2, B_3$ like this. So, what can be the basic solutions
now? I can write down $x_{B_1} = B_1^{-1} b$. You know what $B_1$ is and what $b$ is; if
you calculate, you can obtain these values. This I am leaving for you because of shortage
of time. Similarly, your $x_{B_2} = B_2^{-1} b$, and for the third one $x_{B_3} = B_3^{-1} b$.

Therefore, we can say that the first basic solution corresponds to
$B_1 = (a_1 \; a_2)$, so $x_3$ will be 0; one solution for this problem has $x_3 = 0$. For
the second case it is $B_2 = (a_1 \; a_3)$, that is $x_2 = 0$; this is another solution. And
the third solution corresponds to $B_3 = (a_2 \; a_3)$, that is $x_1 = 0$. So, if you see
what is happening here: in the process of finding the solution, basically we are trying to
find out the basic solutions first. How to find a basic solution? For that we are using
the linearly independent columns, or the rank; from the rank we can obtain it. Once I am
getting the linearly independent columns of the matrix, I am forming a basis matrix $B$
like this, where $b$ is expressed as a combination of these linearly independent
columns, and from there we are obtaining the solution.
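
The procedure just described can be carried out mechanically. The sketch below enumerates the basic solutions of a hypothetical $2 \times 3$ system (the coefficients are assumed purely for illustration): for every choice of $m = 2$ columns it forms the basis matrix $B$, keeps the remaining variable at 0, and solves $B x_B = b$.

```python
import numpy as np
from itertools import combinations

# Hypothetical 2x3 system with rank(A) = 2 (illustrative numbers only).
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 3.0]])
b = np.array([4.0, 9.0])
m, n = A.shape

# At most C(n, m) = 3 basic solutions: one per choice of m independent columns.
for cols in combinations(range(n), m):
    B = A[:, cols]                         # candidate basis matrix
    if abs(np.linalg.det(B)) < 1e-12:      # skip singular (dependent) choices
        continue
    x = np.zeros(n)                        # non-basic variables stay 0
    x[list(cols)] = np.linalg.solve(B, b)  # basic variables: x_B = B^{-1} b
    print(f"basis columns {cols}: x = {x}")
```

A choice of columns is skipped when its determinant is zero, that is, when those columns are not linearly independent; at most $\binom{3}{2} = 3$ basic solutions are printed.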

So, afterwards you will see that this is an initial solution, and from this initial
solution we have to improve it and obtain the optimum solution. So now, let
us come to the original thing.
(Refer Slide Time: 19:10)

That is, formulation of the linear programming problem (LPP). In a linear
programming problem, basically I have to take certain decisions subject to certain
constraints. Suppose I want to set up one factory; to set up the
factory I need manpower, I need land, I need technical know-how, and
there may be certain constraints on my cash flow.

So, I have to find out the solution: what should I do, or how much quantity should I
produce, which will satisfy my constraints. So, as we have seen, in LPP we can formulate
this; the decision-making problem basically consists of certain steps. The first is
identification of the problem. Basically you have to understand what the problem is,
because if your understanding is wrong, then the formulation of the model will always
be wrong.

So, first I have to identify and understand what the problem is. Once I
identify it, the next step will be formulation of a mathematical model. Here, whatever
problem has been given, I have to analyze it; to analyze it I have to convert the
real, hard problem into some model. Usually we convert it into a mathematical model,
because it has been observed that mathematical models are very easy to analyze.

Therefore, once we have the problem, we will formulate the corresponding
mathematical model. This will be your second step. The third step
is deriving the solution of the model: what technique or method will you use
to solve the mathematical model. Optimization comes into the picture basically at this stage;
optimization is the technique by which you can find out the solution of some mathematical model. The
next step is validation of the model, which means you have to test the
model, whether it is giving proper results or not. If required, you have to improve the model and
improve your technique, so that you can obtain better results. So, this next part is
validation, and the last part will be implementation.

Implementation comes into the picture because, whenever you talk about the various
models, the number of variables and the number of constraints can be many. Many means
I may have to work on a problem where I may have 100 variables and 200
equations, which is not possible to solve by hand calculation. So, I have to use certain
techniques, I have to computerize it, and I have to write down certain algorithms. For
that part, implementation comes into the picture.

(Refer Slide Time: 23:40)

So, the general form of LPP, if I have to state the general form of a linear programming
problem: I am writing maximize, it can be minimize also, I will come to that.

Maximize (or minimize) $z = c_1 x_1 + c_2 x_2 + \dots + c_n x_n$

Subject to:
$a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n \le b_1$
$a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n \le b_2$
$\quad \vdots$
$a_{m1} x_1 + a_{m2} x_2 + \dots + a_{mn} x_n \le b_m$

And these we call the resource constraints; and please note this one,
the decision variables should satisfy $x_1, x_2, \dots, x_n \ge 0$, which we call the non-negativity
constraints.

So, in an LPP, please note, the decision variables with which we want to work must all be
non-negative. And we have to optimize the objective function, it can be maximizing or minimizing the
function, subject to satisfying certain constraints. And
please note that a linear programming problem means, from the name itself it is clear, the
objective function as well as the constraints will be linear in nature. The variables will
only be non-negative; they may be discrete or continuous.

So, if I have to write it down in compact form, I will say:

Maximize $z = \sum_{j=1}^{n} c_j x_j$

subject to $\sum_{j=1}^{n} a_{ij} x_j \le b_i$, where $i = 1, 2, \dots, m$, and $x_j \ge 0$ for all
$j = 1, 2, \dots, n$.

So, this is the compact form: maximize $\sum_j c_j x_j$ subject to $\sum_j a_{ij} x_j \le b_i$
and $x_j \ge 0$. This again we can write in matrix notation also, which
becomes much easier to represent: maximize $z = c^T x$.

(Refer Slide Time: 27:00)


We are writing it as: maximize $z = c^T x$ subject to $Ax \le b$ and $x \ge 0$, where $c$, $x$ and $b$ are all
column vectors, as I can represent from there. And if I have to minimize the problem, then
in place of maximize here, minimize the function will come; but in general
we write it as: minimize $z = c^T x$ subject to $Ax \ge b$ and $x \ge 0$.

So, basically, if you see, for minimization we are taking the constraints as greater-than-or-equal,
and for maximization we are taking the constraints as less-than-or-equal. And if you see, in an LPP
you basically have $n$ variables, because here we are having $m$ constraints
and $n$ variables, and I have to find out the values of these variables
which will satisfy the constraints.
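
In this matrix form the problem can be handed directly to a numerical solver. Below is a minimal sketch using scipy.optimize.linprog; the objective and constraint data are illustrative assumptions, and since linprog minimizes by default, the maximization is done by minimizing $-c^T x$.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data only: maximize z = 3*x1 + 5*x2
# subject to  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x1, x2 >= 0.
c = np.array([3.0, 5.0])
A_ub = np.array([[1.0, 0.0],
                 [0.0, 2.0],
                 [3.0, 2.0]])
b_ub = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so maximize c^T x by minimizing (-c)^T x.
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal x and the maximum value of z
```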

So, we can write down 2 forms. I am just writing the standard form first; in standard form, if
I have to write it down, I am writing:

Minimize $z = \sum_{j=1}^{n} c_j x_j$

subject to $\sum_{j=1}^{n} a_{ij} x_j = b_i$, where $i = 1, 2, \dots, m$, and $x_j \ge 0$ for all $j = 1, 2, \dots, n$.

So, minimize $\sum_j c_j x_j$ subject to $\sum_j a_{ij} x_j = b_i$: in standard form, whenever I
convert the problem I may have inequalities, but before solving I have to transform
them into equalities. And here I am writing the canonical form; in canonical form, instead of
minimization I am writing the maximization:

Maximize $z = \sum_{j=1}^{n} c_j x_j$

subject to $\sum_{j=1}^{n} a_{ij} x_j \le b_i$, where $i$ will take the values 1 to $m$, and $x_j \ge 0$.

So, here, in general, canonical form means that whenever I formulate the problem,
the constraints may be equalities or inequalities; they will be greater-than-or-equal or
less-than-or-equal depending upon whether I am minimizing or maximizing the problem. But
whenever I am trying to find out the solution, the inequalities I have to convert into
equalities, and then we will find out the solutions. So, in the next class we will see
what kind of assumptions we have used to find the solution of an LPP, and then we will go
for the graphical solution and other things.
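
The conversion into standard form described above is purely mechanical: one slack variable is appended per less-than-or-equal constraint, so that $Ax \le b$ becomes $Ax + s = b$ with $s \ge 0$. The sketch below illustrates this; the function name to_standard_form and the numerical data are assumptions for illustration.

```python
import numpy as np

def to_standard_form(A, b, c):
    """Convert  max c^T x,  A x <= b,  x >= 0  into equality form
    A_std [x; s] = b  by appending one slack variable per constraint."""
    m, n = A.shape
    A_std = np.hstack([A, np.eye(m)])         # [A | I]: slack columns
    c_std = np.concatenate([c, np.zeros(m)])  # slack variables cost nothing
    return A_std, b, c_std

# Illustrative numbers only.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([8.0, 9.0])
c = np.array([4.0, 5.0])
A_std, b_std, c_std = to_standard_form(A, b, c)
print(A_std)   # original columns followed by a 2x2 identity block
```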
