Thursday, November 09, 2006

Inventory Control - Insensitivity of Total Cost near Optimum

You must have studied the famous EOQ (Economic Order Quantity) formula:

EOQ = Sqrt (2AD/ VR), where Sqrt means square root of, A = ordering cost per order, D = total demand (or consumption or requirement), V = unit cost of the item and R = rate of inventory carrying cost. This is a deterministic model because all these variables are assumed to be known with certainty.

EOQ is the economic order quantity because, if this quantity is ordered every time the item is ordered, the total cost is minimum. If you order a smaller quantity than EOQ, you will have to order more times to meet the same demand D, increasing the ordering component of the total cost. If you order a larger quantity than EOQ, the inventory level will be higher and so the inventory carrying cost will increase. The total of these two costs, i.e. ordering cost and inventory carrying cost, is minimum when EOQ is used as the order quantity.

You know that if the values of A, D, V and R are known, you substitute them in the formula to get EOQ. But have you thought about how the values of these variables are obtained? The total demand or requirement or consumption (D) may be obtained by using a suitable forecasting technique. The ordering cost per order (A) may be obtained from previous data. Similarly, you may analyse the data on the various components of inventory carrying cost to get the value of R. But none of these can be considered deterministic. No forecasting technique can forecast with certainty what the demand will be. Then how can this formula be used in a real-life scenario? And what sense will it make to compute EOQ when all the variables used in the computation are likely to change? Estimating the values of R, A and D involves cost, and one cannot keep doing it before every order.

But there is a general characteristic of optimum solutions in every field: performance becomes insensitive to changes in effort near the optimum level. If you plot Total Cost against Order Quantity, you will find that the curve is flat near EOQ. It means that if you change the order quantity but keep it near the EOQ value, the total cost will remain almost the same. A relatively large change in order quantity will change the total cost very little as long as the order quantity is close to EOQ. But if you operate away from EOQ, then a small change in order quantity can bring a big change in total cost. That is why, if a good student is performing much below his optimum level, a small improvement in effort can bring a big improvement in performance; but once he gets closer to his optimum level, he needs a bigger effort to improve further.
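This flatness is easy to check numerically. A minimal sketch in Python, using the standard form EOQ = Sqrt(2AD/VR); all the cost figures are assumed purely for illustration:

```python
from math import sqrt

# assumed figures, for illustration only
A = 100.0    # ordering cost per order
D = 1200.0   # annual demand, units
V = 50.0     # unit cost
R = 0.2      # inventory carrying rate per year

def total_cost(Q):
    # ordering cost + inventory carrying cost for order quantity Q
    return A * D / Q + Q * V * R / 2.0

EOQ = sqrt(2 * A * D / (V * R))   # about 155 units here
base = total_cost(EOQ)

# ordering 20% away from EOQ raises the total cost by under 3%
print(total_cost(0.8 * EOQ) / base)   # ~1.025
print(total_cost(1.2 * EOQ) / base)   # ~1.017
```

Note how a 20% error in the order quantity costs only about 2%; this is the insensitivity the post describes, and it is what makes rough estimates of A, D, V and R usable in practice.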

In inventory models, this insensitivity plays a very important role. It permits you to make errors in the estimation of A, D, V and R as long as you use EOQ and try to stay near the optimum. These errors will not affect the total cost much, so you can afford to use simpler methods of estimation. But if you operate away from the optimum level, which you will if scientific management techniques are not used, then the total cost becomes very sensitive to order quantity, and errors in estimation can affect the total cost drastically.

Hence, in spite of all the possibilities of uncertainty, errors etc., this deterministic model is of immense use in keeping the total cost near the minimum possible level. Uncertainty and errors in estimation are taken care of by building a system of automatic response to them into the method of inventory control itself. This will be discussed in some other article.

Sometimes we hear an inventory manager saying, 'whenever the current stock goes below some figure, say 100, we order a quantity equivalent to the last 3 months' consumption, and this method works fine for us'. There is no harm in agreeing with him on such statements. In fact, it is not necessary to do any complex mathematical modelling and computation to reach an optimum decision. Concepts of optimization can be built into applications in much simpler ways. We will discuss this too in some other article.

Wednesday, November 01, 2006

Deterministic and Probabilistic Models

To understand this better, let us visualize deterministic and probabilistic situations.

A deterministic situation is one in which the system parameters can be determined exactly. This is also called a situation of certainty, because whatever is determined is certain to happen exactly that way. It also means that the knowledge about the system under consideration is complete; only then can the parameters be determined with certainty. At the same time, you also know that in reality such systems rarely exist. There is always some uncertainty associated.

A probabilistic situation is also called a situation of uncertainty. Though uncertainty exists everywhere, it always makes us uncomfortable, so people keep trying to minimize it. Automation, mechanization, computerization etc. are all steps towards reducing uncertainty. We want to reach a situation of certainty.

Deterministic optimization models assume the situation to be deterministic and accordingly provide the mathematical model to optimize the system parameters. Since such a model considers the system to be deterministic, it automatically means that one has complete knowledge about the system. Relate this with your experience of describing various situations. You might have noticed that as you move towards certainty and clarity, you are able to explain the situation in fewer words. Similarly, in mathematical models you will find that the volume of data in deterministic models is smaller than in probabilistic models. We now try to understand this using a few examples.

Take the example of inventory control. Here there are items that are consumed or used, and so they are replenished too, either by purchasing or by manufacturing. Give a thought to what you want to achieve by doing inventory control. You may want that whenever an item is needed, it should be available in the required quantity so that there is no shortage. You can achieve this in an unintelligent way by keeping a huge inventory. An intelligent way is to achieve it while keeping minimum inventory, and hence this situation requires optimization. You do this by making decisions about how much to order and when to order for different items. These decisions are mainly influenced by system parameters like the demand or consumption pattern of different items, the time taken by the supplier to supply these items, quantity or off-season discounts if any, etc. Let us take only two parameters -- demand and the time taken by the supplier to supply -- and assume that the rest of the parameters can be ignored.

If the demand is deterministic, it means that it is well known and there is no possibility of any variation in it. If you know that demand will be 50 units, 70 units and 30 units in the 1st, 2nd and 3rd months respectively, it has to be exactly that. But in a probabilistic situation you only know the various possibilities and their associated probabilities. It may be that in the first month the probability of demand being 50 units is 0.7 and that of it being 40 units is 0.3. The demand will be following some probability distribution. And you can see that the visible volume of data will be higher in the probabilistic situation.
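With such a distribution the demand can only be summarized, not pinned down. A tiny sketch of the expected demand for that first month, using only the two figures from the text:

```python
# possible first-month demands and their probabilities (figures from the text)
distribution = {50: 0.7, 40: 0.3}

# probabilities must account for all possibilities
assert abs(sum(distribution.values()) - 1.0) < 1e-9

# the expected (average) demand: sum of demand * probability
expected_demand = sum(d * p for d, p in distribution.items())
print(expected_demand)   # about 47 units
```

Even this single summary number needs the whole table of possibilities behind it, which is exactly the "higher visible volume of data" the post mentions.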

You have different mathematical models to suit various situations. Linear Programming is a deterministic model because here the data used for cost, profit, usage, availability etc. are taken as certain. In reality these may not be certain, but still these models are very useful in decision making because:

1. They provide an analytical base for decision making.
2. The sensitivity of performance variables to system parameters is low near the optimum.
3. Assuming a situation to be deterministic makes the mathematical model simple and easy to handle.

But if the uncertainty level is high, and assuming the situation to be deterministic would make the model invalid, then it is better to use probabilistic models. The popular queuing models are probabilistic models, as it is the uncertainty related to arrival and service that forms a queue.
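The way uncertainty forms a queue can be seen in a small single-server simulation. A sketch with assumed figures (a customer every 2 minutes on average, service taking 1 minute on average): with no variation nobody ever waits, but with the same averages and random variation, waiting appears.

```python
import random

def avg_wait(interarrivals, services):
    # single-server FIFO queue: average time customers wait before service
    t_arrive = 0.0   # arrival time of the current customer
    t_free = 0.0     # time at which the server next becomes free
    waits = []
    for gap, service in zip(interarrivals, services):
        t_arrive += gap
        start = max(t_arrive, t_free)   # service starts when both are ready
        waits.append(start - t_arrive)
        t_free = start + service
    return sum(waits) / len(waits)

# deterministic: a customer exactly every 2 minutes, service exactly 1 minute
det = avg_wait([2.0] * 1000, [1.0] * 1000)   # nobody ever waits

# probabilistic: the same averages, but exponentially distributed variation
rng = random.Random(1)
pro = avg_wait([rng.expovariate(0.5) for _ in range(1000)],
               [rng.expovariate(1.0) for _ in range(1000)])

print(det, pro)   # 0.0, and some strictly positive average wait
```

The server is busy only half the time in both runs; it is purely the randomness of arrivals and services that creates the queue.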

Thursday, October 19, 2006

An LP problem asked in the comment

This is the only problem where it looked to me that the student has also tried something from his side. The question is as follows:

min z=y1+y2
s.t
2y1+4y2>=4
y1+7y2>=7
y1,y2>=0
in simplex method.........solution........
max z*=-y1-y2
2y1+4y2-s1+a1=4
y1+7y2-s2+a2=7
y1,y2>=0
now.........z*+y1+y2=0
2y1+4y2-s1+a1=4
y1+7y2-s2+a2=7
y1,y2>=0
now here o.f line is positive so we apply M method(or due to artificial var.)then what is the next process i only want first iteration then after that i solve it.......

Before going further read the post titled "About Artificial Variables in Linear Programming (LP)" on this blog.

Artificial variables are needed to get the initial basic variables from the constraints of >= and = type. It will help to recall the properties of basic variables to understand the reason behind this.
To ensure that the artificial variables don't enter the basis after leaving, you have to associate a high penalty M in the o.f.

The revised o.f. is max z* = -y1-y2. In a maximization problem, a high penalty means a large negative coefficient for the variable, so the o.f. should be max z* = -y1-y2-Ma1-Ma2.
Now, since a1 and a2 are to be in the initial basis, their coefficients should be 0. To get this, substitute a1 = 4-2y1-4y2+s1 and a2 = 7-y1-7y2+s2 in this o.f. You can notice that these values of a1 and a2 have been taken from the two constraints where these variables were introduced.
So the o.f. is max z* = -y1-y2-4M+2My1+4My2-Ms1-7M+My1+7My2-Ms2
= (3M-1)y1 + (11M-1)y2 - Ms1 - Ms2 - 11M
So now you have z* - (3M-1)y1 - (11M-1)y2 + Ms1 + Ms2 = -11M

I hope you can solve it from here. You have to write the initial basic feasible solution table and do the simplex iterations. You can see that the most negative coefficient in the above o.f. line is that of y2.
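The iterations should end at the same answer that a brute-force check of this small problem gives. A sketch (not the simplex method itself, just a check you can compare against) that enumerates the corner points of the feasible region:

```python
from itertools import combinations

# the problem: min y1 + y2  s.t.  2y1 + 4y2 >= 4,  y1 + 7y2 >= 7,  y1, y2 >= 0
# each boundary line written as a*y1 + b*y2 = c
lines = [(2, 4, 4), (1, 7, 7), (1, 0, 0), (0, 1, 0)]

def feasible(y1, y2):
    return (2*y1 + 4*y2 >= 4 - 1e-9 and y1 + 7*y2 >= 7 - 1e-9
            and y1 >= -1e-9 and y2 >= -1e-9)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1*b2 - a2*b1
    if det == 0:
        continue                     # parallel lines: no corner point
    y1 = (c1*b2 - c2*b1) / det       # Cramer's rule for the intersection
    y2 = (a1*c2 - a2*c1) / det
    if feasible(y1, y2):
        z = y1 + y2
        if best is None or z < best[0]:
            best = (z, y1, y2)

print(best)   # optimum: z = 1.0 at (y1, y2) = (0.0, 1.0)
```

So the Big-M iterations, done correctly, should finish with y1 = 0, y2 = 1 and z = 1 (i.e. z* = -1 for the maximization form).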

Tuesday, October 17, 2006

Revised Simplex Method

As we have been discussing, the revised simplex method is nothing different from the simplex method except that its operations are based on matrix operations instead of the elementary row operations of simplex. This approach is especially suited for computerization because it provides better computational performance and reduces truncation/overflow errors. The link below will be useful. Go through it at your own pace and try to understand each step instead of memorizing them.

http://www.me.utexas.edu/~jensen/ORMM/methods/unit/linear/subunits/teach/teach_lp_revised.html
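To give a flavour of the matrix view: one computation every revised simplex iteration needs is the basic solution x_B = B^{-1}b for the current basis matrix B. A minimal 2x2 sketch (the numbers here are purely illustrative, and a real implementation would update B^{-1} rather than recompute it):

```python
def basic_solution(B, rhs):
    # x_B = B^{-1} * rhs for a 2x2 basis matrix B, by Cramer's rule
    (p, q), (r, s) = B
    det = p*s - q*r
    assert det != 0, "a basis matrix must be non-singular"
    x1 = (rhs[0]*s - rhs[1]*q) / det
    x2 = (p*rhs[1] - r*rhs[0]) / det
    return [x1, x2]

# an illustrative basis matrix and right-hand side
B = ((1, 2), (3, 4))
x_B = basic_solution(B, (5, 11))
print(x_B)   # [1.0, 2.0]
```

In the revised method this one matrix computation replaces the whole tableau of elementary row operations, which is why it computerizes so well.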

Sunday, October 15, 2006

Sensitivity Analysis

Sensitivity analysis is also known as post-optimality analysis. As the names indicate, it is analysis done after obtaining the optimal solution, and it is concerned with the sensitivity of the solution to possible changes in the problem.

Such analysis is very important considering that an LP represents a real-life problem, and there is always some possibility of change in a real-life situation. Suppose you formulate an LP for finding the optimum product mix, solve it, and the organization starts producing accordingly; and one fine day the supplier of an important raw material cuts down the supply, changing the availability of that raw material. Will you have to formulate and solve the LP again? Similarly, if the management decides to introduce another product, will you have to do all the computation once again?

Sensitivity analysis analyses the impact of such changes on the optimal solution and helps us get the new optimal solution, if needed, without doing the entire exercise afresh. This analysis covers all possible changes, such as:

-- Changes in the right hand side values in the LP,

-- Changes in the objective function coefficients,

-- Changes in the constraint coefficients,

-- Introduction or elimination of a constraint,

-- Introduction or removal of a product,

-- Changes in the valid range of decision variables, etc.

These changes can affect the optimality and feasibility of the solution. There is always a range of such changes within which the optimal solution remains unchanged; but if a change falls outside that range, the optimal solution changes. The changed optimum can be obtained easily from the previous optimal solution itself.
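This "range within which the optimum is unchanged" can be seen on a toy product-mix LP (every figure here is assumed for illustration): max c1*x1 + 2*x2 subject to x1 + x2 <= 4, x1 <= 3, x1, x2 >= 0. Since the optimum of an LP lies at a corner point, we can simply evaluate the corners while the objective coefficient c1 changes:

```python
# corner points of the feasible region of:  x1 + x2 <= 4,  x1 <= 3,  x1, x2 >= 0
corners = [(0, 0), (3, 0), (3, 1), (0, 4)]

def best_corner(c1, c2=2):
    # maximize c1*x1 + c2*x2 over the corner points
    return max(corners, key=lambda p: c1*p[0] + c2*p[1])

# within a range of c1 the optimal corner does not move at all ...
print([best_corner(c1) for c1 in (2.5, 3, 4, 5)])   # (3, 1) every time

# ... but once c1 falls outside that range, the optimum jumps to another corner
print(best_corner(1))   # (0, 4)
```

Sensitivity analysis works out such ranges algebraically from the optimal table, so that small changes in the data need no fresh solution at all.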

Wednesday, October 11, 2006

The basic Simplex iteration through an example

The following link describes a basic simplex iteration. Read it at your own pace and try to understand the concepts involved. Practice on at least one problem.

http://www2.isye.gatech.edu/~spyros/LP/node23.html#SECTION00050010000000000000

Post your comments freely so that I can understand the difficulty areas.

Monday, October 09, 2006

About Artificial Variables in Linear Programming (LP)

Another frequently asked question by students is related to the use of artificial variables while preparing the initial basic feasible solution table. The common flow of discussion forces the student to think in the same logical way as he has been thinking about slack and surplus variables, but the artificial variables cannot be considered in the same logical category as the previous two.

Just to recall, slack and surplus variables are used in LP to convert inequality constraints into equalities. If the constraint is of <= type, we add a slack variable to the left hand side expression to make it equal to the right hand side value. It has some meaning. If we write a constraint related to a raw material in a product mix problem, the left hand side expression gives the raw material consumption while the r.h.s. value is the availability of that raw material. The consumption has to be less than or equal to the availability; it cannot be more. And so the constraint is of <= type. The value of the slack variable is the difference between the availability and the consumption, so at any stage it gives the quantity of raw material unused.

Similarly, when the constraint is of >= type, we subtract a surplus variable from the l.h.s. expression to make it equal to the r.h.s. value. Why should such a constraint arise in a real-life situation? It may be that the production of a product has to be more than a given quantity because this much is needed by a very important customer. Or it may be that the intake of a combination of items by the human body has to be more than a prescribed quantity to keep the body healthy. You can see that here again the surplus variable has some meaning, and its value gives an idea of how much surplus one has produced or how much surplus one has eaten.

Coming to the artificial variables, they don't have such meaning. Here you suddenly have to rein in your search for meaning; don't try to find much of it. Artificial variables are not there to make much sense of. They are used to get an initial basic variable from the constraints while preparing the initial basic feasible solution table. Constraints of >= type and = type don't provide any basic variable, so an artificial variable is added arbitrarily to get one. A heavy penalty is associated with this misdeed so that these variables are pushed out of the basis. The values of these variables don't mean much, because they should go out of the basis and never come back. But if an artificial variable remains in the optimal basis, you have to conclude that there is no feasible solution to the given LP. This conclusion depends only on the presence of the artificial variable in the basis of the optimal table; it does not change with its value.
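The bookkeeping described above can be summarized in a few lines. A sketch (following the Big-M convention used in this post) of which variables each constraint type contributes when the initial table is built:

```python
def added_variables(constraint_type):
    # variables added to turn a constraint into an equality and to
    # supply an initial basic variable (Big-M convention)
    if constraint_type == "<=":
        return ["slack"]                   # the slack itself serves as the initial basic variable
    if constraint_type == ">=":
        return ["surplus", "artificial"]   # the surplus enters with -1, so it cannot be basic;
                                           # the artificial fills that role
    if constraint_type == "=":
        return ["artificial"]              # nothing else is added, so only an artificial can be basic
    raise ValueError("unknown constraint type: " + constraint_type)

for t in ("<=", ">=", "="):
    print(t, added_variables(t))
```

Only the slack and surplus variables carry a physical meaning (unused quantity, excess quantity); the artificial ones exist purely to start the table, which is why they must be priced out with M.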