Lecture Notes of STC On SCM
INTRODUCTORY TOPICS
CHAPTER I
Introduction to Supply Chain Management
-An Overview
1.1 Introduction
A Supply Chain encompasses all activities in fulfilling customer demands and requests. These
activities are associated with the flow and transformation of goods from the raw materials stage,
through to the end user, as well as the associated information and funds flows. There are four stages in
a supply chain: the supply network, the internal supply chain (the manufacturing plants), distribution
systems, and the end users. Moving up and down these stages are four flows: material flow, service
flow, information flow and funds flow.
Different entities of the supply chain may be owned by one individual/organization or by several
individuals/organizations. Most supply chains today belong to the latter category. In such supply
chains, the owner of each entity attempts to maximize its own benefit. Focus on individual links of the
supply chain invariably leads to an inefficient, high-cost product/service delivery system. In the
process, such a supply chain loses out to a customer-focused supply chain in which the individual
links orient their business processes and decisions to ensure least-cost delivery of products/services to
the ultimate customer.
range of subject areas. This has, however, helped supply chain management research to grow at a much
faster rate.
The eighth edition of the APICS dictionary (1995) defines the supply chain as “The
processes from initial raw materials to the ultimate consumption of finished product linking across
supplier-user companies.”
Houlihan (1985) is credited with coining the term Supply Chain and he mentioned some
characteristics unique to SCM. Jones and Riley (1985) defined supply chain as an integrative
approach to dealing with the planning and control of the materials flow from supplier to end-users.
Oliver and Webber (1992) state that a supply chain should be viewed as a single entity that is guided
by strategic decision-making. Stevens (1989) defines it as a connected series of activities from
supplier to customer. Villa (2001) has commented that in principle, all the activities from raw material
supplies to the final delivery of product to the customers can be included within the purview of the
supply chain.
From the large number of definitions available in the supply chain literature, it is observed that,
although they apparently differ from one another, they carry more or less the same meaning: a supply
chain is a system of suppliers, manufacturers, distributors, retailers and customers in which materials
flow downstream from supplier to customer, whereas information and financial flows are bi-
directional.
A further important observation is that every definition recognizes that a supply chain works beyond
the boundary of a single organization. As a result, a clear-cut demarcation of boundaries in a supply
chain is difficult, since where the boundary of one organization ends, that of the next member of the
supply chain begins.
Although the above definitions of the supply chain relate mostly to manufacturing organizations,
supply chains exist for service organizations as well. Some authors (e.g. Anderson et al., 2000a,
2000b) have focused their studies on the supply chains of service organizations.
purposes of improving the long-term performance of the individual companies and the supply chain as
a whole”
Lee and Billington (1992) mention that supply chain management focuses on the coordination of the
manufacturing, logistics, and materials management functions within an organization. They define
supply chain management as
“The integration activities taking place among a network of facilities that procure raw materials,
transform them into intermediate goods and then final products, and deliver products to customers
through a distribution system.”
Mentzer (2001) describes SCM as the systematic, strategic coordination of the traditional
business functions within a particular company and across businesses within the supply chain, for the
purpose of improving the long-term performance of the individual companies and the supply chain as
a whole.
New and Pyne (1995) have described supply chain management as the chain linking each element of
the manufacturing and supply process from raw materials through to the end user, encompassing
several organizational boundaries.
Further, La Londe (1997), elaborating on the activities of SCM, states that it is the process of managing
relationships, information, and materials flow across enterprise borders to deliver enhanced customer
service and economic value through synchronized management of the flow of physical goods, money
and associated information from sourcing to consumption.
Table 1.1 shows the important components of a supply chain management function. These
components can be grouped into two main business processes. They are: Materials Management and
Physical Distribution Management.
As a whole, supply chain management is a set of approaches used to efficiently integrate the activities
of suppliers, manufacturers and warehouses, so that merchandise is produced and distributed to
the right locations at the right time in order to minimize system-wide costs while satisfying customers’
service level requirements. Successful integration of the various processes depends on the accurate and
timely sharing of information by all members of the supply chain. Because of the range of activities
involved, there may be multiple stakeholders in a supply chain (e.g. suppliers, manufacturers,
distributors, retailers and customers).
From the above-mentioned views of various authors, it is clear that the scope of SCM is not only
functional but also organizational. The functional scope includes a broad range of traditional business
functions, whereas the organizational scope is concerned with relationship issues important to the
participating firms. A partnership relation between the participating firms is needed to reap the full
benefit of SCM. Firms must therefore take steps to break down both intra-firm and inter-firm barriers,
to reduce uncertainty and to improve control over distribution channels. Moving from intra-firm
functional integration to external integration is the demand of the situation, and supply chain
partnership can bridge the gap between buyer and supplier.
Table 1.1 Components of Total Supply Chain Management (Source: Monczka et al., 2002;
Min and Zhou, 2002)
In adopting a supply chain management philosophy, firms must establish management practices that
permit them to act consistently with that philosophy. Some of the activities necessary for
implementing the SCM philosophy are as follows:
• Integration of processes
• Mutually agreeable goals with a shared focus on customer service
• Mutual sharing of information
• Mutual sharing of channel risks and benefits
• Development of partnerships to maintain long-term relationships and cooperation.
(ii) Operations Elements
Once materials, components and other purchased products are delivered to the buying organization, a
number of internal operations elements become important in assembling and/or processing the items
into finished products, ensuring that the right amount of product is produced and that the finished
product meets specific quality, cost and customer service requirements. When actual demand does not
match forecasted demand, the firm is left with either too much inventory or not enough, and in both
situations the firm incurs cost. To minimize these costs, firms rely on demand management strategies.
The objective is to match demand with the available capacity, either by improving production
scheduling, curtailing demand, using a back-order system, or increasing capacity. Further, controlling
or managing inventory is one of the most important aspects of operations and is certainly valuable to
the firm. Firms typically use some form of material requirements planning (MRP) software for
managing their inventory. These systems can be linked throughout the organization and its supply
chain partners using enterprise resource planning (ERP) systems, providing real-time sales, inventory,
and production information to supply chain partners.
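To make the inventory planning logic concrete, the following is a minimal sketch of the netting step at the heart of a material requirements planning run. The item data, period structure and lot-for-lot ordering rule are illustrative assumptions, not details taken from these notes.

# Minimal MRP netting sketch (illustrative data, lot-for-lot ordering).
# Net requirement per period = max(0, gross requirement - (on-hand + scheduled receipts)).

def mrp_netting(gross_requirements, scheduled_receipts, on_hand):
    """Return planned order quantities (lot-for-lot) for each period."""
    planned_orders = []
    inventory = on_hand
    for gross, receipt in zip(gross_requirements, scheduled_receipts):
        available = inventory + receipt
        net = max(0, gross - available)
        planned_orders.append(net)            # order exactly the shortfall
        inventory = available + net - gross   # carry leftover stock forward
    return planned_orders

# Example: four weekly periods for one item, 40 units on hand.
print(mrp_netting([30, 50, 20, 60], [0, 10, 0, 0], on_hand=40))   # [0, 30, 20, 60]

An ERP system would feed this kind of calculation with live sales, inventory and production data shared across the supply chain partners.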
(iii) Distribution Elements
Finished goods are delivered to customers through a number of different modes of transportation.
Delivering products to customers at the right time, with the right quality and in the right volume
requires a high level of planning and cooperation between the firm, its customers, and the various
distribution elements. Transportation management decisions involve a trade-off between cost, delivery
timing and customer service. In order to provide the desired level of customer service, firms must
identify customer requirements and then provide the right combination of transportation, storage,
packaging, and information services to satisfy those requirements. Further, designing and building a
distribution network is one method of ensuring successful product delivery. Again, there is a trade-off
between the cost of the distribution system’s design and customer service.
(iv) Integration Elements
Activities in supply chain are said to be coordinated when members of the supply chain work together
while making delivery, inventory, production, and purchasing decisions that impact the profit of the
supply chain. Successful supply chain integration occurs when the participants realize that supply
chain management must become part of all of the firms’ strategic planning processes, in which
objectives and policies are jointly determined on the basis of final customers’ needs and what the
supply chain as a whole does well. Finally, firms act together to maximize total supply chain profits
by determining optimal purchase quantities, product availabilities, service levels, lead times and
production quantities.
better customer service, better capacity utilization, technological innovation, and to new products. A
number of authors (e.g. Ellram, 1991) are quite optimistic about the success of supply chain
partnership. Ellram (1991) has mentioned that the central theme of these partnership relationships is
the establishment of, and commitment to, an interactive exchange where both parties benefit from
sharing of risks and resources.
Further, for supply chain management to be effective, channel risks and rewards must be shared
mutually among partners (Ellram and Cooper, 1990). This assures competitive advantage in the long
run. In a complex relationship in which performance is difficult to measure, profit or income sharing
based on an incentive scheme is an important cooperation mechanism. Again, how to share the extra
benefit arising from cooperation is an important issue. A win-win approach to sharing benefits between
the parties is probably essential for cooperation. Sharing the benefits often requires a negotiation
process in which both parties are free to exchange views.
According to Rubin and Carter (1990), negotiation is the process of reviewing, planning and
analyzing used by two parties to reach an acceptable agreement or compromise. It is a process in which
both parties adjust their expectations during the resolution of conflict, since neither party has absolute
power over the other. In a partnership relationship, it is important that each party be rational.
Negotiation is integral to building successful long-term business relationships (Sharland, 2001).
In the negotiation process, what is said and how it is said are also important, since they may have a
significant impact on the outcome of the process. When one party presents matters inappropriately to
the other party, the latter may perceive that a long-term relationship with the former is not feasible.
Kelle et al. (2003) have recently studied buyer-supplier partnership in a JIT environment and developed
models that can be used as quantitative tools for contract negotiation between the two parties, through
either price correction or a price premium.
Another important facet of cooperation is information sharing, an essential enabler to minimize
inventory in the supply chain. Information systems must be able to track and communicate production
and customer requirements at different levels in the supply chain (Cooper, Lambert and Pagh, 1997).
Despite the perceived benefits of cooperation, there is hardly ever perfect cooperation, as the
dominant player tends to be opportunistic. Munson et al. (1999) have elaborated on the misuse of
power by a channel leader. Sometimes a powerful manufacturer in the channel controls dependent
suppliers, subcontractors and retailers.
Another new dimension of the supplier-buyer partnership is the vendor-managed inventory concept. In
a vendor-managed inventory (VMI) strategy, the supplier manages inventory at the customer’s premises
and assumes responsibility for replenishing it to meet the needs of the buyer, who withdraws items as
per his/her requirements. In this strategy, the supplier decides on inventory replenishment without
waiting for the customer to order the product. Several recent articles, mentioned below, have cited the
benefits of vendor-managed inventory, also called supplier-owned inventory (SOI). VMI helps in
supply chain coordination, leading to improvement in the performance of a supply chain.
Dong and Xu (2002) state that VMI is an effective strategy for realizing most of the benefits of a
fully coordinated supply chain. Piplani et al. (2003) use the term supplier-owned inventory (SOI)
instead of VMI. In their study of the effect of an SOI strategy on supply chain cost, they concluded
that the total cost under SOI is never more than that of a non-coordinated supply chain. Other authors,
such as Cetinkaya and Lee (2000), Waller et al. (1999) and Hung et al. (1995), have also studied the
VMI strategy and its benefits. From the VMI literature, it is seen that by implementing a VMI strategy
a buyer reduces total inventory-related cost. However, whether VMI also reduces the supplier’s cost is
still an open question.
Supply chain management is an incredibly complex undertaking, involving cultural change among most
or all of the participants, investment in and training on software and communication systems, and
realignment of the competitive strategies employed by the participating firms. In a competitive business
environment, products, technology and customers change, and consequently the priorities for the
supply chain must also change, requiring supply chains to be ever more flexible in responding quickly to
these changes. Future issues that supply chains need to address include increasing supply chain
responsiveness, creating environmentally friendly supply chains, and reducing total supply chain cost.
(i) Supply chain expansion
The supply chain dynamic today is changing, and companies are now working with firms located all
over the globe to coordinate purchasing, manufacturing, shipping and distribution activities. While
this global expansion of the supply chain is occurring, firms are also trying to expand their control of
the supply chain to include second- and third-tier suppliers and customers. Thus supply chain
expansion is occurring on two fronts: increasing the breadth of the supply chain to include foreign
manufacturing, office and retail sites, along with foreign suppliers and customers; and increasing the
depth of the supply chain to include second- and third-tier suppliers and customers. As firms become
more comfortable and experienced with their supply chain relationships with immediate suppliers and
customers, there is a tendency to expand the depth of the supply chain by creating relationships with
second- and third-tier suppliers and customers. This expansion is just now taking place in most
industries and will continue to increase as the practice of supply chain management matures.
(ii) Increasing supply chain responsiveness
Agile manufacturing, JIT, mass customization, efficient consumer response and quick response are all
terms referring to concepts intended to make the firm more flexible and responsive to customer
requirements and changes. In today’s intensely competitive market environment, firms and their
supply chains are looking for ways to become more responsive to their customers. To achieve greater
levels of customer responsiveness, supply chains must identify the end customers’ needs, position the
supply chain’s products and services to compete successfully, and then consider the impact of these
requirements on the supply chain participants and the intermediate products and services they provide.
Once these issues have been adequately addressed among the firms in the supply chain, additional
improvement in responsiveness comes from designing more effective and faster product and service
delivery systems as products pass through the supply chain, and from continuously monitoring changes
occurring in the marketplace and using this information to reposition the supply chain to stay
competitive.
To improve customer responsiveness, firms need to reevaluate their supply chain relationships, utilize
business process reengineering, reposition warehouses, design new products and services, reduce new
product design cycles, standardize processes and products, empower and train workers in multiple
skills, build customer feedback into daily operations, and, finally, link together all of the supply chain
participants’ information and communication systems. Today, web-based systems are proving to be
ideal for connecting supply chain members efficiently. One such tool is Formation Systems’ Optiva 4.0,
a web-based product life cycle management platform that provides business intelligence and
collaboration from product concept through introduction to improvement. It can be integrated within a
supply chain to help products get to market faster.
(iii) Greening of supply chains
Producing, packaging, moving, storing, repackaging, and delivering products to their final
destinations can pose a significant threat to the environment in terms of discarded packaging
materials, carbon monoxide emissions, noise, traffic congestion, and other forms of industrial
pollution. As the practice of supply chain management becomes more widespread, firms and their
supply chain partners will have to work harder to reduce these environmental problems.
***********
References
[1] Bowersox, D.J. and Closs, D.J., 1996, Logistical Management: The Integrated Supply Chain
Process, New York, NY: McGraw-Hill.
[2] Christopher, M., 1992, Logistics and Supply Chain Management: Strategies for Reducing Costs
and Improving Services, FT Pitman Publishing.
[3] Croom, S., Romano, P. and Giannakis, M., 2000, Supply chain management: an analytical
framework for critical literature review, European Journal of Purchasing and Supply
Management, Vol. 6, No. 1, pp. 67-83.
[4] Cetinkaya, S. and Lee, C.Y., 2000, Stock replenishment and shipment scheduling for vendor
managed inventory systems, Management Science, Vol. 46, No. 2, pp. 217-232.
[5] Dong, Y. and Xu, K., 2002, A supply chain model of vendor managed inventory, Transportation
Research Part E, Vol. 38, No. 2, pp. 75-95.
[6] Ertogral, K. and Wu, D.S., 2001, A bargaining game of supply chain contracting, available at
www.lehigh.edu/sdw1/ertogral3.pdf.
[7] Houlihan, J.B., 1985, International supply chain management, International Journal of Physical
Distribution and Materials Management, Vol. 15, No. 1, pp. 22-38.
[8] Kelle, P., Khateeb, F. and Miller, A.P., 2003, Partnership and negotiation support by joint optimal
ordering/setup policies for JIT, International Journal of Production Economics, Vol. 81-82, pp.
433-443.
[9] Lee, H.L. and Billington, C., 1993, Material management in decentralized supply chains,
Operations Research, Vol. 41, No. 5, pp. 835-847.
[10] Maloni, M.J. and Benton, W.C., 1997, Supply chain partnerships: Opportunities for operations
research, European Journal of Operational Research, Vol. 101, No. 3, pp. 419-429.
[11] Mentzer, J.T., 2001, Supply Chain Management, Sage Publications.
[12] Min, H. and Zhou, G., 2002, Supply chain modelling: past, present and future, Computers
and Industrial Engineering, Vol. 43, No. 1-2, pp. 231-249.
[13] Monczka, R., Trent, R. and Handfield, R., 2002, Purchasing and Supply Chain Management,
Second Edition, Singapore: Thomson Asia Pte Ltd.
[14] Narasimhan, R. and Carter, J.R., 1998, Linking business unit and material sourcing strategies,
Journal of Business Logistics, Vol. 19, No. 2, pp. 155-171.
[15] Piplani, R. and Viswanathan, S., 2003, A model for evaluating supplier-owned inventory
strategy, International Journal of Production Economics, Vol. 81-82, pp. 565-571.
[16] Thomas, D.J. and Griffin, P.J., 1996, Coordinated supply chain management, European Journal
of Operational Research, Vol. 94, No. 1, pp. 1-15.
[17] Tsay, A., Nahmias, S. and Agarwal, N., 1999, Modeling supply chain contracts: A review, in S.
Tayur, M. Magazine and R. Ganeshan (Eds.), Quantitative Models for Supply Chain Management,
Kluwer Academic Publishers, pp. 301-336.
[18] Waller, M., Johnson, M.E. and Davis, T., 1999, Vendor-managed inventory in the retail supply
chain, Journal of Business Logistics, Vol. 20, No. 1, pp. 183-203.
CHAPTER II
Materials Management
2.1 Introduction
Put in the simplest terms, materials management is about moving materials within an organization.
What do “materials” mean? Materials can basically be defined as those objects or things that are to be
moved in order to produce goods. Material is one of the 5 M’s that a manager has at his command, the
others being Men, Machines, Methods and Money. Materials could be in the form of raw materials,
paperwork, messages or information, so materials can be both tangible and intangible. You see the
newspaper boy delivering the newspaper to your doorstep every day, or the milkman delivering the
milk packets to you; these are tangible materials. There is also some material moved when you watch
a movie on your television or when you receive a phone call; these are the intangible materials. So
materials management is an important function of every business: the better the materials management
in a company, the better the health of that company.
Costs associated with materials:
1. Cost of materials: the basic cost of materials that has to be paid to suppliers.
2. Purchasing cost: the cost incurred in purchasing, e.g. costs of staff, tendering, stationery, postage,
processing supplies, receiving and inspection.
3. Inventory carrying costs: the cost incurred on storage, including buildings, staff costs, interest on
capital locked up/borrowed, and obsolescence.
4. Packaging cost: costs incurred on paper, plastic, metal foils, and metal and wood containers, etc.
5. Transportation cost: costs incurred on moving the goods to different desired locations from time to
time.
6. Material handling cost: costs incurred on handling equipment like cranes and conveyors.
7. Wastage during production: costs incurred on holding scrap and obsolete stock and their disposal.
it is so, what does the materials management function do? The answer is that the role of materials
management starts from the time materials enter the organization's warehouse from the suppliers and
continues until the final product is obtained. The interrelated activities carried out to achieve this are
sequenced one after another in a systematic manner. Management of this flow of materials is called
materials management. This flow of materials is met through the set of activities presented in
Table 1.2 below.
Table 1.2: Set of activities for flow of materials.
Source: Dutta A.K (1998), Materials Management: Procedures, Text and Cases
The table above highlights the importance of integrated systems and the dependence on functional
models for decision-making. Organizations have now become multidimensional in nature. The total
materials management concept evolved to address this dimension and avoid conflicting objectives.
Total materials management helps in establishing accountability so that the response to a problem is
quick and appropriate. The materials functions are accomplished in a more coordinated way with the
help of this integrated approach. When this happens there is increased communication about material
needs, and hence lower costs, better inventory turnover, reduced stock-outs and other significant
benefits. Data processing systems are designed on the basis of the integrated materials function.
[Figure: Value-added materials flow within the industrial enterprise: Suppliers → Purchasing → Manufacturing → Physical distribution → Customers]
and physical distribution are collated.
• Just-in-time (JIT) scheduling, or Kanban, aims at bringing inventories down towards zero. To reduce
inventory, methods such as reducing lot sizes, load leveling, quality control and preventive
maintenance can be used.
• Flexibility should be achieved by using “pull” systems and computer-based planning and control
systems. Achieving flexibility is important to manufacturing, as it ensures that manufacturing
activity is undertaken only when specifically called for.
From the point of view of the purchasing interface,
• Supply management identifies the manufacturing trends and initiates effective purchasing for
long-term competitive advantage.
• Schedule requirements expedite purchasing. They must be specified so that suppliers provide
exact lead-time information and purchasers provide exact requirement information to the supply
network. This can be achieved by employing a suitable integrated data-processing system.
• Responsiveness of the supply network identifies frequent changes in customer requirements and
product life cycles.
Table 1.4: Various Interfaces of Materials Management
1. Market forecasting: forecast demands to determine production on the basis of existing/expanded
facilities, equipment, processes, manpower and materials.
2. Production (internal interface): materials flow begins before the production cycle, runs throughout
and continues even after production, ensuring an uninterrupted flow of materials to feed the
production process.
3. Finance (internal interface): the materials budget is affected by non-availability of finance. A major
chunk of
Source: Based on Dutta A.K (1998), Materials Management: Procedures, Text and Cases
The materials flow starts from the vendor/supplier from whom material is purchased. Once the
material is purchased, it is received and inspected; after inspection, stores accept it. Production/
manufacturing and its subsystems call for the materials as and when required, and logistics takes
control after that. Later come warehousing and the customer.
Information flow embodies much more than the materials flow, be it production planning and control,
sales and marketing, inventory control, or purchasing and procurement activities. The effectiveness
of the materials flow is thus dependent on decision information. If an organization can control these
two flows easily and effectively, then it will definitely deliver good products at a low cost and also
be able to offer good service.
*************
CHAPTER III
Sourcing Decisions
3.4 Roles of Supply Base
The supply base or supplier base refers to the list of suppliers that a firm uses to acquire its materials,
services, supplies, and equipment. Firms engaging in supply chain management emphasize long-term
strategic supplier alliances, reducing the variety of purchased items and consolidating volume with
one or a few suppliers, resulting in a smaller supply base. An effective supply base that complements
and contributes to a firm’s competitive advantage is critical to its success. Savvy purchasing managers
develop a sound supply base to support the firm’s overall business and supply chain strategies, based
on an expanded role for the supplier. It is thus vital to understand the strategic role of suppliers.
Besides supplying the obvious purchased items, preferred or top-performing suppliers also supply
(i) Product and process technology and knowledge to support the buyer’s operations, particularly
in product design – termed early supplier involvement;
(ii) Information on the latest trends in materials, processes, or designs;
(iii) Information on the supply market, such as shortages, price increases, or political situations
that may threaten supplies of vital materials;
(iv) Capacity for meeting unexpected demand; and
(v) Cost efficiency due to economies of scale, since the supplier is likely to produce the same
item for multiple buyers.
When developing and managing the supply chain, high-performance suppliers are found or developed
to provide these services and play a very important role in the success of the supply chain.
to assess. Total cost analysis demonstrates how costs other than the unit price can affect the
purchase decision.
(v) Reliability: Besides reliable quality levels, reliability refers to other supplier
characteristics. For example, is the supplier financially stable? Otherwise, it may not be
able to invest in research and development or stay in business. Is the supplier’s delivery
lead-time reliable? Otherwise, production may have to be interrupted due to shortage of
material.
(vi) Order system and cycle time: How easy to use is a supplier’s ordering system, and what
is the normal order cycle time? Placing orders with a supplier should be easy, quick, and
effective. Delivery lead-time should be short, so that small lot sizes can be ordered on a
more frequent basis to reduce inventory-holding costs.
(vii) Capacity: The firm should also consider whether the supplier has the capacity to fill orders
to meet requirements and the ability to fill large orders if needed.
(viii) Communication capability: Suppliers should also possess a communication capability
that facilitates communication between the parties.
(ix) Location: Geographical location is another important factor in supplier selection, as it
impacts delivery lead-time, transportation, and logistical costs. Some organizations
require their suppliers to be located within a certain distance from their facilities.
(x) Service: Suppliers must be able to back up their products by providing good services when
needed. For example, when product information or warranty service is needed, suppliers
must respond on a timely basis.
There are numerous other factors, some strategic while others tactical, that a firm must consider when
choosing suppliers. The days of using competitive bidding to identify the cheapest supplier for
strategic items are long gone. The ability to select competent strategic suppliers directly impacts a
firm’s competitive success. Strategic suppliers are trusted partners and become an integral part of the
firm’s design and production efforts.
beyond tactical issues and toward a more strategic path to corporate success. When partners have
equal decision-making control, the partnership has a higher chance of success.
(iii) Personal Relationships
Interpersonal relationships in buyer-supplier partnerships are important since it is people who
communicate and make things happen.
(iv) Mutual Benefits and Needs
Partnering should result in a win-win situation, which can only be achieved if both companies have
compatible needs. Mutual needs create not only an environment conducive for collaboration but
opportunities for increased innovation. When both parties share in the benefits of the partnership, the
relationship will be productive and long lasting. An alliance is much like a marriage, and if only one
party is happy, then the marriage is not likely to last.
(v) Commitment and Top Management Support
First, it takes a lot of time and hard work to find the right partner. Having done so, both parties must
dedicate their time, best people, and resources to make the partnership succeed. Commitment must
start at the highest management level. Partnerships tend to be successful when top executives are
actively supporting the partnership. The level of cooperation and involvement shown by the
organization’s top leaders is likely to set the tone for joint problem solving further down the line.
Successful partners are committed to continuously looking for opportunities to grow their businesses
together. Management must create the right kind of internal attitude needed for alliances to flourish.
Since partnerships are likely to encounter bumps along the way, it is critical that management adopt a
collaborative approach to conflict resolution instead of assigning blame.
(vi) Change Management
With change comes stress, which can lead to a loss of focus. As such, companies must avoid
distractions from their core businesses as a result of the changes brought about by the partnerships.
(vii) Information Sharing and Lines of Communication
Both formal and informal lines of communication should be set up to facilitate free flows of
information. When there is a high degree of trust, information systems can be customized to serve
each other more effectively. Confidentiality of sensitive financial, product, and process information
must be maintained. Any conflict that occurs can be resolved if the channels of communication are
open. For instance, early communication to suppliers of specification changes and new product
introductions is a contributing factor to the success of purchasing partnerships. Buyers and sellers
should meet regularly to discuss any change of plans, evaluate results, and address issues critical to
the success of the partnerships. Since there is free exchange of information, nondisclosure agreements
are often used to protect proprietary information and other sensitive data from leaking out. It is not the
quantity but rather the quality and accuracy of the information exchanged that indicates the success of
information sharing.
(viii) Capabilities
Organizations that have a long history of using cross-functional teams to solve problems and who
have shown that their employees can collaborate successfully internally have the skills to do so
externally. We all know that things do not always turn out as planned. Thus, companies must be
willing to accept responsibility and have the capability to correct errors effectively when they are
detected. Key suppliers must have the right technology and capabilities to meet cost, quality, and
delivery requirements. In addition, suppliers must have the flexibility to respond quickly to changing
customer requirements. Before entering into any partnership, an organization must conduct a thorough
investigation of the supplier’s capabilities and core competencies. Organizations prefer working with
suppliers who have the technology and technical expertise to assist in the development of new
products or services that would lead to a competitive advantage in the marketplace.
3.7 Supplier Performance Evaluation
Performance Metrics
Measures related to quality, cost, delivery, and flexibility have traditionally been used to evaluate how
well suppliers are doing. Information provided by supplier performance evaluation is used to improve
efficiency in the entire supply chain. Thus, the goal of any good performance evaluation system is to
provide metrics that are understandable, easy to measure, and focused on real value-added results for
both the buyer and supplier.
By evaluating supplier performance, organizations hope to identify suppliers with exceptional
performance or developmental needs, improve supplier communication, reduce risk, and manage the
partnership based on an analysis of reported data. After all, it is not unusual that the best customers
want to work with the best suppliers. Additionally, the best suppliers are commonly rewarded and
recognized for their achievements.
In a survey of buyers carried out by Purchasing Magazine, although price/cost was rated the most
important factor when selecting suppliers, other criteria such as technical expertise, lead times,
environmental awareness, and market knowledge were also rated highly by the respondents. An
earlier study on the electronics industry by Dr. Pearson and Dr. Ellram showed that quality was the
most important criterion for selection, followed by cost, current technology, and design capabilities. It
would appear that in the electronics industry, which pioneered the six-sigma revolution, quality is the
prime selection criterion due to its strategic importance. Thus it is seen that a multi-criteria approach is
needed to measure performance. Examples of broad performance metrics are shown in Table 3.1.
Over the past several years, total cost of ownership (TCO), a broad-based performance metric, has
been widely discussed in the supply chain literature. TCO is defined as “all costs associated with the
acquisition, use, and maintenance of a good or service” and is comprised of pre-transaction,
transaction, and post-transaction costs. Explanations of these three major cost categories follow:
• Pre-transaction costs: These costs are incurred prior to the order and receipt of the purchased
goods. Examples are the costs of certifying and training suppliers.
• Transaction costs: These costs include the cost of the goods/services and cost associated with
placing and receiving the order. Examples are purchase price, preparation of orders, and
delivery costs.
• Post-transaction costs: These costs are incurred after the goods are in the possession of the
company, agents, or customers. Examples are field failures, company’s goodwill/reputation,
maintenance costs, and warranty costs.
TCO provides a proactive approach for understanding costs and supplier performance leading to
reduced costs. However, the challenge is to effectively identify the key cost drivers needed to
determine the total cost of ownership. A recent exploratory study of total cost of ownership models
indicates that leading-edge companies actually use such models.
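As a simple illustration of the TCO idea described above, the sketch below totals hypothetical pre-transaction, transaction and post-transaction costs for one purchased item. Every figure is an assumption chosen for the example, not data from the text.

# Total cost of ownership (TCO) sketch with hypothetical cost figures.
pre_transaction = {           # incurred before the order is placed and received
    "supplier_certification": 4000,
    "supplier_training": 2500,
}
transaction = {               # purchase price plus ordering and receiving costs
    "purchase_price": 120000,
    "order_preparation": 1500,
    "delivery": 3800,
}
post_transaction = {          # incurred after the goods are in the buyer's possession
    "field_failures": 2200,
    "warranty_claims": 1700,
    "maintenance": 5600,
}

tco = (sum(pre_transaction.values()) + sum(transaction.values())
       + sum(post_transaction.values()))
print(f"Total cost of ownership: {tco}")                                   # 141300
print(f"Share beyond purchase price: {1 - transaction['purchase_price'] / tco:.1%}")

Even with these made-up numbers, roughly 15 percent of the total cost lies outside the purchase price, which is the point of looking beyond unit price when evaluating suppliers.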
(ii) Quality
• Zero defects
• Statistical process controls
• Continuous process improvement
• Fit for use
• Corrective action program
• Documented quality program such as ISO 9000
• Warranty
• Actual quality compared to : historical quality, specification quality, target quality
• Quality improvement compared to : historical quality, quality-improvement goal
• Extent of cooperation leading to improved quality
(iii) Delivery
• Fast
• Reliable/on time
• Defect-free deliveries
• Actual delivery compared to : promised delivery, window (i.e., two days early to zero
days late)
• Extent of cooperation leading to improved delivery
(iv) Responsiveness and Flexibility
• Responsiveness to customers
• Accuracy of record keeping
• Ability to work effectively with teams
• Responsiveness to changing situations
• Participation/success of supplier certification program
• Short-cycle changes in demand/flexible capacity
• Changes in delivery schedules
• Participation in new product development
• Solving problems
• Willingness of supplier to seek inputs regarding product/service changes
• Advance notification given by supplier as a result of product/service changes
(v) Environment
• Environmentally responsible
• Environmental management system such as ISO 14000
• Extent of cooperation leading to improved environmental issues
(vi) Technology
• Proactive improvement using proven manufacturing/service technology
• Superior product/service design
• Extent of cooperation leading to improved technology
(vii) Business Metrics
• Reputation of supplier/leadership in the field
• Long-term relationship
• Quality of information sharing
• Financial strength such as Dun & Bradstreet’s credit rating
• Total Cash flow
• Rate of return on investment
• Extent of cooperation leading to improved business processes and performance
(viii) Total Cost of Ownership
• Purchased products shipped cost-effectively
• Cost of special handling
• Additional supplier costs as the result of the buyer’s scheduling and shipment needs
• Cost of defects, rework, and problem solving associated with purchases
3.8 Supplier Evaluation and Certification
Only the best suppliers are targeted as partners. Companies want to develop partnerships with the best
suppliers to leverage suppliers’ expertise and technologies to create a competitive advantage.
Learning more about how an organization’s key suppliers are performing can lead to greater visibility,
which can provide opportunities for further collaborative involvement in value-added activities. A
supplier evaluation and certification process must be in place so that organizations can identify their
best and most reliable suppliers. In addition, sourcing decisions are made based on facts and not
merely on perception of a supplier’s capabilities. Providing frequent feedback on supplier
performance can help organizations avoid major surprises and maintain good relationships. For
example, Honeywell has a Web-based monthly reporting system for evaluating supplier performance.
Suppliers can access their ratings on-line and see how they are performing with respect to the other
suppliers. While it is important to evaluate the suppliers, it is equally important that suppliers be
allowed to provide constructive feedback to the customer to enhance long-term partnerships.
One of the goals of evaluating suppliers is to determine if the supplier is performing according to the
buyer’s requirements. An extension of supplier evaluation is supplier certification, defined by the
Institute for Supply Management as “an organization’s process for evaluating the quality systems of
key suppliers in an effort to eliminate incoming inspections.” The certification process implies a
willingness on the part of customers and suppliers to share goals, commitments, and risks to improve
their relationship. A supplier certification program also indicates long-term mutual commitment. For
example, a certification program might provide incentives for suppliers to deliver parts directly to the
point of use in the buyer firm, thus reducing costs associated with incoming inspection and storage of
inventory.
Implementing an effective supplier certification program is critical to reducing the supplier base, building long-
term relationships, reducing time spent on incoming inspections, improving delivery and
responsiveness, recognizing excellence, developing a commitment to continuous improvement, and
improving overall performance. Supplier certification allows organizations to identify the suppliers
who are most committed to creating and maintaining a partnership and who have the best capabilities.
Table 3.2 presents criteria generally found in many certification programs.
(iii) Assign weights to each of the dimensions of performance based on their relative importance
to the company’s objectives. The weights for all dimensions must sum to 1.
(iv) Evaluate each of the performance measures on a rating between zero (fails to meet any
intended purpose or performance) and 100 (exceptional in meeting intended purpose or
performance).
(v) Multiply the dimension rating by the importance weight and sum to get an overall score.
(vi) Classify vendors based on their overall score:
• Unacceptable (less than 50) : Supplier is dropped from further business.
• Conditional (between 50 and 70) : Supplier needs development work to improve
performance but may be dropped if performance continues to lag.
• Certified (between 70 and 90) : Supplier meets intended purpose or performance.
• Preferred (greater than 90): Supplier will be considered for involvement in new
product development and opportunities for more business.
(vii) Audit and perform ongoing certification review.
An example of the preceding evaluation and certification process is shown in Table 3.3.
Table 3.3
Supplier Scorecard Used for the XYZ Company

Performance Measure   Rating   Weight   Final Value
Technology              80      0.10       8.00
Quality                 90      0.25      22.50
Responsiveness          95      0.15      14.25
Delivery                90      0.15      13.50
Cost                    80      0.15      12.00
Environmental           90      0.05       4.50
Business                90      0.15      13.50
Total Score                     1.00      88.25

Note: Based on the total score of 88.25, the XYZ Company is considered a certified
supplier.
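The scorecard in Table 3.3 can be reproduced with a few lines of code. The sketch below follows the weighting and classification steps listed above; the dictionary layout and function names are illustrative choices, not part of the original material.

# Weighted supplier scorecard using the ratings and weights of Table 3.3.
scorecard = {
    # measure: (rating 0-100, importance weight; weights sum to 1.0)
    "Technology":     (80, 0.10),
    "Quality":        (90, 0.25),
    "Responsiveness": (95, 0.15),
    "Delivery":       (90, 0.15),
    "Cost":           (80, 0.15),
    "Environmental":  (90, 0.05),
    "Business":       (90, 0.15),
}

def classify(score):
    """Map an overall score to the certification classes defined above."""
    if score < 50:
        return "Unacceptable"
    elif score < 70:
        return "Conditional"
    elif score < 90:
        return "Certified"
    return "Preferred"

total = sum(rating * weight for rating, weight in scorecard.values())
print(f"Overall score: {total:.2f} -> {classify(total)}")   # Overall score: 88.25 -> Certified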
• What products/services do we purchase?
• What parts can be reused in new designs?
In general, SRM software varies by vendors in terms of capabilities offered. AMR Research has
identified five key tenets of an SRM system:
• Automation of transactional processes between an organization and its suppliers.
• Integration that provides a view of the supply chain that spans multiple departments,
processes, and software applications for internal users and external partners.
• Visibility of information and process flows in and between organizations. Views are
customized by role and aggregated via a single portal.
• Collaboration through information sharing and suppliers’ ability to input information directly
into an organization’s supply chain information system.
• Optimization of processes and decision making through enhanced analytical tools such as
data warehouse and Online Analytical Processing (OLAP) tools with the migration toward
more dynamic optimization tools in the future.
The key benefits of SRM include the following: (i) Better internal and external communications
providing visibility into various cost components; (ii) Automated creation, negotiation, execution and
compliance leading to more strategic, long-term relationships; (iii) Common and consistent
measurements that help focus resources, identify performance glitches, and develop strategies for
supply chain improvements; and (iv) The elimination of time-intensive, costly processes of
performing paper-based business transactions.
***************
CHAPTER IV
Bullwhip Effect and Supply Chain Management
4.1 Introduction
In recent years, many suppliers and retailers have observed that while customer demand for specific
products does not vary much, inventory and back-order levels fluctuate considerably across their
supply chain. For instance, in examining the demand for Pampers disposable diapers, executives at
Procter & Gamble noticed an interesting phenomenon. As expected, retail sales of the product were
fairly uniform; there was no particular day or month in which demand was significantly higher or
lower than any other. However, the executives noticed that distributors’ orders placed to the factory
fluctuated much more than retail sales. In addition, Procter & Gamble’s orders to its suppliers
fluctuated even more. This increase in variability as we travel up the supply chain is referred to as
the bullwhip effect.
To understand the impact of the increase in variability on the supply chain, consider the second
stage in our example, the wholesaler. The wholesaler receives orders from the retailer and places
orders to its supplier, the distributor. To determine these order quantities, the wholesaler must
forecast the retailer’s demand. If the wholesaler does not have access to the customer’s demand data,
it must use orders placed by the retailer to perform the forecasting. Since variability in orders placed
by the retailer is significantly higher than variability in customer demand, the wholesaler is forced to
carry more safety stock than the retailer or else to maintain higher capacity than the retailer in order
to meet the same service level as the retailer.
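One way to see why the wholesaler needs more safety stock is the widely used approximation that safety stock is proportional to the standard deviation of the demand a stage observes (safety stock ≈ z × standard deviation × square root of lead time). The sketch below compares the stock implied by stable customer demand with that implied by the more variable retailer order stream; the service level, lead time and standard deviations are illustrative assumptions, not figures from the text.

import math

# Safety stock approximation: SS = z * sigma * sqrt(lead_time_periods).
# z is the service-level factor (about 1.65 for a 95% cycle service level).
def safety_stock(z, sigma_per_period, lead_time_periods):
    return z * sigma_per_period * math.sqrt(lead_time_periods)

z, lead_time = 1.65, 2          # illustrative: 95% service level, 2-week lead time
sigma_customer_demand = 20      # std. dev. of weekly customer demand seen by the retailer
sigma_retailer_orders = 55      # std. dev. of weekly orders seen by the wholesaler (amplified)

print(f"Retailer safety stock:   {safety_stock(z, sigma_customer_demand, lead_time):.0f}")
print(f"Wholesaler safety stock: {safety_stock(z, sigma_retailer_orders, lead_time):.0f}")

Because the wholesaler plans against the amplified order stream rather than the smoother end-customer demand, its safety stock for the same service level is several times larger.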
This analysis can be carried over to the distributor as well as the factory, resulting in even higher
inventory levels and therefore higher costs at these facilities. Consider, for example, a simple widget
supply chain. A single factory, Widget Makers, Inc., supplies a single retailer, the Widget Store.
Average annual widget demand at the Widget Store is 5200 units, and shipments are made from
Widget Makers to the store each week.
[Figure: A serial supply chain. External demand arrives at the Retailer, which orders from the Wholesaler, which orders from the Factory; each link has an order lead-time and a delivery lead-time, and the Factory has a production lead-time.]
If the variability in orders placed by the Widget Store is low, such that the shipment every week is
about 100 units, Widget Makers’ production capacity and weekly shipping capacity need be only
about 100 units. If weekly variability is very high, such that during certain weeks Widget Makers
must make and ship 400 units and in some weeks no units at all, it is easy to see that production and
shipping capacity must be much higher and that in some weeks this capacity will be idle. Alternatively,
Widget Makers could build up inventory during weeks with low demand and supply these items
during weeks with high demand, thus increasing inventory-holding costs.
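The capacity argument can be checked with simple arithmetic: 5,200 units a year over 52 weekly shipments averages 100 units per week, but the capacity required to chase orders is set by the peak week. A minimal sketch, using an assumed lumpy order pattern consistent with the description above:

# Capacity needed to chase weekly orders vs. the average level (Widget example).
stable_orders   = [100] * 52              # ~100 units every week
variable_orders = [400, 0, 0, 0] * 13     # same 5,200 units/year, but lumpy (assumed pattern)

for name, orders in [("stable", stable_orders), ("variable", variable_orders)]:
    print(f"{name}: annual demand = {sum(orders)}, "
          f"average per week = {sum(orders) / len(orders):.0f}, "
          f"peak week (chase capacity) = {max(orders)}")

Both patterns total 5,200 units and average 100 units per week, yet the lumpy pattern forces either four times the weekly capacity or a build-up of inventory in the quiet weeks.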
proportional to the amount ordered. When the period of shortage is over, the retailer
goes back to its standard orders, leading to all kinds of distortions and variations in
demand estimates.
(vi) Lack of centralized information. One of the most frequent suggestions for reducing the
bullwhip effect is to centralize demand information within a supply chain, i.e., to
provide each stage of the supply chain with complete information on the actual customer
demand. To understand why centralized demand information can reduce the bullwhip
effect, note that if demand information is centralized, each stage of the supply chain can
use the actual customer demand data to create more accurate forecasts, rather than
relying on the orders received from the previous stage, which can vary significantly
more than the actual customer demand.
Now consider two types of supply chains: one with centralized demand information and a second
with decentralized demand information. In the first type of supply chain, the centralized supply
chain, the retailer, or the first stage in the supply chain, observes customer demand, forecasts the
average demand, determines its target inventory level, and places an order to the wholesaler. The
wholesaler, or the second stage of the supply chain, receives the order along with the retailer’s
forecast average demand, uses this forecast to determine its target inventory level, and places an
order to the distributor.
Since the wholesaler has full information on the retailer’s inventory levels and customer demand, the
wholesaler can predict an incoming order from the retailer and hence be ready for this order, thus
reducing lead-time. This lead-time reduction leads to a reduction in the increase in variability.
Similarly, the distributor, or the third stage of the supply chain, has information about the
wholesaler and retailer inventory levels as well as customer demand and hence can significantly
reduce lead time and, as a result, reduce the bullwhip effect.
The second type of supply chain that we consider is the decentralized supply chain. In this case the
retailer does not make its forecast average demand available to the remainder of the supply chain.
Instead, each stage of the supply chain must estimate mean demand based on the orders received
from its customer, without knowledge of the retailer’s forecast.
What can we conclude about the bullwhip effect in these two types of supply chains? For either
type of supply chain, centralized or decentralized, the variability of the order quantities becomes
larger as we move up the supply chain so that the orders placed by the wholesaler are more variable
than the orders placed by the retailer, and so on. The difference between the two types of supply chains
lies in the ability to respond to orders from downstream facilities. Centralized information allows
lead-times, and hence variability in the supply chain, to be reduced.
Indeed, the variability of orders increases dramatically more in the decentralized system. In other
words, a decentralized supply chain, in which only the retailer knows the customer demand, can lead
to significantly higher variability than a centralized supply chain, in which customer demand
information is available at each stage of the supply chain, particularly when lead times are large. We
therefore conclude that centralizing demand information can reduce the bullwhip effect significantly.
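The growth of order variability up the chain can be illustrated with a small simulation. The sketch below uses an order-up-to policy with a moving-average forecast at each stage; the policy, forecast window, lead time and demand distribution are illustrative assumptions rather than the exact model of the authors discussed above, but the qualitative pattern (standard deviation rising from customer to factory) is the bullwhip effect.

import random
import statistics

random.seed(1)

def upstream_orders(incoming, lead_time=2, window=5):
    """Order-up-to policy with a moving-average forecast of incoming orders.
    Returns the order stream this stage places on its own supplier."""
    orders, prev_level = [], None
    for t in range(len(incoming)):
        history = incoming[max(0, t - window + 1): t + 1]
        forecast = statistics.mean(history)
        level = (lead_time + 1) * forecast          # target order-up-to level
        if prev_level is None:
            order = incoming[t]
        else:
            order = max(0, incoming[t] + level - prev_level)  # negative orders clamped
        orders.append(order)
        prev_level = level
    return orders

customer_demand = [random.gauss(100, 10) for _ in range(500)]   # stable end-customer demand
retailer   = upstream_orders(customer_demand)
wholesaler = upstream_orders(retailer)
factory    = upstream_orders(wholesaler)

for name, stream in [("customer", customer_demand), ("retailer", retailer),
                     ("wholesaler", wholesaler), ("factory", factory)]:
    print(f"{name:10s} std. dev. = {statistics.pstdev(stream):6.1f}")

In a decentralized chain each stage forecasts only from the orders it receives, as in this sketch, so the amplification compounds at every stage; sharing the end-customer demand data lets every stage forecast from the same stable series and keeps the growth in variability much smaller.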
Note, however, that even if each stage uses the same demand data, each may still employ
different forecasting methods and different buying practices, both of which may contribute
to the bullwhip effect. In addition, even when each stage uses the same demand data, the
same forecasting method, and the same ordering policy, the bullwhip effect will continue to
exist, albeit at a significantly reduced level. Thus centralized demand information reduces
the bullwhip effect but does not eliminate it.
(ii) Reducing variability. The bullwhip effect can be diminished by reducing the variability
inherent in the customer demand process. For example, if we can reduce the variability of
customer demand seen by the retailer, then even if the bullwhip effect occurs, the variability
of demand seen by the wholesaler also will be reduced.
We can reduce the variability of customer demand through, for example, the use of an
everyday low pricing (EDLP) strategy. When a retailer uses EDLP, it offers a product at a
single consistent price rather than offering a regular price with periodic price promotions.
By eliminating price promotions, a retailer can eliminate many of the dramatic shifts in
demand that occur along with these promotions. Therefore, everyday low pricing strategies
can lead to much more stable – i.e., less variable – customer demand patterns.
Of course, variability of customer demand depends not only on the retailer pricing strategy
but also on its competitors’ strategies. Thus, while EDLP is an important tool used to
reduce demand variability, its impact can be limited.
(iii) Lead-time reduction: As we observed earlier, the longer the lead-time, the larger is the
increase in variability. Therefore, lead-time reduction can reduce the bullwhip effect
significantly throughout a supply chain.
Observe that lead times typically include two components: order lead times (i.e., the time it
takes to produce and ship the item) and information lead times (i.e., the time it takes to
process an order). This distinction is important because order lead times can be reduced
through the use of cross-docking, whereas information lead times can be reduced through
the use of electronic data interchange (EDI).
(iv) Strategic partnership. The bullwhip effect can be eliminated by engaging in any of a
number of strategic partnerships. These strategic partnerships change the way information is
shared and inventory is managed within a supply chain, possibly eliminating the impact of
the bullwhip effect. For example, in vendor-managed inventory (VMI), the manufacturer
manages the inventory of its product at the retailer outlet and therefore determines for itself
how much inventory to keep on hand and how much to ship to the retailer in every period.
Therefore, in VMI, the manufacturer does not rely on the orders placed by a retailer, thus
avoiding the bullwhip effect entirely.
Other types of partnerships are also applied to reduce the bullwhip effect. As we discussed earlier,
for example, centralizing demand information can dramatically reduce the variability seen by the
upstream stages in a supply chain. Therefore, it is clear that these upstream stages would benefit
from a strategic partnership that provides an incentive for the retailer to make customer demand data
available to the rest of the supply chain.
In order for supply chain integration to be successful, suppliers must be able to accurately forecast
demand so that they can produce and deliver the right quantities demanded by their customers in a
timely and cost-effective fashion. There are several ways to closely match supply and demand. One
way is for a supplier to hold plenty of stock available for delivery at any time. While this approach
maximizes sales revenues, it is also expensive because of the cost of carrying inventory and the
possibility of write-downs at the end of the selling season. Use of flexible pricing is another
approach. During heavy demand periods, prices can be raised to reduce peak demand. Price
discounts can then be used to increase sales during periods with excess inventory or slow demand.
This strategy can still result in lost sales, though, as well as stock-outs and thus cannot be considered
an ideal or partnership-friendly approach to satisfying demand. In the short term, companies can also
use overtime, subcontracting, or temporary workers to increase capacity to meet demand for their
products and services. In the meantime, however, firms may lose sales while new workers are trained,
and quality may also suffer.
Thus, it is imperative that suppliers along the supply chain find ways to better match supply and
demand to achieve optimal levels of cost, quality, and customer service to enable them to compete
with other supply chains. Any problems that adversely affect the timely delivery of products
demanded by consumers will have ramifications throughout the entire chain.
• Manages the demand chain by exception and proactively eliminates problems before they
appear
• Allows collaboration on future requirements and plans
• Uses joint planning and management of promotions
• Integrates planning, forecasting, and logistics activities
• Provides efficient category management and understanding of consumer purchasing habits
• Provides analysis of key performance metrics (e.g., forecast accuracy, forecast exceptions,
product lead times, inventory turnover, percentage stock-outs) to reduce supply chain
inefficiencies, improve customer service, and increase sales and profitability
The Global Commerce Initiative (GCI) created the GCI Recommended Standard for Globalizing
CPFR. GCI is a voluntary body created in the USA in 1999 to “improve the performance of the
international supply chain for consumer goods through the collaborative development and
endorsement of recommended standards and key business processes.” A description of the CPFR
process model used by GCI follows:
• Step 1: Develop Collaboration Arrangement
The buyer and seller must agree on the objective of the collaboration, ground rules for
resolving disagreements, confidentiality of information to be shared, sales forecast
exception criteria, review cycle, time frame, frozen time period with acceptable
tolerances, resource commitments, financial incentives, and success metrics. Some
examples of objectives are to improve customer service levels, reduce stock-outs,
reduce inventories, increase sales, reduce costs, improve forecast accuracy,
and synchronize production with the forecast.
• Step 2: Create Joint Business Plan
A joint business plan is developed by sharing the companies’ business strategies and
plans. The plan typically involves developing a joint product category and promotional
plan in which the appropriate category strategies, inventory policies, promotional
activities, and pricing policies are specified. A product category is a manageable group
of products perceived by consumers to be similar that can be substituted in meeting their
needs. For each item in the product category, an item management profile is developed
that includes the minimum order quantity, lead-time, and time between orders, frozen
time period, and safety stock guidelines. The trading partner should be informed of
changes such as store openings or closings or changes of items in each product category.
It is important that trading partners be able to understand the impact that new product
introductions, promotions, and marketing campaigns have on demand and, ultimately, on
the effective management of the supply chain.
• Step 3: Create Sales Forecast
The trading partners use Web-based technologies to share data such as retailer point-of-
sale information, distribution center withdrawals, manufacturing consumption, planned
events including store openings or closings, and new product introductions. Using
multiple inputs into the forecasting process including information about the future, as
well as the past, results in the creation of a shared forecast that reflects the most accurate
and real-time information available. Either partner or both partners may generate the
sales forecast. The forecasting techniques used can be qualitative or quantitative. When
both partners generate a forecast, middleware is used to highlight the differences,
based on the exception criteria previously agreed upon by the partners.
• Step 4: Identify Exceptions for Sales Forecast
Irrespective of how the initial forecast is generated, all exceptions are identified in Step
4. Examples of sales forecast exception criteria are: retail in-stock is less than 95 per
cent, sales forecast error is greater than 20 percent, the difference in sales forecast from
the same period of the previous year is greater than 10 percent, or any changes that have
occurred in timing of promotional activities or number of active stores. The real-time
joint decision-making reduces the risk and increases the confidence in the single
forecast.
• Step 5: Resolve/Collaborate on Exception Items
In Step 5, all exceptions are resolved through a collaborative process to create a single
consensus forecast.
• Step 6: Create Order Forecast
Data are analyzed – such as point-of-sale (POS) data; historical demand; shipment data;
current capacity limits; minimum order quantities; lead times; time between orders;
frozen time periods; safety stock rules, impact events such as new product introductions,
store openings, and store closings; and current inventory positions (on hand, on order, in
transit) – to generate the order forecast consistent with the sales forecast and joint
business plan developed earlier. The order forecast represents detailed time-phased
ordering needs with inventory objectives by product and receiving location. The order
forecast enables the manufacturer to effectively schedule production capacity based on
the demand and to minimize safety stock. For the retailer, there is greater confidence
that orders will be met. In effect, the real-time collaborative effort minimizes the
uncertainty between trading partners, leading to reduced supply chain inventories, along
with improved customer service levels.
• Step 7: Identify Exceptions for Order Forecast
The items that fall outside the order forecast exception criteria such as customer service
measures, order fill rate, or forecast error measures, established jointly by the buyer and
seller in the collaboration agreement, are identified as exception items. Examples of
order forecast exception criteria are retail in-stock less than 95 percent, order forecast
errors greater than 20 percent, annual inventory turnover less than agreed-upon goal,
addition of a new event that affects inventory/orders, or requested emergency orders
greater than 5 percent of weekly forecast.
• Step 8: Resolve/Collaborate on Exception Items
Any order forecast exceptions are investigated by examining the shared data, e-mail,
telephone conversations, meetings, and other supporting information. If the analysis
justifies a change in the forecast, a revised forecast is submitted.
• Step 9: Order Generation
This last step involves converting the order forecast into a committed order. The actual
order is expected to consume the forecast. The committed order is generated based on
the product demand in the frozen time period of the order forecast.
Common performance metrics, such as gross margin percent, return on investment, and sales
growth, are developed to measure the success of the relationship. Other metrics include in-stock
percent at point of sale, inventory turnover, inventory level, sales forecast accuracy, potential sales
lost due to stock-out, manufacturing cycle time, order cycle time, shipping cycle time, problem
resolution time, rate of emergency or cancelled orders, and percent shipped or delivered on time.
4.6 Forecasting Techniques
Considering the importance of forecast, in this section, a brief discussion about various techniques
of forecasting is presented. Both quantitative and qualitative forecasts can be improved by seeking
inputs from trading partners. Qualitative forecasting methods are based on opinions and intuition,
whereas quantitative forecasting methods use mathematical models and relevant historical data to
generate forecasts. The quantitative methods can be divided into two groups: time series and
associative models.
Since time series models rely solely on past demand data, all quantitative methods become less
accurate as the forecast’s time horizon increases. Thus, for long-horizon forecasts, it is generally
recommended to utilize a combination of both quantitative and qualitative techniques.
Components of Time Series Data
Time series data typically have four components: trend, cyclical, seasonal, and random variations:
• Trend variations: Trends represent either increasing or decreasing movements over many
years and are due to factors such as population growth, population shifts, cultural changes,
and income shifts. Common trend lines are linear, S-curve, exponential, or asymptotic.
• Cyclical variations: Cyclical variations are wavelike movements that are longer than a year
and influenced by macroeconomic and political factors.
• Seasonal variations: Seasonal variations show peaks and valleys that repeat over a
consistent interval such as hours, days, weeks, months, years, or seasons. Due to seasonality,
many companies do well in certain months and not so well in other months.
• Random variations: Random variations are due to unexpected or unpredictable events such
as natural disasters (hurricanes, tornadoes, fire), strikes, and wars.
(i) Simple Moving Average Forecasting Model: The simple moving average forecast is the average of actual demand over the most recent n periods:

$F_{t+1} = \frac{1}{n}\sum_{i=t-n+1}^{t} A_i$

where
Ft+1 = forecast for Period t + 1,
n = number of periods used to calculate the moving average, and
Ai = actual demand in Period i.
The average tends to be more responsive if fewer data points are used to compute the average.
However, random events can also impact the average adversely. Thus the decision maker must
balance the cost of responding slowly to changes versus the cost of responding to random variations.
The advantage of this technique is that it is simple to use and easy to understand.
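A minimal Python sketch of this calculation, using made-up demand data:

def moving_average_forecast(demand, n):
    """Forecast for the next period as the average of the last n actual demands."""
    if len(demand) < n:
        raise ValueError("need at least n observations")
    return sum(demand[-n:]) / n

# Hypothetical monthly demand figures
demand = [120, 132, 101, 134, 190, 230, 210]
print(moving_average_forecast(demand, n=3))   # (190 + 230 + 210) / 3 = 210.0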
(ii) Weighted Moving Average Forecasting Model: The weighted moving average forecast is based on an n-period weighted average of actual demand:

$F_{t+1} = \sum_{i=t-n+1}^{t} w_i A_i$

where
Ft+1 = forecast for Period t + 1,
n = number of periods used in determining the moving average,
Ai = actual demand in Period i, and
wi = weight assigned to Period i (with $\sum w_i = 1$).
The weighted moving average allows greater emphasis to be placed on more recent data to reflect
changes in demand patterns. Weights used also tend to be based on experience of the forecaster.
Although the forecast is more responsive to underlying changes in demand, the forecast still lags
demand because of the averaging effect. As such, the weighted moving average method does not do
a good job of tracking trend changes in the data.
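A corresponding Python sketch for the weighted moving average, with illustrative weights that sum to 1 (heavier weight on the most recent period):

def weighted_moving_average_forecast(demand, weights):
    """Forecast for the next period as a weighted average of the last len(weights) demands.

    Weights are listed oldest-to-newest and must sum to 1.
    """
    n = len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    recent = demand[-n:]
    return sum(w * a for w, a in zip(weights, recent))

demand = [120, 132, 101, 134, 190, 230, 210]
print(weighted_moving_average_forecast(demand, weights=[0.2, 0.3, 0.5]))   # 212.0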
(iii) Exponential Smoothing Forecasting Model: Exponential smoothing forecasting is a
sophisticated weighted moving average forecasting in which the forecast for the next period’s
demand is the current period’s forecast adjusted by a fraction of the difference between the current
period’s actual demand and its forecast. This approach requires less data to be kept than the
weighted moving average method because only two data points are needed. Due to its simplicity and
minimal data requirement, exponential smoothing forecasting is one of the more popular techniques.
This model, like the other time series models, is suitable for data that show little trend or seasonal
patterns. The exponential smoothing formula is
$F_{t+1} = F_t + \alpha (A_t - F_t)$  or equivalently  $F_{t+1} = \alpha A_t + (1 - \alpha) F_t$
where
Ft+1 = forecast for Period t + 1,
Ft = forecast for Period t,
At = actual demand in Period t, and
α = a smoothing constant (0 ≤ α ≤ 1).
With an α value closer to 1, there is a greater emphasis on recent data, making the model more
responsive to changes in the recent demand. When α has a low value, more weight is placed on past
demand (which is contained in the previous period’s forecast value) and the model responds more
slowly to changes in demand. The impact of using a small or large value of α is similar to the effect of
using a large or small number of observations in calculating the moving average. In general, the
forecast will lag any trend in the actual data because only partial adjustment to the most recent
forecast error can be made. The initial forecast value could be estimated using one of the qualitative
methods, such as the Delphi forecast, or by simply setting the initial forecast equal to the demand for
that period.
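A minimal Python sketch of the exponential smoothing recursion, initializing the first forecast to the first observed demand (one common convention); the demand data are made up:

def exponential_smoothing(demand, alpha, initial_forecast=None):
    """Return the sequence of forecasts using F_{t+1} = alpha*A_t + (1 - alpha)*F_t."""
    if initial_forecast is None:
        initial_forecast = demand[0]          # simple initialization: F_1 = A_1
    forecasts = [initial_forecast]
    for actual in demand:
        forecasts.append(alpha * actual + (1 - alpha) * forecasts[-1])
    return forecasts

demand = [120, 132, 101, 134, 190, 230, 210]
print(round(exponential_smoothing(demand, alpha=0.3)[-1], 1))   # forecast for the next period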
(iv) Trend-Adjusted Exponential Smoothing Forecasting Model. The exponential smoothing
method can be modified to include a trend component when the time series show a systematic
upward or downward trend in the data over time. This method requires two smoothing constants,
one for the smoothed forecast (α ) and the other for the trend (β ). The equations for this model are
$F_t = \alpha A_t + (1 - \alpha)(F_{t-1} + T_{t-1})$,
$T_t = \beta (F_t - F_{t-1}) + (1 - \beta) T_{t-1}$,
and the trend-adjusted forecast,
$TAF_{t+m} = F_t + m T_t$
where
Ft = exponentially smoothed average in Period t,
At = actual demand in Period t,
Tt = exponentially smoothed trend in Period t,
α = smoothing constant ( 0 ≤ α ≤1), and
β = smoothing constant for trend ( 0 ≤ β ≤ 1).
A higher value of β indicates greater emphasis on recent trend changes, while a small β places less
weight on recent changes and has the effect of smoothing out the current trend. The smoothing
constants, α and β, are estimated using a trial and error approach, matching actual historical demand
data to the forecasted demand in search of the smoothing constants that minimize the forecast errors.
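A minimal Python sketch of the trend-adjusted (Holt-type) procedure described above; the initialization convention and the demand data are illustrative assumptions:

def trend_adjusted_forecast(demand, alpha, beta, m=1):
    """Trend-adjusted exponential smoothing.

    Returns the m-period-ahead forecast TAF = F_t + m*T_t made after the last observation.
    Initialization: F_1 = A_1, T_1 = A_2 - A_1 (one simple convention among several).
    """
    F = demand[0]
    T = demand[1] - demand[0]
    for actual in demand[1:]:
        F_prev, T_prev = F, T
        F = alpha * actual + (1 - alpha) * (F_prev + T_prev)
        T = beta * (F - F_prev) + (1 - beta) * T_prev
    return F + m * T

demand = [100, 108, 115, 121, 131, 138, 146]
print(round(trend_adjusted_forecast(demand, alpha=0.4, beta=0.3), 1))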
Linear Trend Forecasting Model: The trend can be estimated using simple linear regression to fit
a line to time series of historical data. The linear trend method minimizes the sum of squared
deviations to determine the characteristics of the linear equation:
$\hat{Y} = b_0 + b_1 X$

where
$\hat{Y}$ = forecast or dependent variable,
X = time variable,
b0 = intercept of the line, and
b1 = slope of the line.
The coefficients b0 and b1 are calculated as follows:

$b_1 = \dfrac{n\sum xy - \sum x \sum y}{n\sum x^2 - \left(\sum x\right)^2}$  and  $b_0 = \dfrac{\sum y - b_1 \sum x}{n}$

where
b1 = slope of the line,
x = independent variable (time period) values,
y = dependent variable (demand) values, and
n = number of periods (observations).
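A minimal Python sketch of fitting the linear trend, assuming the time variable is simply the period index X = 1, 2, ..., n and using made-up demand data:

def linear_trend(y):
    """Fit Y = b0 + b1*X by least squares, with X = 1..n, and return (b0, b1)."""
    n = len(y)
    x = list(range(1, n + 1))
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi * xi for xi in x)
    b1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
    b0 = (sum_y - b1 * sum_x) / n
    return b0, b1

demand = [100, 108, 115, 121, 131, 138, 146]
b0, b1 = linear_trend(demand)
print(round(b0 + b1 * 8, 1))   # trend forecast for period 8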
CHAPTER V
Distribution Management Overview
5.1 Introduction
Transportation happens to be the most fundamental part of strategic logistic management and
transport costs include all costs associated with movement of products from one location to another.
The average transport cost ranges from 5 to 6% of the recommended retail price of the product.
Transportation is the movement of products, materials and services from one area to another, both
inbound and outbound. It can also be said as movement from one node of the supply chain to the
other.
The Indian Army is a typical example of an ideal transportation mix in our country. It uses aerial, land,
sea and rail routes to maintain its forces strewn all over the country and abroad. The logistics effort is
enormous and the various modes of transport include aircraft, trains, trucks, animals, and human beings. It
transports supplies, rations, fuel, oils and lubricants, arms, ammunition, clothing and personal loads over
vast distances and over varied terrain and climatic conditions.
Figure 5.1: Operational Factors for Deciding the Transport Mode (Deshmukh and Mohanty, 2004)
Transportation costs
Transport costs vary from less than 1 per cent (for machinery) to over 30 per cent (for food) of the
recommended selling price of products, depending upon the nature of the product range and its
market. However, the average transport cost is between 5 and 6 per cent of the recommended retail
price of a product. With inflation, transport costs also rise because the major cost components are the
workforce (labour), fuel, maintenance, spares, and drivers. Further, transport represents a direct
cost added to the price of the product, and any reduction in transport costs leads to an increase in
profit, with price remaining constant. Transportation rates are almost linear with distance and not
with volume, be it road, rail, water or air. The transportation cost for a company-owned fleet is simple to
compute and is derived from the annual cost per truck, annual mileage, amount delivered and the truck's
effective capacity. All this information can be used to calculate the cost per mile per SKU.
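As a rough illustration of such a calculation (all figures below are hypothetical):

# Illustrative fleet-cost calculation; every figure here is a made-up assumption.
annual_cost_per_truck = 900_000      # ownership + labour + fuel + maintenance, per truck per year
annual_miles_per_truck = 60_000      # miles run per truck per year
effective_capacity_sku = 400         # SKUs carried on an average loaded trip
utilization = 0.85                   # fraction of miles run loaded

cost_per_mile = annual_cost_per_truck / annual_miles_per_truck
cost_per_mile_per_sku = cost_per_mile / (effective_capacity_sku * utilization)
print(round(cost_per_mile, 2), round(cost_per_mile_per_sku, 4))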
• Distribution models: This identifies and explains the interrelationships between the
components of the distribution system at various levels of daily, weekly or monthly demands.
These models could be built to examine the impact of alternative transport modes and
methods, as either the demand changes or the components in the system change.
Therefore, in order to carry out a systematic selection of the transporter, a framework consisting of the
following stages is recommended:
[Figure: Systematic Selection and Decision Framework for transporter selection]
shortages can be estimated and a cost of shortages versus cost of ownership analysis can be made to
determine the optimal fleet size. Fleet size can be regulated and minimized by:
• Utilizing standard size pallets and transport containers.
• Vigorously monitoring fleet utilization levels annually.
• Maintaining total fleet visibility, including loading times, unloading, transit times and
maintenance times.
• Choosing low-use periods to conduct routine maintenance.
• Monitoring and charging demurrage for fleet detention by suppliers, customers and
carriers.
• Utilizing alternative coverage means during super peak periods to avoid carrying the burden
of an oversized fleet.
Therefore, it can be seen that whatever the fleet size, the company has to use it judiciously and
constantly monitor its progress for optimum utilization of the available resources, and at the same time
cut down the cost of maintenance and time lags.
Fleet maintenance is one means of reducing the ownership cost of the fleet by delaying potential
replacements and improving customer service through improved reliability.
5.7 Future directions in transportation
One salient aspect that we all have to understand is that with e-services, lead time to delivery has
reduced considerably, but the physical movement of products and raw materials cannot take place
through e-services and remains restricted to road, rail, air and waterways. The order can be placed
through e-services in a faster mode, and so can payment, but the products cannot be physically moved
through the net. A truck, rail wagon, ship or cargo aircraft has to move them from the place of origin to
the consumer’s destination.
Transportation too, has improved considerably with the advent of technology and mechanical
developments within a short span. Certain programs and organizations help in coordinating
transportation in a better way, and as time passes they are bound to improve transportation in a big
way. They can be grouped as follows:
• Carrier relationship management: Through carrier relationship management programs we
can bind transporters and those working with it under one roof/ enterprise. These programs
are designed to formalize communication, partnering, negotiating, and performance
monitoring aspects of carrier management. At the heart of most carrier relationship
management programs is a set of guidelines for selecting core carriers, the minority of carriers
who carry a majority of the enterprise’s weight, cube and shipments.
• Corporate traffic councils: These help in bringing together all personnel working in the area
of transportation within an enterprise. The traffic council sets corporate transportation policy
and explores opportunities for leveraging transportation spending across the corporation.
• Training and certification: Corporations should aim at making and maintaining transportation as
a value-added activity. For this, everyone should be on the same plane, and training and
certification activities are carried out to bring all personnel onto one platform.
• Driver quality: Improvements in drivers with better working environment and better wages
will help in a big way to improve the driver’s capability and capacity in the long run.
• Joint procurement: Significant cost reductions can be achieved if the purchase and
negotiation of transportation services is consolidated across both inbound and outbound
transportation activities within a business unit, across units and even with non-competitors.
• Logistics compliance & security officer: Appointing a chief logistics compliance and security officer
will enable a company to cope with global logistics law and to anticipate security lapses within the
logistics network.
5.8.1 Plant location
Choice of location for a plant is one of the earliest problems facing management. But location,
perhaps, is one of the most neglected aspects of business, although the manufacturing and distribution
costs may vary by over 10 percent simply by virtue of choice of location.
There are two types of factors (or criteria) on which location decisions are based: quantitative (or
objective) factors and qualitative (or subjective) factors. The objective factors involve the cost of land,
transportation costs, utility rates, etc. The subjective factors include labour availability, climate,
community environment, quality of life, local politics, etc. The presence of both objective and subjective
factors results in a greater degree of complexity in the structure of the plant location problem as well as
its solution. A decision made on these factors is difficult as they are seldom consistent over all locations. For
example, a plant may be located far from the workforce but have lower utility bills than an area closer to
the workforce. Some factors may be more dominant than others. For example, in mineral production plants,
raw materials dominate the situation, due to which processing is located near mines. On the other
hand, output-oriented activities, such as service organizations, tend to be located near consumers.
Table 5.1 presents a list of some of the important location factors.
5.9 Evaluation of Location Alternatives
As stated earlier, the plant location problem involves both qualitative and quantitative factors. Finding
the best location alternative considering all the above factors is not an easy task. Attempts have been
made to combine the qualitative and quantitative factors and score the alternatives. One of the scoring
(or rating) models is outlined below (Table 5.2).
The procedure starts by listing the various factors and assigning a weight to each factor to represent its
relative importance. The score for each alternative is found by multiplying each
factor’s score by its weight and summing the results. Table 5.2 gives the details of this rating
approach for two locations A and B.
Table 5.2: Rating Approach
From Table 5.2, we see that location B, which has a higher score, is preferred. However, one has to be
careful in the use of the rating approach because the assessment of scores might involve some amount
of subjectivity. For example, if the total score for location B were 70, which is very close to that of A,
one would need to undertake further analysis before arriving at the final decision.
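A minimal Python sketch of the weighted-rating calculation; the factors, weights and scores below are made up for illustration and are not those of Table 5.2:

# Weighted-factor rating of two candidate locations (weights and scores are illustrative).
factors = {
    # factor: (weight, score_A, score_B)  -- scores on a 0-100 scale
    "Transportation cost": (0.30, 70, 80),
    "Labour availability": (0.25, 80, 75),
    "Utility rates":       (0.20, 60, 85),
    "Quality of life":     (0.15, 90, 70),
    "Local taxes":         (0.10, 75, 80),
}

total_A = sum(w * a for w, a, _ in factors.values())
total_B = sum(w * b for w, _, b in factors.values())
print(total_A, total_B)   # the alternative with the higher weighted score is preferred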
Rectilinear distance is appropriate for many location problems, such as those in metropolitan areas. In many
manufacturing situations, material is transported along aisles arranged in a rectilinear pattern.
Fortunately, the rectilinear distance problem is easier to solve than the Euclidean distance problem.
The problem of locating a single new facility with respect to a number of existing facilities is known
as the single facility location problem, whereas the problem of locating multiple new facilities is
known as the multi-facility location problem.
The objective is to locate the new facility to minimize a weighted sum of the rectilinear distance from
the new facility to existing facilities. The goal is to find the values of x and y such that
minimize $f(x, y) = \sum_{i=1}^{n} w_i \left( |x - a_i| + |y - b_i| \right)$, where

wi is the flow of material/goods between the new facility and the ith existing facility, and (ai, bi) is the location of the ith existing facility. The optimum values of x and y can be determined separately, since

$f(x, y) = g_1(x) + g_2(y)$, where

$g_1(x) = \sum_{i=1}^{n} w_i |x - a_i|$ and $g_2(y) = \sum_{i=1}^{n} w_i |y - b_i|$
An example of the single facility location problem could be location of a new storage warehouse for a
company with an existing network of production and distribution centers.
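Since g1(x) and g2(y) are sums of weighted absolute deviations, each coordinate of the optimal location is a weighted median of the corresponding coordinates of the existing facilities. A minimal Python sketch under that observation (the facility data are hypothetical):

def weighted_median(points, weights):
    """Return a value minimizing sum_i w_i * |v - points[i]| (a weighted median)."""
    order = sorted(range(len(points)), key=lambda i: points[i])
    half = sum(weights) / 2.0
    cum = 0.0
    for i in order:
        cum += weights[i]
        if cum >= half:
            return points[i]

# Hypothetical existing facilities (a_i, b_i) and material flows w_i
a = [2, 6, 9, 4]
b = [5, 1, 7, 3]
w = [10, 5, 8, 7]

x_star = weighted_median(a, w)
y_star = weighted_median(b, w)
print(x_star, y_star)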
Taking partial derivatives of the gravity (squared Euclidean distance) objective
$f(x, y) = \sum_{i=1}^{n} w_i \left[ (x - a_i)^2 + (y - b_i)^2 \right]$, we get

$\dfrac{\partial f(x, y)}{\partial x} = 2 \sum_{i=1}^{n} w_i (x - a_i)$ and $\dfrac{\partial f(x, y)}{\partial y} = 2 \sum_{i=1}^{n} w_i (y - b_i)$

Setting these partial derivatives equal to zero and solving for x and y, we get

$X^+ = \dfrac{\sum_{i=1}^{n} w_i a_i}{\sum_{i=1}^{n} w_i}$ and $Y^+ = \dfrac{\sum_{i=1}^{n} w_i b_i}{\sum_{i=1}^{n} w_i}$
Thus X+ and Y+ are the weighted averages of x and y coordinates and hence the name Gravity
problem.
For the Euclidean (straight-line) distance problem, however, it is not easy to find the optimum solution mathematically.
become undefined when the location of the new facility coincides with that of an existing facility.
There are no known simple algebraic solutions; all existing methods require an iterative procedure.
The Gravity solution is usually selected as the starting solution for this iterative process.
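A minimal Python sketch of such an iterative procedure (a Weiszfeld-type update, one commonly used scheme), starting from the gravity solution; the data are hypothetical:

import math

def euclidean_minisum(a, b, w, iters=200, eps=1e-9):
    """Iteratively locate (x, y) minimizing sum_i w_i * sqrt((x - a_i)**2 + (y - b_i)**2).

    Starts from the gravity (weighted-centroid) solution and applies Weiszfeld-type updates.
    """
    x = sum(wi * ai for wi, ai in zip(w, a)) / sum(w)
    y = sum(wi * bi for wi, bi in zip(w, b)) / sum(w)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for ai, bi, wi in zip(a, b, w):
            d = math.hypot(x - ai, y - bi)
            if d < eps:              # avoid division by zero if the iterate hits an existing facility
                return ai, bi
            num_x += wi * ai / d
            num_y += wi * bi / d
            den += wi / d
        x, y = num_x / den, num_y / den
    return x, y

a, b, w = [2, 6, 9, 4], [5, 1, 7, 3], [10, 5, 8, 7]
print(euclidean_minisum(a, b, w))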
and $f_2(y) = \sum_{1 \le j < k \le m} V_{jk}\, |y_j - y_k| + \sum_{j=1}^{m} \sum_{i=1}^{n} w_{ji}\, |y_j - b_i|$
Vjk represents the interaction between new facilities j and k and wji represents the interaction between
new facility j and existing facility i. The optimum x and y values can be determined independently as
in the case of single facility location problem.
Multi-facility gravity problems require the solution of a system of linear equations, so gravity
problems involving a large number of facilities are easily solved. Multi-facility Euclidean distance
location problems are solved by using a multi-dimensional version of the iterative solution procedure
described in the previous section.
***************
PART B
ADVANCED TOPICS
CHAPTER VI
Supply Chain Management and Multi-echelon Inventories
6.1 Introduction
In this section, we consider a multi facility supply chain that belongs to a single firm. The objective of
the firm is to manage inventory so as to reduce system wide cost. Therefore it is important to consider
the interaction of the various facilities and the impact of this interaction on the inventory policy that
should be employed by each facility.
The first echelon, retail outlets, is replenished from branch warehouses (the second echelon), which
are supplied from a central warehouse (the third echelon). Finally, it is assumed that the central
warehouse is replenished from outside sources.
Inventory management in this system is complex because demand at the central warehouse is
dependent on the demand (and stocking decisions) at the branches, and demand at the branches is
dependent on the demand (and stocking decisions) at the retail outlets. More generally, we refer to
this as a dependent demand situation (as compared to other inventory control models where demand for
different stock keeping units was considered to be independent). Multistage manufacturing situations
are conceptually very similar to multi-echelon inventory systems.
6.2.1 Deterministic demand environment
Here, external demand rates are known with certainty. The model will reveal the basic interactions
among replenishment quantities at the different stages. Here, the stocking points are serially connected.
We consider two stages, denoted by a warehouse (W) and a retailer (R).
Fig. 6.4: Warehouse Inventory
However, we cannot simply multiply each average echelon stock by the standard holding cost term
and sum to obtain total inventory carrying costs.
The reason is that the same physical units of stock can appear in more than one echelon inventory.
When the decision being made is whether to store inventory at an upstream location or
at a downstream location that it supplies, the relevant holding cost is the incremental cost of
moving the product to the retailer.
The warehouse echelon inventory is valued at $V'_W = V_W$, while the retailer echelon inventory is valued at only $V'_R = V_R - V_W$.
$TRC(n, Q_R) = \frac{A_W D}{n Q_R} + \frac{n Q_R V'_W r}{2} + \frac{A_R D}{Q_R} + \frac{Q_R V'_R r}{2}$

$= \frac{D}{Q_R}\left(A_R + \frac{A_W}{n}\right) + \frac{Q_R r}{2}\left(n V'_W + V'_R\right)$   (1)

Now, $\frac{\partial TRC}{\partial Q_R} = -\frac{D}{Q_R^2}\left(A_R + \frac{A_W}{n}\right) + \frac{r}{2}\left(n V'_W + V'_R\right) = 0$

Or, $Q_R^*(n) = \sqrt{\frac{2\left(A_R + \frac{A_W}{n}\right) D}{\left(n V'_W + V'_R\right) r}}$   (2)

Substituting the value of $Q_R^*(n)$ from eqn. (2) into eqn. (1), we get

$TRC^*(n) = \sqrt{2 D r \left(A_R + \frac{A_W}{n}\right)\left(n V'_W + V'_R\right)}$   (3)

We have to determine the integer value of n that minimizes $TRC^*(n)$. This value of n is found by minimizing
$F(n) = \left(A_R + \frac{A_W}{n}\right)\left(n V'_W + V'_R\right)$   (4)

For minimization, $\frac{\partial F(n)}{\partial n} = 0$,

$\Rightarrow \left[-\frac{A_W}{n^2}\right]\left(n V'_W + V'_R\right) + \left(A_R + \frac{A_W}{n}\right) V'_W = 0$

$\Rightarrow -\frac{A_W}{n^2} V'_R + A_R V'_W = 0$

$\Rightarrow n^* = \sqrt{\frac{A_W V'_R}{A_R V'_W}}$   (5)
which in general, will not be an integer.
If $n^* < 1$, set n = 1 and calculate the corresponding value. Otherwise, ascertain the two integer values $n_1$ and $n_2$ that surround $n^*$ and evaluate

$F(n_1) = \left(A_R + \frac{A_W}{n_1}\right)\left(n_1 V'_W + V'_R\right)$ and $F(n_2) = \left(A_R + \frac{A_W}{n_2}\right)\left(n_2 V'_W + V'_R\right)$.

If $F(n_1) \le F(n_2)$ use $n = n_1$; otherwise use $n = n_2$. Then calculate

$Q_R = \sqrt{\frac{2\left(A_R + \frac{A_W}{n}\right) D}{\left(n V'_W + V'_R\right) r}}$, and $Q_W = n Q_R$.   (6)
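A small Python sketch of this procedure (equations (2), (5) and (6)); the parameter values below are purely illustrative:

import math

def two_echelon_lot_sizes(D, r, A_R, A_W, v_R, v_W):
    """Deterministic serial warehouse-retailer lot sizing per equations (2)-(6).

    v_R and v_W stand for the echelon valuations V'_R and V'_W; returns (n, Q_R, Q_W).
    """
    def F(n):
        return (A_R + A_W / n) * (n * v_W + v_R)

    n_star = math.sqrt((A_W * v_R) / (A_R * v_W))
    if n_star <= 1:
        n = 1
    else:
        n1, n2 = math.floor(n_star), math.ceil(n_star)
        n = n1 if F(n1) <= F(n2) else n2
    Q_R = math.sqrt(2 * (A_R + A_W / n) * D / ((n * v_W + v_R) * r))
    Q_W = n * Q_R
    return n, Q_R, Q_W

# Illustrative parameter values (demand, carrying charge, setup costs, echelon valuations)
print(two_echelon_lot_sizes(D=5000, r=0.24, A_R=20.0, A_W=160.0, v_R=3.0, v_W=2.0))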
It captures the essence of the cost interdependencies. An examination of the level-demand case provides
considerable insight.

$TRC(Q_W, Q_R) = \frac{D}{Q_R}\left(A_R + \frac{A_W}{n}\right) + \frac{Q_R r}{2}\left(n V'_W + V'_R\right)$, where $Q_W = n Q_R$.

This problem is analogous to a single-echelon problem (i.e. the selection of $Q_R$) if the adjusted fixed
replenishment cost is represented as $\hat{A}_R = A_R + \frac{A_W}{n}$, and the adjusted unit variable cost of the item is $\hat{V}_R = n V'_W + V'_R$.
The term $\frac{A_W}{n}$ reflects that there is a warehouse setup only for every nth retailer setup.
We can then select $Q_R$ from $Q_R = \sqrt{\frac{2\left(A_R + \frac{A_W}{n}\right) D}{\left(n V'_W + V'_R\right) r}}$,
properly taking account of the cost impact at the warehouse, provided we have a good pre-estimate of n.
Further, we can use the steps followed in the earlier problem.
Step 1
$n^* = \sqrt{\frac{A_W V'_R}{A_R V'_W}}$   (7)
Step 2
Ascertain the two integer values $n_1$ and $n_2$ that surround $n^*$.
Step 3
Evaluate $F(n_1) = \left(A_R + \frac{A_W}{n_1}\right)\left(n_1 V'_W + V'_R\right)$ and $F(n_2) = \left(A_R + \frac{A_W}{n_2}\right)\left(n_2 V'_W + V'_R\right)$.
If $F(n_1) \le F(n_2)$ use $n = n_1$; otherwise use $n = n_2$.
However, Blackburn and Miller have found that simply using $n^* = \sqrt{\frac{A_W V'_R}{A_R V'_W}}$ and ensuring that n is at
least unity works well, particularly for more complex assembly structures.
Thus, $n = \max\left(\sqrt{\frac{A_W V'_R}{A_R V'_W}},\; 1\right)$   (8)
For time-varying demand, compute n and then employ this value of n to obtain the adjusted setup and unit
variable costs for the retailer according to the following equations:
$\hat{A}_R = A_R + \frac{A_W}{n}$,  $\hat{V}_R = n V'_W + V'_R$
The Silver-Meal heuristic can then be applied to the retailer using $\hat{A}_R$ and $\hat{V}_R$. The resulting
replenishment pattern again implies a replenishment pattern for the warehouse. Subsequently, the Silver-Meal
or another lot-sizing procedure is used at the warehouse with $A_W$ and $V_W$.
$TC(Q) = \frac{AD}{Q} + \frac{hQ}{2}$
$\Rightarrow TC(T) = \frac{A}{T} + \frac{hDT}{2} = \frac{A}{T} + gT$, where $g = \frac{hD}{2}$.
The total cost at the EOQ is given as $TC(Q^*) = \sqrt{2ADh}$.
It is natural to consider policies where the reorder interval T is restricted to values that can be easily
implemented.
Now we impose a power-of-two restriction: T is restricted to be a power-of-two multiple of some fixed base
planning period $T_B$, i.e.
$T = 2^k T_B$, k = 0, 1, 2, ...   (9)
Such a policy is called a power-of-two policy. The base planning period $T_B$ may represent a day, week or
month, etc., and is usually fixed beforehand. It represents the minimum possible reorder interval.
Now, the basic questions are: how does one find the best power-of-two policy, i.e. the one that minimizes the
cost over all possible power-of-two policies? And secondly, how much does this best power-of-two policy
deviate from the optimal (unrestricted) policy?
Let $T^* = \sqrt{\frac{A}{g}}$ be the optimal reorder interval under the unrestricted condition, and let T be the optimal
power-of-two reorder interval.
Since the total cost is a convex function of T, the optimal k in (9) is the smallest integer k satisfying the
condition
$TC(T_B 2^k) \le TC(T_B 2^{k+1})$

$\Rightarrow \frac{A}{T_B 2^k} + g T_B 2^k \le \frac{A}{T_B 2^{k+1}} + g T_B 2^{k+1}$

$\Rightarrow \frac{A}{T_B 2^k}\left[1 - \frac{1}{2}\right] \le g T_B 2^k (2 - 1)$

$\Rightarrow \frac{A}{2g} \le \left(T_B 2^k\right)^2$

$\Rightarrow \frac{1}{\sqrt{2}}\sqrt{\frac{A}{g}} \le T_B 2^k$

$\Rightarrow \frac{1}{\sqrt{2}}\, T^* \le T$   (10)
By the definition of the optimal k, and since the cost curve is convex, it must also satisfy the following
condition:
$TC(T_B 2^k) \le TC(T_B 2^{k-1})$

$\Rightarrow \frac{A}{T_B 2^k} + g T_B 2^k \le \frac{A}{T_B 2^{k-1}} + g T_B 2^{k-1}$

$\Rightarrow g T_B 2^k \left[1 - \frac{1}{2}\right] \le \frac{A}{T_B 2^k}(2 - 1)$

$\Rightarrow \left(T_B 2^k\right)^2 \le \frac{2A}{g}$

$\Rightarrow T_B 2^k \le \sqrt{2}\sqrt{\frac{A}{g}}$

$\Rightarrow T \le \sqrt{2}\, T^*$   (11)

From equations (10) and (11), $\frac{1}{\sqrt{2}}\, T^* \le T \le \sqrt{2}\, T^*$   (12)
Hence the optimal power-of-two reorder interval for a given base planning period $T_B$ must lie in the interval
$\left[\frac{1}{\sqrt{2}}\, T^*,\; \sqrt{2}\, T^*\right]$.
The maximum discrepancy between the total cost of the power-of-two ordering policy and the total
cost at $T^*$ will occur if the power-of-two reorder interval equals either $\sqrt{2}\, T^*$ or $\frac{1}{\sqrt{2}}\, T^*$.
We have already derived that
$\frac{TC(Q)}{TC(Q^*)} = \left(\frac{AD}{Q} + \frac{hQ}{2}\right)\frac{1}{\sqrt{2ADh}} = \frac{1}{2}\left[\frac{Q}{Q^*} + \frac{Q^*}{Q}\right]$,
and since $T^* = \frac{Q^*}{D}$ and $T = \frac{Q}{D}$,
$\frac{TC(T)}{TC(T^*)} = \frac{1}{2}\left[\frac{T}{T^*} + \frac{T^*}{T}\right]$.
Now, the upper bound of T is $\sqrt{2}\, T^*$ and the lower bound of T is $\frac{1}{\sqrt{2}}\, T^*$.
Then, $\frac{TC\left(\sqrt{2}\, T^*\ \text{or}\ \frac{1}{\sqrt{2}}\, T^*\right)}{TC(T^*)} = \frac{1}{2}\left[\sqrt{2} + \frac{1}{\sqrt{2}}\right] \approx 1.06$.
Thus, the average ordering and carrying cost of the best power-of-two policy is guaranteed to
be within about 6% of the average cost of the overall minimum-cost policy.
Let us consider the following example to understand the power-of-two ordering policy.
Three products have unrestricted optimal reorder intervals of 3.5, 5.6 and 9.2 weeks respectively.
The power-of-two reorder interval is $T = 2^k T_B$, where k is the smallest integer satisfying $T^* \le \sqrt{2}\, 2^k T_B$.
Let $T_B$ = 1 week.
Starting with k = 0 for the first product: is $3.5 \le \sqrt{2} \times 1 \approx 1.41$? The condition is not satisfied. We now
increase k to 1 and check again: is $3.5 \le \sqrt{2} \times 2 \approx 2.83$? The condition is still not satisfied.
Increasing k to 2, we find that $3.5 \le \sqrt{2} \times 4 \approx 5.66$, so the condition is satisfied, i.e. the reorder interval is $2^2 = 4$ weeks.
Similarly, for the second product, $5.6 \le \sqrt{2} \times 4 \approx 5.66$, which satisfies the condition, so the order
interval is again $2^2 = 4$ weeks.
Similarly, for the third product, $9.2 > \sqrt{2} \times 4$, but $9.2 \le \sqrt{2} \times 8 \approx 11.31$, which satisfies the condition, so
the order interval is $2^3 = 8$ weeks.
So, according to Roundy’s policy, the three products should be ordered at intervals of $2^2$, $2^2$ and $2^3$ weeks respectively.
Thus orders for the first two items will be placed every 4 weeks, whereas orders for the third item will be
placed every 8 weeks.
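A small Python sketch of this rounding rule (the function name is ours; the intervals are those of the example above):

import math

def power_of_two_interval(T_star, T_B=1.0):
    """Smallest power-of-two multiple of the base period T_B satisfying T_star <= sqrt(2) * 2**k * T_B."""
    k = 0
    while T_star > math.sqrt(2) * (2 ** k) * T_B:
        k += 1
    return (2 ** k) * T_B

for T_star in (3.5, 5.6, 9.2):
    print(T_star, "->", power_of_two_interval(T_star))
# prints 4.0, 4.0 and 8.0 (weeks), matching the example above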
Reference
Silver, E. A., Pyke, D. F., and Peterson, R.; Inventory Management and Production Planning and
Scheduling; John Wiley and Sons.
**************
CHAPTER VII
Supply Chain Contract and Coordination
7.1 Introduction
Managing the flows in the supply chain network implies the presence of many decision makers within
the supply chain where each one operates a part of it. These decision makers could be either distinct
firms or managers of different departments within a firm. Every individual decision maker will
attempt to maximize his own profit, keeping in mind that others will also do the same thing. This
individual competitive behavior of the members of the supply chain adversely affects the overall
performance goal of the supply chain.
Thus, to avoid this undesirable situation, one of the most important issues arising in the management of a
supply chain is how to achieve perfect coordination amongst the members of the supply chain so as to obtain
cost savings and an increase in channel efficiency. The philosophy of supply chain contracting is to develop
coordination policies through pairwise interaction of an upstream (supplier) and a downstream
(retailer) agent at a time. Decision-making at different levels in the organization should be
so coordinated that operating policies are optimal for the organization as a whole. Without
coordination, improvement at one level may be lost due to inefficiencies at another level. For
example, reducing inventory at one level may not be beneficial if it is accumulated in any succeeding
stage. To avoid this undesirable situation, academicians have studied the issue and developed various
mathematical models. Porter (1985) has mentioned that cooperative relationship between buyer and
supplier is not a zero sum game in which one gains only at the expense of the other, but a relationship
in which both gain. Coordination is actually in the form of cooperative decisions, that is, the
individual entities make decisions, which are in the best interests of the entire supply chain.
As discussed above, it is clear that in a supply chain (SC), there are multiple firms owned and
operated by different parties, and each of these firms takes decisions, which are in line with their own
goals and objectives. As in all decentralized systems, the actions chosen by SC participants might not
always lead to the “optimal” outcome if one considers the supply chain as one entity. That is, since
each player acts out of self-interest, we usually see inefficiencies in the system, i.e., the results look
different than if the system was managed “optimally” by a single decision-maker who could decide on
behalf of these players and enforce the type of behavior dictated by this globally (or centrally) optimal
solution. In this section, we will take a look at the nature of inefficiencies that might result from the
decentralized decision-making in supply chains, and if and how one can design contracts such that
even though each player acts out of self interest, the decentralized solution might approach the
centralized optimal solution. For excellent reviews on SC contracts and coordination, we refer the
reader to Tsay, Nahmias and Agrawal (1999) and Cachon (2001).
Mechanisms of channel coordination and vertical control in production/distribution channels have
received attention from several disciplines such as Economics, Marketing apart from Operations
Management. The models in Economics literature generally assume a deterministic demand function.
Marketing literature has focused on channel coordination to maximize joint profits. Though
Marketing and Economics literature provide different motivations for coordination yet, ultimately the
motivations share the common objective of maximizing system welfare.
[Table 1: Equilibrium outcomes in the decentralized (DSC) and centralized (CSC) supply chains]
Consider a simple two-stage supply chain in which a supplier produces a product at unit cost c and sells it to a
retailer at a wholesale price w per unit, and the retailer faces the linear (inverse) demand curve P = a - bq,
where q is the quantity sold. The supplier’s and the retailer’s profits are ΠS = (w - c)q and ΠR = (a - bq - w)q,
respectively. The supply chain’s profits are Π = ΠS + ΠR = (a - bq - c)q. Note that the choice of w
only indirectly affects the total SC profits, since the choice of w impacts the choice of q.
Decentralized Supply Chain (DSC)
As in most real-world supply chains, suppose that the supplier and the retailer are two independently
owned and managed firms, where each party is trying to maximize his/her own profits. The supplier
chooses the unit wholesale price w and after observing w, the retailer chooses the order quantity q.
Note that this is a dynamic game of complete information with two players, supplier and retailer,
where the supplier moves first and the retailer moves second. Hence, we can solve this game using
backwards induction. Given a w, first we need to find the retailer’s best response q(w). The retailer
will choose q to maximize ΠR = (a - bq - w)q. This is a concave function of q, and hence from FOC
we get
$\frac{\partial \Pi_R}{\partial q} = a - 2bq - w = 0 \Rightarrow q(w) = \frac{a - w}{2b}$
Next, given the retailer’s best response q(w) = (a-w)/2b, the supplier maximizes ΠS = (w-c)q = (w -
c)(a - w)/2b. This is a concave function of w and from FOC we get
$\frac{\partial \Pi_S}{\partial w} = a - 2w + c = 0 \Rightarrow w = \frac{a + c}{2}$
The equilibrium solution for this decentralized supply chain is given in the second column of Table 1.
In this contractual setting, the supplier gets two-thirds of the SC profits, while the retailer gets only one-
third. This is partly due to the first-mover advantage of the supplier.
Now, let us consider a centralized (integrated) supply chain (CSC) where both the retailer and the
supplier are part of the same organization and managed by the same entity.
Centralized Supply Chain (CSC)
In this case there is a single decision-maker who is concerned with maximizing the entire chain’s profits
Π = (a - bq - c)q. This is a concave function of q and from the first order condition (FOC), we get
$\frac{\partial \Pi}{\partial q} = a - 2bq - c = 0 \Rightarrow q^* = \frac{a - c}{2b}$
The solution for the CSC is given in the third column of Table 1. From Table 1, we see that the
quantity sold as well as the profits are higher and the price is lower in the CSC than in the DSC.
Hence, both the supply chain and the consumers are better off in the CSC. What about the retailer and
the supplier? Are they both better off, or is one of them worse off in the CSC? What is the wholesale
price? How does the choice of w affect the market price, quantity, and the supply chain profits? A
closer look would reveal that w has no impact on these quantities. Any positive w would result in the
same outcome for the CSC because the firm would be paying the wholesale price to itself ! However,
the choice of w in the CSC is still very important as it determines how the profits will be allocated
between the supplier and the retailer. We can interpret w as a form of transfer payment from the
retailer to the supplier. What is the minimum w that is reasonable? For positive supplier profits, we
need w ≥ c. If we set w = c, the supplier’s profits are zero whereas the retailer captures the entire
supply chain’s profits. What is the w that splits the SC profits equally between the retailer and the
supplier? If we set w = (a + 3c)/4, then w - c = P - w = (a - c)/4 and each party’s profits are (a - c)²/8b. Note that
this is the same as the supplier’s profits in the DSC. Hence, if the supplier and the retailer split the
profits equally in the CSC, the supplier is at least as well off, and the retailer is strictly better off, than in
the DSC. In the DSC, the outcomes are worse for all the parties involved (supplier, retailer, supply
chain, and consumer) compared to the CSC, because in the DSC both the retailer and the supplier
independently try to maximize their own profits, i.e., they each try to get a margin, P - w and w - c,
respectively. This effect is called “double marginalization” (DM).
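A small numerical sketch of this comparison, using illustrative values a = 100, b = 1 and c = 20:

# Numerical comparison of the decentralized (DSC) and centralized (CSC) chains
# for the linear-demand model P = a - b*q with unit cost c (illustrative numbers).
a, b, c = 100.0, 1.0, 20.0

# DSC: supplier sets w = (a + c)/2, retailer orders q = (a - w)/(2b)
w = (a + c) / 2
q_dsc = (a - w) / (2 * b)
p_dsc = a - b * q_dsc
profit_s = (w - c) * q_dsc
profit_r = (p_dsc - w) * q_dsc

# CSC: a single decision-maker chooses q* = (a - c)/(2b)
q_csc = (a - c) / (2 * b)
p_csc = a - b * q_csc
profit_csc = (p_csc - c) * q_csc

print(profit_s + profit_r, profit_csc)          # 1200.0 vs 1600.0
print(1 - (profit_s + profit_r) / profit_csc)   # 0.25, i.e. the 25% DM loss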
In a serial supply chain with multiple firms there is coordination failure because each firm charges a
margin and neither firm considers the entire supply chain’s margin when making a decision.
In this stylized model, the profit loss in the DSC due to DM is 25% (also referred to as the DM loss).
It is clearly in the firms’ interest to eliminate or reduce double marginalization, especially if this can
be done while allocating the additional profits to the firms such that both firms benefit. This simple
model suggests that vertical integration could be one possible way of eliminating double
marginalization. However, for reasons we discussed at the beginning of this chapter, vertical
integration is usually not desirable, or not practical. Then the question is, can we change the terms of
the trade so that independently managed companies act as if they are vertically integrated? This is the
concept known as “supply chain coordination.” In this stylized model, the retailer should choose q* =
(a - c)/2b in any coordinating contract. One can easily think of some very simple alternative contracts
to eliminate double marginalization:
7.2.1 Take-it-or-leave-it contract
The supplier offers the following contract to the retailer: buy q* at the wholesale price w = (a + c)/2,
or nothing. In this case the supplier’s profit is Π*, i.e., the supplier captures 100% of the CSC profits.
The supplier offers the contract and grabs the whole of the CSC profit thanks to its first-mover advantage.
This contract requires a very powerful supplier.
From the FOC we have $\frac{\partial R(q)}{\partial q} = \frac{w + c_R}{\alpha}$. We had $\frac{\partial R(q)}{\partial q} = c$ in the integrated chain. Hence, by
setting the two right-hand sides equal to each other, we achieve the same quantities in the CSC and the DSC.
That is, $w + c_R = \alpha c$, i.e., $w = \alpha c - c_R$; we then have marginal revenue equal to marginal cost in the DSC
as well, and $q = q^*$. In this case, the retailer’s profit is
$\Pi_R = \alpha R(q^*) - (\alpha c - c_R) q^* - c_R q^* = \alpha\left(R(q^*) - c q^*\right) = \alpha \Pi^*$, i.e., the retailer captures an α fraction of
the centralized chain’s profits.
In the DSC with a buyback contract, the retailer’s profit is
$\Pi_R$ = revenue + returns - cost - purchase cost
$= R(q) + b\left(q - \frac{R(q)}{p}\right) - (c_R + w) q = R(q)\left(1 - \frac{b}{p}\right) - (c_R + w - b) q$
In revenue sharing contracts we have $\Pi_R = \alpha \Pi^* = \alpha\left(R(q^*) - c q^*\right)$, i.e., the retailer’s profit is an
affine transformation of the centralized supply chain’s profits. Hence the retailer’s optimal quantity is
the same as the centralized supply chain’s optimal quantity. For a similar situation to hold for buyback
contracts, the retailer’s profit should again equal $\alpha \Pi^*$, i.e.,
$R(q)\left(1 - \frac{b}{p}\right) - (c_R + w - b) q = \alpha \Pi^*$, and we need to have $(c_R + w - b) = \alpha c$ and $\left(1 - \frac{b}{p}\right) = \alpha$.
$\left(1 - \frac{b}{p}\right) = \alpha \Rightarrow b = p(1 - \alpha)$
and $(c_R + w - b) = \alpha c \Rightarrow w_b = \alpha c + b - c_R = p(1 - \alpha) + \alpha c - c_R$
Hence, if the supplier chooses w and b in this fashion, the retailer will get a fraction of the CSC’s
profits.
In the revenue sharing contract we had $w = \alpha c - c_R$. In buyback contracts, we have
$w_b = p(1 - \alpha) + \alpha c - c_R$ and $b = p(1 - \alpha)$; that is, $w_b = w + b$. Hence, in the buyback contract
the supplier charges a little more for the wholesale price and in return guarantees to give back b for
any unit that is not sold.
In revenue sharing contracts the wholesale price did not depend on the selling price. But in a buyback
contract, the wholesale price and the return price depend on the selling price. Hence, in a buyback
contract the supplier should set w and b as functions of the selling price p. Is the buyback contract
flexible in terms of how the profits are shared between the supplier and the retailer? The answer is
yes. Actually, for every revenue sharing contract, there is an equivalent buyback contract and vice
versa.
$\frac{\partial \Pi_R}{\partial q} = \frac{\partial R(q)}{\partial q} - w - c_R = 0 \Rightarrow \frac{\partial R(q)}{\partial q} = w + c_R$
In order to obtain the CSC solution, we need $w = c_S$, i.e., the supplier must sell at cost. In this case the
retailer gets $\Pi_R = R(q^*) - (w + c_R) q^* - F = R(q^*) - (c_S + c_R) q^* - F = \Pi^* - F$. The supplier’s profit
is $\Pi_S = F + (w - c_S) q^* = F$.
Notice that F determines the allocation of profits between the supplier and the retailer. This is a
flexible contract as any allocation is possible.
7.3 Present trend in the study of supply chain coordination from Operations Management
perspective
In the last couple of years, interest in the field of supply chain coordination from the Operations
Management perspective has grown considerably. One line of research employs quantity discounts and
quantity commitments as coordination mechanisms (Aderohunmu et al. 1995; Lariviere, 1999). Further,
authors like Ertogral et al. (2001) have put emphasis on the need to incorporate the negotiation process in
supply chain coordination. The negotiation process focuses on dynamic sharing of the surplus between the
two parties, where both can take part in the decision. Here, the negotiation ends with a win-win
feeling for both parties. This is considered superior to a pre-determined static division of surplus through a
side payment strategy for a decentralized supply chain.
Many authors (e.g. Kohli and Park, 1989) have assumed that both buyer and the supplier have full
information about each other. But in practice, such a comfortable situation hardly exists. Realizing
this fact, recently some authors (e.g. Corbett et al., 2000; Ha, 2001) have incorporated the information
asymmetry factor in their models of supply chain coordination problem.
Another area of supply chain coordination that has drawn the attention of researchers is the
development of suitable mechanisms to coordinate the logistics processes that are controlled by various
companies. Swenseth et al. (2002) have reported that often about 50% of the total annual logistics cost of
a product can be attributed to transportation cost. Therefore, for the overall performance improvement
of the supply chain, there is a need to develop coordination mechanisms to coordinate the logistics
processes between the various parties of a supply chain. Particularly, in the multi-buyer case where
buyers are located in different geographical regions, individual shipments to the buyers by the vendor
increase the total system cost. In such a situation, coordinated shipment from the vendor to multiple
buyers helps to reduce channel cost.
Notation:
Subscripts 1 and 2 represent the buyer and the vendor/manufacturer respectively,
D = the buyer’s annual demand for the product,
Si = setup and ordering cost for firm i, i = {1, 2}
ri = annual inventory holding cost expressed as a percentage of the value of the item for
firm i, i = {1, 2}
Q = the buyer’s order quantity
M2 = the vendor/manufacturer’s gross profit on sales expressed as a percentage
dk = discount per unit offered by the manufacturer
R2 = the manufacturer’s production rate in units per year
P0 = the buyer’s base purchase price without quantity discount
C2 = the manufacturer’s manufacturing cost per unit excluding order processing, setup, and
inventory holding costs per unit
7.4.1 Background
The study of integrated inventory models can be viewed as one of the origins of supply chain coordination
research from the Operations Management perspective. These models mainly examine the benefits accrued
in the system due to coordination of order quantities between the two parties. Earlier, Goyal and
Gupta (1989) reviewed the literature on buyer-vendor coordination models. Benton and Park
(1996) and Munson and Rosenblatt (1998) have also reviewed some of the papers discussed here
under different contexts. We have mainly considered here the literature on channel coordination/
supply chain coordination models that take an operations approach. The operations approach mainly
concentrates on the operating cost of the channel. Operating cost is considered as a function of the
retailer’s/buyer’s order quantity, where a fixed retail price is assumed and this leads to a fixed final
demand.
The traditional inventory model assumes that a rational buyer would prefer to purchase his optimal
order quantity (EOQ) as any deviation from this quantity would increase his total cost. The buyer’s
annual total cost for order quantity Q can be expressed as
$TC(Q) = P_0 D + \left(\frac{D}{Q}\right) S_1 + \left(\frac{Q}{2}\right) r_1 P_0$   (1)

When quantity discount is not offered, the buyer’s optimal order size is the EOQ,

$Q^* = \sqrt{\frac{2 D S_1}{r_1 P_0}}$   (2)

The supplier’s (vendor’s) yearly net profit is

$YNP_2 = D M_2 P_0 - \left(\frac{D}{Q}\right) S_2$   (3)
The total channel cost is the sum of the individual cost components of the buyer and the vendor
respectively.
(i) One can maximize the supplier’s yearly net profit, as shown by equation (3) of our general
model, by inducing the buyer through incentives to adopt a different lot size. The authors who
have attempted the coordination problem from this perspective are classified here as
vendor’s/manufacturer’s perspective coordination models.
(ii) Similarly, one can minimize the total system cost with respect to a coordinated lot size or
order quantity, as shown by equation (4), and thereby improve the system savings.
We have classified those models here as joint buyer and seller/manufacturer perspective
coordination models.
(iii) On the other hand, some authors have studied buyer-vendor coordination through
quantity discounts as a non-cooperative or cooperative game. In a non-cooperative
game, each member will try to maximize his profit or minimize his cost. Thus the
objective here will be to maximize equation (3) and minimize equation (1) of the general
model. However, in a cooperative game, the objective will be to maximize system profit
subject to the constraint that no player loses relative to, or incurs more cost than in, their
non-cooperative solution. We have categorized these models as buyer and seller/manufacturer
coordination models under a game-theoretic framework.
In this stream of literature, most of the models have assumed that seller /manufacturer knows or can
estimate the buyer’s setup and holding costs. Further, EOQ assumptions are considered for the buyer.
The buyer is assumed to act optimally and order the quantity leading to his lowest total cost.
(a) Vendor’s/manufacturer’s perspective coordination models
Monahan considered a vendor who induces the buyer to increase his order quantity from the EOQ, $Q^*$, to $KQ^*$ (K ≥ 1). The buyer’s total cost at the order quantity $KQ^*$ is

$TC(KQ^*) = P_0 D + \sqrt{2 D S_1 r_1 P_0}\left(1 + \frac{(K - 1)^2}{2K}\right)$   (5)

The increase in cost resulting from the larger order size is the difference between the cost at the
EOQ and the cost at the order size $KQ^*$ as given by equation (5). The vendor offers a price discount per
unit equal to the increase in cost on the buyer’s side, which is given as

$d_k = \sqrt{\frac{2 S_1 r_1 P_0}{D}} \left\{\frac{(K - 1)^2}{2K}\right\}$   (6)
The supplier’s yearly net profit after giving the discount is given as

$YNP_2 = D(M_2 P_0 - d_k) - \frac{D}{K Q^*} S_2$   (7)

Substituting the value of $d_k$ into equation (7) and maximizing the supplier’s profit $YNP_2$ with respect
to K, the optimal value of K is obtained as

$K^* = \sqrt{1 + \frac{S_2}{S_1}}$   (8)

From the expression for $K^*$ in equation (8), one can easily see that when the value of $S_2$ is large, the
supplier can entice the buyer to order in larger quantities, and that the value of $K^*$ is independent of the
amount of discount offered by the supplier. One important issue here is that when the buyer is exactly
compensated for the increase in cost due to the larger order size, the buyer will be indifferent towards increasing
his order quantity. Monahan developed the model considering a lot-for-lot policy and an all-unit quantity
discount schedule with a single price break.
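A minimal Python sketch of these expressions (equations (6) and (8)); the data values are purely illustrative:

import math

def monahan_discount(D, S1, S2, r1, P0):
    """Monahan-style all-unit discount: optimal multiple K* and the per-unit discount d_K
    that exactly compensates the buyer for ordering K*Q* instead of the EOQ Q*."""
    Q_star = math.sqrt(2 * D * S1 / (r1 * P0))          # buyer's EOQ, eq. (2)
    K_star = math.sqrt(1 + S2 / S1)                      # eq. (8)
    d_K = math.sqrt(2 * S1 * r1 * P0 / D) * (K_star - 1) ** 2 / (2 * K_star)   # eq. (6)
    return Q_star, K_star, d_K

# Illustrative data: demand, buyer order cost, vendor setup cost, holding rate, unit price
print(monahan_discount(D=10000, S1=50.0, S2=400.0, r1=0.2, P0=25.0))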
(b) Joint buyer and seller / manufacturer coordination models
Some authors have used quantity discounts as a coordination mechanism to maximize the joint
profit of the buyer and the vendor. The objective function here is typically to minimize the
total channel cost as shown by equation (4). The models here provide some explicit mechanism for
division of the surplus generated in the channel due to coordination. Like the seller’s perspective models,
here also it is assumed that the seller has full information about the buyer’s cost structure.
The idea of joint optimization for buyer and vendor was initiated by Goyal (1976) and later reinforced
by Banerjee (1986a). The objective of Goyal’s model was to minimize the total relevant cost for both the
vendor and the buyer for the order quantity Q. He assumed that the manufacturer does not produce the
item and in fact purchases it from another supplier. Moreover, he assumed that inventory holding
costs are independent of the price of the item.
Banerjee (1986a) formulated a joint economic lot size (JELS) model for a buyer and a vendor system
where the vendor has a finite production rate. He determines the JELS Q* by differentiating the total
system cost equation with respect to Q.
$TC(Q) = \frac{D}{Q}(S_1 + S_2) + \frac{Q}{2}\, r \left(P_0 + \frac{D}{R_2} C_2\right)$   (11)

$Q^* = \sqrt{\frac{2 D (S_1 + S_2)}{r\left(P_0 + \frac{D}{R_2} C_2\right)}}$   (12)
The assumption considered is that a production setup is incurred every time an order is
placed. Banerjee finds that without a quantity discount, the buyer incurs a loss but the supplier benefits if the
JELS is adopted rather than the buyer’s EOQ. He developed two bounds on the discount that allow a
joint benefit to both parties if the buyer increases the order quantity from the EOQ to the JELS
quantity. When the discount is fixed at the lower bound, all the benefits go to the supplier and the
buyer is indifferent, whereas when the discount is set at the maximum level, all benefits shift to the
buyer and the supplier is indifferent. While suggesting equal distribution of the gains from joint
economic ordering, Banerjee (1986a) mentioned that the questions of pricing and lot-sizing decisions are
settled through negotiations between the buyer and the seller. Later on, we will see how some authors
have incorporated into their models the bargaining power of the channel members in fixing the order
quantity and the amount of discount.
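A minimal Python sketch of equation (12) with illustrative data:

import math

def jels(D, S1, S2, r, P0, C2, R2):
    """Joint economic lot size of eq. (12): Q* = sqrt(2D(S1+S2) / (r*(P0 + (D/R2)*C2)))."""
    return math.sqrt(2 * D * (S1 + S2) / (r * (P0 + (D / R2) * C2)))

# Illustrative data: demand, order/setup costs, holding rate, price, unit cost, production rate
print(round(jels(D=10000, S1=50.0, S2=400.0, r=0.2, P0=25.0, C2=18.0, R2=40000), 1))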
Viswanathan (1998) compared two supply policies for an integrated vendor-buyer
inventory model. In the first policy, the vendor produces a batch and supplies it to the buyer in a number of
equal-sized shipments at constant intervals. The second policy is to supply the production batch to the
buyer in shipments of increasing size. He identified problem parameters under which the equal shipment
size policy or the increasing shipment size policy is optimal. The author observed that neither of the
two policies dominates the other for all problem parameters. The second policy attempts to shift
inventory to the buyer as quickly as possible. This type of strategy works better if the holding cost for
the buyer is not much higher than that for the vendor.
Three level coordination models
Munson and Rosenblatt (2001) have extended the two level supply chain to a three level supply chain
by considering a supplier (who is supplying raw materials to manufacturer), a manufacturer and a
retailer and they explored the benefit of using quantity discount on both ends of the supply chain to
decrease cost. Like the earlier scenario, the manufacturer’s production lot size is an integer multiple of the
buyer’s order quantity, and the manufacturer orders an integer multiple of his production lot size from the
raw materials supplier. They have shown that through a quantity discount mechanism, a company can
coordinate its purchasing and production functions. This creates an integrated plan that dictates order
and production quantities throughout a three-firm channel. They have considered the manufacturer as the
dominant member of the channel who takes the lead role in coordinating the channel.
Yang and Wee (2001) also considered the integration of a producer, distributors, and retailers in a
three-stage supply chain. They developed an economic ordering policy under constant demand for an
arborescent (i.e. tree-like) inventory structure and showed that the integrated approach results in a
significant cost reduction compared with independent decision making by each individual entity of the
supply chain. The model, however, does not consider how the increase in cost at the retailer level due to
implementation of the integrated policy is to be compensated.
Khouja (2003a) also considered a three-stage supply chain with a tree-like inventory structure.
He examined three coordination mechanisms between the members of the supply chain and showed
that some of them can lead to a significant reduction in total cost. The author, however, did not consider
the distribution of the savings among the different members of the supply chain.
Khouja (2003b) also studied coordination of the entire supply chain, from raw materials to customer,
considering single and multiple components. He considered component scheduling decisions at each
stage in which manufacturing occurs and their impact on holding cost. He showed that complete
synchronization of the chain leads to losses for some members of the supply chain, and provided an
algorithm for optimal synchronization of the supply chain and incentive alignment along it.
(c) Buyer and seller/manufacturer coordination models under a game-theoretic framework
Some authors have viewed the buyer-vendor coordination problem through the quantity discount
mechanism as a two-person game. It can be formulated as a non-zero-sum game having elements of
both conflict and cooperation. In a non-cooperative game, played independently, each player (the
vendor and the buyer) intends to maximize its individual gain. The objective functions for this
game, derived from the general model, can be written as
Minimize TC = P0 D + D S1/Q + r1 P0 Q/2

Maximize YNP2 = D M2 P0 − D S2/Q
Generally, the solution to the non-cooperative game is obtained by using an established equilibrium
concept; different types of game models have different solution concepts. In the Stackelberg game,
the player who holds the more powerful position is called the leader and enforces his strategy on the other,
while the player who reacts to the leader's decision is called the follower. The solution obtained for
this game is the Stackelberg equilibrium.
In a cooperative game, on the other hand, both the buyer and the seller consider maximizing the system
profit, subject to the buyer's total annual cost under cooperation being less than or at most equal to that
under non-cooperation, and the seller's total annual profit under cooperation being greater than or at
least equal to that under non-cooperation. The objective function for this game, derived from the general
model, can be written as

Max λ YNP2 − (1 − λ) TC
subject to TC ≤ TC*
YNP2 ≥ YNP2*
where TC* and YNP2* represent the cost of the buyer and the profit of the seller before cooperation.
Depending upon the bargaining power of the seller and the buyer, the value of λ varies between 0
and 1. In the cooperative game, a group of strategies is called a Pareto-efficient point when at least one
player is better off and no player is worse off relative to the initial condition. In a decentralized
supply chain where the members belong to different firms, the method of bargaining and
negotiated solution, which is dynamic in nature, may result in better coordination of the supply chain
than the static coordinated solution of a centralized supply chain.
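A minimal numerical sketch of the cooperative formulation is given below. It assumes, purely for illustration, that cooperation is implemented through a per-unit discount d offered by the seller, and searches a coarse grid over (Q, d) for the weighted objective subject to the two participation constraints; the cost expressions and all parameter values are hypothetical.

    from math import sqrt

    # Hypothetical parameters (D demand, S1/S2 order and setup costs, r carrying charge,
    # P0 list price, M2 seller's margin fraction of P0, lam = bargaining weight lambda)
    D, S1, S2, r, P0, M2, lam = 1000, 25.0, 400.0, 0.2, 50.0, 0.3, 0.5

    def TC(Q, d=0.0):
        # Buyer's annual total cost at order quantity Q and per-unit discount d
        return (P0 - d) * D + D * S1 / Q + r * (P0 - d) * Q / 2

    def YNP2(Q, d=0.0):
        # Seller's annual net profit at order quantity Q and per-unit discount d
        return D * (M2 * P0 - d) - D * S2 / Q

    # Non-cooperative benchmark: buyer orders its own EOQ, no discount
    Q_eoq = sqrt(2 * D * S1 / (r * P0))
    TC_star, YNP2_star = TC(Q_eoq), YNP2(Q_eoq)

    # Grid search for the cooperative solution: max lam*YNP2 - (1-lam)*TC
    # subject to neither party being worse off than at non-cooperation
    best = None
    for Q in range(10, 1001, 5):
        for step in range(0, 501):
            d = step / 100.0
            if TC(Q, d) <= TC_star and YNP2(Q, d) >= YNP2_star:
                obj = lam * YNP2(Q, d) - (1 - lam) * TC(Q, d)
                if best is None or obj > best[0]:
                    best = (obj, Q, d)
    print(best)   # (objective value, order quantity Q, discount per unit d)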
References
[1] Banerjee, A., 1986a, A joint economic lot size model for purchaser and vendor, Decision Sciences,
Vol. 17, No. 3, pp. 292-311.
[2] Cachon, P.G. and Lariviere, A. M., 2001, Contracting to assure supply: How to share demand
forecasts in a supply chain, Management Science, Vol. 47, No. 5, pp. 629-646.
[3] Corbett, C. J., and Tang, C. S., 1999, Designing supply contracts: contract type and information
asymmetry, In: S. Tayur, M. Magazine, R. Ganeshan, (Eds.), Quantitative models for supply
chain management, Published by Kluwer Academic Publishers, 1999, pp. 269-297.
[4] Goyal, S. K., 1976, An integrated inventory model for a single supplier single customer problem,
International Journal of Production Research, Vol. 15, No. 1, pp. 107-111.
[5] Ha, A., 2001, Supplier buyer contracting: Asymmetric cost information and cut off level policy
for buyer participation, Naval Research Logistics, Vol. 48, No. 1, pp. 41-64.
[6] Khouja, Moutaz., 2003, Optimizing inventory decisions in a multistage multi customer supply
chain, Transportation Research part E, Vol. 39, No. 3, pp. 193-208.
[7] Munson, L.C. and Rosenblatt, J.M., 2001, Coordinating a three level supply chain with quantity
discounts, IIE Transactions, Vol. 33, No. 4, pp. 371-384.
[8] Sarmah, S.P., Acharya, D. and Goyal, S.K., 2006, Buyer vendor coordination models in supply chain
management: An invited review, European Journal of Operational Research, Vol. 175, pp. 1-15.
[9] Tsay, A., Nahmias, S. and Agarwal, N., 1999, Modeling supply chain contracts: A review, In: S.
Tayur, M. Magazine, R. Ganeshan, (Eds.), Quantitative models for supply chain management,
Kluwer Academic Publishers, pp. 301-336.
[10] Viswanathan, S., 2001, Coordinating supply chain inventories through common
replenishment epoch, European Journal of Operational Research, Vol. 129, No. 2, pp. 277-286.
[11] Viswanathan, S., 1998, Optimal strategy for the integrated vendor buyer inventory model,
European Journal of Operational Research, Vol. 105, No. 1, pp. 38-42.
[12] Yang, C.P. and Wee, M.H., 2001, An arborescent inventory model in a supply chain system,
Production Planning and Control, Vol. 12, No. 8, pp. 728-735.
*********
CHAPTER VIII
A Method for Supply Base Rationalization Considering Supply Risk *
8.1 Introduction
Collaborative sourcing (alternatively named as “partnership sourcing”) has been widely proposed in
the literature [1,2,3] to foster a long-term collaboration between a buyer and its suppliers based on
trust and cooperation, with the buyer relying on a single or a small number of preferred suppliers for
sourcing a product. It has merits over adversarial competition because of its lower operational costs
[1] arising out of fewer dedicated suppliers and because risks and rewards are shared between them. A
prerequisite for developing a strong buyer-supplier relationship is to have a small and rational supplier
base. However, it is a tedious task for a practicing manager to bring the supplier base to a rational
level. The word 'rationalization' is most commonly associated with this task. A rational supplier base
leads to: (1) reduced supplier development costs, (2) close and workable supplier relationships, and
(3) business rewards to its suppliers [4]. There has been much confusion with regard to the concepts
underlying the supplier base reduction and supplier base rationalization. Cousins [5] and Dubois [6]
have used the term ‘supplier rationalization’ to principally mean supplier base reduction. Supplier
base reduction presupposes the existence of a large supplier base and is concerned with retaining only
the top performers so as to limit the downsized supplier base to a predetermined size. Supplier base
rationalization, on the other hand, consists of two phases: (1) Determination of the optimum size of
the supplier base and (2) Identification of the suppliers who should constitute this base.
Supplier base rationalization may result in an expanded or contracted supplier base depending on the
number of existing suppliers vis-à-vis the optimal size of the supplier base. Industrial firms in many
developing countries still follow the traditional purchase management practices and have large
supplier bases, especially for MRO items. For these organizations, the problem of identification of the
constituents of the supplier base reduces to a problem of reducing the supplier base to a rational level.
Supplier base rationalization may be viewed as a one-time selection of a small group of suppliers so
as to reduce transactional costs and purchasing complexity and build long-term buyer-supplier
relationships. However, no distinct approach for supplier base rationalization has been put
forward in the purchasing literature. The process of supplier base rationalization is strategic in
nature, with suppliers being evaluated on factors that represent their short- and long-term
characteristics. Many factors generally considered in the literature for supplier evaluation are
qualitative in nature. These factors can be measured, at best, subjectively, and therefore tend to be
imprecise. Thus, the process of supplier base rationalization can be thought of as a process that
consists of the following three steps:
i. Determination of optimal size of supplier base,
ii. Selection of a method to be used for evaluation of suppliers, and
iii. Identification of the constituents of the supplier base.
Whereas there is a large volume of literature on supplier evaluation methods, the literature
on the first and the last issues is scarce. Therefore, in this chapter we address the first and third
issues only; they are presented one after the other in the following sections.
*
Contributed by Ashutosh Sarkar
Lecturer, Department of Mechanical Engineering
Institute of Technology, Banaras Hindu University, Varanasi 221005
8.2. Determination of optimal size of supplier base
The majority of the literature evaluates the effect of the size of the supplier base on inventory and
replenishment lead time [7, 8, 9]. Further, these studies take the number of suppliers as an input to study
its effect and do not propose any method for finding the optimal supplier base size. Studies that
specifically deal with the problem of determining the optimal size of the supplier base are due to
Agrawal and Nahmias [10], Weber et al. [11], Berger et al. [12], and Kauffman and Leszczyc [13].
Agrawal and Nahmias [10] formulated a profit maximization problem to determine the optimal lot
size and the optimal number of suppliers. The model assumes that having a larger number of suppliers
reduces yield uncertainty but increases the fixed cost associated with operating multiple suppliers.
The model trades off the cost of yield uncertainty against the fixed cost to determine the optimal size of
the supplier base. Kauffman and Leszczyc [13] used the concepts of buyer utility and decision-related
costs to derive the optimal choice-set size for one-time and repeat-purchase situations. They also used
actual bid price and cost data from the industrial steel pipe market to empirically arrive at
envelopment analysis techniques to solve the problem. Considering that the biggest motivation for
having multiple suppliers is to prevent complete disruption of supplies due to an unforeseen natural
disaster (like earthquake, cyclone, tsunami, and flood) and/or man-made disaster (like power grid
failure, strike, and communal violence), the above two modeling approaches are inadequate to
determine the optimal size of the supplier base.
Berger et al. [12] argued that supply disruption or interruption of the inbound supply network can
obstruct the functionality of the whole chain. In order to determine the optimal size of the supplier
base, they considered supply risks, the risks posed by the occurrence of catastrophic events that lead
to complete disruption of inbound supply network. They classified these events as (1) ‘super-events’
that can affect all suppliers simultaneously and disrupt supplies from all the suppliers, exhibiting total
effect, (2) ‘semi-super-events’ that affect only a subset of suppliers, exhibiting regional effect, and (3)
‘unique-events’ that affect a particular supplier uniquely, exhibiting local effect. The purchasing
environment determines the classification of an event as a super-, semi-super-, or unique-event. For
example, a cyclone in a coastal region may be termed as a super-event if all suppliers of an item are
located in this region and supplies from these suppliers fail. It may be labeled as a semi-super-event if
a few but not all the suppliers fail. Berger et al. [12] considered the probabilities of occurrence of
super- and unique-events and used the decision tree approach to find the financial loss and operating
cost of working with multiple suppliers. Although Berger et al. [12] defined three types of
catastrophic events, they considered only two of them while developing their model. In this
chapter, we consider all three types of events to determine the optimal number of suppliers.
Supply risk is taken here as the 'probability that supplies of an item will be affected because of problems
at the supplier's end', and the cost of supply disruption as its impact.
We assume a semi-super-event to be location specific: its occurrence disrupts all suppliers in a
geographical location while not affecting suppliers in other locations. We also assume that all
suppliers in a location have more or less similar risks due to the occurrence of a unique-event, so that
individual variation in supply risk among the suppliers of a location can be neglected. Furthermore, a
decision based on small variations in the probability of occurrence of a unique-event for each individual
supplier is inappropriate, considering that precise information on all potential suppliers (both the
existing suppliers and those who are not) may not be available and the existing information base needs
to be updated over time for use during the actual process of supply base rationalization. Individual
variations need to be considered during the actual reduction of the supplier base. Thus, for our case, the
supply risk due to a unique-event represents the character of the supply market rather than of the
individual supplier, and it is a function of the number of suppliers engaged.
Jl = number of suppliers available at location l.
Fig. 1 Decision tree for supply disruption: a super-event (probability P*) fails all suppliers; otherwise a
semi-super-event at location l (probability Pl**) fails all suppliers at that location; otherwise each supplier
j at location l may fail due to a unique-event (probability ρjl). No supplier fails when none of these events occurs.
From the decision tree, the various probabilities can be easily derived. We use P(.) to represent the
probability of occurrence of an event.
P(all suppliers at location l fail because of a semi-super-event) = (1 − P*) Pl**

P(all suppliers at location l fail because of either a super-event or a semi-super-event) = P* + (1 − P*) Pl**

P(all suppliers at all locations fail because of either a super-event or a semi-super-event)
= P* + (1 − P*) P1** P2** ... PL**

Similarly, P(all suppliers at all locations fail because of either a super-event, a semi-super-event or a unique-event)

= P* + (1 − P*) ∏_{l=1}^{L} Pl** + (1 − P*) ∏_{l=1}^{L} {(1 − Pl**) ρl^Jl}        (2)
The number of units short during the period of supplier failure can be estimated from the information
on demand and the duration of time (T′ − T) required to recover from a catastrophic event that prevents
the suppliers from supplying the product. The number of units short during the period (T′ − T) is D(T′ − T).        (3)

Total shortage cost during the period of supply failure, CT = CS(T′ − T) D(T′ − T).

We refer to CT as the cost of supply disruption. Thus, the total cost of engaging y suppliers, f(y),
is the sum of the cost of operating y suppliers and the cost due to shortages when all suppliers fail to make
their supplies.
f(y) = C(y) + CT [ P* + (1 − P*) ∏_{l=1}^{L} Pl** + (1 − P*) ∏_{l=1}^{L} {(1 − Pl**) ρl^il} ]        (4)

where C(y) is the cost of operating y suppliers and y = Σ_{l=1}^{L} il. The problem, thus, is to find the
optimum number of suppliers y that minimizes the sum of the cost of operating the suppliers and the
cost of shortage due to supply disruption. Stated more succinctly, the problem is to
“Find y that minimizes (4).”
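A minimal sketch of evaluating f(y) in Eq. (4) for a given allocation of suppliers to locations is shown below; the probabilities, the disruption cost CT and the linear operating cost C(y) = a + by are hypothetical values used only to illustrate the computation.

    from math import prod

    def total_cost(alloc, P_super, P_semi, rho, C_op, CT):
        # f(y) per Eq. (4) for the locations actually used (each location in alloc gets >= 1 supplier).
        # alloc  : dict {location: number of suppliers i_l engaged there}
        # P_semi : dict {location: semi-super-event probability Pl**}
        # rho    : dict {location: unique-event probability rho_l}
        y = sum(alloc.values())
        p_all_fail = (P_super
                      + (1 - P_super) * prod(P_semi[l] for l in alloc)
                      + (1 - P_super) * prod((1 - P_semi[l]) * rho[l] ** alloc[l] for l in alloc))
        return C_op(y) + CT * p_all_fail

    # Hypothetical data: operating cost C(y) = a + b*y, two candidate locations A and B
    P_super = 0.001
    P_semi  = {"A": 0.02, "B": 0.02}
    rho     = {"A": 0.031, "B": 0.030}
    C_op    = lambda y: 500.0 + 300.0 * y
    CT      = 35000.0

    for alloc in ({"A": 1}, {"B": 1}, {"A": 1, "B": 1}, {"A": 2, "B": 1}):
        print(alloc, round(total_cost(alloc, P_super, P_semi, rho, C_op, CT), 2))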
This problem is hereafter called the Supplier Size Problem, where y is the decision variable and y > 0.
If the choice of y suppliers is to be economically justified compared to the choice of y − 1
suppliers, the following condition must be satisfied:
[C(y) − C(y − 1)] / CT < (1 − P*)(1 − Pl**) ρl^(y−1) (1 − ρl)        (5)
We assume that the operating cost, C(y), has two components: (1) a fixed component, a, and (2) a
variable component, b, per supplier, so that C(y) = a + by. Inequality (5) can then be rewritten as
b/CT < (1 − P*)(1 − Pl**) ρl^(y−1) (1 − ρl)        (6)
From inequality (6) it can be concluded that the existing supplier base could be reduced as long as the
following condition holds:
y > 1 + ln{ b / [CT (1 − P*)(1 − Pl**)(1 − ρl)] } / ln ρl        (7)
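For a single location, inequality (7) gives a direct threshold on the size of the supplier base. A small numerical check, with hypothetical values of b, CT and the event probabilities, is shown below.

    from math import log, floor

    b, CT = 300.0, 35000.0                     # hypothetical marginal operating cost and disruption cost
    P_super, P_semi, rho = 0.001, 0.02, 0.031  # hypothetical event probabilities for one location

    # Inequality (7): the base can be reduced as long as y exceeds this threshold,
    # so the single-location optimum here is floor(threshold) suppliers.
    threshold = 1 + log(b / (CT * (1 - P_super) * (1 - P_semi) * (1 - rho))) / log(rho)
    print(round(threshold, 2), "-> optimal size:", floor(threshold))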
Inequality (6) can also be extended to multiple locations. For the case where choosing y + 1 suppliers is
superior to choosing y suppliers, it can be rewritten as

b/CT < (1 − P*) [ ∏_{l=1}^{u} (1 − Pl**) ρl^yl ] (1 − ρu)        (8)

where yl (with Σ_{l=1}^{u} yl = y) represents the number of suppliers chosen from location l,
l = 1, 2, ..., u, and the (y + 1)th supplier is chosen from location u.
Fig. 2 Decision tree enumerating the alternative numbers of suppliers (one to six) that can be chosen
across Locations 1, 2 and 3 (0S, 1S and 2S denote choosing zero, one or two suppliers at a location).
Theorem 1: If, for a given set of locations, selecting y suppliers is more economical than selecting y + 1
suppliers, then selecting y suppliers is also more economical than selecting y + 2 suppliers from the same
set of locations.
Proof: If the selection of y suppliers from a set of u locations is economically more advantageous
compared to choosing y + 1 suppliers then from inequality (8), we get
b/CT > (1 − P*) [ ∏_{l=1}^{u} (1 − Pl**) ρl^yl ] (1 − ρu)        (9)
Similarly, assuming that both the (y + 1)th and the (y + 2)th suppliers are chosen from location u, the
selection of y + 2 suppliers is economically more advantageous than selecting y + 1 suppliers from
the same set of locations if the following condition holds:

b/CT < (1 − P*) [ ∏_{l=1}^{u} (1 − Pl**) ρl^yl ] (1 − ρu) ρu        (10)

However, since ρu < 1, inequality (9) implies that

b/CT ≮ (1 − P*) [ ∏_{l=1}^{u} (1 − Pl**) ρl^yl ] (1 − ρu) ρu
Thus, if, for the same set of locations, selecting y suppliers is more economical than selecting y + 1
suppliers, then it is also more economical than selecting y + 2 suppliers from the same set of
locations.
Theorem 2: If y suppliers are to be chosen from a set of L locations, with at least one from each
location, then it is always economically advantageous to choose as many suppliers as possible from
the location that has the minimum unique-event probability.
Proof: Let us consider that y suppliers are to be chosen from u locations. Then the total cost
function f(y) is given as

f(y) = C(y) + CT [ P* + (1 − P*) P1** ... Pu** + (1 − P*)(1 − P1**) ... (1 − Pu**) ρ1^y1 ... ρu^yu ]

where y1, y2, ..., yu represent the numbers of suppliers chosen from locations 1, 2, ..., u, respectively.
Instead of selecting a supplier from location 1, if we select a supplier from location u, then the total
cost f′(y) is given as

f′(y) = C(y) + CT [ P* + (1 − P*) P1** ... Pu** + (1 − P*)(1 − P1**) ... (1 − Pu**) ρ1^(y1−1) ... ρu^(yu+1) ]
If the latter alternative has to be more advantageous than the former, then the following condition has
to be satisfied.
f(y) − f′(y) > 0

or, (1 − P*)(1 − P1**) ... (1 − Pu**) ρ1^y1 ... ρu^yu (1 − ρu/ρ1) > 0

or, ρu/ρ1 < 1
The above condition will be violated only if ρ u > ρ 1 .
While searching for solutions to the supplier size problem, two questions have to be answered: first,
what should be the total number of suppliers to be engaged, and second, how should these suppliers be
distributed among the various locations. The above two theorems help in searching for the solution to
the supplier size problem efficiently. Theorem 1 restricts the engagement of more suppliers from a set
of locations once the total cost f(y) starts increasing, thus limiting the evaluations for higher values
of y. The number of ways in which a given number of suppliers y may be chosen from a set of locations
is large; Theorem 2 allows us to evaluate only the combination that selects as many suppliers as possible
from the location for which ρl is least. Thus Theorems 1 and 2 considerably reduce the solution space
of the supplier size problem.
The total number of location combinations to be evaluated is q′ = LC1 + LC2 + ... + LCL. Thus, the
number of combinations of v′ locations chosen from the L locations is LCv′. When v′ is
fixed, there are a number of ways in which suppliers can be selected from these v′ locations. The
proposed tabular method evaluates each alternative for each combination of locations separately and
helps find the optimal number of suppliers to be chosen from each location that minimizes the total
cost, f ( y ) . For example, given five locations if we have to evaluate for various alternative values of
y , there will be five separate tables. The first table evaluates the various alternative values of y for
each location, the second table for all possible combinations of two locations, the third for three
locations, and so on. The minimum possible value of y in a table will be equal to v′ , the number of
locations considered in that table, i.e., at least one supplier will be chosen from each location for a
combination. For a given value of y , all possible alternatives for the number of suppliers that can be
chosen from each location are considered for evaluation. Table 1 shows an example of such tables
when v′ = 4. The first column in the table represents the various alternative values of y; the
variables y1, y2, y3, y4 represent the numbers of suppliers chosen from the first, second, third and
fourth locations, respectively. 'Location 1:2:3:5' signifies that suppliers are to be chosen from locations 1,
2, 3 and 5 only for that column. The entries in these columns are the total costs.
Table 8.1: Decision alternatives for a 5C4 problem
 y   y1  y2  y3  y4   Location   Location   Location   Location   Location
                      1:2:3:4    1:2:3:5    1:2:4:5    1:3:4:5    2:3:4:5
 4   1   1   1   1
 5   2   1   1   1
 5   1   2   1   1
 5   1   1   2   1
 5   1   1   1   2
 6   3   1   1   1
 6   1   3   1   1
 6   1   1   3   1
 6   1   1   1   3
 6   2   2   1   1
 6   2   1   2   1
 6   2   1   1   2
 6   1   2   2   1
 6   1   2   1   2
 6   1   1   2   2
Based on Theorem 1, it can be argued that the total costs in a column of the table need to be calculated
only up to a value of y such that f(y) > f(y − 1) for that column; there is no need to calculate the total
cost beyond the value of y for which this condition is first satisfied. For a given value of y, the number
of ways in which y can be distributed among the locations is large. However, if we arrange the locations
sequentially in increasing order of their unique-event probabilities, then Theorem 2 allows us to retain
only those rows that choose the maximum possible number of suppliers from the first location, while
discarding all other alternatives for the given value of y. Thus, if we assume that the unique-event
probabilities are related as ρ1 < ρ2 < ... < ρ5, then Table 1 can be reduced. Table 2 is the reduced table
with the various non-optimal rows deleted. It can be observed that the reduced table has twelve rows
fewer than Table 1. This leads to a significant saving in computational time. The optimum solution is the
minimum of all the best solutions for the individual tables.
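A minimal sketch of this reduced tabular search is given below: for every subset of locations, one supplier is first placed at each location, additional suppliers are assigned to the location with the smallest unique-event probability (Theorem 2), and the enumeration for that subset stops as soon as f(y) increases (Theorem 1). All input values are hypothetical.

    from itertools import combinations
    from math import prod

    def f_cost(alloc, P_super, P_semi, rho, C_op, CT):
        # Total cost of Eq. (4) for the engaged locations in alloc (same form as the earlier sketch)
        p_fail = (P_super
                  + (1 - P_super) * prod(P_semi[l] for l in alloc)
                  + (1 - P_super) * prod((1 - P_semi[l]) * rho[l] ** alloc[l] for l in alloc))
        return C_op(sum(alloc.values())) + CT * p_fail

    def best_allocation(locations, P_super, P_semi, rho, C_op, CT, max_per_loc=3):
        # Reduced tabular search over all location subsets, pruned by Theorems 1 and 2
        best = None
        for r in range(1, len(locations) + 1):
            for subset in combinations(locations, r):
                cheapest = min(subset, key=lambda l: rho[l])   # Theorem 2: load the lowest-rho location
                alloc = {l: 1 for l in subset}
                prev = f_cost(alloc, P_super, P_semi, rho, C_op, CT)
                if best is None or prev < best[0]:
                    best = (prev, dict(alloc))
                while alloc[cheapest] < max_per_loc:
                    alloc[cheapest] += 1
                    cost = f_cost(alloc, P_super, P_semi, rho, C_op, CT)
                    if cost > prev:                            # Theorem 1: stop once f(y) rises
                        break
                    prev = cost
                    if cost < best[0]:
                        best = (cost, dict(alloc))
        return best

    locs = ["A", "B", "C", "D"]
    print(best_allocation(locs, 0.001, {l: 0.02 for l in locs},
                          {"A": 0.030, "B": 0.031, "C": 0.032, "D": 0.033},
                          lambda y: 500.0 + 300.0 * y, 35000.0))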
Table 8.2: Reduced table for decision alternatives for a 5C4 problem
 y   y1  y2  y3  y4   Location   Location   Location   Location   Location
                      1:2:3:4    1:2:3:5    1:2:4:5    1:3:4:5    2:3:4:5
 4   1   1   1   1
 5   2   1   1   1
 6   3   1   1   1
Semi-super-event probabilities: P1** = P2** = P3** = P4** = 0.02
Unique-event probabilities:
ρ11 = ρ21 = ρ31 = ... = ρi1,1 = 0.031
ρ12 = ρ22 = ρ32 = ... = ρi2,2 = 0.030
ρ13 = ρ23 = ρ33 = ... = ρi3,3 = 0.033
ρ14 = ρ24 = ρ34 = ... = ρi4,4 = 0.032
Therefore, the cost of supply disruption, CT = 33.00 × 23 × 2 × 23 ≈ 35,000 (Rupees).
In the example, there are four locations (L = 4). We arrange these locations in increasing order of their
unique-event probabilities and label them as locations A, B, C and D, respectively. Table 3 gives the
assumed occurrence probability values for the semi-super- and unique-events. Later, these probability
values are varied to study their effects.
As there are four locations, the first table will have 4 ( = 4C1 ) columns and all suppliers are to be
chosen from a single location only; the second table will have 6 ( = 4C2 ) columns with all suppliers to
be chosen from any two locations; the third table will have 4 ( = 4C 3 ) columns with all suppliers to be
chosen from any three locations; and the fourth table will have one ( = 4C4 ) column with all suppliers
to be chosen from any four locations. We have also assumed a maximum of three suppliers can be
chosen from each location. The variables y1 , y2 , y3 , and y4 represent the number of suppliers
chosen from the first, second, third and the fourth location, respectively. When all suppliers are
selected from one location (Table 4), the maximum value of y can be 3, because we have assumed
every location to have three suppliers. When two locations are selected (Table 5), a maximum of six
suppliers can be chosen. It is not necessary to compute total costs for each case, however. A good
approach for achieving computational efficiency is to compute the total cost for each location starting
from the lowest value of y . We can stop whenever a higher value of the total cost is obtained.
Thereafter we proceed to the next location and adopt the same procedure. Tables 6 and 7 are the
reduced tables for v′ = 1 and v′ = 2 locations considered for selecting suppliers, respectively. The
entries in each cell of these tables are the total costs of engaging y suppliers selected from the
corresponding set of locations. The minimum cost in each table is highlighted. Although there is no
reduction in the size of Table 6, the number of rows in Table 7 has been reduced to 3 from 15 in Table 5.
It may be noted that Table 7 could have been obtained directly without considering the last 12 rows of
Table 5. Tables 8 and 9 show the total costs of the various alternatives for v′ = 3 and v′ = 4,
respectively. The minimum of all the best values from each table is the solution to the supplier size
problem. The minimum occurs in Table 8. The optimal solution for the above example, therefore, is
y = 3, with one supplier chosen from each of locations A, B and C.
Table 8.4: Complete tabular evaluation of all alternatives for v′ = 1
 y   Location A      Location B      Location C      Location D
 1   2131.710000     2165.667000     2199.624000     2233.581000
 2   1163.561300     1165.632677     1167.771968     1169.979173
 3   1153.916839     1154.011613     1154.112703     1154.220313
contribution to develop core competencies and boost the buyer’s image among customers and
suppliers. Difficulty of managing a purchase situation can be assessed on the basis of external factors
such as the nature of the product, the characteristics of the supplier market, and the risk and
uncertainty associated with a supply. The nature of the purchase influences many purchasing
decisions like the size of supplier base, the extent of resources to commit to supplier development and
other long-term involvement with suppliers. Fig. 3 lists various categories of purchases and their
features and possible action plans. The buyer-supplier relationships for each category of purchase will
be different from the others. As a strategy, long-term relationships are preferred for sourcing of
bottleneck items, and a medium-to-long-term relationship with one or a smaller group of suppliers is
prescribed for strategic items. For other types of items, the relationship is of very short duration for
which procurements can be done in a more traditional manner. For bottleneck items, there are very
few suppliers available in the market, and so a supplier reduction strategy may not be desirable in this
case. However, a common approach can be adopted for identifying the constituents of the choice set.
The supplier base rationalization process should be carried out through a careful evaluation of suppliers,
considering both their short-term performance and long-term capabilities.
8.3.2 Performance versus capability of a supplier
Sourcing decisions should normally be based on the consideration of a large number of factors [20].
However, a majority of practitioners focus only on factors such as cost, quality and service, while
neglecting other important factors like technological and financial capabilities, quality systems, etc.
This has not helped in distinguishing suppliers with strong long-term capabilities from those who
excel when measured against short-term criteria. In their study on the evaluation and rationalization of
suppliers, Narasimhan et al. [21] used an organization's capability factors as input resources and
performance factors as output variables in their DEA study. We suggest that the supplier evaluation
criteria can be classified based on their influence on the short-term and long-term goals of the
supply chain. We define two important dimensions of a supplier's abilities: performance and
capability. Performance is defined as the demonstrated ability of a supplier to meet a buyer's short-
term requirements in terms of cost, quality, service and other short-term criteria. Capability is defined
as the supplier's potential that can be leveraged to the buyer's advantage in the long term. The various
criteria identified for supplier selection by Katsikeas et al. [22], Choi and Hartley [23], Swift [24], and
Weber et al. [20] are classified as performance and capability factors and are presented in Table 10.
Table 8.10 (partial): capability factors cited in the literature include production facilities and capacity [20],
communication openness [23], labour problems at the supplier's place [20], and business volume/amount
of past business [20].
It can be seen in Table 10 that while most of the performance factors are quantitative and can be
measured relatively easily, most of the capability factors are qualitative and present measurement
problems.
Scores secured by a supplier against the capability and performance ratings define the position of the
supplier in Fig. 6. A diagonal line drawn from the top-left corner to the right-bottom corner divides
the suppliers into three classes: (1) Balanced suppliers, (2) Motivated suppliers, and (3) De-motivated
suppliers. All suppliers on the diagonal line are balanced suppliers (supplier H). They are deemed to
have a performance level which is commensurate with their level of capability. The suppliers lying
below the diagonal line (suppliers E, D, and B) fail to match their performance with their capability.
They are the de-motivated suppliers. They are not sufficiently motivated to leverage on their
capability. Either such a supplier is unable to use his capability efficiently or he does not know how to
operate within the framework of the relationship. Suppliers who are above the diagonal line (suppliers
A, C, F, and G) have performed better than their capability. They are the motivated suppliers. They
either have a high stake in doing business with the buyer or are committed and able to capitalize
efficiently on their capability. However, a cross-evaluation of the reasons for over-performance in this
category of suppliers is necessary. An evaluation of the sustainability of this performance in the
long run should also be carried out. The cost of enhancing their capability for consistent
performance over a long period should be traded off against the cost of motivating and improving the
performance of a de-motivated supplier in case of a tie.
In case two suppliers have the same location on the diagonal and the same length of the perpendicular,
ranking is made arbitrarily. In this case the buyer will have to evaluate the cost of supplier
motivational efforts against the cost of capability enhancement. The first option is relatively easier to
implement whereas the latter will demand a lot of resources for both the suppliers and the buyer.
Supplier location on the matrix diagonal as a reference to determine the order of preference is logical
when we give equal importance to both the performance and the capability rankings. For a supplier, the
sum of its ranks on the performance factors and the capability factors can also be used in place of the
intersecting point. In such a case, the supplier with the greater sum will be placed ahead of the others in
the order of preference.
Table 8.11: Performance and capability factors considered for the illustrative example
 Performance factors       Capability factors
 Price                     Quality systems in operation at the supplier's place
 Quality                   Financial capability of the supplier
 Delivery lead time        Production facilities and capacity
 Attitude                  Management and organization
                           Technological capability
                           Breadth of product line
                           Supplier's proximity
                           Existence of IT standards
                           Labour problems at the supplier's place
                           Reputation
The ranks of the suppliers are given in Table 12.
Table 8.12: Rankings of suppliers
 Supplier rank   Based on performance factors   Based on capability factors
 01.             A2                             A10
 02.             A7                             A4
 03.             A4                             A6
 04.             A10                            A9
 05.             A6                             A3
 06.             A3                             A1
 07.             A8                             A7
 08.             A9                             A2
 09.             A1                             A8
 10.             A5                             A5
The individual ranks of the suppliers on the capability and performance factors are plotted in the
capability-performance matrix shown in Fig. 8. The matrix shows that supplier A5 lies on the diagonal
and is a balanced supplier; however, it performs the worst on both the capability and the performance
factors. Suppliers A7, A2 and A8 are located above the diagonal and suppliers A1, A3, A4, A6, A9 and
A10 are located below the diagonal. Suppliers A7, A2 and A8 are therefore the motivated suppliers, and
suppliers A1, A3, A4, A6, A9 and A10 are the de-motivated suppliers. The diagonal locations of suppliers
A4 and A10 are the same, as are those of A2 and A7. On the basis of the lengths of the perpendiculars,
A4 is ranked ahead of A10 and A7 is ranked ahead of A2.
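A minimal sketch of this classification, using the ranks in Table 12 above, is given below. Comparing a supplier's performance rank with its capability rank reproduces the motivated/de-motivated/balanced split described in the text; ordering by the sum of the two ranks, with ties broken by the absolute rank difference, is one assumed way (not prescribed by the text) of operationalizing the diagonal position and the perpendicular-length tie-break.

    # Ranks from Table 8.12 (1 = best): (performance rank, capability rank)
    ranks = {
        "A1": (9, 6), "A2": (1, 8), "A3": (6, 5), "A4": (3, 2),  "A5": (10, 10),
        "A6": (5, 3), "A7": (2, 7), "A8": (7, 9), "A9": (8, 4),  "A10": (4, 1),
    }

    def classify(perf, cap):
        # Performance rank better (smaller) than capability rank -> above the diagonal -> motivated
        if perf < cap:
            return "motivated"
        if perf > cap:
            return "de-motivated"
        return "balanced"

    for s, (perf, cap) in ranks.items():
        print(s, classify(perf, cap))

    # Preference order: position along the diagonal (rank sum), ties broken by closeness
    # to the diagonal (absolute rank difference), as illustrated for A4/A10 and A7/A2
    preference = sorted(ranks, key=lambda s: (sum(ranks[s]), abs(ranks[s][0] - ranks[s][1])))
    print(preference[:2])   # the two suppliers retained in the example: ['A4', 'A10']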
In order to rank the suppliers based on the preference for developing long-term relationships, we
move from the top-left corner of the capability-performance matrix and sequence the suppliers based
on their locations along the diagonal. The order of preference is shown in Table 13.
As we have to retain only two suppliers for the case, we retain suppliers A4 and A10 and all other
suppliers are removed from the registered supplier list.
8.4. Conclusions
Supplier base reduction is a prerequisite for developing long-term relationships with suppliers. In this
chapter, we propose a structured method of rationalizing the supplier base. A major contribution of
this work is that the problem is addressed considering, in addition to super- and unique-events, the
possibility of occurrence of semi-super-events. The decision tree approach, when all three types of
supply risk are considered, results in an unmanageable number of decision alternatives. To avoid this,
we develop a simple but elegant method to determine the optimal size of the supplier base. The method
uses tables to evaluate all possible decision alternatives. From the practitioner's point of view this tool
is very simple, and spreadsheets can easily be used to find the optimal solution.
Another important feature of the proposed method is the use of the capability-performance matrix,
which increases visibility of each supplier's strengths and weaknesses and facilitates a more
rational judgment. The increased visibility afforded by the capability-performance matrix helps in
classifying the suppliers into 'motivated' and 'de-motivated' ones. Tracing the causes of a supplier
being 'motivated' or 'de-motivated' can reveal important information with respect to 'consistency' in
the supplier's performance. The capability-performance matrix also enables easy ranking of the
suppliers with whom a sustainable long-term relationship can be established.
An issue that needs further development is how to devise a mechanism for continuously evaluating
supplier performance and maintaining a knowledge base of suppliers. While the knowledge about a
new supplier is usually updated continually over time, the issue is how to incorporate such new
information on suppliers into the method. After the supplier base is rationalized, issues related to the
subsequent management of the supplier base have to be addressed. How to develop and build a
sustainable relationship with the reduced supplier base is also an area that needs further development.
Developing such long-term relationships requires effort and resources on the part of the supplier as
well as the buyer. A future area of research is to include and adapt the dynamics of supplier-
development potential, as well as the related costs, into the proposed method.
References
[1] Parker, D., and Hartley, K., (1997) The Economics of Partnership Sourcing versus Adversarial
Competition: A Critique. European Journal of Purchasing & Purchasing Management, 3(2), 115-125.
[2] Mudambi, R. and Schrunder, C.P., (1996) Progress towards Buyer-Supplier Partnerships:
Evidence from Small and Medium-sized Manufacturing Firms. European Journal of Purchasing and
Supply Management, 2(2/3), 119-127.
99
[3] Bechtel, C., & Patterson, J.L., (1997). MRO Partnerships: A Case Study. International Journal of
Purchasing and Materials Management, 33(3), 18-23.
[4] Dowlatshahi, S. (2000). Designer-Buyer-Supplier Interface: Theory versus Practice. International
Journal of Production Economics, 63, 111-130.
[5] Cousins, P.D. (1999) Supply Base Rationalization: Myth or Reality? European Journal of
Purchasing and Supply Management, 5, 143-155.
[6] Dubois, A., (2003). Strategic Cost Management across Boundaries of Firms. Industrial Marketing
Management, 32, 365-374.
[7] Ramasesh, R.V., Ord, J.K., Hayya, J.C. and Pan, A.C., (1991) Sole versus Dual Sourcing in
Stochastic Lead Time (s, Q) Inventory Models. Management Science, 37(4), 428-443.
[8] Chiang, C. and Benton, W.C., (1994) Sole sourcing versus dual sourcing under stochastic
demands and lead times. Naval Research Logistics, 41, 609-624.
[9] Mohebbi, E. and Posner, M.J.M., (1998) Sole versus Dual Sourcing in a Continuous Review
Inventory System with Lost Sales. Computers and Industrial Engineering, 34(2), 321-336.
[10] Agrawal, N. and Nahmias, S., (1997) Rationalization of the supplier base in the presence of yield
uncertainty. Production and Operations Management, 6(3), 291-308.
[11] Weber, C.A., Current, J. and Desai, A., (2000) An Optimization Approach to Determining the
number of Vendors to Employ. Supply Chain Management: An International Journal, 5(2), 90-98.
[12] Berger, P. D., Gerstenfeld, A., & Zeng A.Z., (2004) How Many Suppliers are best? A Decision-
Analysis Approach. Omega, 32, 9-15.
[13] Kauffman, R.G., and Leszczyc, P.T.L.P., (2005) An Optimization Approach to Business Buyer
Choice Sets: How Many Suppliers Should be Included? Industrial Marketing Management, 34, 3-12.
[14] Cavinato, J.L. (2004) Supply chain logistics risks: From the back of the room to the board room.
International Journal of Physical Distribution & Logistics Management, 34(5), 383-387.
[15] Zsidisin, G.A., Ellram, L.M., Carter, J.R., and Cavinato, J.L., (2004) An Analysis of Supply Risk
Assessment Techniques. International Journal of Physical Distribution & Logistics Management,
34(5), 397-413.
[16] Kraljic, P., (1983) Purchasing must become supply management. Harvard Business Review,
61(5), 109-117.
[17] Zolkiewski, J., and Turnbull, P., (2002) Do Relationship Portfolios and Networks Provide the
Key to Successful Relationship Management? Journal of Business & Industrial Marketing, 17(7),
575-597.
[18] Nellore, R., & Soderquist, K., (2000) Portfolio Approaches to Procurement: Analysing the
Missing Link to Specifications. Long Range Planning, 33, 245-267.
[19] Olsen, R.F., & Ellram, L.M., (1997) A Portfolio Approach to Supplier Relationships. Industrial
Marketing Management, 26, 101-113.
[20] Weber, C. A., Current, J. R. and Benton, W. C. (1991) Vendor Selection Criteria and Methods,
European Journal of Operational Research, 50, 2-18.
[21] Narasimhan, R., Srinivas, T. and Mendez, D., (2001 summer) Supplier Evaluation and
Rationalization via Data Envelopment Analysis: An Empirical Examination. The Journal of Supply
Chain Management: A Global Review of Purchasing and Supply, 37(3), 28-37.
[22] Katsikeas, C.S., Paparoidamis, N.G., and Katsikea, E., (2004) Supplier Source Selection Criteria:
The Impact of Supplier Performance on Distributor Performance. Industrial Marketing Management,
33, 755-764.
100
[23] Choi, T.Y., & Hartley, J.L., (1996) An Exploration of Supplier Selection Practices across Supply
Chain. Journal of Operations Management, 14, 333-343.
[24] Swift, C.O., (1995) Preferences for Single Sourcing and Supplier Selection Criteria. Journal of
Business Research, 32, 105-111.
[25] De Boer, L., Labro, E. and Morlacchi, P., (2001) A review of Methods Supporting Supplier
Evaluation, European Journal of Purchasing and Supply Management, 7, 75-89.
[26] Timmerman, E., (1986) An Approach to Vendor Performance Evaluation. Journal of Purchasing
and Supply Management, 1, 27-32.
[27] De Boer, L, Van der Wegen, L. and Telgen, J., (1998) Outranking Methods in Support of
Supplier Evaluation. European Journal of Purchasing and Supply Management, 4(2/3), 109-118.
[28] Narasimhan, R., (1983) An Analytic Approach to Supplier Selection. Journal of Purchasing and
Supply Management, 1, 27-32.
[29] Petroni, A. and Braglia, M., (2000) Vendor Selection Using Principal Component Analysis. The
Journal of Supply Chain Management: A Global Review of Purchasing and Supply, 36(2), 63-69.
[30] Monczka, R. M. and Trecha, S. J., (1988) Cost-based Supplier Performance Evaluation. Journal
Purchasing and Materials Management, 24 (Spring), 2-7.
[31] Smytka, D. L. and Clemens, M. W., (1993) Total Cost of Supplier Selection Model: A Case
Study. International Journal of Purchasing and Materials Management, 29, 42-49.
[32] Weber, C. A. and Desai, A., (1996) Determination of paths to vendor market efficiency using
parallel co-ordinates representation: a negotiation tool for buyers. European Journal of Operational
Research, 90, 142-155.
[33] Weber, C. A., Current, J. R., and Desai, A. (1998) Non Co-operative Negotiation Strategies for
Vendor Evaluation. European Journal of Operational Research, 108, 208-223.
[34] Karpak, B., Kumcu, E., and Kasuganti, R., (1999) An Application of Visual Interactive Goal
Programming: A Case in Vendor Decisions. Journal of Multi Criteria Decision Analysis, 8, 93-105.
[35] Degraeve, Z. and Roodhooft, F., (1998) Determining Sourcing Strategies: A Decision Model
Based on Activity and Cost Driver Information, Journal of Operational Research Society, 49(8), 781-
789.
[36] Degraeve, Z. and Roodhooft, F., (1999) Improving the Efficiency of the Purchasing Process
Using Total Cost of Ownership Information: The Case of Heating Electrodes at Cockerill Sambre
S.A. European Journal of Operational Research, 112(1), 42-53.
[37] Degraeve, Z. and Roodhooft, F., (2000) A Mathematical Programming Approach for
Procurement Using Activity Based Costing. Journal of Business Finance and Accounting, 27 (1-2),
69-98.
[38] Ronen, B. and Trietsch, D., (1988) A Decision Support System for Purchasing Management of
Large Projects. Operations Research, 36 (6), 882-890.
[39] Soukup, W. R., (1987) Supplier Selection Strategies. Journal of Purchasing and Materials
Management, 23(3), 7-12.
[40] Vokurka, R. J., Choobineh, J. and Vadi, L. (1996) A Prototype Expert System for Evaluation and
Selection of Potential Suppliers. International Journal of Operations & Production Management, 16
(12), 106-127.
[41] Choy, K.L., Lee, W.B., Lau, H.C.W. and Choy, L.C., (2005) A Knowledge-Based Supplier
Intelligence Retrieval System for Outsourcing Manufacturing. Knowledge-Based Systems, 18, 1-17.
101
[42] Luu, D.T., Ng, S.T., Chen, S.E. & Jefferies, M., (2006). A strategy for evaluating a fuzzy case-
based construction procurement selection system. Advances in Engineering Software, 37, 159-171.
[43] Holt, G.D., (1998) Which contractor selection methodology? International Journal of Project
Management, 16(3), 153-164.
[44] Hong, G.H., Park, S.C., Jang, D.S. & Rho, H.M., (2005) An Effective Supplier Selection Method
for Constructing a Competitive Supply-Relationship, Expert Systems with Applications, 28, 629-639.
[45] Sarkar, A. and Mohapatra, P.K.J., (2006) Evaluation of Supplier Capability and Performance: A
Method for Supply Base Reduction. Journal of Purchasing and Supply Management, 12, 148-163.
[46] Avery, S., (1999) MRO Report: Supplier Alliances Help Power Wisconsin Electric. Purchasing,
June 3, 62-64.
*************
CHAPTER IX
E-procurement
9.1. Introduction
Online procurement (e-procurement) is a technology solution to facilitate corporate buying using the
Internet and other Information and Communication Technologies (ICT). It has been identified as the
most important element of e-business [1]. As specified in the literature and experienced by top
corporate houses, the benefits of e-procurement are manifold. These advantages include reduced
administrative costs, shorter order fulfillment cycle times, lower inventory levels and prices paid for
goods, and better preparation of organizations for increased technological collaboration and
planning with business partners [1].
The potential is so great that e-procurement has turned the formerly looked-down-upon traditional
purchasing function into a competitive weapon. One such example is General Electric’s (GE’s)
Trading Process Network (TPN) [2]. Here the buyer posts a request for proposal on the Internet for
access by pre-qualified suppliers. The suppliers download the request and submit bids electronically.
The buyer evaluates the bids, negotiates online, and places the order with the lowest bidder. The
system also facilitates transaction processing by, for example, automatically reconciling purchase
orders with invoices as part of the payment process. A solution like the TPN impacts both the supplier
selection and contract agreement components of the purchasing process.
Some of the benefits that GE realized from its e-procurement operations in the initial days are
listed below [2]:
• Reductions in labor costs in the purchasing process are one of the reasons that transaction costs
fall so precipitously with e-procurement. For example, in GE's traditional labor-intensive, paper-based
purchasing process, the transaction cost ranged from $70 to $300 per purchase order. GE saw those
costs drop by 30%.
• Material cost reductions in the range of 5% to 20% were realized because GE’s e-procurement
solution helped the firm reach a wider supplier base and identify heretofore unidentified and
qualified sources of supply.
• The system also allows the company's purchasing departments around the world to share
information about their best suppliers. GE's purchasing departments gained 6 to 8 days per month
to work on more strategic initiatives.
• e-procurement systems enable firms to more efficiently and accurately capture and aggregate how
much they are spending corporate-wide in various purchased product areas, allowing the firm to
bring what may be significant buying power leverage to market. This benefit contributes to the
5% to 20% material cost reductions that GE has experienced.
Adoption of a technology solution requires reengineering of the traditional purchasing process. We
describe the traditional purchasing process below and compare it with a typical reengineered process.
9.2. Traditional purchasing process
Fig. 1 provides an overview of a typical purchasing process. It begins with the need to define buying
requirements based on the demands of the firm’s final customer. At this stage, specifications are
developed. The step involves early purchasing involvement (EPI) and early supplier involvement
(ESI), as well as inputs of a cross-functional buying team that may include, in addition to supply and
engineering, representatives from operations and marketing. Once the specifications have been
developed, a buying team led by the supply manager will pre-qualify suppliers, generate requests for
proposals, evaluate the proposals, and select a supplier based on established selection criteria.
Contract negotiations result in the terms and conditions of a formal contract. Ordering routines and
transaction-processing guidelines are established for all purchases that take place under the umbrella
of the negotiated contract. Closing the loop is a supplier evaluation system that assesses supplier
performance and provides information to be used as the basis for rating the supplier (e.g., excellent,
good, fair, unacceptable).
Fig. 1: The traditional purchasing process: Define Requirement → Select Supplier → Contract
Agreement → Supplier Evaluation.
The basic activities in the purchasing process are not altered in an organization using ICT tools and
techniques. Rather, they are reengineered for the convenience of automation, leading to value creation.
Fig. 2 (adapted from [3]) provides an overview of the main activities in sourcing and procurement
in a typical e-procurement setup.
Fig. 2: Main activities in a typical e-procurement setup: Spend Analysis → Sourcing → Procurement →
Settlement.
i. Spend Analysis: The focus of this activity is to develop an aggregate view of the procurement
spend across the organization using transaction data. The aggregate spend by commodity,
supplier, plant, etc. provides a basis for identifying cost-saving strategies. A typical example is to find
commodity classes or plants where reducing the number of suppliers and shifting volume to a small
number of (preferred) suppliers might allow better price negotiations. Another piece of analysis is to track
the performance of each supplier based on past behavior. This is a strategic activity.
104
ii. Sourcing: One of the fundamental aspects of sourcing is supplier selection (for a commodity class
identified by spend analysis) using one of many negotiation techniques (such as RFx, auctions, etc.).
Once the suppliers are selected, the relationship with them is managed through the negotiated contracts.
This step operationalizes the strategy developed by spend analysis.
iii. Procurement: This is a tactical activity where purchasing is (ideally) performed within the
umbrella of existing contracts. Typical purchasing within an enterprise starts with a requisition that is
approved and purchased from within catalogs of selected suppliers. An additional activity that is
supported at this level is the enablement of (new) supplier catalogs and the management of these
catalogs.
iv. Settlement: This is the follow-through activity where the purchase is ordered, invoiced, etc. It is the
routine bookkeeping of each purchase and will not be discussed in detail.
Comparison of Fig 1 and Fig. 2 reveals the following facts:
• The basic activities of developing product specifications constitute an unstructured decision-making
activity involving human intelligence, awareness of recent trends and technologies, and other external
factors. Thus, they are not candidates for automation, and their absence from Fig. 2 is not surprising.
• Integration of information systems across geographically dispersed units of an organization and
developments in data warehousing and mining technologies have enabled management to take
strategic decisions on spend management. These activities therefore do not appear in Fig. 1.
• Spending in purchasing is directly affected by the suppliers' performance. Therefore, this strategic
activity is integrated with spend management (Fig. 2).
• Other strategic decision-making activities which are automated are clubbed together under the
umbrella of sourcing in Fig. 2. These activities were otherwise spread across the supplier selection and
contract negotiation steps of the traditional process.
• All the activities involving ICT in tactical decision making, such as organization-wide workflow and
connection to suppliers' catalog systems, come under the umbrella of procurement (Fig. 2). This step is
not present in the traditional purchasing process.
• The transaction activities which do not involve any decision-making tools are clubbed together as the
settlement activity (Fig. 2). In Fig. 1, these activities were considered a part of contract negotiation.
We describe below the ICT tools and techniques used for the different steps of the reengineered
purchasing process [3].
9.4. Spend Analysis
Spend analysis is a general umbrella term used to capture the various strategic activities that are important
for designing a sourcing strategy for the corporation. The following steps are involved in spend analysis.
9.4.1 Data Warehouse for Spend Analysis
As evident from GE’s case e-procurement systems enable firms to more efficiently and accurately
capture and aggregate how much they are spending corporate-wide in various purchased product
areas, allowing the firm to bring what may be significant buying power leverage to market. This
requires the creation of a homogeneous data warehouse from disparate (heterogeneous) databases
(from various departments, locations). Some of the subtasks to creating a data warehouse are:
a) Supplier Normalization: It is likely that the same supplier (e.g. HP) might have been referred
differently in different systems (e.g. HP India, H.P., etc.). These aliases should be mapped to a unique
supplier name before creation of the data warehouse. Moreover, the parent child relationships often
need to be resolved – this is particularly difficult since mergers and acquisitions often lead to parent-
child relations within companies that are completely different. This entails the creation of a list of
distinct suppliers so that transactions to the same supplier can be grouped together.
b) Commodity Mapping: This requires each transaction to be mapped to an appropriate commodity
code, such as the UNSPSC code or a company proprietary code. This is necessary because the
transaction records at the invoice level often provide only a part-level description of the commodity
and, at best, the associated supplier codes.
c) Data Visualization: Once the data is scrubbed and cleansed, a set of visualization and rendering
tools is required to view different cross-sections of the data so as to obtain an enterprise-wide view
of procurement spend.
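A minimal sketch of supplier normalization is given below; the cleaning rules and the alias map are illustrative assumptions, not a prescribed standard.

    import re

    # Illustrative alias map from cleaned supplier strings to a canonical supplier name
    ALIASES = {
        "hp": "Hewlett-Packard", "hp india": "Hewlett-Packard", "hewlett packard": "Hewlett-Packard",
    }

    def normalize_supplier(raw_name: str) -> str:
        # Map raw supplier strings from different transaction systems to one canonical name
        cleaned = re.sub(r"[^a-z0-9 ]", "", raw_name.lower())           # drop punctuation, lowercase
        cleaned = re.sub(r"\s+", " ", cleaned).strip()                  # collapse whitespace
        cleaned = re.sub(r"\b(ltd|inc|pvt|co)\b", "", cleaned).strip()  # strip common legal suffixes
        return ALIASES.get(cleaned, cleaned.title())                    # fall back to a title-cased form

    for name in ["H.P.", "HP India", "Hewlett Packard Ltd", "Acme Castings Pvt."]:
        print(name, "->", normalize_supplier(name))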
9.4.2 Sourcing Strategy
Once a data warehouse is available for analyzing the procurement spend, the next step is to evaluate
the different sourcing options for each commodity class or other dimension and identify the potential
cost savings. This then provides a basis for a list of actionable sourcing initiatives. The subtasks in
creating such a strategy report are as follows:
a) Demand Aggregation: The data warehouse provides a means to examine spend by category,
supplier, plant, etc. An important first step is to establish the number of suppliers being used for each
commodity class across all plants. Often such an exercise reveals that the number of suppliers used for
a commodity is very large, presenting an opportunity to allocate the demand to a few suppliers and
leverage the aggregate demand volume to negotiate better prices (a minimal aggregation sketch
follows this list).
b) Supplier Scorecarding: While consolidating the supplier set for any commodity class, it is important
to analyze supplier performance against a set of the company's strategic metrics. The scorecarding
function helps identify the top suppliers, to whom future allocation awards would likely go (despite
potentially higher prices), as well as the bottom suppliers, who would need to be managed more
aggressively as part of the "supplier relationship" activities.
c) E-Procurement Model Selection: Different kinds of products require different e-procurement models.
Four types of models are mostly discussed in the literature: use of e-procurement software, Internet
market exchanges, B2B auctions, and Internet purchasing consortia (Table 1). These models can be
broadly classified under two technology options (Table 1): catalog buying and contract negotiation.
These topics are elaborated in the next section.
d) Report Generation: Finally a report needs to be generated that outlines the sourcing strategy based
on the spend analysis.
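Returning to the demand-aggregation subtask (item a above), the following is a rough Python/pandas sketch of rolling up spend by commodity to spot consolidation opportunities. The column names and figures are hypothetical and stand in for an extract from the spend data warehouse.

```python
# Illustrative demand aggregation over a (hypothetical) spend warehouse extract.
import pandas as pd

spend = pd.DataFrame({
    "commodity": ["fasteners", "fasteners", "fasteners", "packaging"],
    "supplier":  ["Acme", "BoltCo", "Acme", "BoxWorks"],
    "plant":     ["P1", "P2", "P3", "P1"],
    "amount":    [120000, 95000, 40000, 60000],
})

summary = spend.groupby("commodity").agg(
    total_spend=("amount", "sum"),
    supplier_count=("supplier", "nunique"),
    plant_count=("plant", "nunique"),
)
print(summary)
# Commodities with many suppliers relative to their total spend are candidates
# for consolidating demand onto fewer suppliers and negotiating volume discounts.
```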
Table 1. E-Procurement Models and Technology Options [1]
9.5 Sourcing
Activities carried out under this step depend on the type of the e-procurement model and the
corresponding technology selected at the strategic level.
9.5.1 Catalog buying
Catalog buying is carried out using an Internet-based software application that enables employees to
purchase goods from approved electronic catalogues in accordance with company buying rules, while
capturing necessary purchasing data in the process. The employee’s selection of a good for purchase
from a supplier catalogue is automatically routed through the necessary approval processes and
protocols. E-Procurement software investment may take several forms, including purchase of a
software package from a third party technology provider (e.g., Ariba, CommerceOne), use of an e-
procurement system embedded in an Internet market exchange, subscription to e-procurement
software hosted and supported by an application service provider (ASP), or development of a
proprietary in-house system.
9.5.2 Contract Pricing
Another aspect of a sourcing strategy is to decide on a negotiation technique for contract pricing. The
ability to provide good estimates depends on how well the cost types of the suppliers can be
characterized. In addition, it is important to model and analyze the risks associated with the
uncertainty in the demand and choose contracted volumes optimally.
9.5.3 Core Ingredients of Contract Pricing
1. RFx: The RFI (Request for Information), RFQ (Request for Quotation), and RFP (Request for
Proposal) – collectively referred to as RFx – each represents a document and a means for a buyer to
specify the requirements of a purchase along multiple dimensions from multiple suppliers.
2. Protocol for price discovery: Price discovery can be done in a single round or in multiple rounds. Single-
round systems are equivalent to sealed bids in a traditional purchasing environment. Multiple-round
auctions are like traditional English auctions and are called reverse auctions for obvious reasons.
3. Contracts: The negotiations lead to a contract which is then executed with one or more suppliers.
9.6 RFx
In B2B settings, the specification of purchases can get quite complex and require sophisticated
capabilities that allow the specification of complex items or services. Complex RFQs also need to
allow for a variety of bid structures that exploit complementarities and economies of scale in cost
structures of suppliers. An RFx is a document with an associated process initiated by a buyer in order
to solicit information, competitive quotes, or proposals from multiple suppliers. A sourcing platform
should make the RFx process as easy and straightforward as possible for all of the parties involved. It
should also be versatile enough to be used for both goods and services, and for both direct and indirect
spend categories. It should also support a wide range of RFx types and sizes, from simple RFIs to
complex RFPs.
Most RFx applications support a common set of capabilities such as the creation and editing of an
RFx document that mimics its paper-based counterpart. For example, this includes being able to add
any number of questions with response fields of the attribute types expected by the buying
organization (e.g., numeric, date, text, units of measurement, etc.) for each line item. Table 2
summarizes some of the major requirements in terms of bids (from the seller side) that are supported
in RFx systems.
Table 2. Description of Complex Bid Types
Simple multi-line bids: A bid includes multiple items and specifies the unit price for each item.
Multi-attribute bids: A bid includes multiple items and specifies various relevant attributes for each item, including unit price.
Bundled bids: A bid includes multiple items, specifies the quantity of each item, and provides a total bid price for all the items.
Volume discount bids: A bid includes multiple items and specifies a price curve for each item.
Configurable bids: A bid includes multiple items and specifies various relevant sets of values for each attribute of each item. This provides a compact representation for a large number of configurations (e.g. PCs) and needs to support mark-up based pricing.
9.7 Protocols
9.7.1. Single round price discovery
RFQs are often used in a single-round process that is similar to a one-shot sealed-bid auction where
the winners are selected (based on the recommendations of the bid evaluation engine) once all the
bids are in. After receiving such bids the buyer needs to identify the set of bids that minimizes total
procurement cost subject to business rules such as:
• The number of winning suppliers should be greater than a certain number (to avoid depending
too heavily on just a few suppliers), but smaller than a certain number (to avoid too much
administrative overhead);
• The maximum amount purchased from each supplier is bounded to a certain limit;
• At least one supplier(s) from a target group (e.g., minority) needs to be chosen; and
• If there are multiple winning bid sets, then one needs to pick the set that arrived first.
Decision support capabilities are essential to facilitate the creation and evaluation of such complex
RFQs and bids. Identifying the cost minimizing bid set subject to these business rules is a hard
optimization problem and difficult to do by hand (as is a common practice today). In addition, the
buyer is required to specify a scoring function that specifies the tradeoff of the non-price attributes
against price. This is difficult to do in a consistent manner without a rational process to elicit the
tradeoffs.
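As a rough illustration of such a scoring function, the sketch below trades off price, lead time, and a quality rating using fixed weights. The attributes, weights, and normalization scheme are illustrative assumptions rather than a prescribed method.

```python
# A minimal weighted-scoring sketch for evaluating multi-attribute bids.
# Attribute names, weights, and the normalization are illustrative assumptions.

WEIGHTS = {"price": 0.5, "lead_time_days": 0.3, "quality_rating": 0.2}

bids = [
    {"supplier": "S1", "price": 100.0, "lead_time_days": 10, "quality_rating": 4.5},
    {"supplier": "S2", "price": 92.0,  "lead_time_days": 21, "quality_rating": 3.8},
    {"supplier": "S3", "price": 97.0,  "lead_time_days": 12, "quality_rating": 4.9},
]

def score(bid, all_bids):
    """Higher is better: lower price/lead time and higher quality raise the score."""
    best_price = min(b["price"] for b in all_bids)
    best_lead  = min(b["lead_time_days"] for b in all_bids)
    best_qual  = max(b["quality_rating"] for b in all_bids)
    return (WEIGHTS["price"]          * best_price / bid["price"]
          + WEIGHTS["lead_time_days"] * best_lead  / bid["lead_time_days"]
          + WEIGHTS["quality_rating"] * bid["quality_rating"] / best_qual)

for b in sorted(bids, key=lambda b: score(b, bids), reverse=True):
    print(b["supplier"], round(score(b, bids), 3))
```

Eliciting the weights from the buying organization in a consistent way is exactly the difficulty the text refers to.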
However, in a price negotiation context, it is often desirable to have a multiround process where after
each round the suppliers are allowed to reformulate their bids based on information about the winning
bids (more like based on feedback from the auctioneer).
9.7.2. Multiple round price discovery
A typical flow for negotiation is to get a bid response to the RFQ from the suppliers and choose the
appropriate bid/bids that satisfy the requirements of the purchase at minimum cost. With the advent of
the Internet, online reverse auctions represent a new tool in the purchasing department’s toolbox to
potentially increase competition through open, real-time, competitive bidding, which requires an
iterative bidding protocol.
Such a multi-round process is illustrated in Fig 3. The bid evaluation engine provides the decision
support for all the three functions required for multi-round negotiations and iterative auctions. Winner
determination identifies the winning bids from a given set of bids to minimize the total procurement
cost, the pricing module prescribes the payment to be made by each winner (this could be in general
different from the bid price to promote efficiency in the market), and signaling provides a “market
clearing” price for bid reformulation. This iterative process continues until there are no new bids or
closing time.
Most reverse auction formats allow for live, real-time, open, competitive bidding where bidders must
beat the current winning bid (i.e., submit a lower price) in order to win the business. There are a variety of auction formats and
settings. The most basic reverse auction is a price-only auction for a single item. Most auction
providers (and there are many) provide a wide range of formats and settings including multiple
quantities of a line item, multiple line items, time extensions, start and reserve prices, partial quantity
bids and award allocations, and bundled bids – to name only a few. However, there are three advanced
auction formats/settings of note:
• Combinatorial Auction – allows suppliers to mix bundled bids along with un-bundled bids
• Volume Discount Auction – allows suppliers to establish price discounts at certain quantities
• Multi-attribute Reverse Auction – the winner(s) is determined by a score (rather than just
price) calculated using the buyer’s weights and preferences for price, quantity, and any
number of other attributes.
9.8 Contracts
One of the main goals of a sourcing project is to execute one or more purchasing contracts with one or
more suppliers. The prices and terms of the line items covered by a contract were previously
negotiated in RFx and auction rounds. The contracts themselves, however, also go through a different
form of negotiation at a more legally precise level. Once executed, these contracts are meant to be
used to procure the contracted line items (perhaps via a procurement system) using the negotiated
prices and terms. A sourcing platform should provide a means to generate a contract based on its
preceding RFx and auction negotiations, support contract negotiations, and monitor compliance to the
contracts’ business commitments over time.
Contract monitoring capabilities incorporated into contract management software include:
• An alert notification is sent when a contract is soon to expire.
• The buyer’s purchase volume commitments can be monitored with alert notifications sent if there is
danger of buying under the minimum quantity within the designated time period.
• Notifications can be sent alerting the buyer and/or supplier of a supplier’s violation of a delivery
commitment.
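Following on the monitoring capabilities listed above, here is a minimal Python sketch of expiry and volume-commitment alerts. The contract fields, thresholds, and messages are hypothetical.

```python
# A minimal sketch of automated contract monitoring alerts.
# The contract fields and thresholds are illustrative assumptions.
from datetime import date

contract = {
    "supplier": "Acme",
    "expires_on": date(2026, 3, 31),
    "min_volume_commitment": 10000,   # units the buyer committed to purchase
    "purchased_to_date": 6200,
}

def contract_alerts(c, today, expiry_warning_days=60):
    alerts = []
    days_left = (c["expires_on"] - today).days
    if days_left <= expiry_warning_days:
        alerts.append(f"Contract with {c['supplier']} expires in {days_left} days.")
    # Simple check of the buyer's purchase volume commitment near contract expiry
    if c["purchased_to_date"] < c["min_volume_commitment"] and days_left <= expiry_warning_days:
        shortfall = c["min_volume_commitment"] - c["purchased_to_date"]
        alerts.append(f"Volume commitment at risk: {shortfall} units still to be purchased.")
    return alerts

for msg in contract_alerts(contract, date.today()):
    print(msg)
```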
In all of the contract management solutions today, there is a specific and important shortcoming.
Namely, there are two key parts enabling automated contract monitoring which currently must be
performed manually.
• First, the contract commitments to be monitored must be manually extracted from the contract's
negotiated legalese into a structure that is easier to analyze.
• Second, the business process data and raw transaction data needed to assess whether commitments
are being fulfilled or violated are also captured manually.
Business process integration and management (BPIM) and business activity monitoring (BAM)
systems are beginning to address the challenges associated with contract monitoring systems.
9.9 Risks Associated with e-Procurement Technologies
In a survey conducted by Davila et al. [1], respondents perceive certain risks linked to the adoption of
e-procurement technologies that need to be addressed before these technologies are widely accepted.
These risks include:
Internal business risks: companies are uncertain about whether they have the appropriate resources to
successfully implement an e-procurement solution. The experimentation of the companies following a
‘wait and see’ strategy may help to develop the required absorptive capacities. Implementing an e-
procurement solution requires not only that the system itself successfully performs the purchasing
process, but most important, that it integrates with the existing information infrastructure. This
internal information infrastructure includes systems such as accounting, human resources, asset
management, inventory management, accounts payable, production planning, and cash management
systems. Most organizations adopting or looking to adopt e-procurement software already have
significant investments in these other systems; integrating these new technologies with existing
platforms should happen as smoothly as possible. Failure to integrate creates duplicative work steps
and jeopardizes the reliability of organizational information.
External business risks: e-procurement solutions need not only ‘talk’ with internal information
systems, but also need to cooperate with external constituencies — mainly customers and suppliers.
External constituencies need to develop internal systems that facilitate the communication through
electronic means — an issue that demands technology investments as well as incentives for these
constituencies. For e-procurement technologies to succeed, suppliers must be accessible via the
Internet and must provide sufficient catalogue choices to satisfy the requirements of their customers.
Ideally, suppliers will provide e-catalogues in the formats required by customers, reflecting custom
pricing and/or special contractual agreements, and will send updates on a regular basis. However,
suppliers, especially in low margin industries, may be hesitant or even unable to meet such demands
without guarantees of future revenue streams. Lack of a critical mass of suppliers accessible through
the organization’s e-procurement system would limit the network effects that underlie these
technologies, further hindering the acceptance and adoption of the technology. Cooperation with
external parties also requires new suppliers and customers to meet the business criteria that
organizations have set to accept them in their networks. Since some of the business models associated
with e-procurement technologies (e.g. auctions, consortia, and exchanges) clearly envision the use of
suppliers with whom the buyer has not previously transacted business, companies need to develop
mechanisms that provide the buyer with assurances that the supplier meets or exceeds recognizable
and industry enforced standards relating to supplier quality, service, and delivery capabilities.
Technology risks: companies fear the lack of a widely accepted standard and a clear understanding of
which e-procurement technologies best suit the needs of each company. The lack of a widely accepted
solution blocks the integration of different e-procurement software across the supply chain. The
significance of this risk factor seems to suggest the need for clear and open standards that would
facilitate inter-organization e-procurement technologies. Without widely accepted standards for
coding, technical, and process specifications, e-procurement technology adoption will be slow and
will fail to deliver many of the benefits expected.
E-procurement process risks: another set of risks has to do with the security and control of the e-
procurement process itself. Organizations must be confident, for example, that unauthorized actions
will not disrupt production or other supply chain activities when committing to e-procurement
technologies. Thus, the challenge for e-procurement technology adoption is to provide evidence
to non-users that these technologies (1) do not undermine control, security, or privacy requirements;
(2) are not so technically complex that organizations without a sufficient technology skill set
cannot use them; and (3) come with a business model that provides the right incentives for supply chain
constituencies to use these technologies effectively.
Table 3 identifies the changes in the buyer – supplier relationship as a major barrier to e-procurement
technology use. While technology is perceived as a barrier, reflected in the ‘lack of common
standards’ concerns for e-procurement software, most barriers point to the need for redesigning these
relationships. If, for example, the use of e-procurement undermines amicable trading relationships,
buyers are concerned about how they will obtain needed goods when supplies get tight. Buyers are
also concerned that these technologies will push prices down to the point where suppliers cannot
invest in new technology or product development, upgrade facilities, or add additional productive
capacity. Additional price pressures can even push suppliers with a poor understanding of their cost
structure out of business. Finally, integration with existing mechanisms is seen as another barrier.
References
[1] Antonio Davila, Mahendra Gupta and Richard Palmer, Moving Procurement Systems to the
Internet: the Adoption and Use of E-Procurement Technology Models, European Management
Journal, Volume 21, Issue 1, February 2003, Pages 11-23
[2] William D. Presutti, Supply management and e-procurement: creating value added in the supply
chain, Industrial Marketing Management, Volume 32, Issue 3, April 2003, Pages 219-226
[3] Robert Guttman, Jayant Kalagnanam, Rakesh Mohan and Moninder Singh, Strategic Sourcing
and Procurement, In Supply Chain Management on Demand Strategies, Technologies,
Application, Chae An and Hansjörg Fromm Eds., Springer Berlin Heidelberg, 2005, pages 117 –
142
***********
CHAPTER X
Economic Theory of Auctions
10.1. Introduction
Auctions have been widely adopted as a tool for buying and selling goods and services. Auctions can be
used to sell (allocate) almost all kinds of goods. Governments use them to sell public resources
such as radio spectrum licenses and oil drilling rights; firms and individuals use them to sell
houses, flowers, antiques, etc. They also find applications in computer science, for example in
allocating bandwidth in communication networks. The online auction business model is one in
which participants bid for products and services over the Internet. The functionality of buying and
selling in an auction format is made possible through auction software which regulates the various
processes involved. eBay, the world's largest online auction site, is one of the better known examples.
Similarly, a reverse auction is a tool used in industrial business-to-business
procurement. It is a type of auction in which the roles of the buyer and seller are reversed, with the
primary objective of driving purchase prices downward. One example of a reverse auction site is
Metal Junction – a consortium of Tata Steel and SAIL. A less well-known online auction model is
adopted by Dell and GM, which use auctions to sell used products bought back
from their clients. A well designed auction mechanism can add to a company's profitability. Similarly,
a well designed bidding strategy can save a bidder from falling into a trap such as the winner's curse.
Studying auction theory helps in understanding these basic design issues. Auction theory itself is an
important part of economic theory and it helps to understand properties of markets, such as
price formation and information structures.
Open bids
• Ascending bid auction (also called English auction) – In this auction the price is successively
raised until only one bidder remains; this bidder wins the object at the final price. The auction can
be run by the auctioneer calling out prices, by bidders submitting prices, or electronically with the
highest bid posted continuously. Once somebody quits the process they are not allowed back
in. This auction format is common in art, livestock, and some Internet-based procurement
auctions.
• Descending bid auction (also called Dutch auction) – In this auction the price starts at a
high level and is called down. The first bidder who accepts the current price wins. This is how
the Dutch flower auctions are run, but there are not many other examples of Dutch
auctions.
Each of the six dimensions provides a vector of choices that are available to set up the auction. Putting
all of these together generates a matrix of auction types. The choices made for each of these
dimensions will have a major impact on the complexity of the analysis required to characterize the
market structure that emerges, on the complexity for the agents and the intermediary of implementing the
mechanism, and ultimately on our ability to design mechanisms that satisfy desirable economic and
computational properties.
10.5.1 Game-theoretic/mechanism design approach for auction design
An auction is a game with partial information, where a player's valuation of an object is hidden from
the other players. It serves as a popular way of allocating resources (goods) by specifying a set of rules
to determine the winner(s) and the related payments. A typical auction setting is one in which a seller
attempts to sell one or more items to a set of bidders. The players involved (seller and bidders) do not
have complete information about the value of the items on sale, in the sense that they do not know the
others' values but know their own, which may or may not be affected by the others. All players are
assumed to be selfish and payoff-maximizing. Auction theory studies the behavior of the players
in this non-cooperative environment.
Game theoretic methods are used to analyze the properties of a mechanism, under the assumption that
agents are rational and will follow expected-utility maximizing strategies in equilibrium.
10.5.1.5.1 Vickrey’s Mechanism: an incentive compatible direct revelation mechanism
In the sealed second price auction the dominant strategy is to bid the actual valuation, regardless of
the other players' strategies. To illustrate the validity of this strategy, consider the following
hypothetical situation.
Consider bidding (v − Δ). If the highest bid other than this is bmax, then the following cases arise:
a. If v < bmax, the bid is lost (same as bidding v).
b. If (v − Δ) > bmax, the bid is won with a payment of bmax (same as bidding v).
c. If v > bmax > (v − Δ), bidding (v − Δ) means the auction is lost. However, bidding v would
have won the auction with a surplus of (v − bmax).
Similarly, now consider bidding (v + Δ). If the highest bid other than this is bmax, then the following
cases may occur:
a. If (v + Δ) < bmax, the bid is not won (same as bidding v).
b. If v > bmax, the bid is won with a payment of bmax (same as bidding v).
c. If v < bmax < (v + Δ), bidding (v + Δ) means the auction is won. However, while bidding
v would have meant losing the auction, the final payment bmax now exceeds the valuation,
leaving a negative surplus of (v − bmax).
Therefore, as demonstrated, bidding more than or less than the actual valuation never helps and may
cause losses. Bidding the true value is the dominant strategy.
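The same conclusion can be checked numerically. The short simulation below compares truthful bidding with shading and overbidding in a sealed second-price auction, assuming uniformly distributed valuations and an arbitrary offset of 0.1; the number of rival bidders and trials are illustrative choices.

```python
# A small simulation illustrating that truthful bidding is a (weakly) dominant
# strategy in a sealed second-price (Vickrey) auction.
import random

def surplus(my_bid, my_value, rival_bids):
    """Winner pays the highest competing bid; losers get zero surplus."""
    b_max = max(rival_bids)
    return my_value - b_max if my_bid > b_max else 0.0

random.seed(1)
trials, n_rivals, delta = 100_000, 4, 0.1
totals = {"truthful": 0.0, "shade": 0.0, "overbid": 0.0}
for _ in range(trials):
    v = random.random()
    rivals = [random.random() for _ in range(n_rivals)]
    totals["truthful"] += surplus(v, v, rivals)
    totals["shade"]    += surplus(v - delta, v, rivals)
    totals["overbid"]  += surplus(v + delta, v, rivals)

for strategy, total in totals.items():
    print(strategy, round(total / trials, 4))   # truthful earns at least as much on average
```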
The Vickrey auction (VA) is concerned with auctioning off a single good. The Clarke and Groves
mechanism extends the concept of strategyproofness to the combinatorial auction setting.
(first price or otherwise) sealed bid auction offers no opportunity to observe other bidders' behavior,
while the Dutch auction is "open." Naturally, the reason is that in a Dutch auction the first time that
somebody enters, he wins and the auction terminates. Thus, in 1SB and Dutch auctions bidders have to
decide a priori how much to bid, and the auctioneer will get what the highest bidder submitted. In
economic terms, 1SB auctions are strategically equivalent to Dutch auctions in the sense that for
every realization of bidders' estimates the two auctions induce identical equilibrium outcomes (i.e.,
winners and prices).
In English auctions it makes sense for a bidder to stay in the auction until his value has been reached
(in fact, it is his dominant strategy, as discussed below). He should not drop out beforehand and certainly
should not stay longer. Thus the winning bidder will stay until the next-to-last bidder drops out,
meaning that he will pay the price at which that bidder dropped out, i.e., the second highest bid.
Thus English auctions are equivalent to 2SB auctions. This equivalence is not as strong as the one
between 1SB and Dutch auctions, since in English auctions bidders do get information throughout
the process when they see the prices at which other bidders drop out, and can therefore adjust their
estimate of the value of the item being sold and their strategy. This is not a consideration with private
value auctions, but it is important in common and interdependent value auctions. These equivalences are
depicted in the following figure.
Figure: Equivalence of open and sealed bid auctions – the Dutch auction is equivalent to the first-price sealed bid auction (for private and common values), and the English auction is equivalent to the second-price sealed bid auction (for private values).
The most common types of procurement auctions (where the winner is the participant who offered
the lowest bid) are the 1SB auction, where the winner gets to supply the item to the buyer at the lowest bid
price, and the English auction, where the winner supplies the item or service at the second lowest price.
English auctions became popular with procurement managers only during the 1990s, with the
advent of Internet-based auctions.
10.5.1.5.4 Revenue Equivalence
In this section we compare the revenues generated by the first price and second price auction
models.
Assume that there are n bidders whose values $\{V_1, V_2, \ldots, V_n\}$ are drawn independently from an identical
distribution $F(v)$ with density $f(v)$. Let $\{V'_1, V'_2, \ldots, V'_n\}$ denote the order statistics; the density of the
kth lowest value is given by

$$f\left(v'_k\right) = \frac{n!}{(k-1)!\,(n-k)!}\, f(v)\,[F(v)]^{\,k-1}\,[1-F(v)]^{\,n-k}$$

If the valuations are drawn from the uniform distribution U[0, 1], the density of the kth order statistic becomes

$$f\left(v'_k\right) = \frac{n!}{(k-1)!\,(n-k)!}\, v^{\,k-1}(1-v)^{\,n-k}$$

This is a Beta distribution with parameters k and n − k + 1, whose mean is

$$E\left[v'_k\right] = \frac{k}{n+1}$$
The revenue equivalence theorem states that any auction mechanism in which (i) the object always goes
to the bidder with the highest signal, and (ii) any bidder with the lowest feasible signal expects zero surplus,
yields the same expected revenue and results in each bidder making the same expected payment as a function
of their signal.
This result applies both to private value models and to the more general common value model;
however, the signals have to be independent. Thus all the common auction formats – ascending bid,
descending bid, first price sealed bid, second price sealed bid – and even some non-standard auction
formats lead to the same expected revenue for the auctioneer under the stated conditions.
For any given mechanism and a given bidder i, let $S_i(v)$ be the expected surplus as a function of the
bidder's type, and let $P_i(v)$ be the probability of receiving the object. The following inequality is the key:

$$S_i(v) \ge S_i(\tilde{v}) + (v - \tilde{v}) \cdot P_i(\tilde{v})$$

The right-hand side is the surplus that player i would obtain if she had type v but deviated from
equilibrium behavior and followed the strategy of type $\tilde{v}$. In equilibrium, type v must not prefer to deviate,
so the left-hand side must (weakly) exceed the right-hand side. Therefore we have

$$S_i(v) \ge S_i(v + dv) - dv \cdot P_i(v + dv)$$
$$S_i(v + dv) \ge S_i(v) + dv \cdot P_i(v)$$

so that

$$P_i(v + dv) \;\ge\; \frac{S_i(v + dv) - S_i(v)}{dv} \;\ge\; P_i(v)
\qquad\Longrightarrow\qquad \frac{dS_i}{dv} = P_i(v)$$
Integrating from the lowest type $\underline{v}$,

$$S_i(v) = S_i(\underline{v}) + \int_{\underline{v}}^{v} P_i(x)\, dx$$

At any type $\hat{v}$ the slope of the surplus function is $P_i(\hat{v})$, so once $S_i(\underline{v})$ is known we have the complete
picture. Now consider any two mechanisms with the same $S_i(\underline{v})$ and the same $P_i(v)$ functions for
all v and for every player i. They have the same $S_i(v)$ function, so any given type v of player i
makes the same expected payment in each of the two mechanisms. Therefore the expected payment
averaged across the different possible types is also the same. Since this is true for all bidders, the two
mechanisms yield the same expected revenue for the auctioneer.
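A quick Monte Carlo check of this result for the independent private values case with U[0,1] valuations is sketched below. It assumes the standard symmetric first-price equilibrium bid b(v) = ((n−1)/n)·v and compares average revenue against the second-price auction and the theoretical value (n−1)/(n+1); the number of bidders and trials are arbitrary.

```python
# Monte Carlo check of revenue equivalence for n bidders with values drawn
# i.i.d. from U[0,1]. First-price bidders follow b(v) = (n-1)/n * v.
import random

random.seed(7)
n, trials = 5, 200_000
rev_first = rev_second = 0.0
for _ in range(trials):
    values = sorted(random.random() for _ in range(n))
    rev_second += values[-2]                     # winner pays second-highest value
    rev_first  += (n - 1) / n * values[-1]       # winner pays own equilibrium bid

print("first-price :", round(rev_first / trials, 4))
print("second-price:", round(rev_second / trials, 4))
print("theory (n-1)/(n+1):", round((n - 1) / (n + 1), 4))
```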
interdependent values is the bidder who submitted the highest bid, which is the bidder who had the
highest signal. When notified of the win, the bidder knows one thing for sure – his signal was higher
than the signal of all other (n -1) participants in the auction. This means that the winner is the bidder
who most overestimated the value of the item sold.
c. If one bid is below the reservation price r and one bid is above it, then the winner pays r instead of the second
highest price and the auctioneer gains. The probability of this occurring is $2F(r)\,[1-F(r)]$.
Thus, for small r, the probability of gaining is proportional to $F(r)$ while the probability of losing is
proportional to $F(r)^2$. Hence it is clearly advantageous to set a reservation price.
For determining the optimal reservation price we use the following argument.
Assume that the auctioneer's own value is 0, that there are n bidders, and that the reserve price is set at r (with r > 0);
the auctioneer considers raising it by a small amount δ.
If the highest bidder bids above (r + δ), the auctioneer gains δ; the probability of this event is
$n\,F(r)^{n-1}\,[1-F(r+\delta)]$. The move is bad if the highest bidder bids between r and (r + δ): the auctioneer
then loses r, and the probability of this happening is $n\,F(r)^{n-1}\,[F(r+\delta)-F(r)]$. The expected net change per
unit increase in the reserve is therefore

$$G(\delta) = n\,F(r)^{n-1}\,[1-F(r+\delta)]\,\frac{\delta}{\delta} \;-\; n\,F(r)^{n-1}\,\frac{F(r+\delta)-F(r)}{\delta}\, r$$

Simplifying and taking the limit as the increment of the reservation price goes to zero:

$$\lim_{\delta \to 0} G(\delta) = n\,F(r)^{n-1}\,\bigl\{[1-F(r)] - f(r)\, r\bigr\}$$

The optimal reservation price $r^{*}$ therefore solves $1 - F(r) - r\,f(r) = 0$; note that it is independent of the
number of bidders.
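As a small worked example, for the uniform U[0,1] distribution the condition 1 − F(r) − r·f(r) = 0 reduces to 1 − 2r = 0, giving r* = 1/2. The bisection sketch below solves the same condition numerically and can be rerun with other distributions; the solver itself is an illustrative sketch, not taken from the source.

```python
# Numerically solving 1 - F(r) - r*f(r) = 0 for the optimal reservation price,
# shown here for U[0,1] (F(r) = r, f(r) = 1), where the answer is r* = 0.5.
def optimal_reserve(F, f, lo=0.0, hi=1.0, tol=1e-8):
    g = lambda r: 1.0 - F(r) - r * f(r)   # decreasing in r for well-behaved F
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(mid) > 0:
            lo = mid                      # root lies above mid
        else:
            hi = mid                      # root lies below mid
    return (lo + hi) / 2.0

print(optimal_reserve(F=lambda r: r, f=lambda r: 1.0))   # ~0.5
```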
The winner determination problem for a combinatorial auction can be written as the following integer program:

$$\max_{x_i(S)} \;\sum_{S \subseteq G}\; \sum_{i \in I} x_i(S)\, v_i(S)$$

subject to

$$\sum_{S} x_i(S) \le 1, \quad \forall i \in I$$
$$\sum_{S \ni j}\; \sum_{i \in I} x_i(S) \le 1, \quad \forall j \in G$$
$$x_i(S) \in \{0, 1\}$$

where $S \ni j$ indicates that bundle S contains item j. To apply linear-programming duality theory
we must relax this IP formulation and construct an integral LP formulation. Consider [LP1], in which
the integrality constraint above is relaxed to $x_i(S) \ge 0$. Then the dual is simply written as:
$$\min_{\pi_i,\; p(j)} \;\sum_{i} \pi_i + \sum_{j} p(j)$$

subject to

$$\pi_i + \sum_{j \in S} p(j) \ge v_i(S), \quad \forall i \in I,\; \forall S \subseteq G$$
$$\pi_i,\; p(j) \ge 0, \quad \forall i, j$$
The dual introduces variables $p(j) \ge 0$ for the items, which we can interpret as prices on items. Given
prices $p(j)$, the optimal dual solution sets $\pi_i = \max_{S}\{v_i(S) - \sum_{j \in S} p(j),\, 0\}$. This is the maximal
payoff to agent i given the prices. The dual problem computes prices on items that minimize the sum of
the payoffs across all agents. These are precisely a set of CE prices when the primal solution is
integral. The complementary-slackness (CS) conditions on a feasible primal solution $x_i(S)$ and a feasible
dual solution $p(j)$ define the conditions for competitive equilibrium:

$$\pi_i > 0 \;\Rightarrow\; \sum_{S} x_i(S) = 1, \quad \forall i$$
$$p(j) > 0 \;\Rightarrow\; \sum_{S \ni j}\; \sum_{i \in I} x_i(S) = 1, \quad \forall j$$
$$x_i(S) > 0 \;\Rightarrow\; \pi_i + \sum_{j \in S} p(j) = v_i(S), \quad \forall i,\; \forall S$$
These conditions have a natural economic interpretation. First and third conditions state that the
allocation must maximize the payoff for every agent at the prices. The second condition states that the
seller must sell every item with a positive price, and maximize the payoff to the seller at the prices.
The prices are said to support the efficient allocation. A seller can announce an efficient allocation
and CE prices, and let every agent verify that the allocation maximizes its own payoff at the prices. In
practice we will need an auction to provide incentives for agents to reveal the information about their
valuations, and to converge towards a set of CE prices.
In a procurement (reverse) auction setting, the corresponding winner determination problem is:

$$\min_{x_i(S)} \;\sum_{i}\; \sum_{S \in B_i} x_i(S)\, p_i(S)$$

subject to

$$\sum_{S \in B_i} x_i(S) \le 1, \quad \forall i$$
$$\sum_{i}\; \sum_{S \in B_i:\, S \ni j} x_i(S) \ge 1, \quad \forall j$$
$$x_i(S) \in \{0, 1\}, \quad \forall i, S$$
This is posed as a cost minimization problem with a demand covering constraint. In this formulation
the problem is to procure a single unit of each good, but this can be generalized by increasing the
RHS of the first set of constraints.
Introducing side constraints: In a real-world setting there are several considerations besides cost
minimization. These considerations often arise from business practice and/or operational
requirements and are specified as a set of constraints that need to be satisfied. In general, the specific
form of these side constraints depends on the market structure. Some of these constraints are:
Number of Winning Suppliers: An important consideration while choosing winning bids is to make
sure that the entire supply is not sourced from too few suppliers, since this creates a high exposure if
some of them are not able to deliver on their promise. On the other hand, having too many suppliers
creates a high overhead cost in terms of managing a large number of supplier relationships. These
considerations introduce constraints on the minimum, $L_S$, and maximum, $U_S$, number of winning
suppliers in the solution to the winner determination problem:

$$y_i \le \sum_{S \in B_i} x_i(S) \le K\, y_i, \quad \forall i \in N$$
$$L_S \le \sum_{i} y_i \le U_S$$

where $y_i \in \{0,1\}$ indicates whether supplier i wins at least one bid and K is a suitably large constant.
Budget Limits on Trades: A common constraint that is often placed is an upper limit on the total
volume of the transactions with a particular supplier. These limits could either be on the total spend or
on the total quantity that is sourced from a supplier. These types of constraints are largely motivated (in a
procurement setting) by the need to manage the dependency on any particular supplier.
Similarly, constraints are often placed on the minimum amount or minimum spend of any transaction,
i.e. if a supplier is picked for sourcing then the transaction should be of a minimum size. Such
constraints reduce the overhead of managing a large number of very small contracts.
Market-share Constraints: Another common consideration, especially in situations where the
relationships are long-term, is to restrict the market share that any supplier is awarded. The motivations
are similar to the previous case.
Reservation Prices: A reservation price allows the buyer to place an additional constraint on the most
she will pay for some items. This can arise, for example, due to a fall-back option such as an external
commodity market.
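A minimal sketch of such a winner-determination model with bundle bids, an item-covering constraint, and bounds on the number of winning suppliers is given below. It assumes the third-party PuLP modelling package is installed and uses purely illustrative bid data; any ILP modeller and solver would serve equally well.

```python
# A small winner-determination sketch with bundle bids and a limit on the number
# of winning suppliers. Bids, items, and bounds are illustrative assumptions.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

items = ["A", "B", "C"]
# bids[i] = list of (bundle, price) offered by supplier i
bids = {
    "S1": [({"A"}, 40), ({"A", "B"}, 70)],
    "S2": [({"B"}, 35), ({"C"}, 50)],
    "S3": [({"A", "B", "C"}, 110)],
}
L_S, U_S = 1, 2          # min/max number of winning suppliers

prob = LpProblem("winner_determination", LpMinimize)
x = {(i, k): LpVariable(f"x_{i}_{k}", cat=LpBinary)
     for i in bids for k in range(len(bids[i]))}
y = {i: LpVariable(f"y_{i}", cat=LpBinary) for i in bids}

# objective: total procurement cost
prob += lpSum(x[i, k] * bids[i][k][1] for i in bids for k in range(len(bids[i])))

for j in items:          # cover every item at least once
    prob += lpSum(x[i, k] for i in bids for k in range(len(bids[i]))
                  if j in bids[i][k][0]) >= 1
for i in bids:           # link bundle selections to the supplier indicator y_i
    prob += lpSum(x[i, k] for k in range(len(bids[i]))) <= len(bids[i]) * y[i]
prob += lpSum(y[i] for i in bids) >= L_S
prob += lpSum(y[i] for i in bids) <= U_S

prob.solve()
chosen = [(i, sorted(bids[i][k][0]), bids[i][k][1])
          for (i, k), var in x.items() if var.value() > 0.5]
print(chosen, "total cost:", value(prob.objective))
```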
References
[1] Jayant Kalagnanam and David C. Parkes, Auctions, Bidding and Exchange Design,
In Handbook of Quantitative Supply Chain Analysis: Modeling in the E-Business Era, David
Simchi-Levi, S. David Wu, and Max Shen (eds.), Chapter 5, Kluwer, 2004.
http://www.eecs.harvard.edu/econcs/pubs/ehandbook.pdf
[2] MIT Open Course Ware: Auction Theory, http://ocw.mit.edu/NR/rdonlyres/ Engineering-
Systems-Division/ESD-260JFall2003/2CECCCEB-0165-42A3-B86A-
B4BBA5A6930B/0/l18ch22auctheory.pdf
[3] Roger L. Zhan and Zuo-Jun Max Shen, Optimality and Efficiency in Auctions Design: A Survey
(With Z. Shen, to appear in Pareto Optimality, Game Theory and Equilibria, A. Migdalas, P.M.
Pardalos, and L. Pitsoulis, eds, Springer (2006), http://plaza.ufl.edu/zhan/paper/zhanShenV2.pdf
CHAPTER XI
Technologies for Supply-Chain Integration
11.1 Introduction
Information and communication technology (ICT) has emerged as the key enabler of supply chain
integration. Specifically, the businesses can use the Internet to gain global visibility across their
extended network of trading partners and help them respond quickly to a range of variables, from
customer demand to resource shortages. This chapter shall discuss the existing and emerging
technologies for supply chain integration.
Business-to-business integration and enterprise application integration (EAI) are the top priorities of
companies these days. B2Bi requires exchange of data and sharing of business processes across
multiple trading partners, such as buyers, suppliers and distributors. EAI, on the other hand, requires
internal applications, such as CRM, ERP and legacy systems, to interact with each other
seamlessly. B2Bi and EAI are accomplished by data, application and business process integration. The
integration challenges in both B2Bi and EAI have a lot in common and can be overcome through a
single, integrated solution.
Fig 1: B2B Integration at a glance
Fig 2. B2Bi and EAI integrate all internal and external applications
quality attributes such as performance, reliability, safety, security and real-time issues. In other words,
the perfect middleware would make process and content connectivity among dispersed
autonomous systems appear as if they were components of the same system.
11.3.1 Communication (transport) level middleware
The term low-level middleware refers to the integration process that is based on the lowest levels of
interconnectivity. Two concepts are included at this level: the network protocols that are used to
transfer raw data, and the remote procedure call (RPC) as a first form of distributed computing
framework.
responsibilities of the client, the server and all other individual components. EJB deals with issues
such as scalability, replication, distributed processing, deployment, security, and transactions.
CORBA: One of the most important standards that have emerged from the Object Management
Group (OMG) is the Common Object Request Broker Architecture (CORBA). The CORBA’s
infrastructure provides mechanisms to deal with platform heterogeneity, transparent location and
implementation of objects, interoperability and communication between software components of a
distributed object environment. CORBA uses a language-independent Interface Definition Language
(IDL), and is therefore more interoperable than Java RMI. However, Java RMI and CORBA
distributed objects can be integrated through the Java RMI versions that run on top of the Internet
Inter-ORB Protocol (IIOP). CORBA can also be integrated with the EJB because EJB specification
does not dictate the protocol used to communicate with an EJB.
11.4.1.1 Architecture for EDI
The essential elements of EDI are:
• The use of an electronic transmission medium (originally a value-added network, but
increasingly the open, public Internet) rather than the dispatch of physical storage media such as
magnetic tapes and disks;
• The use of structured, formatted messages based on agreed standards (such that messages can
be translated, interpreted and checked for compliance with an explicit set of rules);
• Relatively fast delivery of electronic documents from sender to receiver (generally implying
receipt within hours, or even minutes); and
• Direct communication between applications (rather than merely between computers).
EDI depends on a moderately sophisticated information technology infrastructure. This must include
data processing, data management and networking capabilities, to enable the efficient capture of data
into electronic form, the processing and retention of data, controlled access to it, and efficient and
reliable data transmission between remote sites.
A common connection point is needed for all participants, together with a set of electronic mailboxes
(so that the organizations' computers are not interrupted by one another), and security and
communications management features. It is entirely feasible for organizations to implement EDI
directly with one another, but it generally proves advantageous to use a third-party network services
provider.
11.4.1.4 Translating the benefits of EDI into a positive ROI
The benefits of implementing EDI can be both strategic and operational. As an example of the
strategic benefits of EDI, the Gartner Group reports that gross sales for a supplier firm to a major
retailer increased 18% after the company implemented an EDI program (Gartner Group, 1996).
• Administrative cost reduction: Studies suggest that it costs an average of Rs.30 to manually
process a customer order, whereas implementing an EDI-enabled order entry program would
reduce that cost to Rs.10 per customer order.
• Personnel Reduction: Studies have shown as much as a 50% reduction in required staff can
result from a fully functional EDI implementation (Price Waterhouse, 1995)
• Cycle time reduction: In a case study involving the Pfaltzgraff Company (a supplier) and Best
Products (a retailer), a 50% reduction in order cycle time was achieved after implementing an EDI
program. (Gartner Group, 1995).
• Inventory Reduction: The Gartner Group (1995) reports an average inventory reduction of 10%
at EDI-enabled manufacturing firms.
• Cash flow improvement: Decreased operating expenses and improved accuracy in procurement
areas are directly beneficial to a firm's financial cash flow. Although this area is difficult to
quantify, most accounting departments can attest to the numerous benefits of improved corporate
cash flow
Inspired by the promise of XML becoming the standard format for data transfer and object
communication in distributed systems, many special-purpose frameworks have been built around XML,
making use of its extensibility features.
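As a small illustration of XML as a data-exchange format, the sketch below builds and parses a hypothetical purchase-order document with Python's standard ElementTree module; the element and attribute names are made up for the example.

```python
# Building and parsing a small (hypothetical) XML purchase order with the
# standard-library ElementTree module.
import xml.etree.ElementTree as ET

po = ET.Element("PurchaseOrder", attrib={"number": "PO-1001"})
item = ET.SubElement(po, "Item", attrib={"sku": "BOLT-M8"})
ET.SubElement(item, "Quantity").text = "500"
ET.SubElement(item, "UnitPrice").text = "2.40"

xml_text = ET.tostring(po, encoding="unicode")
print(xml_text)

# A receiving application parses the same document back into structured data.
parsed = ET.fromstring(xml_text)
for it in parsed.findall("Item"):
    print(it.get("sku"), it.findtext("Quantity"), it.findtext("UnitPrice"))
```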
11.4.2.3 Disadvantages of XML
• XML syntax is redundant or large relative to binary representations of similar data. The
redundancy may affect application efficiency through higher storage, transmission and processing
costs.
• XML syntax is too verbose relative to other alternative 'text-based' data transmission formats.
• No intrinsic data type support: XML provides no specific notion of "integer", "string", "boolean",
"date", and so on.
• The hierarchical model for representation is limited in comparison to the relational model or an
object oriented graph.
• Expressing overlapping (non-hierarchical) node relationships requires extra effort.
• XML namespaces are problematic to use and namespace support can be difficult to correctly
implement in an XML parser.
mechanisms. The loosely coupled messaging approach supports multiple connectivity and information
sharing scenarios via services that are self describing and can be automatically discovered."
These services are accessible through electronic means, namely the Internet. They are self-describing
and provide semantically well-defined functionality that allows users to access and perform the
offered tasks. Such services can be distributed and deployed over a number of Internet-connected
machines.
A service provider is able to describe a service, publish the service and allow invocation of the
service by parties wishing to do so. A service requester may request a service location through a
service broker that also support service search. Web services are loosely coupled allowing external
applications to bind to them. Web services are also reusable allowing many different parties to use
and reuse a service provided.
As shown above there are three emerging standards for web services:
1. Simple Object Access Protocol (SOAP): an XML-based, extensible message envelope format for
exchanging document-based messages across the Internet, with "bindings" to underlying protocols.
The primary protocols are HTTP and HTTPS, although bindings for others, including SMTP and
XMPP, have been written.
2. Web Services Description Language (WSDL): a general purpose XML-based language
for describing the interface, protocol bindings and deployment details of Web services; typically
used to generate server and client code, and for configuration.
3. Universal Description, Discovery and Integration (UDDI): a set of specifications
related to efficiently publishing and discovering information about Web services, enabling
applications to find Web services either at design time or at runtime.
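As a rough illustration of item 1 above, the sketch below posts a SOAP envelope to a web service endpoint using only the Python standard library. The endpoint URL, XML namespace, and operation name are hypothetical placeholders, so the actual network call is left commented out.

```python
# A rough sketch of invoking a SOAP-style web service by POSTing an XML
# envelope over HTTP. Endpoint, namespace, and operation are hypothetical.
import urllib.request

ENDPOINT = "https://example.com/services/OrderStatus"   # hypothetical endpoint

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrderStatus xmlns="http://example.com/ns/orders">
      <OrderId>PO-1001</OrderId>
    </GetOrderStatus>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/ns/orders/GetOrderStatus"},
)
# with urllib.request.urlopen(request, timeout=10) as response:
#     print(response.read().decode("utf-8"))   # the SOAP response envelope
```

In practice the message structure and endpoint would be taken from the service's WSDL description rather than written by hand.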
In conclusion, Web Services model is trying to use XML as basis to re-invent or improve the
integration model by using SOAP at the communication layer and inter-enterprise workflow at the
business process layer.
3) Thirdly, the solution should support diverse sets of file formats, protocols, and security standards.
4) Fourthly, the solution should be based on open standards that allow a company and its partners to
send transactions using any combination of applications and file formats, telecommunication
pathways, communication protocols and B2B protocols, and XML standards such as RosettaNet,
ebXML, OAG, Biztalk, OBI, etc. The solution should also provide support for Web Services.
5) Lastly, the solution should be scalable, that is, companies should be able to scale it horizontally
and vertically. Further, it should offer robust load balancing features, critical to the success of
large applications.
A few leading B2Bi solutions include: IBM MQSeries Integrator; Extricity; BEA eLink; webMethods
B2B Enterprise; NEON eBusiness Integration Servers; Vitria BusinessWare; and Microsoft BizTalk
Server.
11.5.3.2 Security:
B2Bi requires two levels of security. Firstly, B2Bi necessitates opening up corporate firewalls to
enable cross boundary communication between enterprises. Thus, whatever mode of integration is
used, companies have to secure their internal network against malicious attacks through these open
ports.
Secondly, the data transmitted over dedicated leased lines, such as EDI, Internet, or any other mode,
has to be secured. The data may contain classified information, such as corporate information and
business transaction information, and thus cannot be left unguarded. In their current state, Web
Services lack broad support and facilities for security. Thus, Web Services based B2Bi architecture
may potentially have big security loopholes.
11.5.3.3 Dynamic
For companies to participate in true dynamic business with other companies, integration between the
systems of the two companies has to happen in real-time. Further, this integration is only possible if
B2Bi is done using open standards over the Internet.
Web Services do provide a dynamic approach to integration by offering dynamic interfaces. Web
Services are based on open standards such as UDDI, SOAP, and HTTP, and this is probably the single
most important factor that would lead to the wide adoption of Web Services for B2Bi.
In this generation of Web Services, it is possible to achieve only function-level integration between
applications. The next generation of Web Services, however, will be functionally and technologically
advanced, offering user interface encapsulation and security. They will be able to package an
application and embed it into another application.
Most companies have an environment of disparate legacy systems, applications, processes, and data
sources, which typically interact by a maze of interconnections that are poorly documented and
expensive to maintain. Web Services are not EAI in and of themselves. Rather, Web Services are just
another technology that enables EAI, and it can significantly change the traditional point-to-point
approach.
Using Web Services that loosely integrate applications, a company achieves just a subsection of EAI.
EAI, on the other hand, takes a complete holistic approach of tightly integrating and connecting all
applications and systems that support a company's business. EAI takes years of continued
commitment and effort from different business and technical units within the company, high
investment, and substantial resources.
The current EAI solutions that predominately focus on integrating applications will have to be
changed significantly, as packaged applications in the future will expose their functions as services
using technologies such as XML, SOAP, and UDDI. Thus, the EAI solutions will have to provide a
broad support for service integration rather than application integration.
11.6. Conclusion
B2B integration is the pervasive enabler of most current business strategies such as collaborative e-
commerce, collaborative networks, supply chain management (SCM) and customer relationship
management (CRM) across multiple channels of delivery including wireless devices and the Internet.
B2Bi strategy should be laid out and executed in such a way so as to: have an integrated, real-time
application-to-application, system-to- system interaction with all the existing and new trading
partners; eliminate all manual steps in business processes; conduct secure and real-time commerce
transactions over the Internet; have the flexibility to accommodate the different mode of interactions
of each partner; and, finally, have the ability to adapt to change quickly and easily in this dynamic age
of B2B collaborative e-commerce. This is what B2Bi is all about — the end-to-end automation and
integration of cross-organization business processes, data, applications and systems.
Web Services certainly have the potential of redefining the whole paradigm of B2B integration by
making it truly dynamic, easily implemented in a modular fashion, and in the longer run being
cheaper. The application of Web Services for B2Bi, however, will be limited if services for
authentication, encryption, access control, and data integrity are not available. Web Services
intermediaries that provide services such as UDDI repository hosting, security services, quality
assurance of Web Services, performance checks, etc., will have a big role to play in the B2Bi space.
References
[1] Feras T. Dabous, Fethi A. Rabhi, Pradeep K. Ray and Boualem Benatallah, Middleware
Technologies for B2B Integration, The International Engineering Consortium (IEC) Annual
Reviews of Communications, IEC Press, USA, Vol 56, July 2003
[2] Christoph Bussler, The Role of B2B Engines in B2B Integration Architectures, SIGMOD
Record, Vol 31, No. 1, March 2002
[3] Boualem Benatallah, Olivier Perrin, Fethi Rabhi, Claude Godart: Web Service Computing:
Overview and Directions. Book chapter. In Handbook of Innovative Computing. Editor: Albert
Y. Zomaya. Springer, 2005.
[4] Bussler C., B2B Integration Technology Architecture, Fourth IEEE International Workshop on
Advanced Issues of E-Commerce and Web-Based Information Systems (WECWIS 2002),
Proceedings
[5] Jones R., B2B Integration, IET Manufacturing Engineer Vol 80,Issue 4,Aug 2001, Pages 165-
168
[6] http://en.wikipedia.org/
[7] http://www.webservicesarchitect.com/content/articles/samtani02.asp
[8] http://www.worldscibooks.com/business/etextbook/p263/p263_chap1.pdf
[9] http://www.webservicesarchitect.com/content/articles/samtani01.asp
[10] http://www.msc-inc.net/Documents/EDI_roi.htm
[11] http://www.webservicesarchitect.com/content/articles/samtani07.asp
************
CHAPTER XII
Security and Payment Issues in Integrated Supply Chain
12.1. Introduction
In today’s Internet world, it is relatively easy to create, alter and transmit information. The
advancement in computing capacity and interconnectivity has presented a situation where small
efforts can cause potentially large losses. Both accidental and intentional breaches are easier and more
likely. This is a major challenge to businesses that want to take advantage of the current information
technology. Concern for information security is fairly widespread. Those in banking, health care,
finance, and telecommunications rate information security as the highest business priority, with
retailers a little less concerned. In every sector, security is regarded as a key business driver.
12.3.1 Confidentiality: Confidentiality deals with protecting the content of messages or data
transmitted over the Internet from unauthorized people. For example, it is essential to
protect your credit card number from hackers. Besides other e-business setups, confidentiality is also
a major concern in the healthcare, insurance, and banking industries. To maintain the
confidentiality of Web users' information, organizations have to find ways to keep the information
from unauthorized view. From an operational point of view, that means stored information has
to be secured in such a way that it can only be accessed by authorized parties. Similarly, information in
transit has to be kept from the view of unauthorized parties and retrieved only by a legitimate
entity.
12.3.2 Integrity: Integrity is related to preventing data from being modified by an attacker.
Transmitting information over the Internet (or any other network) is similar to sending a package by
mail. The package may travel across numerous trusted and untrusted networks before reaching its
final destination. It is possible for the data to be intercepted and modified while in transit. This
modification could be the work of a hacker, network administrator, disgruntled employee, government
agents or corporate business intelligence gatherer; it could also be unintentional.
12.3.3 Availability: Availability means that systems, data, and other resources are usable when
needed despite subsystem outages and environmental disruptions. Lack of availability is essentially
loss of use. The most commonly known cause of availability problems is Denial of Service (DoS)
attacks even though there are other common causes such as outages, network issues, or host problems.
The goal is to ensure that system components provide continuous service by preventing failures that
could result from accidents or attacks. From a security point of view, availability is enhanced through
measures to prevent malicious denials of service. Closely related to availability and very important to
e-businesses are reliability and responsiveness. Reliability implies that a system performs
functionally as expected. Responsiveness is a measure of how quickly service could be restored after
a system failure. In other words, it is a measure of system survivability.
12.3.4 Legitimate use: Legitimate use has three components: identification, authentication and
authorization. Identification involves a process of a user positively identifying itself (human or
machine) to the host (server) that it wishes to conduct a transaction with. The most common method
for establishing identity is by means of username and password. The response to identification is
authentication. Without authentication, it is possible for the system to be accessed by an impersonator.
Authentication needs to work both ways: for users to authenticate the server they are contacting, and
for servers to identify their clients. Authentication usually requires the entity that presents its identity
to confirm it either with something the client knows (e.g. password or PIN), something the client has
(e.g. a smart card, identity card) or something the client is (biometrics: finger print or retinal scan).
Biometric authentication has been proven to be the most precise way of authenticating a user's
identity. However, biometric processes such as scanning retina or matching fingerprints to one stored
in a database are often considered intrusive, and there always exists some measure of fear that this
information will be misused.
The approach to authentication that is gaining acceptance in the e-business world is by the use of
digital certificates. A digital certificate contains unique information about the user including
encryption key values. These public/private encryption key pairs can be used to create hash codes and
digitally sign data. The authenticity of the digital certificate is attested to by a trusted third party
known as a "Certificate Authority." The entire process constitutes Public Key Infrastructure.
Once an entity is certified as uniquely identified, the next step in establishing legitimate use is to
ensure that the entity’s activities within the system are limited to what it has the right to do. This may
include access to files, manipulation of data, changing system settings, etc. A secured system will
establish a very well-defined authorization policy together with a means of detecting unauthorized
activity.
12.3.5 Auditing or Traceability: From an accounting perspective, auditing is the process of officially
examining accounts. Similarly, in an e-business security context, auditing is the process of examining
transactions. Trust is enhanced if users can be assured that transactions can be traced from origin to
completion. If there is a discrepancy or dispute, it will be possible to work back through each step in
the process to determine where the problem occurred and, probably, who is responsible. Order
confirmation, receipts, sales slips, etc. are examples of documents that enable traceability. In a well-
secured system, it should be possible to trace and recreate transactions, including every
subcomponent, after they are done. An effective auditing system should be able to produce records of
users, activities, applications used, system settings that have been varied, etc., together with time
stamps so that complete transactions can be reconstructed.
12.3.6 Non-repudiation: It is an attribute of a secure system that prevents the sender of a message from
denying having sent it. Non-repudiation is the ability of an originator or recipient of a transaction to
prove to a third party that their counterpart did in fact take the action in question. Thus the sender of a
message should be able to prove to a third party that the intended recipient got the message and the
recipient should be able to prove to a third party that the originator did actually send the message.
This requirement proves useful to verify claims by the parties concerned and to apportion
responsibility in cases of liability. Obviously, this is a crucial requirement in any business transaction
when orders are placed and both buyers and sellers need to be confident that not only are they dealing
with the appropriate parties but also that they have proof to support the claims of any action taken in
the process. Non-repudiation protocol is also useful in forensic computing where the goal is to collect,
analyze and present data to a court of law.
Cryptographic algorithms can be classified into two broad classes: symmetric key cryptography and
asymmetric key cryptography. If the encryption key (key_e) is equal to the decryption key (key_d), i.e.
the same key is used to encrypt and to decrypt, the algorithm is called a Symmetric Key Algorithm.
Examples of such algorithms are DES (Data Encryption Standard), TDES, IDEA, RC2, RC4, and RC5.
These algorithms can be implemented either in software or in hardware; a hardware implementation is
typically about 100 times faster than a software implementation. The major problem associated with
symmetric key algorithms is key distribution: the keys must be securely distributed before the actual
secure communication can start. Another disadvantage is that symmetric key algorithms alone cannot
provide authentication or non-repudiation of the communication process.
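To make the key_e = key_d idea concrete, the following minimal Python sketch implements a toy XOR stream cipher. It is purely illustrative (not a real algorithm such as DES or RC4), but it shows the defining property of a symmetric algorithm: the same shared key decrypts what it encrypted, which is exactly why the key-distribution problem described above arises.

```python
# Toy symmetric cipher: the same key encrypts and decrypts (key_e == key_d).
# Purely illustrative -- real systems use vetted algorithms such as DES/TDES/AES.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR-ing twice with the same key stream restores the original data.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

shared_key = b"distribute-me-securely"          # must reach both parties over a secure channel
plaintext = b"Purchase order #4711: 500 units"
ciphertext = xor_cipher(plaintext, shared_key)
assert xor_cipher(ciphertext, shared_key) == plaintext
print(ciphertext.hex())
```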
If key_e is not equal to key_d, the corresponding algorithm is called an asymmetric key cryptography
algorithm. The entities wishing to use the algorithm must possess a pair of keys: a public and a private
key. The public key is known to all outside entities. RSA (named after its inventors Ron Rivest,
Adi Shamir and Leonard Adleman) is an example of an asymmetric cryptography algorithm.
Asymmetric key algorithms are much slower than symmetric key cryptographic algorithms. For
example, RSA is about 100 times slower than DES. The private key operation time grows with k^3, whereas
the public key operation time grows with k^2, where k is the length of the key in bits. In real-life situations
RSA is never used for bulk data transfer; rather, it is used to exchange the bulk encryption (symmetric)
key.
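The hybrid pattern just described can be sketched with textbook RSA arithmetic. The primes below are far too small for real use and the "symmetric key" is just an integer; the point is only to show the public key encrypting a session key that only the private-key holder can recover.

```python
# Toy RSA with tiny primes, illustrating why asymmetric cryptography is used to
# exchange a symmetric (bulk-encryption) key rather than to encrypt bulk data.
p, q = 61, 53                      # two (far too small) primes
n = p * q                          # public modulus
phi = (p - 1) * (q - 1)
e = 17                             # public exponent
d = pow(e, -1, phi)                # private exponent (modular inverse, Python 3.8+)

symmetric_key = 42                 # stands in for a DES/AES session key
encrypted_key = pow(symmetric_key, e, n)   # anyone can do this with the public key (e, n)
recovered_key = pow(encrypted_key, d, n)   # only the private-key holder can undo it
assert recovered_key == symmetric_key
```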
A third class of cryptographic primitive, the hash function, processes
the data to create compact message "fingerprints". The fingerprints are called hash sums, hash values, hash codes or
simply hashes.
simply hashes. In cryptography, a cryptographic hash function is a hash function with certain
additional security properties to make it suitable for use as a primitive in various information security
applications, such as authentication and message integrity. A hash function takes a long string (or
'message') of any length as input and produces a fixed length string as output, sometimes termed a
message digest or a digital fingerprint. In various standards and applications, the two most-commonly
used hash functions are MD5 and SHA-1. Properties of the Hash function are:
• It should be easy to compute h(Msg), where Msg is the message to be sent.
• It should be hard to obtain Msg given only h(Msg).
• It should be very hard to find another message Msg' such that h(Msg') = h(Msg).
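A minimal sketch using Python's standard hashlib module shows the fixed-length digest property for the two hash functions named above (note that MD5 and SHA-1 are today considered weak, and newer designs such as SHA-256 are preferred in practice):

```python
import hashlib

msg = b"Ship 200 cartons to warehouse 7"
print(hashlib.md5(msg).hexdigest())      # 128-bit digest, always 32 hex characters
print(hashlib.sha1(msg).hexdigest())     # 160-bit digest, always 40 hex characters

# Changing even one byte of the message produces a completely different digest.
print(hashlib.sha1(b"Ship 201 cartons to warehouse 7").hexdigest())
```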
(Figure: the message digest (MD) is encrypted with the sender's private key to form the digital signature, and decrypted with the sender's public key during verification.)
Digital signatures are used to create public key infrastructure (PKI) schemes in which a user's public
key (whether for public-key encryption, digital signatures, or any other purpose) is tied to a user by a
digital identity certificate issued by a certificate authority. PKI schemes attempt to unbreakably bind
user information (name, address, phone number, etc.) to a public key, so that public keys can be used
as a form of identification.
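The signing flow pictured above can be sketched with toy RSA arithmetic; the key sizes below are far too small for real use, and production systems rely on vetted cryptographic libraries rather than hand-rolled code.

```python
# Illustrative-only digital signature: the message digest is "encrypted" with the
# signer's private key; anyone holding the public key can verify it.
import hashlib

p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def digest_int(msg: bytes) -> int:
    # Reduce the SHA-1 digest modulo n so the toy keys can handle it.
    return int.from_bytes(hashlib.sha1(msg).digest(), "big") % n

msg = b"I authorise payment of Rs. 10,000"
signature = pow(digest_int(msg), d, n)          # created with the PRIVATE key
assert pow(signature, e, n) == digest_int(msg)  # verified with the PUBLIC key
# A different message yields a different digest, so verification of a forgery fails.
```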
Transport Layer Security (TLS) and its predecessor, the Secure Sockets Layer (SSL), are cryptographic
protocols that allow client/server
applications to communicate across a network in a way designed to prevent eavesdropping,
tampering, and message forgery. TLS provides endpoint authentication and communications privacy
over the Internet using cryptography. Typically, only the server is authenticated (i.e., its identity is
ensured) while the client remains unauthenticated; this means that the end user (whether an individual
or an application, such as a Web browser) can be sure with whom they are communicating. The next
level of security — in which both ends of the "conversation" are sure with whom they are
communicating — is known as mutual authentication. Mutual authentication requires public key
infrastructure (PKI) deployment to clients.
As shown in the figure below, TLS involves two basic protocols with the following functions:
TLS Handshake Protocol
• Peer negotiation for algorithm support
• Public-key-based key exchange and certificate-based authentication
TLS Record Protocol
• Symmetric-cipher-based traffic encryption
Figure 2: TLS protocol in context (TLS sits between HTTP and TCP; the Handshake Protocol negotiates cryptographic and compression algorithms and exchanges secrets through public key techniques)
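As a concrete sketch of server-authenticated TLS, Python's standard ssl module can wrap an ordinary TCP socket; the default context verifies the server's certificate against the system's trusted root CAs. The host name example.com is only a placeholder; any reachable HTTPS server works.

```python
import socket, ssl

hostname = "example.com"                      # placeholder host; any HTTPS server works
context = ssl.create_default_context()        # loads trusted root CAs, enables hostname checks

with socket.create_connection((hostname, 443)) as tcp_sock:
    # wrap_socket performs the TLS handshake: algorithm negotiation, certificate-based
    # server authentication and key exchange; the record layer then encrypts traffic.
    with context.wrap_socket(tcp_sock, server_hostname=hostname) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
        print("Cipher suite:", tls_sock.cipher())
```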
In a PKI, a trusted third party, the Certificate Authority (CA), issues each individual a digital certificate. That certificate is signed by the CA and thus vouches for
the identity of the individuals. Unknown individuals can now use their certificates to establish trust
between them because they trust the CA to have performed an appropriate entity authentication, and
the CA's signing of the certificates attests to this fact.
A hierarchical trust model represents the most typical implementation of a PKI. In its most simple
instantiation, this trust model allows end entities’ certificates to be signed by a single CA. In this trust
model, the hierarchy consists of a series of CAs that are arranged according to a predetermined set of
rules and conventions. For example, in the financial services world, rather than have a single authority
sign all end entities' certificates, there may be one CA at the national level that signs the certificates of
particular financial institutions. Each institution would then itself be a CA that signs the certificates of
its individual account holders. Within a hierarchical trust model there is a trust point for each
certificate issued. In this case, the trust point for the financial institution's certificate is the national or
root CA. The trust point for an individual account holder is their institution's CA. This approach
allows for an extensible, efficient, and scalable PKI.
12.5.2 Security Services
The principal business objectives and risk management controls that can be implemented by a PKI are
presented in this section:
Confidentiality: Confidentiality means ensuring the secrecy and privacy of data, which is achieved with
cryptographic encryption mechanisms. Encryption of data is possible by using either public
(asymmetric), or secret (symmetric) cryptography. Since public key cryptography is not as efficient as
secret key cryptography for data encipherment, it is normally used to encipher relatively small data
objects such as secret keys used by symmetric based encryption systems. Symmetric cryptographic
systems are often incorporated into PKIs for bulk data encryption; thus, they are normally the actual
mechanism used to provide confidentiality.
Integrity: Integrity means ensuring that data cannot be corrupted or modified and transactions cannot
be altered. Integrity can be provided within a PKI by the use of either public (asymmetric), or secret
(symmetric) cryptography. An example of secret key cryptography used for integrity is DES in Cipher
Block Chaining mode where a Message Authentication Code (MAC) is generated. In the PKI
environment, using symmetric cryptographic systems for implementing integrity does not scale
particularly well. Public key cryptography is typically used in conjunction with a hashing algorithm
such as SHA-1 or MD5 to provide integrity.
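As an illustration of symmetric-key integrity protection, the sketch below uses Python's standard hmac module with SHA-1. HMAC is a standard keyed-MAC construction analogous in purpose to the DES-CBC MAC mentioned above, though not identical to it; the shared secret and messages are invented for the example.

```python
import hmac, hashlib

shared_secret = b"pre-shared integrity key"
message = b"Transfer Rs. 5,000 to account 1234"

mac = hmac.new(shared_secret, message, hashlib.sha1).hexdigest()

# The receiver recomputes the MAC with the same secret and compares in constant time.
assert hmac.compare_digest(mac, hmac.new(shared_secret, message, hashlib.sha1).hexdigest())

# Any tampering with the message changes the MAC, so the check fails.
tampered = b"Transfer Rs. 50,000 to account 1234"
assert not hmac.compare_digest(mac, hmac.new(shared_secret, tampered, hashlib.sha1).hexdigest())
```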
Authentication: Authentication means verifying the identity of entities, which is provided by the use of
public key certificates and digital signature envelopes. Authentication in the e-commerce environment
is performed very well by public key cryptographic systems incorporated into PKIs. The primary goal
of authentication in a PKI is to support the remote and unambiguous authentication between entities
unknown to each other, using public key certificates and CA trust hierarchies. Authentication in a PKI
environment relies on the mathematical relationship between the public and private keys. Messages
signed by one entity can be tested by any relying entity. The relying entity can be confident that only
the owner of the private key originated the message, because only the owner has access to the private
key.
Non-Repudiation: Non-repudiation means ensuring that data cannot be renounced or a transaction
denied. This is provided through public key cryptography by digital signing. Non-repudiation is a by-
product of using public key cryptography. When data is cryptographically signed using the private
key of a key pair, anyone who has access to the public key of that pair can determine that only the
owner of the key pair itself could have signed the data in question. For this reason, it is paramount that
end entities secure and protect their private keys used for digitally signing data.
12.5.3 PKI Logical Components
Different logical components comprise a PKI. The following outlines the typical logical components
in a PKI:
• End entities or subscribers
• Certificate authorities
• Certificate policies
• Certificate practices statement
• Hardware security modules
• Public key certificates
• Certificate extensions
• Registration authorities
• Certificate depositories
End Entities or Subscribers: An end entity or subscriber is any user or thing, including inanimate
objects such as computers, that needs a digital certificate to identify it for some reason.
The end entity normally must have the capacity to generate a public/private key pair and some means
of securely storing and using the private key. By definition, an end entity is not a CA.
Certificate Authorities: A Certificate Authority plays a critical role in a PKI. According to the IETF,
a CA is “an authority trusted by one or more users to create and assign public key certificates.”
[Internet X.509 Public Key Infrastructure PKIX Roadmap, March 10, 2000]. A CA functions as a
trusted third party and provides various key management services. A CA's public keys must be
distributed to all entities that trust the CA’s certificates. If a CA is a Root CA, that is, at the top of the
trust hierarchy and has no superior CA to vouch for it, then the CA must distribute its public keys as
self-signed certificates with an acceptable key certificate format and distribution protocol. The CA
must also make its clear text public keys available, so that relying entities can resolve the self-signed
certificates.
Certificate Policy: A primary tenet of e-commerce security is the Certificate Policy (CP) statement. The CP statement
provides the overall guiding principles that an organization endorses regarding who may do what and
how to systems and data. A CP also specifies how controls are managed. In addition, a CP names a set
of rules that indicates the applicability of a public key certificate to a particular community or class of
applications with common security requirements.
Certificate Practice Statement: The details of a policy statement should be published in a Certificate
Practices Statement or CPS. The CPS is a statement of the practices that a CA employs in issuing
public key certificates. The CPS document enumerates the procedural and operational practices of a
PKI. The CPS should detail all processes within the life cycle of a public key certificate including its
generation, issuance, management, storage, deployment, and revocation. The CPS should also specify
the original entity authentication process that an end entity must be validated through before
participating in a PKI. The objective of the CPS is to instill trust in the PKI such that the user
community at large will have sufficient confidence to participate in it.
Hardware Security Modules: Hardware Security Modules (HSMs) are another primary component
of a CA. A CA must instill trust in not only its client base but also in those who rely upon the
certificates issued to subscribers. Since that trust is predicated upon the security and integrity of the
CA's private keys used to sign the public key certificates of subscribers, it is necessary that those
private keys be secured as best as possible. For this reason, CAs should only store and use their
private keys in specialized computer equipment known as HSMs. HSMs are also known as tamper-
resistant devices. Various standards, for example FIPS 140-1, are used to categorize HSMs.
Public Key Certificates: A CA’s primary purpose is to support the generation, management, storage,
deployment, and revocation of public key certificates. A public key certificate demonstrates or attests
to the binding of an end entity’s identity and its public key. The basic constructs of a certificate should
include the name of an entity, identifying information about the entity, expiration period for the
certificate, and the entity’s public key. Other additional and useful information may be included in a
certificate: serial numbers, the CA's name, the CA's public key certificate itself, the type of algorithms
used to generate and verify the keys and certificate, and any other information that the CA generating
the certificate considers useful. The most widely used formats for digital certificates are those based on
the IETF X.509 standard.
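To see the constructs listed above in a real X.509 certificate, the following sketch (assuming network access and using example.com purely as a placeholder HTTPS host) retrieves a server certificate with Python's ssl module and prints a few of its fields.

```python
import socket, ssl

hostname = "example.com"                         # placeholder HTTPS host
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()                 # parsed X.509 fields as a dictionary

print("Subject:      ", cert["subject"])         # identity of the end entity
print("Issuer (CA):  ", cert["issuer"])          # the CA that signed the certificate
print("Serial number:", cert.get("serialNumber"))
print("Valid from/to:", cert.get("notBefore"), "/", cert.get("notAfter"))
```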
Certificate Extensions: Certificate extensions provide additional information within a certificate and
allow them to be tailored for the particular needs of an organization. Certificate extensions can affect
the interoperability of certificates if a relying party does not recognize the structure or content of the
certificate extensions. The types of information that may be found in a certificate extension include:
policy, usage, revocation, and naming data, which provide particular details unique to an
organization’s PKI.
Registration Authorities: A Registration Authority (RA) is an optional but common component of a
PKI. An RA is used to perform some of the administrative tasks that a CA would normally undertake.
Most importantly, an RA is delegated, with the CA’s explicit permission, the authority to perform
tasks on behalf of the CA. The primary purpose of an RA is to verify an end entity’s identity and
determine if an end entity is entitled to have a public key certificate issued. The RA must enforce all
policies and procedures defined in the CA’s CP and CPS.
Certificate Depositories: A certificate depository, sometimes referred to as a certificate directory, is
also an optional but common component of a PKI. A certificate depository may be an efficient
solution for closed systems (e.g., intranet) or those in isolated processing environments (e.g.,
chipcard-based applications) where the Root CA public key is distributed locally or revocation lists
are stored locally.
Certificate distribution can be accomplished by simply publishing certificates in a directory controlled
by a CA or RA. When the directory is controlled by the CA or RA, the certificate distribution process
is greatly simplified. Rather than trying to distribute every certificate to a unique point, the CA simply
updates the directory. A critical factor is that only the CA must have the authority to update or modify
the directory, but the directory must be publicly readable. LDAP is a good example of a simple and
efficient standards based directory format and protocol that can be used for certificate distribution.
12.5.4 PKI Functions
The basic processes common to all PKIs are:
• Public key cryptography – Includes the generation, distribution, administration, and control of
cryptographic keys.
• Certificate issuance – Binds a public-key to an individual, organization, or other entity, or to
some other data—for example, an email or purchase order.
• Certificate validation – Verifies that a trust relationship or binding exists and that a certificate
is still valid for specific operations.
• Certificate revocation – Cancels a previously issued certificate and either publishes the
cancellation to a Certificate Revocation List or enables an Online Certificate Status Protocol
process.
Issuing a public key certificate involves acquiring the end entity's public key, verifying the identity of
the end entity, formatting the certificate, and digitally signing the certificate data. These steps are
elaborated below.
Acquiring the Public Key: Depending on the value proposition and business risks associated with
the issuance of a public key certificate, the CA may take basic measures when obtaining the required
identification and certificate information. For example, the CA may allow it to be sent electronically
over the Internet, or the CA could use more sophisticated means such as mandating out of band
manual methods (e.g., using bonded couriers). The type and integrity of the credentials requested by
the CA during the enrollment process also depends on the intentions of the CA.
Verify the Identity of the End Entity: Depending on the business model in place and the amount of
reliance on the public key certificates, the CA may take simple measures to authenticate the end
entity. In any case, the most important factor in the binding process is to ensure that an entity’s
identity is verified unambiguously.
Formatting the Certificate: Before the certificate is signed by the CA, all data to be placed into the
certificate is collected and formatted. The specific data content and format of a public key certificate
can vary depending on the needs of the PKI.
Signing the Certificate: Finally, the certificate is digitally signed under the private key of the CA
used for signing certificates. Once signed it can be distributed and/or published using different
vehicles.
Certificate Validation: Any digital certificate issued by a CA is valid only for the period of time specified
in the certificate; after this period the certificate is no longer valid. So before any certificate is used, its
expiry date must be verified. Along with the expiry date, it must also be verified that the certificate has
been signed by the appropriate certifying authority.
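A minimal sketch of the expiry check described here, building on the certificate dictionary returned by ssl.getpeercert() in the earlier example; the signature and issuer checks are left to the TLS library itself, and the sample dates are invented.

```python
import ssl, time

def certificate_time_valid(cert, now=None):
    """Return True if the current time lies inside the certificate's validity period."""
    now = time.time() if now is None else now
    not_before = ssl.cert_time_to_seconds(cert["notBefore"])
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return not_before <= now <= not_after

# Example with a hypothetical, already-fetched certificate dictionary:
sample_cert = {"notBefore": "Jan  1 00:00:00 2024 GMT", "notAfter": "Jan  1 00:00:00 2026 GMT"}
print(certificate_time_valid(sample_cert))
```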
Certificate Revocation: Although public key certificates are issued for a fixed period of time before
they become void, situations can arise where they are no longer trustworthy and thus must be
prematurely expired. This is known as certificate revocation. Certificate revocation must be initiated
by the CA, or its delegate such as an RA, that originated the end entity's certificate. The
predominant vehicle for certificate revocation is known as a Certificate Revocation List or CRL. A
CRL is a list generated by the CA that contains unique information about the revoked certificates
which enables relying entities to determine if a certificate is valid or not. A CRL must be published in
a publicly available repository or directory.
12.6. Electronic Payment Systems
A Payment System is a mechanism that facilitates transfer of value between a payer and a beneficiary
by which the payer discharges the payment obligations to the beneficiary. Payment system enables
two-way flow of payments in exchange of goods and services in the economy. An electronic payment
system is needed for compensation for information, goods and services provided through the Internet -
such as access to copyrighted materials, database searches or consumption of system resources - or as
a convenient form of payment for external goods and services - such as merchandise and services
provided outside the Internet. It helps to automate sales activities, extends the potential number of
customers and may reduce the amount of paperwork.
12.6.1 Business to Business Payment Systems
Changes in the marketplace for business-to-business (B2B) payments increasingly demand executives'
attention. Growing numbers of e-payments, decreasing paper check volumes, and new legislation are
the main motivators. Businesses are beginning to realize they must add new payment options to a
process still dominated by paper checks, wire transfers, and automated clearinghouse (ACH)
transactions.
B2B payments involve up to five distinct components:
The buyer - purchases goods and services;
The buyer's financial institution - provides banking services to the buyer;
The supplier - provides goods and services;
The supplier's financial institution - provides banking services to the supplier; and
The Federal Reserve or other check-clearing institution - facilitates the check-sorting process.
Figure 3 outlines the traditional B2B check-clearing process in which a buyer writes a check that
subsequently flows through the supplier, the supplier's bank, the Federal Reserve (or other check-
clearing institution), the buyer's bank, and finally back to the buyer. This process is supported by
human intervention throughout the supply chain, including manual entry of check amounts and
associated transaction data.
Shifting payment processes to an electronic format requires an IT infrastructure to route payments and
related data through an integrated network. Using such end-to-end infrastructure poses a significant
adoption challenge because users' technological sophistication with regard to the payment supply
chain is so varied. Creating uniformity in the e-payment infrastructure is further complicated by the
fact that the standards of the Federal Reserve infrastructure apply to Fed products but not to products
outside the Fed's domain. Therefore, building the infrastructure necessary to securely and reliably
transmit e-payments and related transaction information among a large group of diverse users is a
significant obstacle and expense. However, the result of an improved payment network, as in Figure
3, is a streamlined process supporting the continuous exchange of information and funds.
12.6.1.1 B-to-B E-Payment Options
Three notable e-payment technologies promise to simultaneously address key adoption challenges,
accelerate B2B e-payment adoption, and deliver across multiple value dimensions: Automated
clearing house (ACH) -based bank proprietary e-payment platforms; Enhanced purchasing card
technology; and Open network systems.
ACH-Based Bank Platforms: ACH-based transactions (such as for direct deposit of employee
salaries) have existed in one form or another since the early 1970s. Bank proprietary systems are e-
payment solutions built on the ACH network. In the past decade, banks have invested heavily in
developing software packages that have improved the functionality of B2B ACH transactions. They
now combine financial processing and electronic data warehousing capabilities into one
comprehensive service. Buyers and suppliers send complete trade information (including contracts,
pricing, orders, receipts, and invoices) to bank systems; the information is then stored in a document
available to both parties. These platforms seek to provide comprehensive electronic bill payment and
presentment from the beginning to the end of the payment cycle.
From the standpoint of value attribute comparisons, ACH-based payment systems meet many of the
requirements identified by B2B transaction participants. The importance of these attributes is the
foundation of historical ACH popularity in facilitating relatively small recurring financial transactions
between businesses and consumers. One particularly troublesome issue is the relatively high
implementation costs of ACH-based platforms, particularly for small- and medium-size businesses.
This issue further complicates systems-integration problems. Another significant issue plaguing ACH-
based solutions is that they rely on bilateral closed networks. This means that businesses must share
sensitive financial information, in turn increasing the risk of fraud. Global coverage is another
concern with any form of ACH-based payment, as ACH systems are typically built for domestic use.
Due to the foregoing, along with the lack of other desired characteristics, ACH-based e-payment
systems have not gained significant traction in the B2B e-payment market. Despite this, bank
proprietary solutions are well suited for large corporations and government agencies that execute
relatively low- to medium-value recurring payments within the country.
Enhanced Purchasing Cards: Enhanced purchasing cards, or p-cards, (such as MasterCard's e-P3)
are e-payment solutions that offer management services for B2B purchasing, presentment, and
payment. This e-payment strategy is the result of collaboration between outside software vendors and
existing p-card issuers to provide businesses with customized end-to-end e-commerce. P-card
solutions allow buyers and sellers to collaborate within a common environment, streamline payment
processes, and use p-cards for payment. Businesses transmit orders electronically, view the status of
orders and invoices online, control the initiation of payments, and integrate data into existing financial
systems.
Enhanced p-cards share many of the positive characteristics of desirable e-payment networks. They
are typically Web-based and relatively easy to integrate with legacy software and hardware
environments. This ease of integration greatly reduces the cost and complexity of implementation.
Another p-card benefit is reduced potential for fraud and credit loss, due to enhanced settlement
processes. P-card transactions settle through an online "credit gateway" associated with the p-card
issuer that allows institutions to exchange encrypted information for authorization purposes. These
systems provide a secure environment for payments because sensitive account information is not
exchanged directly, and transactions are easily traceable and challenged when disputes arise.
One significant disadvantage of enhanced p-cards is that interchange fees typically still apply. This
means that suppliers may be averse to accepting the enhanced p-card for medium- to large-value
transactions, as fees increase as the dollar value of the transaction increases. Given these benefits and
limitations, the enhanced p-card represents a good way for current p-card users and prospective e-
payment-oriented businesses to conduct low-value nonrecurring transactions.
Open Network Systems: Open network systems, like Visa's "Visa Commerce" platform, are rules-
based e-payment systems that utilize open, secure, global-settlement networks to process buyer-
initiated payments. Open networks interface directly with front-end procurement and accounts-
payable systems to provide buyers and suppliers alike with a seamless, integrated corporate payment
solution. Open networks also enable buyers to initiate and settle payments based on preestablished
terms with suppliers.
Unlike enhanced p-card and ACH-based platforms, open networks employ a flexible fee schedule,
allowing for the minimization of fees for relatively high-dollar-amount transactions. In addition,
open-network solutions may be easier to integrate with legacy environments. For example, Visa
Commerce was designed to integrate with existing procurement applications, regardless of their
sophistication. This integration strategy means that financial institutions, buyers, and suppliers that
want to use an open network can do so with minimal up-front investment. Finally, open networks
typically offer global coverage, a major consideration for moderate-to-large-size firms. Open network
payments appear to be best suited for relatively large-value, multiple-invoice directed payments, as
they circumvent interchange fees and support robust transaction data.
12.6.1.2 Challenges with B-to-B E-Payment Systems
Four main challenges confront businesses interested in adopting e-payment processes: systems
integration; the absence of remittance standards; security; and uncertainty about the return on
investment. In addition, candidate technologies must be considered in light of their ability to deliver
value in multiple dimensions.
Systems integration: The most daunting barrier to the adoption of e-payments is a lack of integration
across the many systems needed to support the process.
Remittance standards: Complicating these challenges is a lack of even minimum standards
pertaining to e-payment data. Early adopters of e-payment technologies still complain that
inconsistencies in remittance standards have resulted, in some cases, in the corruption of important
remittance data, including invoice numbers, and other information, including payment amounts and
payer identification.
Security: Since B2B transactions involve multiple parties, they flow across diverse technology
architectures. Each party in an electronic transaction is subject to the security procedures of other
members in its financial supply chain. Arguably, paper-based payments are subject to similar security
challenges, though they are well understood and have long been accounted for.
Value proposition uncertainty: E-payment networks (like other networks) are subject to "network
externalities," so the value of participation is contingent on the size of the network itself; that is, the
greater the number of participants, the more valuable the network is to each participant. Given that the
technical aspects of e-payment networks are still evolving, it has been difficult for potential
participants to estimate the value of joining and assess the appropriate level and speed of the related
investment. Investment decisions in this context are influenced by businesses' perceptions of the
likelihood of agreement on the issues related to standards and integration. The value proposition of e-
payment networks is further clouded by the realization that connecting accounts for only a portion of
the total cost.
12.6.2 Business to Consumer Payment Systems
Business-to-consumer is a form of electronic commerce in which products or services are directly
sold from a firm or company to a consumer without any intermediation. The Internet has proved to
be an effective medium for direct selling. B2C online payment occurs when the enterprise (the
seller) and the individual (the buyer) settle their transaction on an e-business Web site on the Internet
and a bank provides them with an online fund settlement service. Traditional B2C payment services
can be cash based, cheque based or credit card based. Of these mechanisms, credit-card-based
payment is the most popular.
Credit Card Based Payment: Ideally, Internet-based credit card payment should work in a similar
way to the traditional credit card system. However, security becomes the primary
concern in an Internet-based system: credit card details, once submitted to the merchant's Web site,
could be reused by the merchant himself. Protocols like SET (Secure Electronic Transaction) help
prevent this kind of misuse. SET is a pragmatic approach that paves the way for easy, fast and
secure transactions over the Internet. A SET transaction is initiated with a handshake, with the merchant
authenticating itself to the payer and fixing all payment data. The payer then uses a sophisticated
encryption scheme to generate a payment slip. The goal of the encryption scheme is to protect
sensitive payment information such as the credit card number. Next, the payment slip is signed by the
payer and sent to the merchant. The merchant sends the slip to its acquirer gateway to authorize
and capture the payment. The acquirer checks all signatures and the slip, verifies the credibility of the
payer, and sends either a positive or a negative signed acknowledgement back to the merchant and the
payer. SET, however, is not widely accepted, as it is not computationally efficient and requires heavy
investment. Another alternative is to use the Web sites of trusted third parties to submit credit
card details (e.g., PayPal).
Cheque-like Systems: Electronic cheques work in a similar way to paper cheques. The
customer needs wallet software on the browser to store information about the checking account.
During cheque payment, the customer sends to the merchant the payment information encrypted with the
bank's public key, along with other information for the merchant encrypted with the customer's private
key. The merchant forwards the payment information to the appropriate bank and gets the funds
transferred to its account. CyberCash is one example of such a chequing system.
Digital Cash: In a digital cash system, users can withdraw e-cash coins from a bank and use them to
pay other users. Digital cash is kept as computer files containing large random numbers digitally
stamped by the bank, i.e. encrypted with the bank's private key followed by the customer's public key,
and maintained by the customer's wallet software. Digital cash can be classified as identifiable or
anonymous, depending on whether it reveals the identity of the spender, and as online or offline,
depending on the direct participation of the bank. One major problem associated with money that is
both anonymous and offline is double spending. In the case of identifiable electronic money, a serial
number is provided by the bank for each computer file; CyberCash is an example of this type of money.
In the case of anonymous electronic money, the serial number is created and blinded by the customer
and subsequently signed by the bank; DigiCash is one such example. Online electronic money is
validated online with the active participation of the bank (CyberCash), whereas offline money is not
validated online during the transaction. The offline scheme is appropriate for peer-to-peer transactions
where the bank need not be an active participant (DigiCash).
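The "blinding" step used by anonymous digital cash can be illustrated with textbook RSA arithmetic. The numbers below are tiny and the flow is only a sketch of the idea behind systems like DigiCash: the customer hides the coin's serial number with a random factor, the bank signs the blinded value without seeing the serial number, and the customer removes the factor to obtain a valid bank signature.

```python
# Toy RSA blind signature: unblind(sign(blind(m))) == sign(m).
import math, random

p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))             # bank's private signing exponent

serial = 123456                                # coin serial number chosen by the customer
r = random.randrange(2, n)
while math.gcd(r, n) != 1:                     # blinding factor must be coprime to n
    r = random.randrange(2, n)

blinded = (serial * pow(r, e, n)) % n          # customer blinds the serial number
blind_sig = pow(blinded, d, n)                 # bank signs without ever seeing `serial`
signature = (blind_sig * pow(r, -1, n)) % n    # customer removes the blinding factor

assert signature == pow(serial, d, n)          # a valid bank signature on the serial number
```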
12.7. Conclusion
E-business depends on providing customers, partners, and employees with access to information, in a
way that is controlled and secure. Managing e-business security is a multifaceted challenge and
requires the coordination of business policy and practice with appropriate technology. In addition to
deploying standards-based, flexible and interoperable systems, the technology must provide assurance
of the security provided in the products. As technology matures and secure e-business systems are
deployed, companies will be better positioned to manage the risks associated with disintermediation
of data access. Through this process businesses will enhance their competitive edge while also
working to protect critical business infrastructures from malefactors like hackers, disgruntled
employees, criminals and corporate spies.
The volume of e-payment activity has risen steadily since 1979 and shows no sign of slowing down.
Motivating this trend has been consumer willingness to submit payments electronically. Increasingly,
businesses have sought to improve their understanding of the value of e-payment systems. Businesses
that identify and adopt effective e-payment strategies are more likely to realize streamlined business
processes and significant bottom-line savings compared to their peer organizations that don't. Those
seeking to implement e-payment technologies must consider not only their emerging technology
options but evolving payment needs as well.
References
[1] Eben Otuteye, A Systematic Approach to E-business Security,
http://ausweb.scu.edu.au/aw03/papers/otuteye/paper.html
[2] An Oracle white paper, Managing e-Business Security Challenges,
http://www.oracle.com/technology/deploy/security/oracle9ir2/pdf/9iR2hisec.PDF
[3] E-business Resource Group Security Guidelines,
http://www.bc.pitt.edu/ebusiness/arEBSecurityGuide.pdf
[4] Muhammad M. Satti, Brain J. Garner, Mahmood M. Nagril, Information Security Standards for e-
Business, http://www.macquarietelecom.com/whitepapers/Info%20Security%20Standards%20e-
biz.pdf
[5] Canadian banker association, Minding your e-Business, Security and Privacy Matter,
http://www.cba.ca/en/content/general/MYEBUS_Final_Report.pdf
[6] Public Key Infrastructure Overview, By Joel Weise – SUN PS Global Security Practice, Sun
Blueprints Online, August 2001, http://www.sun.com/blueprints/0801/publickey.pdf
[7] Cutting Checks: Challenges and Choices for the Adoption of B2B Electronic Payments, Mark J.
Cotteleer, Christopher A. Cotteleer, Andrew W. Prochnow, Communications of the ACM,
Volume 50, Number 6 (2007), Pages 56-61
[8] The State of the Art in Electronic Payment Systems, N. Asokan, Phillipe A. Janson, Michael
Steiner, Michael Waidner, IEEE Computer, Vol 30 , Issue 9 ( 1997) Pages: 28 - 35
*********
CHAPTER XIII
Automatic Data Capture using RFID and its implications
13.1 Introduction
Optimizing inventory level across the coordinating organizations is an important aspect of supply
chain management. Such efforts require real-time inventory data to be shared among the coordinating
organizations. Collection and dissemination of real-time inventory data requires integration of
automatic data capture and transfer technologies. Automated Identification and Data Capture (AIDC)
refers to technologies that identify objects, automatically collect data about them, and update the
data in software systems without human intervention. Some examples of AIDC technologies include
bar codes, RFID, smart cards, voice and facial recognition, and so forth.
Modern AIDC relies heavily upon barcodes for automated data capture. A barcode is basically a
machine-readable visual representation of information printed on the surface of an object. The
encoded data on the barcode is read by barcode readers, which update the backend ERP, SCM, or
WMS systems. However, there are some inherent issues with using barcodes, as shown in Table 1. To
overcome these issues the industry is now looking at the possibility of using new-generation AIDC
technologies like RFID. For example, objects tagged with RFID can be sensed in a wide area, and
there is no need to individually scan all the objects in front of an optical scanner. It can also offer
item-level tagging - that is, each item within a product range can be uniquely identified. The data
collected using RFID technology helps associate production events with each inventory item, which
leads to a tighter inventory control approach that relies on such real time data.
Table 1: Bar code vis-à-vis RFID solution for automated data capture
• Bar code: line-of-sight technology. RFID: able to scan and read from different angles and through certain materials.
• Bar code: unable to withstand harsh conditions (dust, corrosives); must be clean and not deformed. RFID: able to function in much harsher conditions.
• Bar code: no potential for further technology advancement. RFID: technology advancement is possible due to new chip and packaging techniques.
• Bar code: can only identify items generically, not as unique objects. RFID: the EPC will enable up to 2^96 items to be identified uniquely.
• Bar code: poor tracking technology, labor intensive and slow. RFID: potential to track items in real time as they move through the supply chain.
There are several versions of RFID that operate at different radio frequencies. The choice of
frequency is dependent on the business requirements and read environment – it is not a technology
where ‘one size fits all’ applications. Three primary frequency bands are being used for RFID:
• Low Frequency (125/134 kHz) – Most commonly used for access control, animal tracking and
asset tracking. Tags in this range are not affected by metallic surroundings and hence are ideal for
identifying metal objects like vehicles, tools, containers, and metallic equipment. The reading
range varies from a few centimeters to a meter depending upon the size of the antenna and the
reader used. These tags can also penetrate water and body tissue, and hence are often used
for animal identification. Most LF-based systems can only read one tag at a time; that is, they do
not support reading multiple tags simultaneously.
• High Frequency (13.56 MHz) – Used where a medium data rate and read ranges up to about 1.5
meters are acceptable. HF tags can penetrate most materials including water and body
tissue; however, they are affected by metal surroundings. HF tags are comparatively cheaper than
LF tags. The data transfer rate is higher than for LF tags (for example, about 20 ms for a read
operation), because communication is faster at higher frequencies. The reader can read
multiple tags simultaneously.
• Ultra High Frequency (850 MHz to 950 MHz) – Offers the longest read ranges, up to
approximately 3-6 meters, and high reading speeds. UHF tags are normally less expensive than
HF tags. Such tags are commonly used on objects that move at very high speed and where a large
number of tags must be scanned per second, in business contexts such as supply chains, warehouses,
and logistics. UHF tags do not work well in liquid and metal surroundings. The larger read range
limits their use in banking and access control applications, because an access card might be scanned
from a longer distance and an unauthorized person might gain entry into restricted premises.
• Active Tags contain an on-board battery that powers the tag's circuitry and the transmission of the
radio signal to the reader. They offer long read ranges, are the most expensive type of tag, and are typically
used for tracking expensive items; for example, the U.S. military uses these tags to track supplies
at ports.
• Semi-Active Tags (or semi-passive or battery-assisted) also contain a battery, which is used to run
the circuitry on the microchip, however it still relies on the reader’s magnetic field to transmit the
radio signal (i.e., information). These tags have a larger range because all the energy supplied by
the reader can be reflected back to the reader, which means it can work at low-power signal levels
as well. These tags have a read range up to 100 meters and may cost a dollar or more. Some of
these tags often are dormant - that is, they are activated by the presence of a reader’s magnetic
field. Once activated, the battery runs the circuitry and responds back to the reader. This is a
mechanism to save the battery power.
• Passive Tags rely completely on the energy provided by the reader's magnetic field to transmit the
radio signal to and from the reader. They do not have a battery. As a result, the read range varies
depending upon the reader used. A maximum distance of about 15 meters (50 feet) can be achieved
with a strong reader antenna and an RF-friendly environment.
Read-Write vs. Read-Only RFID Tags
• Read-only tags: The reader can only read data stored on such tags. The data cannot be modified in
any manner. The tag manufacturer programs the data on the tag. Such tags are comparatively very
cheap.
• Write-once read-many (WORM): The owner of the tag can program the data by writing the
content on the tag. Data stored on this tag can be written only once; however it can be read many
times.
• Read-write tags: Data stored on such tags can be easily edited when the tag is within the range of
the reader. Such tags are more expensive and are not often used for commodity tracking. These
tags are reusable; hence they can be reused within an organization.
13.2.2 RFID Readers
RFID readers send radio waves to the RFID tags to enquire about their data contents. The tags then
respond by sending back the requested data. The readers may have some processing and storage
capabilities. The reader is linked via the RFID middleware with the backend database to do any other
computationally intensive data processing. RFID readers can be classified using two different
schemes. First, readers can be classified based on their mobility as handheld readers and fixed
readers. Second, readers can be classified based upon the frequency in which they operate: single-
frequency and multi-frequency.
Fixed Readers vs. Handheld Readers
• Fixed RFID Readers are fixed at one location (e.g., a choke point). In a supply chain and
warehouse scenario, the preferred location of a reader can be along a conveyor belt, at dock
door antennae or portals, at depalletization stations, or at any other suitable location.
• Portable or Handheld RFID Readers are designed for mobile applications, for
example, mounting on vehicles in a warehouse or being carried by inventory personnel.
Single Frequency vs. Multi-Frequency
• Single-frequency readers operate in one frequency band: LF, HF, or UHF.
Such readers become inconvenient if tags in a warehouse operate at different frequencies.
• Multi-frequency readers can operate at multiple frequencies. Such readers can
conveniently read tags that operate at different frequencies (i.e., LF, HF, or UHF). Hence
they are more useful from a practical perspective; however, such readers come at a premium
price.
13.2.3 RFID Middleware
In general, the RFID middleware manages the readers and extracts Electronic Product Code (EPC,
explained later) data from the readers; performs tag data filtering, aggregating, and counting;
and sends the data to the enterprise WMSs (warehouse management systems), backend database, and
information exchange broker. Figure 1 shows the relationship between tag, reader, RFID middleware,
and backend database. An RFID middleware works within the organization, moving information (i.e.,
EPC data) from the RFID tag to the integration point of high-level supply-chain management systems
through a series of data-related services.
From the architectural perspective, RFID middleware has four layers of functionality: reader API,
data management, security, and integration management. The reader API provides the upper layers
with an interface for interacting with the readers; it also supports flexible interaction patterns (e.g.,
asynchronous subscription) and an active "context-aware" strategy to sense the readers. The data
management layer mainly deals with filtering redundant data, aggregating duplicate data, and routing
data to the appropriate destination based on its content.
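A minimal sketch of the kind of filtering this layer performs (the tag-read records and field names are purely illustrative): duplicate reads of the same EPC reported by several antennas within a short time window are collapsed into a single event before being passed upstream.

```python
from typing import Iterable, List, Tuple

# Each raw read is (epc, reader_id, timestamp_seconds); names are illustrative only.
RawRead = Tuple[str, str, float]

def deduplicate_reads(reads: Iterable[RawRead], window: float = 2.0) -> List[RawRead]:
    """Drop repeat reads of the same EPC that arrive within `window` seconds."""
    last_seen = {}
    events = []
    for epc, reader_id, ts in sorted(reads, key=lambda r: r[2]):
        if epc not in last_seen or ts - last_seen[epc] > window:
            events.append((epc, reader_id, ts))
        last_seen[epc] = ts
    return events

raw = [
    ("urn:epc:id:sgtin:0614141.107346.2017", "dock-door-1", 10.0),
    ("urn:epc:id:sgtin:0614141.107346.2017", "dock-door-2", 10.4),  # duplicate read
    ("urn:epc:id:sgtin:0614141.107346.2018", "dock-door-1", 11.1),
]
print(deduplicate_reads(raw))   # two distinct events survive the filter
```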
The integration layer provides data connectivity to legacy data sources and supporting systems at
different integration levels, and thus can be further divided into three sub-layers: application
integration, partner integration, and process integration. Application integration provides a variety of
reliable connection mechanisms (e.g., messaging, adaptors, or drivers) that connect the RFID data
with existing enterprise systems such as ERP or WMS. Partner integration enables the RFID
middleware to share RFID data with other RFID systems via other system communication
components (e.g., the Data Exchange Broker in Figure 2). Process integration provides the capability
to orchestrate RFID-enabled business processes. The security layer obtains input data from the data
management layer and detects data tampering, which might occur either at the tag, through a rogue RFID
reader during transportation, or in the backend internal database through malicious attacks. The overall
architecture of RFID middleware and its related information systems in an organization are depicted
in Figure 2.
The backend DB component stores the complete record of RFID items. It maintains the detailed item
information as well as tag data, which has to be coherent with the data read from the RFID tags. It is worth
noting that the backend database is one of the points where data tampering might occur, through
malicious attacks that change RFID item data by circumventing the protection of an organization's
firewall. The WMS integrates mechanical and human activities with an information system to
effectively manage warehouse business processes and direct warehouse activities. The WMS
automates receiving, put-away, picking, and shipping in warehouses, and prompts workers to do
inventory cycle counts.
The RFID middleware employs the integration layer to allow real-time data transfer towards the
WMS. The data exchange broker is employed in this architecture to share, query, and update the
public data structure and schema of RFID tag data by exchanging XML documents. Any update of the
data structure will be reflected and propagate to all involved RFID data items stored in the backend
database. From the standardization view, it enables users to exchange RFID-related data with trading
partners through the Internet. From the implementation angle, it might be a virtual Web services
consumer and provider running as peers in the distributed logistics network.
Typical RFID application areas include, among others:
Maintenance:
• Plant & Equipment
• Fixed assets
• Patients
Product security:
• Tamper evidence
• Product authentication
• Anti-counterfeiting
Reduction in labor costs: At DCs (Distribution Centers), labor accounts for nearly 70% of costs. It is
estimated that RFID could reduce this by nearly 30% by removing the need for manual intervention
and the use of barcodes when loading cases or stocking pallets.
13.5 Enabling RFID Adoption Move in the Supply Chain – The Electronic Product Code
At the heart of the current RFID-based technology drive to improve supply chain efficiency and
reduce operating costs is the EPC (Electronic Product Code). The 96-bit EPC is divided into four fields:
• Header (bits 0-7): The header is 8 bits and defines the type and length of the code; in this case 01 indicates
an EPC Type 1 number, which is 96 bits in length. The EPC length ranges from 64 to 256 bits.
• EPC Manager (bits 8-35): Typically identifies the manufacturer of the product the EPC tag is
attached to.
• Object Class (bits 36-59): Refers to the exact type of product, in the same way as an SKU (Stock
Keeping Unit).
• Serial Number (bits 60-95): Provides a unique identifier for each individual item, allowing up to 2^36
serial numbers within an object class.
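Using the field boundaries listed above (8-bit header, 28-bit EPC manager, 24-bit object class, 36-bit serial number for a 96-bit EPC), a small sketch can unpack an EPC value with ordinary bit operations; the example value is made up purely for illustration.

```python
def parse_epc96(epc: int) -> dict:
    """Split a 96-bit EPC into header, EPC manager, object class and serial number."""
    return {
        "header":       (epc >> 88) & 0xFF,         # bits 0-7   (8 bits)
        "epc_manager":  (epc >> 60) & 0xFFFFFFF,    # bits 8-35  (28 bits)
        "object_class": (epc >> 36) & 0xFFFFFF,     # bits 36-59 (24 bits)
        "serial":       epc & 0xFFFFFFFFF,          # bits 60-95 (36 bits)
    }

# Hypothetical 96-bit EPC: header=0x01, manager=12345, object class=678, serial=90001
example = (0x01 << 88) | (12345 << 60) | (678 << 36) | 90001
print(parse_epc96(example))
```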
Fig. 4 The basic steps of EPC infrastructure
EPC infrastructure will allow immediate access to information, which will not only optimize existing
services such as ASN (Advance Shipping Notice), but also has the potential to create new services. For
example, a retailer could automatically lower prices as the expiry date approaches, or a manufacturer
could recall a specific batch of products due to health concerns and, if needed, pinpoint the source of
the problem down to a unique product.
Middleware or Savant Software: The sheer volume of data created by billions of EPC tags
would quickly grind most existing companies' enterprise software and IT infrastructure to a
standstill. The answer to this problem is middleware, or Savants. RFID
Savants serve as a software buffer that sits almost invisibly between the RFID readers and the
servers storing the product information. They allow companies to process relatively unstructured tag
data taken from many RFID readers and direct it to the appropriate information systems. Savants are
able to perform many different operations, such as monitoring the RFID reader devices, managing false
reads, caching data and, finally, querying an Object Naming Service (ONS).
Object Name Service (ONS): The ONS matches the EPC to information about the product via a
querying mechanism similar to the DNS (Domain Name System) used on the Internet, which is
already a proven technology capable of handling the volumes of data expected in an EPC RFID system.
The ONS server provides the IP address of a PML server that stores information relevant to the EPC.
Physical Markup Language (PML): Whilst the EPC identifies the individual product, the really
useful information is written in a new standard software language called Physical Markup
Language. PML itself is based on the widely used and accepted Extensible Markup Language (XML),
designed as a document format to exchange data across the Internet. It is not surprising, therefore, with
so much of the infrastructure for EPC being borrowed from the Internet (DNS, XML, etc.), that it is often
referred to as "the Internet of Things".
PML is designed to store any relevant information about a product; for example,
(1) Location information e.g., tag X was detected by reader Y, which is located at loading dock Z;
(2) Telemetry information [Physical properties of an object e.g., its mass; Physical properties of the
environment, in which a group of objects is located, e.g., ambient temperature];
(3) Composition information e.g., the composition of an individual logistical unit made up of a pallet,
cases and items. The information model will also include the history of the various information
elements listed above e.g., a collection of the various single location readings will result in a location
trace.
(4) Manufacturing and expiry dates
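Since PML is XML-based, the kinds of information listed above can be pictured as a small XML document. The element and attribute names below are invented for illustration only and do not follow the official PML schema.

```python
import xml.etree.ElementTree as ET

# Illustrative structure only -- not the official PML schema.
item = ET.Element("taggedItem", epc="urn:epc:id:sgtin:0614141.107346.2017")
ET.SubElement(item, "location", reader="dock-door-Z", site="loading-dock")
ET.SubElement(item, "telemetry", ambientTemperature="4.5C", massKg="12.3")
ET.SubElement(item, "composition", pallet="PAL-88", case="CASE-17")
ET.SubElement(item, "dates", manufactured="2009-03-01", expiry="2010-03-01")

print(ET.tostring(item, encoding="unicode"))
```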
Fig 5. Using EPC
Before an organization adopts any candidate technology, it has to assess the technology against its core
competence (CC). For example, when the retail giant Wal-Mart realized that its core competence lay in its
dominant distribution channels, which could greatly benefit from RFID technology, it required all of its
top 100 suppliers to attach RFID tags onto their goods. By contrast, if a company XYZ's CC is
manufacturing high-quality automobile engine assemblies, it should be very cautious in adopting RFID
unless its primary downstream retailers urge it to do so, because distribution is not XYZ's core
competence and adopting RFID would only distract it from what it does well. Therefore, in general, an
organization should be wary of taking aggressive steps in implementing RFID solutions before its
core competence has been thoroughly reaffirmed and evaluated.
Feasibility Analysis: If RFID technology is judged to be aligned with the CC, it is time
for the organization to perform a feasibility analysis, which deals with four major aspects:
• It is very important to estimate the capability of the existing information systems. Since RFID will
dramatically increase the amount of data captured from instance-level tags, the information
systems have to capture, process, and analyze this huge amount of data efficiently. Any
insufficiency in the information systems will hinder the RFID technology from realizing its full
potential.
• RFID solutions can be quite costly; hence continuous and dedicated RFID funding support is
essential.
• Personnel preparation is another feasibility issue needing consideration. Once implemented,
RFID may change business processes as well as fundamental information systems operations,
which requires substantial IT training and adaptation effort. The learning capability of the
staff involved also determines the success of RFID adoption.
• Lastly, the organization has to choose the appropriate time to deploy RFID. Early adoption of
RFID has advantages as well as risks. The timing analysis should consider whether the company is
ready for RFID at that particular time and, moreover, can bear the associated risk, even if the
prerequisite conditions are all met.
Candidate Scenarios
The next step is to identify candidate scenarios that would benefit from RFID and to measure the
potential benefit in a self-defined scale. For example, for manufacturing units, RFID can be used to
support quality control by querying components and subassemblies as they enter the facility. This is a
typical candidate scenario. It is recommended that the organization enumerate
all the potential RFID-enabled candidate scenarios that can impact its core
competence. A typical tabular description of a sample candidate scenario (based on RosettaNet,
2001) is presented in Table 1.
Scenario Prioritization
In this step, the company needs to prioritize this list against a set of criteria such as the estimated impact
on the CC, the ROI, the cost, and so forth. Such a prioritizing process also needs to consider the
preferences of the different RFID stakeholders. This is achieved by:
1. organizing the criteria set into a hierarchical structure
2. performing a pair-wise comparison between any two candidate scenarios against each specific
criterion
3. providing a pair-wise comparison between any two criteria for each RFID stakeholder
4. computing the stakeholder-aggregated preference for each criterion
5. calculating the overall weight for each scenario, allowing for all criteria with different preferences
6. ranking the scenario list by the weight values generated in step 5, with the largest weight value
being positioned in the first rank. This step eventually generates a Prioritized Scenario List for
further justification review.
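A much-simplified sketch of steps 4-6 is shown below: a plain weighted-sum scoring rather than a full pair-wise AHP calculation, with scenario names, criteria weights and scores all invented for illustration.

```python
# Simplified scenario prioritization: criteria weights are assumed to have been
# aggregated across stakeholders already; scores use a self-defined 1-10 scale.
criteria_weights = {"impact_on_CC": 0.5, "ROI": 0.3, "cost": 0.2}

scenarios = {
    "Incoming quality control":  {"impact_on_CC": 8, "ROI": 6, "cost": 5},
    "Pallet tracking at DC":     {"impact_on_CC": 9, "ROI": 7, "cost": 4},
    "Item-level retail tagging": {"impact_on_CC": 6, "ROI": 4, "cost": 2},
}

def overall_weight(scores):
    # Step 5: combine the per-criterion scores using the aggregated weights.
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Step 6: rank scenarios by overall weight, largest first.
prioritized = sorted(scenarios, key=lambda name: overall_weight(scenarios[name]), reverse=True)
for rank, name in enumerate(prioritized, start=1):
    print(rank, name, round(overall_weight(scenarios[name]), 2))
```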
Justification Review:
In this step, for each candidate scenario in the Prioritized Scenario List, the organization conducts the
justification review, a static analysis of RFID deployment restricted to that particular candidate
scenario. For example, traditional ROI (return on investment) methods can be utilized to examine
whether RFID technology should be deployed in this candidate scenario. For each candidate scenario,
if the justification review produces negative results, it is removed from the prioritized scenario list,
and the next scenario will be considered to perform the justification review. Otherwise, this scenario
will be subsequently chosen for the following pilot test.
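As a hedged illustration of such a static justification review, the sketch below computes a simple ROI and payback period from entirely hypothetical cost and benefit figures for one candidate scenario.

```python
# Hypothetical figures for one candidate scenario (all amounts are invented).
initial_investment  = 250_000     # tags, readers, middleware, integration
annual_benefit      = 120_000     # labor savings, shrinkage reduction, etc.
annual_running_cost = 20_000      # maintenance, replacement tags
horizon_years = 3

net_gain = (annual_benefit - annual_running_cost) * horizon_years - initial_investment
roi = net_gain / initial_investment
payback_years = initial_investment / (annual_benefit - annual_running_cost)

print(f"ROI over {horizon_years} years: {roi:.1%}")
print(f"Payback period: {payback_years:.1f} years")
# A negative ROI here would remove the scenario from the Prioritized Scenario List.
```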
Adoption Issues
Before the pilot test, a clear consensus of issues and their solutions is necessary among all the
stakeholders. Hence in this step, the stakeholders need to unanimously identify adoption challenges
specific to this scenario. The organization itemizes all the possible problems that it might encounter
during the pilot test. More importantly, the solutions that address these challenges should be proposed
and planned in this step. This ensures the smooth progress of the pilot test. Moreover, the solutions
can also be validated and modified during the pilot test.
Pilot Test
Once the justification review is positive, the pilot implementation can be carried out in an
experimental environment. It can be an RFID prototype in a smaller scope or scale of this candidate
scenario. For example, a pilot deployment in one or two locations allows evaluation of RFID vendors,
equipment, and software, and provides the opportunity for different stakeholders to gain experience
with RFID. Furthermore, such a pilot produces the impact analysis for this candidate scenario. Many
impacts can be associated with the RFID deployment; some are beneficial to certain stakeholders, while
others are negative for other stakeholders. Hence the pilot implementation can estimate the impacts
brought by RFID deployment. If most of the results tend to be negative, the likelihood that RFID will
be implemented in this scenario is very low. The organization
might need to consider the next candidate scenario along the Prioritized Scenario List. In contrast,
positive pilot test results with the pilot feedback suggest the start of the RFID implementation in this
candidate scenario.
Implementation
Once the pilot test is passed, the formal implementation can be carried out in an incremental manner
through a set of iterations with a couple of milestones, which ensure that the implementation cost and
risk can be controlled and mitigated to the minimal level. Each milestone has different focuses. Table
3 lists possible sequential tasks within one milestone. Particular attention is given to fostering the
transition between milestones.
RFID Middleware: RFID middleware contributes a major portion of RFID investment. Many vendors
supply RFID middleware, and the cost can vary depending upon the capabilities of the middleware.
Usually factors that contribute to cost include complexity of the application and the number of places
the middleware would be installed. Apart from the middleware, the companies should also consider
the cost of edge servers, which are normally deployed in the warehouse, distribution center, or
production facility. The edge servers are simple servers, which are connected to the RFID reader
using a Universal Serial Bus (USB) port.
Training Existing Staff: The organization will need to train its employees, particularly engineering
staff who will manage readers in manufacturing and warehouse facilities, and IT staff who will work
on the systems that manage RFID data.
Hiring Technology Expertise: Most of the companies, as of now, would not have the expertise to
deploy a complete RFID system. This is partly attributed to the fact that RFID is a relatively new
technology. Hence an organization would need to outsource this task to a third party who knows how
to install the readers, decide the appropriate location for fixing the tag on the products, ascertain that
the data gathered by the reader is properly propagated to the middleware in the right format, and so
on. This is quite important because RFID systems can be sometimes difficult to install, as there are
several factors that can affect the optimum performance of such a system. Hence a major portion of
RFID investment has to be targeted to this area.
Other Miscellaneous Costs: The miscellaneous costs might include regular maintenance of the RFID
readers or replacement of damaged tags or antennas.
Read-Rate Accuracy
Achieving 100% read-rate accuracy is a major adoption challenge with RFID deployment. Supply
chains and warehouse management solutions based on RFID are highly vulnerable to read-rate
inaccuracy because of the number of RFID-tagged items that need to be scanned every second.
Consider the scenario when a palette containing 1,000 RFID-tagged items is scanned at the warehouse
exit. There is a high probability that the reader will miss a few tags. It is difficult to isolate the main
causes of inaccurate readings because accuracy depends on many interacting variables. However, some
basic parameters can be outlined. The main reasons for inaccurate readings include the environment in which the RFID system works,
material of the item being tracked, reader configuration, reader and tag placements, tag orientation,
and so forth. To successfully deploy an RFID system, some key parameters should be considered to
achieve accurate readings.
Tagged Material
Maintain some consistency in the materials being tracked. It is not a good idea to use a single standardized
reader configuration to track cartons, trolleys, pallets, glass materials, documents, and metal or plastic bins alike.
This is because different materials behave differently to RF energy; some materials are RF friendly,
while others are RF absorbent or RF opaque. A reader configured to read tags from RF-friendly
material would definitely fail to give 100% read-rate accuracy if used to track RF-absorbent or RF-
opaque items.
Tag Orientation
Orientation is one of the biggest factors affecting read rates. Even though dual-dipole tags
perform much better in all orientations, it is still advised to follow a policy on tag placement and tag
orientation (TPTO). A standard policy on TPTO across an organization would definitely improve the
read-rate accuracy.
Data Security
In an RFID system, data security is concerned with preventing an unauthorized party from being able to:
a) Obtain access and learn the data contents
b) Obtain access and modify/corrupt/erase the data contents
c) Copy the data contents to a similar storage device (duplicate)
In a complete system, security of data as defined above not only involves the storage medium, but
also how data is created and transferred from a host to the medium (or vice versa). For example, when
an engineer broke the security of a French bank credit card a few years ago, he did it not by
compromising the chip security, but by hacking the reader terminal. The following are scenarios that
could happen in the supply chain.
1) Industrial Sabotage – somebody with a grievance against a company decides to start corrupting
data in tags by using a hand held device, and erasing or modifying the contents.
2) Industrial Espionage – A rather unlawful competitor would like to know how many, and what
type of products are being manufactured, and shipped by your company. He could possibly achieve
this in the following ways
i. Eavesdropping – listening in on longer range communication systems like UHF which
broadcast signals (albeit very weak) up to 100 meters; some protocols have basic security
which ensures that the ID number is never transmitted completely in one stream.
ii. Placing bogus, well-concealed readers linked to a PC somewhere in the proximity of the
tags moving through the production line
iii. Using hand held devices
3) Counterfeiting – Being able to read or intercept data being written into a tag which uniquely
identifies or certifies a product. Once the data is known, similar read/write tags could be purchased
and updated with the authentic data, thus creating the real possibility of counterfeiting products which
are supposed to be protected by a tag.
All the above scenarios are potential risks if no security is implemented in the tag and reader. The
importance attached to protecting data in the supply chain will depend on the application and the
company's strategy towards security. In some cases legislation will impose it. Of course, bar codes,
which are used today, can be easily read, decrypted, and even destroyed, but not on the widespread
and automatic scale possible with RFID. Even the simplest security costs silicon area, and therefore
will impact the final tag price. This goes against the current trend of trying to produce the smallest
and cheapest tag possible. Every company is therefore faced with this tradeoff between cheaper
unsecured tags and the potential security risks they entail.
13.9 Privacy
If RFID is deployed at full scale, it may raise many privacy concerns, because RFID can be used to
track consumer behavior, which can further be used to analyze consumer habits. It can even be used
for hidden surveillance, for example by deploying concealed RFID readers and tags for tracking. With
RFID tags shrinking day by day, it has become possible to hide them within products without the
owners' consent; for example, RFID tags have already been hidden in packaging. A scenario of hidden
RFID testing was discovered in a Wal-Mart store in Broken Arrow, Oklahoma, where concealed RFID
readers tracked customer actions. Criminals with RFID readers can look for people carrying
valuable items and can launch selective attacks. However most of these issues can be tackled by
privacy enforcement laws, which can be incorporated into the nation’s legal framework. All these
privacy issues have created a lot of fear in the consumer community. In order to address these issues,
several approaches are proposed in the literature.
By eliminating manual scanning and data entry, RFID removes several sources of errors. Benefits of using RFID include the reduction of labour costs, the simplification of
business processes and the reduction of inventory inaccuracies.
Wal-Mart and the United States Department of Defense have published requirements that their
vendors place RFID tags on all shipments to improve supply chain management. Due to the size of
these two organizations, their RFID mandates impact thousands of companies worldwide. The
deadlines have been extended several times because many vendors face significant difficulties
implementing RFID systems. In practice, the successful read rates currently run only 80%, due to
radio wave attenuation caused by the products and packaging. In time it is expected that even small
companies will be able to place RFID tags on their outbound shipments.
Since January 2005, Wal-Mart has required its top 100 suppliers to apply RFID labels to all
shipments. To meet this requirement, vendors use RFID printer/encoders to label cases and pallets
that require EPC tags for Wal-Mart. These smart labels are produced by embedding RFID inlays
inside the label material, and then printing bar code and other visible information on the surface of the
label.
The table below lists a few more initiatives and success stories.
13.10 Conclusions
The need to organize and make decisions based on the data provided by the RFID tags is prominent.
To date, research that conjoins RFID technology and item-level inventory management on the
shop floor is at a preliminary stage, only inferring benefits upon application. The challenge is to
collect RFID data in a timely manner, to process such voluminous data, and to make timely decisions
that are tied into manufacturing execution systems. If the challenge is overcome, then the benefits
such as waste elimination, inventory reduction, automatic replenishment, stock-out reduction and
overall cost savings can be easily realized. Therefore, there is a need for RFID data-based effective
decision making algorithms that can lead to such benefits. In this study, a forecasting integrated
inventory management model that relies on real-time RFID data is presented. The goal of this
research is threefold: (1) to model and analyze the decisions made to manage inventory levels of
time sensitive materials in a shop floor manufacturing environment; (2) to investigate new
decision-making algorithms that substantiate the use of RFID for real-time visibility; and (3) to
explore the impact of fundamental control parameters on performance measures.
CHAPTER XIV
Dynamic Vehicle Routing with GPS and GIS
14.1 Introduction
Logistics, especially distribution management, is a vital component of supply chain management.
Physical distribution includes a set of activities executed to obtain the delivery of a product from the
production location to the end customer. Efficient distribution of goods entails, among other things, a
determination of routes and schedules for the fleet of vehicles so that total distribution costs are
minimized, while various requirements (constraints) are met. The constraints concern various facets
of the operation, such as vehicle capacities, time windows on pick up and/or delivery, time
availability of vehicles, etc.
The most central model in distribution management is the vehicle routing problem (VRP). In the
standard capacitated vehicle routing problem (CVRP) a homogeneous fleet of vehicles serves a set of
customers from a single distribution centre such that:
• the fixed capacity of a vehicle cannot be exceeded;
• each customer has known demand that must be satisfied;
• the demand of each customer is satisfied by exactly one visit of a single vehicle;
• each vehicle must leave and return to the distribution centre.
The objective is to generate a sequence of deliveries for each vehicle so that all customers are serviced
and the total distance traveled by the fleet is minimized. However, there exist several important
problems that must be solved in real-time. In what follows, we review the main applications that
motivate the research in the field of the real-time VRPs.
(i) Dynamic fleet management: Several large-scale trucking operations require real-time dispatching
of vehicles for the purpose of collecting or delivering shipments. Important savings can be achieved
by optimizing these operations.
(ii) Vendor-managed distribution systems: In vendor-managed systems, distribution companies
estimate customer inventory level in such a way to replenish them before they run out of stock.
Hence, demands are known beforehand in principle and all customers are static. However, because
demand is uncertain, some customers (usually a small percentage) may run out of stock and have to
be serviced urgently.
(iii) Couriers: Long-distance couriers need to collect outbound parcels locally before sending them to a
remote terminal to consolidate loads. Also, loads coming from remote terminals have to be distributed
locally. Most pick-up requests are dynamic and have to be serviced the same day if possible.
(iv) Rescue and repair service companies: There are several companies providing rescue or repair
services (broken car rescue, appliance repair, etc.).
(v) Dial-a-ride systems: Dial-a-ride systems provide transportation services to people between given
origin–destination pairs. Customers can book a trip one day in advance (static customers) or make a
request at short notice (dynamic customers).
(vi) Emergency services: Emergency services comprise police, fire fighting and ambulance services.
By definition, all customers are dynamic. Moreover, the demand rate is usually low so that vehicles
become idle from time to time. In this context, relocating idle vehicles in order to anticipate future
demands or to escape from downtown rush hour traffic jam is a major issue.
(vii) Taxi cab services: In taxi cab services, almost every customer is dynamic. As in emergency
services, relocating temporary idle vehicles is an issue.
Due to recent advances in information and communication technologies, vehicle fleets can now be
managed in real-time. When jointly used, devices like geographic information systems (GIS), global
positioning systems (GPS), traffic flow sensors and cellular telephones are able to provide relevant
real-time data, such as current vehicle locations, new customer requests and periodic estimates of road
travel times. If suitably processed, this large amount of data can in principle be used to reduce cost
and improve service level. To this end, revised routes have to be generated in a timely manner as soon as new
events occur. In recent years, algorithms relevant in a real-time context have been developed to handle the
vehicle routing problem. These algorithms are called real-time vehicle routing algorithms or dynamic
vehicle routing algorithms.
14.2 Technical Requirements
In this section we provide a brief introduction to some of the most essential technologies when
dealing with real-life applications of vehicle routing problems within a dynamic environment.
Communication and Positioning Equipment
The communication between the drivers of the vehicles and the dispatching center is essential in order
to feed the most up-to-date information into the routing system. The equipment for determining the
current position of the vehicles and the communication equipment for passing information on between
the dispatching center and the drivers in the vehicles will be introduced below.
• Naturally, positioning equipment like the GPS (Global Positioning System) is essential to a
dynamic vehicle routing system. The GPS is a constellation of 24 satellites orbiting Earth that
constantly send out signals giving their positions and time. Signals from three or four different
satellites at any given time can provide receivers on the ground with enough information to
calculate their precise location within a few meters depending on which version of the GPS system
is used.
• The communication equipment between the vehicle and the dispatching center is essential for the
structure of the routing system. Mobile telephone communication systems are one example of a
technology capable of providing this information. Another technology is a dedicated radio based
communications system. The main difference between these technologies lies in their initial and
operating costs. A mobile telephone communications system is relatively costly to operate, but has
low initial costs because the basic technology is provided by the telephone companies and the
GSM system today offers almost full coverage in most western industrialized countries. The initial
costs in implementing a radio based communications system are on the other hand very high,
because transmission masts will have to be put up and relatively expensive radio equipment must
be installed in every vehicle. In all, a radio based communication system has very high initial
costs, while the operating costs are almost negligible. Furthermore, the radio based system does
not offer the same flexibility as the mobile telephone communications system.
In Figure 1 the basic information flows between the vehicle and the dispatching center are shown.
Ideally, the dispatching center will know in which state the vehicle and the driver are at any given
point in time. However, as the above description indicates, this may prove to be infeasible for some
applications due to the operating costs of this method. However, within a real-life setting the
positioning information is transmitted at fixed intervals and an interpolation scheme is employed in
order to estimate the positions of the vehicles. Alternatively, the driver sends a message about his
current status and position to the dispatch center, each time he finishes the service at a customer.
Obviously, this approach does not offer the same level of information for the dispatcher to support her
decision as to which vehicle to dispatch to the next customer to be served. If the new information
provided by the now idle driver/vehicle makes the dispatcher change her mind on the current planned
route, she will have to call the other drivers manually to inform them about the changes in the current
routes. However, the conclusion of this discussion must be that a careful analysis will have to show
which approach to choose when designing the system. Of course, history shows that the prices of
communication decrease rapidly over the years. This could be a motivation to choose a
telecommunications based system. Below we describe the principles behind GPS, a crucial component
of the communication system.
Figure 1. Sketch of the information flow in a GPS based vehicle routing system.
Global Positioning System (GPS) is a technology that uses the position of satellites to determine
locations on earth anytime, in any weather, anywhere. GPS is a constellation of U.S. Government
satellites providing the most advanced and accurate positioning and navigation service. Twenty-four
GPS satellites orbit 12,000 miles above the earth, constantly transmitting the precise time and their
position in space. The GPS receivers listen to these satellite signals and use the information to
determine the location of the receiver, as well as how fast and in what direction it is moving. GPS
satellites circle the earth twice a day in a very precise orbit and transmit signal information to earth.
GPS receivers take this information and use the principle of triangulation to calculate the user's exact
location. Satellites are equipped with very precise clocks that keep accurate time to within three
nanoseconds - that’s 0.000000003, or three billionths, of a second. This precision timing is important
because the receiver must determine exactly how long it takes for signals to travel from each GPS
satellite.
GPS has 3 parts: the space segment, the user segment, and the control segment. The space segment
consists of 24 satellites, each in its own orbit 11,000 nautical miles above the Earth. The GPS
satellites each take 12 hours to orbit the Earth. The user segment consists of receivers, which you can
hold in your hand or mount in a vehicle. These receivers detect, decode, and process GPS satellite
signals. The control segment consists of ground stations (five of them, located around the world) that
make sure the satellites are working properly.
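The position calculation can be illustrated with a simplified trilateration sketch in Python. It assumes the receiver clock is already synchronized with the satellites (a real receiver must also estimate its clock bias, which is one reason a fourth satellite is needed in practice); the satellite coordinates and the receiver position are arbitrary illustrative values.

import numpy as np

def trilaterate(sat_positions, ranges):
    """Solve |p - s_i| = r_i for the receiver position p by subtracting the
    squared range equations, which yields a linear system in p."""
    s = np.asarray(sat_positions, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (s[0] - s[1:])                      # rows: 2 (s_1 - s_i)
    b = (r[1:] ** 2 - r[0] ** 2
         - np.sum(s[1:] ** 2, axis=1) + np.sum(s[0] ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Hypothetical satellite coordinates (km) and the resulting exact ranges.
sats = np.array([[20200.0, 0.0, 0.0],
                 [0.0, 20200.0, 0.0],
                 [0.0, 0.0, 20200.0],
                 [15000.0, 15000.0, 5000.0]])
receiver = np.array([1000.0, 2000.0, 500.0])
ranges = np.linalg.norm(sats - receiver, axis=1)
print(trilaterate(sats, ranges))   # recovers approximately [1000. 2000. 500.]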
Geographic Information Systems (GIS)
Transportation data is usually associated with spatial data, like traffic counts from particular sites, the
traffic volumes along particular roads or links, etc. Geographical Information System (GIS) can be
used as a database for storing transportation data. The primary advantage of using GIS as a database
for transportation data is the fact that GIS can integrate the spatial data and display the attribute data
in a user-chosen format. The chief sources of spatial data are the existing digitized files (e.g.:
Topologically Integrated Geographic Encoding and Referencing (TIGER) files in the US). The Global
Positioning System (GPS) is widely used as a tool for collecting spatial data. Systems which
chiefly use GPS as a spatial data source for a GIS are called GPS-GIS integrated systems. Most
western industrialized countries now have almost fully detailed road network databases.
14.3 The vehicle routing problem
Consider a graph G = (V, A), where V = {v_0, v_1, ..., v_n} is the vertex set and A is the arc set. Vertex v_0
denotes the depot, while V' = V \ {v_0} is the set of customers; a travel cost c_ij is associated with each arc
(v_i, v_j) ∈ A. When c_ij = c_ji for all (v_i, v_j) ∈ A the problem is said to be symmetric, and it is then common
to replace A with the edge set E = {(v_i, v_j) | v_i, v_j ∈ V, i < j}.
With each vertex v_i in V' is associated a quantity q_i of some goods to be delivered by a vehicle. The
VRP thus consists of determining a set of m vehicle routes of minimal total cost, starting and ending
at the depot, such that every vertex in V' is visited exactly once by one vehicle.
We also consider a service time δ_i (the time needed to unload all goods) required by a vehicle to unload
the quantity q_i at v_i. The total duration of any vehicle route (travel plus service times) may not surpass
a given bound D; in this context the cost c_ij is taken to be the travel time between the cities. The VRP
defined above is NP-hard.
A solution can be described by:
• a partition R_1, ..., R_m of V';
• a permutation σ_i of R_i ∪ {0} specifying the order of the customers on route i.
The cost of a given route R_i = (v_0, v_1, ..., v_k, v_{k+1}), where v_1, ..., v_k ∈ V' and v_0 = v_{k+1} = 0
(0 denotes the depot), is given by

    C(R_i) = Σ_{j=0}^{k} c_{j, j+1} + Σ_{j=1}^{k} δ_j

A route R_i is feasible if the vehicle stops exactly once at each of its customers and the total duration of
the route does not exceed the prespecified bound D, that is, C(R_i) ≤ D.
Finally, the cost of a problem solution S is

    F_VRP(S) = Σ_{i=1}^{m} C(R_i)
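A minimal sketch of these definitions in Python; the travel-time matrix, service times, routes and the bound D are illustrative.

def route_cost(route, c, delta):
    """C(R) = travel time along depot -> customers -> depot plus the
    service times of the visited customers (the depot has index 0)."""
    stops = [0] + list(route) + [0]
    travel = sum(c[i][j] for i, j in zip(stops, stops[1:]))
    service = sum(delta[v] for v in route)
    return travel + service

def solution_cost(routes, c, delta, D):
    """F_VRP(S) = sum of route costs; each route must respect the bound D."""
    costs = [route_cost(r, c, delta) for r in routes]
    return sum(costs), all(cost <= D for cost in costs)

# Depot = vertex 0, customers = 1..4; symmetric travel times.
c = [[0, 4, 6, 7, 5],
     [4, 0, 3, 6, 6],
     [6, 3, 0, 4, 7],
     [7, 6, 4, 0, 3],
     [5, 6, 7, 3, 0]]
delta = {1: 1, 2: 2, 3: 1, 4: 2}       # unloading (service) times
routes = [(1, 2), (3, 4)]              # a partition of the customers
print(solution_cost(routes, c, delta, D=20))   # -> (34, True)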
The characteristics of the attributes of the information forming the input for the vehicle routing
problem are as follows.
• Evolution of information: In static settings the information does not change, nor is the information
updated. In dynamic settings the information will generally be revealed as time goes on.
• Quality of information: Inputs could either: 1) be known with certainty (deterministic), 2) be
known with uncertainty (forecasts) or 3) follow prescribed probability distributions (probabilistic).
Usually, the quality of the information in a dynamic setting is good for near-term events and
poorer for distant events.
• Availability of information: Information could either be local or global. One example of local
information is when the driver learns of the precise amount of oil the current customer needs,
while a globally based information system would be able to inform the dispatcher of the current
status of all the customers' oil tanks. The rapid advances within information technologies increase
the availability of information. This fast growth in the amount of information available raises the
issue of when to reveal/make use of the information. For instance, the dispatcher may choose to
reveal only the information that is needed by the drivers although she might have access to all
information.
• Processing of information: In a centralized system all information is collected and processed by a
central unit. In a decentralized system some of the information could for instance be processed by
the driver of each truck.
A VRP is said to be static if its input data (travel times, demands, etc.) do not depend explicitly on
time, otherwise it is dynamic. Moreover, a VRP is deterministic if all input data are known when
designing vehicle routes, otherwise it is stochastic.
The Static Vehicle Routing Problem can be defined as the one in which:
1. All information relevant to the planning of the routes is assumed to be known by the planner before
the routing process begins.
2. Information relevant to the routing does not change after the routes have been constructed.
A static problem can be either deterministic or stochastic. In deterministic and static VRPs all data
are known in advance and time is not taken into account explicitly. In stochastic and static VRPs
vehicle routes are designed at the beginning of the planning horizon, before uncertain data become
known. Uncertainty may affect which service requests are present, user demands, user service times
or travel times. If input data are uncertain, it is usually impossible to satisfy the constraints for all
realizations of the random variables. If uncertainty affects the constraints but the objective function is
deterministic, it can be required that constraints be satisfied with a given probability (chance
constrained programming, CCP). In a more general approach, a first phase solution is constructed
before uncertain data are available and corrective (or recourse) actions are taken at a second stage
once all the realizations of the random variables become known. The objective to be minimized is the
first stage cost plus the expected recourse cost (stochastic programming with recourse, SPR).
14.4 The dynamic vehicle routing problem
It is argued that a relatively high number of the routing problems being modeled as static problems do
in fact include dynamic elements in a real-life situation. An example of this phenomenon is that on-site
service times, as well as travel times, are often noisy despite extensive empirical data. This implies that
the preplanned routes may collapse because of the new temporal conditions of the system. That is, what
seemed to be an optimal solution (or at least a good quality solution) might turn out to be a sub-optimal one.
In a dynamic Vehicle Routing Problem setting
1. Not all information relevant to the planning of the routes is known by the planner when the routing
process begins.
2. Information can change after the initial routes have been constructed.
The dynamic vehicle routing problem calls for online algorithms that work in real-time since the
immediate requests should be served, if possible. As conventional static vehicle routing problems are
NP−hard, it is not always possible to find optimal solutions to problems of realistic sizes in a
reasonable amount of computation time. This implies that the dynamic vehicle routing problem also
belongs to the class of NP−hard problems, since a static VRP should be solved each time a new
immediate request is received.
Figure 2: A dynamic vehicle routing scenario with 8 advance and 2 immediate request customers.
In Figure 2 a simple example of a dynamic vehicle routing situation is shown. In the example, two un-
capacitated vehicles must service both advance and immediate request customers without time
windows. The advance request customers are represented by black nodes, while those that are
immediate requests are depicted by white nodes. The solid lines represent the two routes the
dispatcher has planned prior to the vehicles leaving the depot. The two thick arcs indicate the vehicle
positions at the time the dynamic requests are received. Ideally, the new customers should be inserted
into the already planned routes without the order of the non-visited customers being changed and with
minimal delay. This is the case depicted on the right hand side route. However, in practice, the
insertion of new customers will usually be a much more complicated task and will imply a re-
planning of the non-visited part of the route system. This is illustrated by the left hand side route
where servicing the new customer creates a large detour.
Generally, the more restricted and complex the routing problem is, the more complicated the insertion
of new dynamic customers will be. For instance, the insertion of new customers in a time window
constrained routing problem will usually be much more difficult than in a non-time constrained
problem. Note that in an online routing system customers may even be denied service, if it is not
possible to find a feasible spot to insert them. Often this policy of rejecting customers includes an
offer to serve the customers the following day of operation. However, in some systems – as for
instance the pick-up of long-distance courier mail - the service provider (distributor) will have to
forward the customer to a competitor when they are not able to serve them.
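A minimal sketch in Python of handling an immediate request by cheapest insertion into the planned routes, reusing the duration bound D introduced earlier; the coordinates, routes and bound are illustrative, and a real system would also check time windows, capacities and the current vehicle positions.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route, depot):
    stops = [depot] + route + [depot]
    return sum(dist(p, q) for p, q in zip(stops, stops[1:]))

def insert_request(routes, new_customer, depot, D):
    """Try every position in every route; apply the cheapest feasible
    insertion, or return False if the request must be denied/deferred."""
    best = None  # (extra cost, route index, position)
    for r_idx, route in enumerate(routes):
        base = route_length(route, depot)
        for pos in range(len(route) + 1):
            candidate = route[:pos] + [new_customer] + route[pos:]
            length = route_length(candidate, depot)
            if length <= D and (best is None or length - base < best[0]):
                best = (length - base, r_idx, pos)
    if best is None:
        return False
    _, r_idx, pos = best
    routes[r_idx].insert(pos, new_customer)
    return True

depot = (0.0, 0.0)
routes = [[(2.0, 1.0), (4.0, 2.0)], [(-3.0, 1.0), (-4.0, -2.0)]]
print(insert_request(routes, (3.0, -1.0), depot, D=14.0), routes)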
A dynamic problem can also be deterministic or stochastic. In deterministic and dynamic problems,
all data are known in advance and some elements of information depend on time. For instance, the
VRP with time windows belongs to this class of problems. Similarly, the traveling salesman problem
(TSP) with time-dependent travel times is deterministic and dynamic. In this problem, a traveling
salesperson has to find the shortest closed tour among several cities passing through all cities exactly
once, and travel times may vary throughout the day. Finally, in stochastic and dynamic problems (also
known as real-time routing and dispatching problems) uncertain data are represented by stochastic
processes. For instance, user requests can arrive according to a Poisson process. Since uncertain data are
gradually revealed during the operational interval, routes are not constructed beforehand. Instead, user
requests are dispatched to vehicles in an on-going fashion as new data arrive.
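A minimal sketch in Python of generating immediate requests according to a Poisson process over the operational interval [0, T]; the arrival rate and the unit-square service region are illustrative assumptions.

import random

def generate_requests(rate, T, seed=42):
    """Exponential inter-arrival times with mean 1/rate; each request gets a
    uniformly distributed location in the unit square."""
    random.seed(seed)
    t, requests = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > T:
            break
        requests.append({"time": t, "x": random.random(), "y": random.random()})
    return requests

reqs = generate_requests(rate=2.0, T=8.0)   # on average two requests per hour
print(len(reqs), reqs[:2])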
The events that lead to a plan modification can be: (i) the arrival of new user requests, (ii) the arrival
of a vehicle at a destination, (iii) the update of travel times. Every event must be processed according
to the policies set by the vehicle fleet operator. As a rule, when a new request is received, one must
decide whether it can be serviced on the same day, or whether it must be delayed or rejected. If the
request is accepted, it is temporarily assigned to a position in a vehicle route. The request is
effectively serviced as planned if no other event occurs in the meantime. Otherwise, it can be assigned
to a different position in the same vehicle route, or even dispatched to a different vehicle. It is worth
noting that at any time each driver just needs to know his next stop. Hence, when a vehicle reaches a
destination it has to be assigned a new destination. Because of the difficulty of estimating the current
position of a moving vehicle, reassignments could not easily be made until quite recently. However, due
to advances in communication technologies, route diversions and reassignments are now a feasible
option and should take place if this results in a cost saving or in an improved service level. Finally, if
an improved estimation of vehicle travel times is available, it may be useful to modify the current
routes or even the decision of accepting a request or not. For example, if an unexpected traffic jam
occurs, some user services can be deferred. It is worth noting that when the demand rate is low, it is
useful to relocate idle vehicles in order to anticipate future demands or to escape a forecasted traffic
congestion.
Particular features
Dynamic VRPs possess a number of peculiar features, some of which have just been described. In
the following, the remaining characteristics are outlined.
(i) Quick response: Real-time routing and dispatching algorithms must provide a quick response so
that route modifications can be transmitted to the fleet in a timely manner. To this end, two approaches can be
used: simple policies (like the first-come first served (FCFS) policy), or more involved algorithms
running on parallel hardware (like the tabu search (TS) heuristics). As will be explained, the choice
between them depends mainly on the objective, the degree of dynamism and the demand rate.
(ii) Denied or deferred service: In some applications it is valid to deny service to some users, or to
forward them to a competitor, in order to avoid excessive delays or unacceptable costs. For instance,
the requests that cannot be serviced within their given time windows are rejected. When no time windows
are imposed, some user requests can be postponed indefinitely because of their unfavorable location.
This phenomenon can be avoided by imposing dummy time windows, or by adding a nonlinear delay
penalty to the objective function.
(iii) Congestion: If the demand rate exceeds a given threshold, the system becomes saturated, i.e., the
expected waiting time of a request grows to infinity.
The degree of dynamism of a problem
Designing a real-time routing algorithm depends to a large extent on how dynamic the problem is. The
degree of dynamism quantifies this concept. Without loss of generality, we
assume that the planning horizon is a given interval [0, T ] , possibly divided into a finite number of
smaller intervals. Let n_s and n_d be the number of static and dynamic requests, respectively. Moreover,
let t_i ∈ [0, T] be the occurrence time of service request i. Static requests are such that t_i = 0, while
dynamic ones have t_i ∈ (0, T]. Lund et al. [32] define the degree of dynamism as

    δ = n_d / (n_s + n_d),

which may vary between 0 and 1. Its meaning is straightforward. For instance, if δ is equal to 0.3,
then 3 customers out of 10 are dynamic. In his recent doctoral thesis, Larsen [10] generalizes the
definition proposed by Lund et al. in order to take into account both dynamic request occurrence times
and possible time windows. He observes that, for a given δ value, a problem is more dynamic if
immediate requests occur at the end of the operational interval [0, T ] . As a result he introduces a new
measure of dynamism:
    δ' = ( Σ_{i=1}^{n_s+n_d} t_i / T ) / (n_s + n_d)
It is worth noting that δ ' ranges between 0 and 1. It is equal to 0 if all user requests are known in
advance while it is equal to 1 if all user requests occur at time T . Finally, Larsen extends the
definition of δ' to take into account possible time windows on user service time. Let a_i and b_i be the
ready time and the deadline of client i (with t_i ≤ a_i ≤ b_i), respectively. Then,

    δ'' = ( Σ_{i=1}^{n_s+n_d} [T − (b_i − t_i)] / T ) / (n_s + n_d)
It can be shown that δ'' also varies between 0 and 1. Moreover, if no time windows are
imposed (i.e., a_i = t_i and b_i = T), then δ' = δ''. As a rule, vendor based distribution systems (such
as those distributing heating oil) are weakly dynamic. Problems faced by long-distance couriers and
appliance repair service companies are moderately dynamic. Finally, emergency services and taxi cab
services exhibit a strong dynamic behavior.
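A minimal sketch in Python computing the three measures δ, δ' and δ'' from a list of requests; the planning horizon and request data are illustrative.

def degree_of_dynamism(requests, T):
    """requests: list of (t_i, a_i, b_i) with 0 <= t_i <= a_i <= b_i <= T."""
    n = len(requests)
    n_d = sum(1 for t, _, _ in requests if t > 0)
    delta = n_d / n
    delta_p = sum(t / T for t, _, _ in requests) / n
    delta_pp = sum((T - (b - t)) / T for t, _, b in requests) / n
    return delta, delta_p, delta_pp

T = 8.0
requests = [(0.0, 0.0, 8.0),   # static request, no time window
            (2.0, 3.0, 5.0),   # immediate request with a time window
            (6.0, 6.0, 7.0)]   # late immediate request, tight window
print(degree_of_dynamism(requests, T))   # approximately (0.667, 0.333, 0.5)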
14.5 The dynamic traveling repairman problem (DTRP)
In many distribution systems, orders/demands arrive randomly in time and the dispatching of vehicles is a
continuous process of collecting demands, forming tours and dispatching vehicles. In dynamic
settings the waiting time is often more important than the travel cost. Examples of applications where
the waiting time is the important factor include the replenishment of stocks in a manufacturing
context, the management of taxi cabs, the dispatch of emergency services, geographically dispersed
failures to be serviced by a mobile repairman.
Bertsimas and Van Ryzin define the DTRP as follows:
• A repairman (or a vehicle/server) travels at unit velocity in a bounded convex service region of area A.
• All demands are dynamic and arrive in time according to a Poisson process with the intensity
parameter λ . The locations of the demands are independently and uniformly distributed in A.
• Each demand requires an independently and identically distributed amount of on-site service
time with mean duration s and second moment s². The fraction of time that the server spends on
on-site servicing of demands is denoted by ρ. For stable systems, ρ = λ s < 1.
• The system time, Ti , of demand i is defined as the elapsed time between the arrival of demand i
and the time the server completes the service of the demand. The steady-state system time
denoted by T is defined by T = lim_{i→∞} E[Ti].
• The waiting time, Wi, of demand i is defined as the time elapsed from when the demand arrived until
its service starts. Thus, Ti = Wi + si. The steady-state waiting time, W, is defined as W = T − s.
The problem is to design a routing policy that minimizes T. The optimal value of T is denoted T*.
Bertsimas and Van Ryzin stress that although the DTRP resembles a traditional queuing system,
queuing theory does not apply directly, because the system time T includes the travel times, which
cannot be regarded as independent variables. The approach of the authors is to derive lower
bounds for all policies for the average system time T . After that Bertsimas and Van Ryzin analyze
several policies and compare their performance to the lower bounds. To obtain these results the
authors use techniques from combinatorial optimization, queueing theory, geometrical probability and
simulation.
Bertsimas and Van Ryzin analyze a wide range of routing policies for the DTRP. A brief description
of these policies is given below.
• First Come First Served (FCFS): The demands are served in the order in which they are received
by the dispatcher.
• Stochastic Queue Median (FCFS-SQM): The FCFS-SQM policy is a modification of the FCFS
policy. According to the FCFS-SQM policy the server travels directly from the median of the
service region to the location of the demand. After the service has been completed, the server
returns to the median and waits for the next demand.
• Nearest Neighbor (NN): After completing service at one location the server travels to the nearest
neighboring demand.
• Traveling Salesman Problem (TSP): The demands are batched into sets of size n. Each time a
new set of demands has been collected, a Traveling Salesman Problem is solved. The demands are
served according to the optimal TSP tour. If more than one set exists at the same time, the sets are
served in an FCFS manner.
• Space Filling Curve (SFC): The demands are served, as they are encountered during repeated
clockwise sweeps of a circle C that covers the service region.
14.6 The integration of information technology and OR algorithms
Heuristics for the vehicle routing problem
Many authors have suggested a number of solution methods for the vehicle routing problem.
Heuristics have been developed and applied to many routing and scheduling case studies and several
researchers have compared and evaluated these methods. The advantage of the heuristics is their
ability to handle efficiently a large number of constraints and parameters of the routing problem. They
perform a relatively limited exploration of the search space and generally produce good quality
solutions within modest computing time. Therefore, classical heuristics are still widely used in
commercial software packages. Heuristics for the vehicle routing problem can be classified into two
main classes: classical heuristics, developed between 1960 and 1990, and metaheuristics, whose growth
has occurred in the last decade. The most well known classical heuristics are the Savings and Sweep
algorithms. The most successful metaheuristic approach is the tabu search heuristic. In terms of the
solution procedure, classical heuristics can be classified into sequential and parallel methods. Sequential heuristic
algorithms solve the vehicle routing sub-problems (clustering and finding the best tour) separately and
consecutively. Parallel heuristics produce routes and tours concurrently and can be classified into
construction and improvement methods. In metaheuristics, the emphasis is on performing a deep
exploration of the most promising regions of the solution space. These methods typically combine
sophisticated neighbourhood search rules, memory structures and recombination of solutions. The
quality of solutions produced by these methods is usually much higher than that obtained by classical
heuristics but the computing time is increased.
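The savings idea mentioned above can be sketched as follows: each customer pair (i, j) receives a saving s_ij = c_0i + c_0j − c_ij, and single-customer routes are greedily merged in order of decreasing savings as long as the vehicle capacity permits (route reversal is ignored for brevity). This is a minimal Python sketch of the classical Clarke-Wright heuristic; the cost matrix, demands and capacity are illustrative.

def clarke_wright(c, demand, capacity):
    """Parallel savings heuristic; vertex 0 is the depot."""
    n = len(c)
    routes = {i: [i] for i in range(1, n)}   # route id -> customer sequence
    member = {i: i for i in range(1, n)}     # customer -> route id
    load = {i: demand[i] for i in range(1, n)}
    savings = sorted(((c[0][i] + c[0][j] - c[i][j], i, j)
                      for i in range(1, n) for j in range(i + 1, n)),
                     reverse=True)
    for s, i, j in savings:
        ri, rj = member[i], member[j]
        if ri == rj or load[ri] + load[rj] > capacity:
            continue
        # merge only if the routes can be joined at the endpoints i and j
        if routes[ri][-1] == i and routes[rj][0] == j:
            pass
        elif routes[rj][-1] == j and routes[ri][0] == i:
            ri, rj = rj, ri
        else:
            continue
        routes[ri].extend(routes[rj])
        load[ri] += load[rj]
        for cust in routes[rj]:
            member[cust] = ri
        del routes[rj], load[rj]
    return list(routes.values())

# Depot = vertex 0, customers = 1..4; symmetric travel costs, vehicle capacity 7.
c = [[0, 4, 6, 7, 5],
     [4, 0, 3, 6, 6],
     [6, 3, 0, 4, 7],
     [7, 6, 4, 0, 3],
     [5, 6, 7, 3, 0]]
demand = {1: 3, 2: 4, 3: 2, 4: 3}
print(clarke_wright(c, demand, capacity=7))   # -> [[1, 2], [3, 4]]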
Advanced planning and scheduling systems: The OR practice using IT
Modern information technology supports the definition and analysis of the factors influencing the
routing process, a problem difficult to solve using empirical methods. Nowadays, there are software
applications that manage supply chain processes at all three decisional levels of management,
including features of transportation planning and execution:
• High level (strategic)––these applications offer transportation planning and fleet composition
modules;
• Middle level (tactical)––these applications allocate resources to the general transportation plan
derived from the previous level;
• Low level (operational)––at this level the applications are oriented to detailed scheduling of routes
and tours on a daily basis.
The mathematical technique most commonly applied to strategic transportation planning is linear
programming and related algorithms like network optimization and mixed integer programming.
Linear programming based approaches model the current transportation business including revenues
and costs. The models are extended to include decisions on the location of new facilities, acquisition
of transport resources and implementation of transport strategies. Transportation planning applications
at the tactical level include linear programming based models similar to the ones mentioned above.
Their functionality is found in most ERP systems. The operational level applications utilize mostly
heuristic algorithms. These algorithms can find a near-optimum solution for complex and multi-
parameter problems in a short time. Today, advanced computer programming languages and powerful
hardware support the use of these algorithms in advanced planning and scheduling (APS)
software systems. The ability to implement APS systems as a decision support tool for
transportation has been greatly enhanced by improvements in telecommunications, better supply
chain management systems, more comprehensive ERP implementations, and more powerful
forecasting tools. The power of OR models and optimization methods can be enhanced by incorporating
them within a decision support system that takes advantage of modern information technology.
14.7 A Generic Architecture for Dynamic Real-Time VRS
Dynamic Real-Time Vehicle Management Model
Most solution approaches to the VRP are in practice implemented in a centralized computer resource
(normally at headquarters), producing a daily plan to be provided to the vehicles before the beginning
of the distribution execution. Some of these approaches have been implemented in commercial
systems that are successfully used by numerous transportation, logistics, and manufacturing
companies over the last twenty years. These systems have not, however, been designed to address the
case in which the execution of delivery cannot follow the prescribed plan due to some unforeseen
event. When there is a need for real-time intervention, it may be necessary to re-compute the plan
using new input data. If a typical VRP approach is used for re-planning (i.e. re-planning the whole
schedule from scratch), many vehicle schedules may be affected, thus causing significant performance
inefficiencies (high overhead, nervousness, errors and high costs).
Thus, re-planning based on classical VRP solution methods may not be a realistic option. In the
absence of algorithms capable of ‘isolating’ the part of the VRP affected by the unexpected event in
order to minimize the disturbance to the overall schedule, interventions are typically performed
manually (for example, through voice communication between drivers and the logistics manager), and
the quality of decisions taken is naturally affected. The need to enhance existing methods, or develop
novel approaches, becomes clearer in view of recent advancements in mobile and positioning
technologies. Using such technologies, information about unforeseen events may be transmitted when
they occur directly from the affected truck(s) through a mobile network to headquarters and/or other
parts of the fleet. Given an efficient re-planning algorithm, appropriate and implementable plan
modifications may be transmitted back to the fleet in a timely fashion to respond effectively to the
new system state.
A real-time vehicle management model is schematically depicted in Figure 3, using control system
formalism. The model includes the following:
a) Control & Detection of the system’s state: This concerns the selection of the parameters to be
monitored, such as truck position, truck speed, truck inventory, and so on. These parameters need to
be regularly monitored, as they will trigger intervention if needed. It is noted that interventions may
lead to system “nervousness”; thus, the cost of intervention should be balanced against expected
benefits.
b) Projection: This concerns the revised distribution plan that will be generated by the system, when it
is observed that a vehicle is off schedule. Typical events that cause significant disturbances to
the original routing schedule are traffic jams, road works and adverse environmental conditions.
c) Decision-making and execution: The selection of the problem objectives has significant effects on
the decision-making and execution mechanisms employed. Objectives to be considered may include:
minimize the deviation from the original plan, minimize the cost of non-conformance, minimize risk,
and others.
Important decision-making issues include modeling of the real-time rerouting problem, and
development of appropriate solution methods. In this case, problem complexity and computational
time play a significant role. The reduction of complexity appears to be a necessary condition in
providing timely, implementable solutions. A classic way to reduce complexity is by using a
hierarchical approach, whereby, a complex monolithic problem is decomposed or disaggregated to
multiple, simpler problems that can be solved independently. The solutions of these lower-level
problems are combined to yield the solution of the global, higher-level, problem. By doing so, one
needs to consider the trade-off between optimality and computational efficiency.
System Implementation Issues
The model presented in Figure 3 can be realized through the use of mobile technologies, real-time
decision-making algorithms (along the lines presented in the previous section) and back-office
automated processing. In addition to providing the appropriate directions to the drivers of the fleet, the
customer base may be kept informed in regard to changes in the initial schedule, therefore improving
the company’s service quality and customer relations.
The proposed system architecture is shown in Figure 4. It comprises three major sub-systems. The
back-end system consists of a decision making module to facilitate automated decision making and
ERP connectivity. The Wireless Communication sub-system allows a two-way communication
between the back-end and front-end systems. The Front-end system enables a) a robust user interface,
and b) interaction between the software platform that is installed in the on-board truck computer and
the company’s back-end system.
Back-end sub-system
The back-end system is a decision support system that incorporates algorithms needed for real-time
routing, scheduling and monitoring of the current state of the fleet, as well as a robust database
containing both static (customers, geographical information of the road network, and so on) and
dynamic (orders, quantities, time window information, and so on) data. The back-end system also
provides ERP connectivity, which is especially useful in ex-van sales to provide information, such as
customer sales history, customer credit, and other decision-critical data.
Wireless Communication sub-system
The wireless communication sub-system consists of two parts: a) The mobile access terrestrial
network, which is responsible for the wireless interconnection of the back-end system with the front-
end on-board devices, and b) the positioning system, which is responsible for vehicle tracking.
The Mobile Access Terrestrial Network can be based on any of a number of existing or emerging
mobile technologies. In examining the options available to support an integrated distribution system,
bandwidth is perhaps the most important issue. The bandwidth requirements depend on the
computational model chosen. If vehicle onboard devices support much of the computations, then the
demand for bandwidth is different than in the case in which much of the computation is performed at
the headquarters. In either case, however, the demand for bandwidth is greater when compared to
existing applications, such as fleet tracking, graphical representation of real-time information in
digital maps, and voice communication.
GPRS, TETRA, and UMTS can provide always-on, packet-switched connectivity and high-speed data
rates. GSM is a mature technology; however, it cannot support high data-rate transmission effectively.
GPRS combines high data rates, always-on connectivity, mature technology, and has also been used
in fleet management systems. As far as TETRA is concerned, it is worth mentioning that it provides
much better security than GPRS, as well as supporting point-to-multipoint voice broadcasting. UMTS
is an emerging standard, and its use cannot be assessed prior to thorough validation testing.
As far as the Positioning System is concerned, positional accuracy of less than 100m is deemed
acceptable for urban distribution (accuracy requirements can, of course, be relaxed in non-urban
settings). An analysis of the technologies that can be used for location identification goes beyond the
scope of this chapter.
Fig. 4: Generic architecture for dynamic real-time vehicle management
14.8 Conclusions
In the first area, that of system design, the critical issues include the detection of the system's state, the
balance of intervention costs vs. expected benefits, and the extent of interventions (local
vs. global), and other parameters. System designs cannot be generalized beyond the extent achieved in
this chapter due to their heavy dependence on the characteristics of the problem addressed, and the
algorithmic approach chosen for intervention. Therefore, future research can assess alternative design
specifications against real-life case studies of real-time vehicle routing problems.
In the second area, a review of the vast existing literature on the vehicle routing problem has indicated
that some research is relevant and can be used as a basis for the development of appropriate
enhancements and/or novel decision support approaches in real-time vehicle re-planning. In this case,
problem complexity and computational time play a significant role in system effectiveness.
In the implementation area, it appears that there exist mature technologies to sufficiently address the
requirements of the real-time vehicle management system. In terms of the communication subsystem,
GPRS and TETRA are appropriate mobile access networks, while GPS technologies meet all the
related positioning requirements. For the front-end system, PDAs and Tablet PCs have significant
potential, since both their interface capability and computational power support efficient user
interaction and the local computational system requirements, respectively. All three fronts discussed
above present interesting challenges with significant implications for both VRP-related research and
the technology that will support effective logistics execution.
References
[1] Dimitris Bertsimas and Garrett Van Ryzin. A Stochastic and Dynamic Vehicle Routing Problem
in the Euclidean Plane. Operations Research, 39:601-615, 1991.
[2] Allan Larsen, The Dynamic Vehicle Routing Problem, PhD Thesis, Technical University of
Denmark, 1999.
[3] Vasileios Zeimpekis and George M. Giaglis, A Dynamic Real-Time Vehicle Routing System for
Distribution Operations in Georgios J. Doukidis and Adam P. Vrechopoulos eds., Consumer
Driven Electronic Transformation- applying New Technologies to Enthuse Consumers and
Transform the Supply Chain part-1, Springer Berlin Heidelberg, 2005, pp 23-37
[4] Gianpaolo Ghiani, Francesca Guerriero, Gilbert Laporte and Roberto Musmanno, Real-time
vehicle routing: Solution concepts, algorithms and parallel computing strategies, European
Journal of Operational Research, Volume 151, Issue 1, 16 November 2003, Pages 1-11
[5] Sotiris P. Gayialis and Ilias P. Tatsiopoulos, Design of an IT-driven decision support system for
vehicle routing and scheduling, European Journal of Operational Research, Volume 152, Issue
2, 16 January 2004, Pages 382-398
[6] D. Tarantilis, D. Diakoulaki and C. T. Kiranoudis, Combination of geographical information
     system and efficient routing algorithms for real life distribution operations, European Journal of
     Operational Research, Volume 152, Issue 2, 16 January 2004, Pages 437-453
[7] Vehicle Routing Problem (VRP), by Wolfgang Garn
http://osiris.tuwien.ac.at/~wgarn/VehicleRouting/vehicle_routing.html
Chapter 15
Optimization of the Supply Chain Network: Simulation, Taguchi, and
Psychoclonal Algorithm Embedded approach*
In today's market, the increased level of competitiveness and the uneven fall of final demand are pushing
the enterprises to strive for optimization of their process management. This involves
collaboration in multiple dimensions like information sharing, capacity planning, and reliability
among players. One of the most important dimensions of the supply chain network is to determine its
optimal operating conditions incurring minimum total cost. However, this is a tough job due to
the complexities involved in the dynamic interaction among multiple facilities and locations. In order
to resolve these complexities and to identify the optimal operating condition we have proposed a
hybrid approach incorporating simulation, Taguchi method, robust multiple nonlinear regression
analysis and the Psychoclonal algorithm. The Psychoclonal algorithm is an evolutionary algorithm
that inherits its traits from Maslow's need hierarchy theory and the artificial immune system (AIS). The
results obtained using the proposed hybrid approach are compared with those found by replacing the
Psychoclonal algorithm with the AIS and Response Surface Methodology (RSM), respectively. This
research makes it possible for the firms to understand the intricacies of the dynamics and
interdependency among the various factors involved in the supply chain. It provides guidelines to the
manufacturers for the selection of appropriate plant capacity and also proposes a justified strategy
for delayed differentiation.
Keywords: supply chain, simulation, Taguchi orthogonal array, regression analysis, Psychoclonal
algorithm
1 Introduction
In today's competitive and global market, the success of an industry relies upon the management of its supply chains. A supply chain is the interlinked network of suppliers, manufacturers, distributors and customers connected by transportation, information sharing, and financial infrastructure (Chopra and Meindl, 2001). The supply chain aims to provide quality products and services to the end consumer in the most efficient and economical manner (Sahin and Robinson 2002).
A plethora of research addressing various issues related to different aspects of supply chains is available in the literature. Gavirneni et al. (1999) focused on the management of material flow, while Strader et al. (1999) and Hewitt (1999) emphasized the role of information technology in the supply chain network. Shunk et al. (2006) applied an integrated enterprise modeling methodology, FIDO, to supply chain integration modeling. Spekman (1988) and Tompkins (1998) studied the interdependencies among supply chain members such as retailers, manufacturers and suppliers. Kwak et al. (2006) proposed a supplier-buyer model for the bargaining process, while an agent-based approach for e-manufacturing and supply chain integration was given by Zhang et al. (2006). Moreover, Altiparmak et al. (2006) considered the supply chain network as a multiobjective optimization problem and solved it with the aid of a genetic algorithm. In the past, researchers have adopted different methodologies to cope with supply chain problems: some have focused on deterministic approaches while others on non-deterministic ones. Cohen and Lee (1988), Arntzen et al.
* Contributed by Dr. M. K. Tiwari, Department of Industrial Engineering and Management, Indian Institute of Technology Kharagpur, 721302, India. This work is part of a research article accepted for publication in Computers and Industrial Engineering. The other co-authors of this work are Sanjay Shukla and Dr. Ravi Shankar.
(1995), and Hariharan et al. (1995) deployed mathematical programming, a deterministic approach, to optimize the supply chain network, while Zheng and Zipkin (1990) and Van Houtum et al. (1996) concentrated on stochastic process modeling to deal with supply chain issues.
In order to attain a competitive edge, a supply chain should be flexible, quick, dependable, and cost-efficient. These objectives are accomplished through high-speed information and material flows with low overhead costs. Coordination and collaboration in activities such as sharing sales information, inventory information, promotions, and shipments are essential to the success of the supply chain. One of the important problems in a supply chain network is to minimize its costs by identifying the optimal operating conditions. This is a difficult task because of the complexities involved in the dynamic interaction among multiple facilities and locations. To resolve these complexities and to maximize the overall supply chain performance, the members of the chain have to team up and clearly define their responsibilities along with the co-operation mechanism.
Many researchers have addressed the problem of a single operating condition, such as the capacity level at which a supply chain achieves its best performance. However, an analytical approach that identifies the optimal supply chain conditions by considering two or more factors at a time is lacking.
In this research, the proposed analysis clarifies the responsibility of each member in implementing the optimal supply chain strategy. In other words, this research attempts to minimize total supply chain costs and identify an optimal condition by considering various supply chain parameters simultaneously, viz. delayed differentiation, information sharing, capacity, reorder policy, lead time, and supplier's reliability. In order to meet this objective, we propose a hybrid approach encapsulating simulation, the Taguchi approach (1986), non-linear regression analysis (Montgomery 2001), and the Psychoclonal algorithm (Tiwari et al. 2004). First, simulation is used to model a comprehensive supply chain network that deals with a range of operational components and management levels. Although useful, simulation only evaluates the effectiveness of pre-specified conditions and does not provide any means of optimizing the system performance. Owing to this shortcoming, the authors have coupled the simulation model with the Taguchi orthogonal array, robust non-linear regression analysis, and the Psychoclonal algorithm.
In order to respond effectively to unanticipated events such as changes in demand, order quantity, or delivery time, and to maintain system stability, one needs to develop a robust supply chain. The Taguchi approach is well suited to attaining such robustness, as it is capable of increasing system robustness, reducing experimental costs, and improving overall quality by considering factors at discrete levels. We have used Taguchi's orthogonal array to identify the best parameter settings for a robust supply chain. Montgomery (2001) pointed out that Taguchi's orthogonal arrays are not efficient in exploring the search space completely when process parameters vary on a continuous scale. Therefore, this method is suitable only for optimizing qualitative variables, not quantitative ones. This limitation has motivated the authors to incorporate regression analysis and the Psychoclonal algorithm with simulation and the Taguchi approach. Regression analysis is employed to obtain the functional relationship between the quantitative process variables and the response of the supply chain network (supply chain costs). Further, the Psychoclonal algorithm is utilized as an optimization tool for determining the optimum settings of the quantitative process variables. The Psychoclonal algorithm is an evolutionary algorithm; it inherits its properties from Maslow's need hierarchy theory and the artificial immune system. The results achieved using the proposed hybrid approach are compared with those obtained by replacing the Psychoclonal algorithm with AIS and with Response Surface Methodology (RSM) respectively. These comparisons reveal that the Psychoclonal algorithm dominates both the AIS and the RSM in terms of minimizing the supply chain costs.
The remainder of the chapter is organized as follows. Section 2 deals with the simulation model of the supply chain network and the parameters affecting its performance. In section 3, experiments are conducted using the Taguchi method to obtain the costs of the supply chain network for different parameter settings. In section 4, mathematical modeling is carried out to obtain the functional relationship between the network's process parameters and the supply chain costs. To solve the resulting model, section 5 presents the Psychoclonal algorithm, section 6 discusses the results, and section 7 concludes the chapter.
2. Simulation Model of the Supply Chain Network
Figure 1. The supply chain network under study: three suppliers, one manufacturer and four retailers
From the figure, it is clear that each player of the supply chain is interdependent: the output of one stage acts as the input of the next stage. We have chosen ARENA (Rockwell Software, 2003) as our working simulation environment. Arena is general-purpose simulation software that provides full visualization of the model structure and parameters, run control, and animation facilities. The supply chain under consideration manufactures and markets televisions with a short life cycle. The retailers rely on the Bass model (refer to appendix A) for forecasting the demand of the product. They follow this strategy to meet their objective:
1. After estimating future demand, the inventory decision is made according to the current inventory level.
2. According to the inventory level, an order is placed with the manufacturer.
3. Inbound inventories are received from the manufacturer; then inventory cost and service are assessed.
For simplicity, we assume that the studied problem has two product families based on the size of the television frame. The demand for the product is assumed to resemble an S-shaped curve with an average of 1800 units per month. Since televisions are short-life-cycle products, the length of the simulation run is considered to be small (18 months). The supply chain network variables considered in the present simulation are listed in table 1.
Although each variable may have a wide range of possible levels, we focus on only three levels for each variable to represent the most likely variability corresponding to the first stage of the simulation. A brief description of the factors influencing the supply chain performance (supply chain costs) is given below:
1. Delayed differentiation (postponement): Alderson (1950) coined the term delayed differentiation, which is also termed postponement. It is a general method for enhancing the efficiency of a marketing system: it exploits component commonality and redesigns the production process so that expensive operations can be delayed. It is an effective means of containing the increased costs arising from product multiplicity while maintaining good customer service (Swaminathan and Tayur 1998).
2. Information sharing (i): Information sharing plays an important role in a supply chain network. Each player makes an ordering decision based on the information received from the downstream player. Lack of information sharing among the players results in high amplification of demand (Sterman 1989). This effect is highly undesirable as it exacerbates the supply chain costs.
3. Capacity (c): Capacity indicates the ability to satisfy future demand; it may be defined as the amount of product that can be produced per time period. Sharing capacity information among the entities of a supply chain is necessary for integrated planning (Gaonkar and Viswanadham 2001). The manufacturer makes a better production decision, in terms of cost savings, when he knows the capacity of each supplier. A higher capacity level leads to a greater benefit from information sharing (Gavirneni 1997).
4. Reorder policy (R): Each retailer follows an (S, s) policy. When the inventory level becomes equal to or falls below the reorder point, s, the retailer places an order that brings inventory up to the level S. This is also known as the min-max policy.
5. Lead time (l): Lead time is one of the most important factors influencing supply chain performance. It may be defined as the time interval between placing and receiving an order. A longer lead time results in a smaller benefit from information sharing (Towill et al., 1992), and a large lead time is a major contributor to the bullwhip effect (Lee et al. 1994).
6. Reliability of the suppliers (r): The reliability of the suppliers affects the performance of the supply chain in terms of the availability of materials and quality components at the time of production. Unreliable supply decreases the quality of the finished product and increases demand variability.
The simulation procedure consists of three main phases, which are discussed in the next three subsections.
2.1 Generation of Demand and Inventory Policy
Each retailer estimates demand by combining the Bass model with a demand variance. The forecasted demand is assumed to be much larger than the forecasting error, so the possibility of generating negative demand is negligible (Zhao and Xie 2002). Once the demands of all retailers are generated for the 18-month life cycle, the capacity is designed to match the demand. Let the manufacturer's monthly capacity be represented by C. Retailers follow the (S, s) policy to control the inventory level. Under the min-max policy, every week, if inventory drops below s, the retailers place
an order to restore the stock to an explicit level S. This explicit level is determined by calculating the quantity needed between the time the order is placed and the time it is received, along with a quantity of safety stock to allow for variation in demand. Mathematically, S is given as:
S = μ(T + LT) + Θ σ √(T + LT)    … (1)
Where,
μ is the periodic expected demand, which can be estimated by employing the Bass model.
T is the time interval between orders, which is one week for the present model.
LT is the lead time, which includes the production time needed when no information sharing and no product differentiation are considered.
differentiation are considered.
Θ is the safety factor (number of standard deviations) associated with the desired service level; for the present simulation model the service level is set to 97.5%.
σ is the standard deviation of demand per time period.
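As a small illustration, the sketch below (Python, with hypothetical values for the demand and lead-time parameters; only Θ = 1.96 for the 97.5% service level is taken from the text) computes the order-up-to level S of equation (1):

import math

def order_up_to_level(mu, sigma, T, LT, theta):
    # Order-up-to level S = mu*(T + LT) + theta*sigma*sqrt(T + LT), as in equation (1).
    return mu * (T + LT) + theta * sigma * math.sqrt(T + LT)

# Hypothetical example: weekly review (T = 1 week), lead time of 2 weeks,
# expected weekly demand of 450 units, demand standard deviation of 60 units,
# and theta = 1.96 for the 97.5% service level used in this chapter.
S = order_up_to_level(mu=450, sigma=60, T=1, LT=2, theta=1.96)
print(round(S, 1))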
Due to the S-shaped demand pattern, (S, s) is adjusted according to the Bass model. Using the difference (S − s) specified as factor D in table 1, we derive the reorder point, s, for each order cycle. The parameters of the Bass model are derived from previous sales of a similar product, as shown in table 2.
Table 2. Estimation of ω, υ and K in the Bass model
These parameters (ω, υ, K) are determined by minimizing the sum of squared errors using Excel Solver. Note that for a new product without a sales history, the demand forecast is often estimated from a comparable older-generation product.
2.2 The Production and Delivery Decision
The manufacturer applies a lot-sizing method (Chung and Lin 1988) in planning its production activities. In order to produce to actual demand and to lower its WIP, the manufacturer must reduce the bullwhip effect (Lee et al. 1994). Clearly, the retailers need to share more information, and the manufacturer needs to make production-planning decisions based on this additional information. Under no information sharing, the suppliers only know the orders placed by the manufacturer, the manufacturer only receives order information from the retailers, and the manufacturer plans his production based on forecasts obtained from the Bass model. Under partial information sharing, the manufacturer plans his production based on the Bass-model forecasts but adjusts production according to the retailers' projected future net requirements, which are derived from the retailers' current inventory and future forecasts. When complete information is available, the retailers provide real-time updated inventory status whenever a demand occurs, and the manufacturer adjusts the production quantity accordingly. It is therefore evident that the Bass model is used only for planning purposes and that the manufacturer essentially implements a pull production system.
The manufacturer makes shipping decisions from on-hand inventory after finishing the current period's production. He fills each retailer's order plus backorders. If on-hand inventory is insufficient, each retailer receives a share proportional to its order plus backorder. Shipments reach the retailers after the lead time.
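As an illustration of this allocation rule, the following sketch (Python, with hypothetical order quantities) distributes insufficient on-hand inventory among the retailers in proportion to their orders plus backorders:

def allocate_shipments(on_hand, requirements):
    # Allocate on-hand inventory proportionally to each retailer's
    # order-plus-backorder requirement when stock is insufficient.
    total = sum(requirements)
    if total <= on_hand:
        return list(requirements)  # every order can be filled in full
    return [on_hand * r / total for r in requirements]

# Hypothetical example: 900 units on hand, four retailers requiring
# 400, 300, 200 and 300 units (orders plus backorders).
print(allocate_shipments(900, [400, 300, 200, 300]))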
2.3 Simulation Run and Performance Measure
The manufactured televisions have a short life cycle, so we have to study the system behaviour over the product's entire life cycle. The simulation required to study this sort of system is termed a terminating simulation: it starts at a prescribed initial state and terminates when the system reaches a prescribed terminal state or time. In our case it starts at t = 0 and terminates at t = 18 months. In addition, we have made 1200 replications of the system to obtain a good point estimator.
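A minimal sketch of such a terminating simulation with replications is shown below (Python; the cost returned by one_replication is a random stand-in, not the ARENA model used in this chapter):

import random
import statistics

def one_replication(seed):
    # Stand-in for one 18-month terminating run of the supply chain model;
    # returns a total supply chain cost (in $ million) for that replication.
    rng = random.Random(seed)
    return 120 + rng.gauss(0, 5)  # placeholder cost model only

costs = [one_replication(seed) for seed in range(1200)]  # 1200 replications
print(statistics.mean(costs), statistics.stdev(costs))   # point estimate and spread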
The objective of the supply chain considered in this research is to minimize its costs by identifying the optimal levels of the supply chain parameters. The total supply chain cost consists of transportation cost, carrying cost, ordering cost, setup cost, and backorder cost. The relation between specific operating conditions and the supply chain costs is shown in figures 2.1 – 2.6.
3. Parameter Design – Taguchi Method
The aim of parameter design is to select the optimal levels of the controllable system parameters. This leads to a high level of performance under a diverse range of conditions and makes the system robust against the noises that cause inconsistency. Studying the design parameters one at a time, or by trial, is a widespread approach to optimizing the required response. However, it leads either to a very long design time or to premature termination of the design process because of the cost of conducting a large number of experiments: a complete study of 6 design parameters at 3 levels would necessitate 3^6 = 729 experimental evaluations, and the time and cost required to carry out such a comprehensive study are not desirable. The Taguchi approach (1986) provides a convenient and effective means of determining optimal or near-optimal design parameters by using orthogonal arrays and linear graphs. We choose the L27 (3^13) design for the controllable factors. Using the L27 array, up to 13 parameters can be studied at 3 levels by running only 27 experiments instead of 3^6. To map our model completely, a linear graph is presented in figure 3.
Figure 2.1 Total cost under different levels of delayed differentiation. Figure 2.2 Total cost under different levels of information sharing. Figures 2.3 – 2.6 Total cost under different levels of capacity, retailer's S−s value, lead time (in days) and supplier's reliability.
Figure 3. Linear graph for the L27 (3^13) orthogonal array, showing the assignment of factors A–F and their interactions to the array columns
In the linear graph for L27 (3^13), each circle represents a column of the orthogonal array. Columns 1, 2 and 5 represent factors A, B and C respectively. Columns 3 or 4, 6 or 7, and 8 or 11 can represent the interactions between A and B, A and C, and B and C respectively, while columns 9, 10 and 12 can be allotted to the remaining factors D, E and F.
3.1 Signal-to-Noise (S/N) Ratio
The signal-to-noise ratio measures the robustness of the response obtained from the Taguchi model. The signal measures the typical value of the response, whereas the noise is a measure of variability and represents the undesired component. S/N ratios are defined in such a manner that the larger the numerical value, the better the performance (Montgomery 2001). In this research the response has the smaller-the-better characteristic, hence the S/N ratio for the supply chain costs is given as:
S/N ratio = −10 log10(S0)    … (2)
where S0 = (1/n) Σ_{i=1..n} yi² and n is the number of replicates.
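As a small illustration, the smaller-the-better S/N ratio of equation (2) can be computed as follows (Python, with hypothetical replicate costs):

import math

def sn_smaller_the_better(y):
    # S/N = -10 * log10( (1/n) * sum(y_i^2) ), as in equation (2).
    s0 = sum(v * v for v in y) / len(y)
    return -10.0 * math.log10(s0)

# Hypothetical replicate costs (in $ million) for one row of the L27 array.
print(round(sn_smaller_the_better([121.4, 118.9, 120.2]), 3))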
We can draw plots between the S/N ratio, obtained from the Taguchi orthogonal array, and the factor levels, as shown in figure 4.
Figure 4.1 S/N ratio for delayed differentiation. Figure 4.2 S/N ratio for information sharing. Figures 4.3 – 4.6 S/N ratios for capacity, retailer's S−s value, lead time (in days) and supplier's reliability.
From the plots, we find the following optimal level for each factor:
Factor A (Delayed differentiation) = Complete
Factor B (Information sharing) = Complete
Factor C (Monthly capacity) = 100
Factor D (Reorder quantity) = 25
Factor E (Lead time) = 6 days
Factor F (Supplier reliability) = 90%
In optimization problems the use of the Taguchi orthogonal array is limited because it assumes factors at discrete levels; hence the optimal levels of factors varying on a continuous scale cannot be determined by it. Therefore, in our case only the two qualitative factors (delayed differentiation and information sharing) can be optimized using the Taguchi approach.
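To illustrate how such optimal levels are read off the S/N plots, the sketch below (Python, with a hypothetical extract of the L27 results) averages the S/N ratio over the runs at each level of each factor and picks the level with the largest mean S/N:

from collections import defaultdict

def best_levels(runs, factors):
    # For each factor, average the S/N ratio over the runs at each level and
    # return the level with the largest mean S/N (larger is better).
    best = {}
    for f in factors:
        by_level = defaultdict(list)
        for run in runs:
            by_level[run[f]].append(run["sn"])
        best[f] = max(by_level, key=lambda lvl: sum(by_level[lvl]) / len(by_level[lvl]))
    return best

# Hypothetical extract of an L27 results table: factor levels and the S/N ratio of each run.
runs = [
    {"capacity": 50,  "lead_time": 6,  "sn": 26.1},
    {"capacity": 100, "lead_time": 10, "sn": 27.8},
    {"capacity": 150, "lead_time": 14, "sn": 25.9},
    {"capacity": 100, "lead_time": 6,  "sn": 28.2},
]
print(best_levels(runs, ["capacity", "lead_time"]))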
4. Mathematical modeling of a supply chain network using regression analysis
One of our goals is to design a network that is robust when exposed to disturbances. To attain this, we use the factor levels identified by the Taguchi method as the initial point; therefore we start with:
Capacity = 100
Reorder quantity = 25
Lead time = 6 days
Supplier's reliability = 90%
The experiments are carried out with suitable step sizes of the parameters, as listed in table 3.
x1 = (Capacity − 98.5) / 8
x2 = (Reorder quantity − 32.6) / 6
x3 = (Lead time − 5.3) / 5
x4 = (Supplier reliability − 88) / 10    … (4)
Variables in coded form are listed in table 4
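The coding of equation (4) can be applied and inverted with a short helper such as the one below (Python; the centre and scale values are those of equation (4), and the example inputs are illustrative):

# Centre and scale used in equation (4) for each process variable.
CODING = {
    "capacity":             (98.5, 8),
    "reorder_quantity":     (32.6, 6),
    "lead_time":            (5.3, 5),
    "supplier_reliability": (88.0, 10),
}

def encode(name, value):
    # Natural units -> coded value x = (value - centre) / scale.
    centre, scale = CODING[name]
    return (value - centre) / scale

def decode(name, x):
    # Coded value -> natural units.
    centre, scale = CODING[name]
    return centre + scale * x

print(encode("capacity", 100))                 # a capacity of 100 in coded form
print(decode("supplier_reliability", 0.7246))  # a coded value back to natural units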
Regression analysis provides the functional relationship between the independent variables and the response. The regression equation is given as (Montgomery 2001):
ŷ = β̂0 + Σ_{i=1..k} β̂i xi + Σ_{i=1..k} β̂ii xi² + Σ_{i<j} β̂ij xi xj    … (5)
where xi is the i-th process parameter, the β̂'s are the estimated regression coefficients and k is the number of process parameters. The least-squares method is used to estimate the regression coefficients:
β̂ = (β̂0, β̂1, …, β̂k, …, β̂kk)' = (X'X)^(−1) X'y    … (6)
where y is the vector of observed responses (supply chain costs) and X is the design matrix whose i-th row contains a leading 1 followed by the settings of the regressors for the i-th experimental run:
X = [ 1  x11  x12  …  x1k
      1  x21  x22  …  x2k
      ⋮   ⋮    ⋮        ⋮
      1  xn1  xn2  …  xnk ]    … (7)
The fitted nonlinear regression equation (equation 9) captures the interdependencies among the various supply chain parameters. By solving this equation with a suitable optimization tool, the optimum levels of these factors can be predicted.
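A compact sketch of the least-squares fit of such a second-order model (Python with NumPy; the design points and cost values are hypothetical, and only two coded variables are used for brevity) is:

import numpy as np

def fit_second_order(x, y):
    # Build the design matrix with columns 1, x_i, x_i^2 and x_i*x_j, then
    # estimate the coefficients by least squares (equations 5-7).
    n, k = x.shape
    cols = [np.ones(n)]
    cols += [x[:, i] for i in range(k)]                                      # linear terms
    cols += [x[:, i] ** 2 for i in range(k)]                                 # squared terms
    cols += [x[:, i] * x[:, j] for i in range(k) for j in range(i + 1, k)]   # interactions
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)                             # (X'X)^-1 X'y, computed stably
    return beta

# Hypothetical coded design points (two variables) and simulated costs ($ million).
x = np.array([[-1, -1], [-1, 0], [-1, 1], [0, -1], [0, 0],
              [0, 1], [1, -1], [1, 0], [1, 1]], dtype=float)
y = np.array([124.1, 121.7, 120.9, 122.4, 118.8, 119.5, 123.2, 120.0, 121.1])
print(fit_second_order(x, y).round(3))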
5. Solution methodology
In the past, Shang et al. (2004) solved this type of model using Response Surface Methodology (RSM). A major drawback of RSM is that, during partial differentiation, only one parameter is treated as a variable while the rest are held constant, whereas in an actual dynamic supply chain network all the players are interrelated, so it is not reasonable to hold any variable constant at any time. The ridge effect also limits the use of RSM in resolving regression equations having four or more variables. To overcome these drawbacks and to explore the search space completely, we adopt the Psychoclonal algorithm for the optimization of the objective function depicted in equation 9.
5.1 Psychoclonal Algorithm: an Overview
The Psychoclonal algorithm (Tiwari et al. 2004) inherits its attributes from Maslow's need hierarchy theory and the Artificial Immune System (AIS). The salient features of this algorithm are as follows.
5.1.1 Maslow's Need Hierarchy Theory
Abraham Maslow devised an influential theory of human nature by amalgamating a large body of research on human development (Maier 1965). Based on Maslow's research, the hierarchy of human needs can be viewed as a personal evolution. This hierarchy has a pyramidal structure, as shown in figure 5.
Figure 5. Maslow's need hierarchy (from bottom to top): physiological needs, safety needs, social needs, growth needs and self-actualization needs
In the hierarchy of human needs, a deficiency is detected from the bottom (the starting point) to the top (the end point); each detected deficiency is dealt with and removed before progressing to the next level. Analogous to human needs, an evolutionary algorithm can also be considered to have deficiencies, and for efficient evolution of the algorithm all deficiencies should be detected and dealt with. The analogy between Maslow's need hierarchy theory and an algorithmic evolution is as follows.
1. Physiological needs: The physiological needs lie at the bottom of Maslow's pyramid and refer to hunger, thirst, and the bodily comfort of people. In optimization, this corresponds to the generation of feasible solutions based upon the problem environment.
2. Safety needs: This is the second set of needs. It has to do with physical and psychological safety from external threats to our well-being. External threats in an engineering milieu are the constraints imposed on the problem. For continued existence, solutions are subjected to these threats, or constraints, and properly evaluated.
3. Social needs: Next, social needs become dominant; a person strives for meaningful relations with others. In optimization, this corresponds to the selection of candidate solutions through interaction between candidates.
4. Growth needs: Every entity desires to produce entities of its kind through reproduction. Here, the candidate solutions diversify to extend the search space. This is the basic mechanism of every evolutionary process.
5. Self-actualization needs: Self-actualization is the need to maximize one's potential and the fulfilment allied with the realization of one's capabilities. In Maslow's words, "this need might be phrased as the desire to become more and more what one is, to become everything that one is capable of becoming". This is true whether the entity is a human or a solution to an optimization problem. From the above description it can be concluded that the more the self-actualization needs are fulfilled, the stronger the individual becomes. This is the reason for carrying out a number of iterations to decide the near-optimal/optimal solution.
5.1.2 A Brief Description of the Artificial Immune System (AIS)
In the last few decades, researchers involved in the design and optimization of engineering systems have paid considerable attention to evolutionary approaches such as the Artificial Immune System (AIS), Genetic Algorithms (GA) and Artificial Neural Networks (ANN). AIS provides a way to deal with complex computational problems such as pattern recognition, elimination, machine learning and optimization.
The vertebrate immune system is a complex system with a large number of functional components. Its main task is to survey the organism in search of malfunctioning cells of its own body and of foreign substances, called antigens, that are recognized by the system. The constituents of the immune system that recognize antigens are called antibodies. There are two major categories of immune cells: B cells and T cells. B cells recognize antigens free in solution (in the blood stream), whereas T cells require antigens to be presented by accessory cells. After generation, T cells migrate into the thymus, where they mature; during maturation, all T cells that recognize self-antigens are excluded from the population, a process termed negative selection (Nossal 1994). If a B cell interacts with a non-self antigen above an affinity threshold, it proliferates and differentiates into memory and effector cells, a process termed clonal selection (Ada and Nossal 1987). In contrast, if a B cell recognizes a self-antigen, it may be suppressed, as postulated by the immune network theory (Jerne 1974).
When a non-self antigen is recognized by a B-cell receptor with above-threshold affinity, the B cell is selected to proliferate and to produce a high volume of antibodies, as shown in figure 6.
Figure 6. Clonal selection: a B cell recognizing a non-self antigen is selected and undergoes proliferation and maturation
During reproduction, the B-cell progenies (clones) under strong selective pressure participate in a hypermutation process. The whole process of mutation and selection is known as the maturation of the immune response.
Depending upon affinity, a selection is made from the matured pool of antibodies, which results in high quality solutions being retained. From an engineering point of view this is the most alluring characteristic of the immune system, because the candidate solutions with higher affinity must
somehow be preserved as high quality candidate solutions and be replaced only by matured clones. In a T-cell-dependent immune response, the repertoire of antigen-activated B cells is diversified by two mechanisms, hypermutation and receptor editing, as shown in figure 7.
Figure 7. Diversification of the antigen-binding site through hypermutation and receptor editing
Random changes are introduced into the genes responsible for the Ag-Ab interaction, and occasionally such a change leads to an increase in the affinity of the antibody. The hypermutation operator works in a similar fashion to mutation; the difference lies in the rate of modification, which depends upon the antigenic affinity. Antibodies with higher affinity are hypermutated at a low rate, while antibodies with lower affinity are hypermutated at a high rate. Hypermutation guides the solutions toward local optima, whereas receptor editing helps them to escape from local optima.
Based on the clonal selection principle and Maslow's needs, the proposed Psychoclonal methodology is conceived; it is described in the following section.
5.2 Proposed Algorithm
Considering the evolution of the AIS along a track similar to that of Maslow's need hierarchy theory leads to the Psychoclonal algorithm. Need level 5 makes the real difference between the AIS and the Psychoclonal algorithm: in the Psychoclonal algorithm, novel feasible antibodies are constantly introduced to make a broader exploration of the search space and to prevent saturation of the population with similar antibodies. This feature gives the algorithm a unique ability to maintain the diversity of the population. The evolution of the Psychoclonal algorithm is shown in figure 8.
Figure 8. Flowchart of the Psychoclonal algorithm: receptor editing of Abd and updating of the memory set (self-actualization needs) are repeated while gen < max_gen, after which the best antibody Ab and its affinity fk are output
The figure depicts the sequence in which Maslow's needs are fulfilled. The acronyms used in the algorithm are explained in appendix B.
Need level 1. Physiological needs: Randomly generate an initial set Ab of feasible antibodies (candidate solutions) from the search space.
Need level 2. Safety needs: Determine the affinity fk of every antibody in Ab in relation to the antigen Agj, i.e. evaluate each candidate solution against the objective function and the problem constraints.
Need level 3. Social needs: Here, interaction is carried out between antibodies in order to identify their relation to each other.
3.1 From the randomly generated set Ab, select the n highest affinity antibodies and compose a new set Abk,n of high affinity antibodies in relation to Agj.
3.2 The newly composed set of antibodies is cloned proportionally to the antigenic affinities, generating a repertoire Ck of clones: the higher the antigenic affinity, the higher the number of clones generated for each of the n selected antibodies.
Need level 4. Growth needs: The set Ck is submitted to hypermutation, inversely proportional to the antigenic affinity, generating a population C*k of matured clones (the higher the affinity, the smaller the mutation rate).
After satisfaction of need level 4 it is necessary to check need level 2 once again for these entities, as they are new denizens of the society; therefore they must be exposed to the threats and properly evaluated according to the objective function.
Need level 2. Safety needs: Determine the affinity f*k of the matured clones C*k in relation to the antigen. The n best antibodies are then selected to compose the memory set.
Need level 5. Self-actualization needs: Finally, replace the d lowest affinity antibodies of Ab by the new antibodies Abd and choose the best among them to fulfil the self-actualization level. As mentioned above, this level becomes stronger and stronger over the generations. The process continues until the maximum number of generations, max_gen, is reached.
5.3 Implementation of the Psychoclonal Algorithm for the Optimization of the Supply Chain Network
5.3.1 Need level 1
Physiological needs: Defining the search space and then randomly selecting antibodies from it satisfies the physiological needs of the problem. Each antibody consists of a string of the candidate solution, taken from the search space (−1, 1), as shown in figure 9.
Figure 9. Antibody representation: a string of the four coded variables x1, x2, x3 and x4
n = size of population
Π = hypermutation rate
ρ = decay factor
After satisfying need level 4, the antibodies formed are again attacked by the antigens, and the affinity vector of the matured clones is calculated using the objective function given in equation (9). The weak clones are edited at this level using receptor editing.
5.3.5 Need level 5
Self-actualization needs: Finally, the best solution is retained in need level 5 as the memory set; the weak solutions are also edited at this level. Thus, with the number of generations, the quality of the solutions in the memory set improves. The process repeats until gen = max_gen.
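A condensed sketch of this loop is given below (Python; the objective function toy_cost is only a stand-in for the fitted regression model of equation (9), which is not reproduced here, and all parameter values are illustrative):

import random

def psychoclonal(objective, dim=4, pop=30, n_select=10, clones_per_ab=5,
                 d_replace=5, max_gen=300, seed=1):
    # Sketch of the Psychoclonal loop: random antibodies (need 1), affinity
    # evaluation (need 2), selection and cloning (need 3), affinity-dependent
    # hypermutation (need 4), and replacement of the d weakest antibodies by
    # fresh random ones while the best solutions are retained (need 5).
    rng = random.Random(seed)
    new_ab = lambda: [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    ab = [new_ab() for _ in range(pop)]                       # need level 1
    for gen in range(max_gen):
        ab.sort(key=objective)                                # need level 2 (minimization)
        selected = ab[:n_select]                              # need level 3: best n antibodies
        matured = []
        for rank, parent in enumerate(selected):
            n_clones = clones_per_ab * (n_select - rank)      # more clones for higher affinity
            step = 0.02 * (rank + 1)                          # need level 4: low mutation rate for high affinity
            for _ in range(n_clones):
                clone = [min(1.0, max(-1.0, g + rng.gauss(0.0, step))) for g in parent]
                matured.append(clone)
        pool = sorted(ab + matured, key=objective)            # re-check need level 2 for the clones
        ab = pool[:pop - d_replace] + [new_ab() for _ in range(d_replace)]  # need level 5
    return min(ab, key=objective)

# Stand-in objective over the coded variables x1..x4 in [-1, 1]; NOT equation (9).
toy_cost = lambda x: 120.0 + sum((xi - 0.3) ** 2 for xi in x)
print(psychoclonal(toy_cost))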
6. Results and discussions
In this research, six factors have been considered for the simulation modeling of a supply chain network. Among these, two factors are qualitative and the rest are quantitative, so they vary on discrete and continuous scales respectively. Clearly, the optimal levels of the former can be identified using the Taguchi orthogonal array, while the latter cannot be dealt with by this method. In the past, Shang et al. (2004) made an attempt to identify the optimal settings of the quantitative factors by deploying response surface methodology (RSM) on a simulation-based regression model. Nevertheless, because of the drawbacks associated with RSM, we have been motivated to use a new evolutionary algorithm, the Psychoclonal algorithm, to identify the optimal settings of the quantitative factors incurring the minimum supply chain costs. After successful implementation of the Psychoclonal algorithm on the objective function given in equation (9), we obtain the optimal levels of the quantitative factors in coded form (in the range [−1, 1]) as listed in table 5.
For comparison, the optimization has also been carried out by replacing the Psychoclonal algorithm with the AIS and with Response Surface Methodology (refer to appendix C). The results obtained by employing the AIS are listed in tables 7 and 8, while the results found using RSM are provided in tables 9 and 10.
The Psychoclonal algorithm and the AIS were programmed in C++ on an IBM PC with a Pentium IV CPU at 1.9 GHz. The minimum total supply chain cost corresponding to the optimal setting of the factors obtained by the Psychoclonal algorithm is $117.01 million. In contrast, at the stationary point the minimum total costs obtained by the AIS and RSM are $118.96 million and $120.26 million respectively. This shows the advantage of the Psychoclonal algorithm over the AIS and RSM. In the Psychoclonal algorithm, convergence starts within very few generations, so the computational burden is considerably reduced. This can be attributed to the various levels of needs and
hypermutation, which accelerates multiple local searches simultaneously. The convergence trends followed by the Psychoclonal algorithm and the AIS are shown in figure 10.
Figure 10. Convergence trends of the Psychoclonal algorithm and the AIS: total supply chain cost ($ million) versus iteration
The figure shows that the Psychoclonal algorithm converges at the 147th generation to the minimum possible supply chain cost of $117.01 million, whereas the AIS converges at the 166th generation with a total supply chain cost of $118.96 million. Evidently, the Psychoclonal algorithm converges much earlier and with a lower supply chain cost. The reliable convergence of the Psychoclonal algorithm is due to its ability to maintain the diversity of the population: feasible antibodies are constantly introduced to make a broader exploration of the search space and to prevent saturation of the population with similar antibodies. In a nutshell, the superior performance of the Psychoclonal algorithm may be ascribed to its ability to produce a large volume of antibodies depending upon the need level; the probability of finding the optimum solution is enhanced when the pool considered is large. Further, receptor editing helps in escaping local optima, and the self-actualization need level makes the retained solutions stronger with each generation.
Parameter tuning is a matter of major concern for any optimization problem, as it affects the performance of the algorithm. Fine tuning of the parameters, and scrutinizing their correlation with the problem under consideration, requires extensive experimentation. The Psychoclonal algorithm is governed mainly by three parameters, namely n (the number of antibodies selected for cloning), Nc (the number of clones generated) and d (the number of low-affinity antibodies replaced). These mainly influence the convergence speed and the computational complexity.
7. Conclusion
Implementing an efficient operational strategy for a supply chain is a difficult job because of the difficulty of inferring the outcomes of the interactions among the various factors. To deal with this, we have proposed a hybrid approach encapsulating simulation, the Taguchi method, robust nonlinear regression analysis and the Psychoclonal algorithm. First, simulation is carried out to model a comprehensive supply chain network. The Taguchi method is then deployed to quickly determine a robust region and to optimize the qualitative factors. Subsequently, regression analysis and the Psychoclonal algorithm refine the result obtained from the Taguchi orthogonal array. In addition, the authors have repeated the procedure with the Psychoclonal algorithm replaced by the AIS and by RSM; the comparison of results reveals that the Psychoclonal algorithm explores the search space better than the AIS and RSM. By adopting the proposed approach, supply chain players can coordinate their operations to achieve the highest efficiency for the entire chain. This study makes it possible for firms to understand the dynamic relations among the various factors, and provides guidelines for management to minimize the impact of demand uncertainty on supply chain performance. The results obtained help the manufacturer to determine the proper plant capacity and the right level of delayed differentiation strategy for its products. Further research may include the strategic costs relating to design changes of the supply chain: product and process redesign costs for postponement, the costs of establishing information sharing through advanced information technology, and the costs of reducing the lead time.
Appendix A:
Bass model
The Bass model is helpful in estimating the demand for short-life-cycle products (Bass 1969). In the present research, the authors focus on a short-life-cycle product and use the Bass model to estimate the demand. This model has been used by a number of researchers in the past (Lawrence and Lawton 1981; Thomas 1985).
Dt = K (1 − e^(−(ω+υ)t)) / (1 + (ω/υ) e^(−(ω+υ)t))    … (12)
dt = (ω + (υ/K) Dt−1)(K − Dt−1)
where dt is the demand in period t, Dt is the cumulative demand up to period t, and ω, υ and K are the Bass model parameters estimated in table 2.
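A small sketch of the Bass demand forecast (Python, with illustrative parameter values in place of the table 2 estimates, which are not reproduced here) is:

import math

def bass_cumulative(t, omega, upsilon, K):
    # Cumulative demand D_t from equation (12), as written in this chapter.
    e = math.exp(-(omega + upsilon) * t)
    return K * (1.0 - e) / (1.0 + (omega / upsilon) * e)

def bass_period_demand(omega, upsilon, K, periods):
    # Per-period demand d_t obtained as the increment of cumulative demand.
    d_prev, demands = 0.0, []
    for t in range(1, periods + 1):
        d_curr = bass_cumulative(t, omega, upsilon, K)
        demands.append(d_curr - d_prev)
        d_prev = d_curr
    return demands

# Illustrative parameters only; K is the market potential over the 18-month life cycle
# (about 1800 units per month on average in the chapter's example).
print([round(d) for d in bass_period_demand(omega=0.03, upsilon=0.4, K=32400, periods=18)])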
Appendix B:
Ab: available set of antibodies
Abd: set of new antibodies that replace the d lowest affinity antibodies of Ab
Abk,n: the n antibodies from Ab with the highest affinity
Agm: population of m antigens
Ck: population of Nc clones generated from Abk,n
C*k: population of clones after hypermutation
fk: vector containing the values of the objective function Y(.), i.e. the affinity of all antibodies in relation to the antigen
f*k: vector containing the values of antigenic affinity for the matured clones
N: total number of antibodies
Appendix C:
Response surface methodology (RSM)
Response surface methodology (RSM) uses a group of mathematical and statistical techniques to find the optimal operating condition and the corresponding response(s).
The second-order model (given in equation 5) can be expressed in matrix form as:
Ŷ = β̂0 + X'b + X'BX    … (13)
where b = (β̂1, β̂2, …, β̂k)' is the vector of first-order coefficients and B is the symmetric k × k matrix whose diagonal elements are β̂ii and whose off-diagonal elements are β̂ij / 2.    … (14)
The set of Xi that optimizes the response surface is called the stationary point, where the partial derivative is zero, i.e.
∂Ŷ/∂X = b + 2BX = 0    … (15)
⇒ the optimal input level is X0 = −0.5 B^(−1) b.    … (16)
By using equations 10, 12 and 14 and table 4, we obtain the stationary point X0 = −0.5 B^(−1) b with
B = [  0.689  −0.164   1.637  −0.039
      −0.164   1.109   2.247   2.863
       1.637   2.247   1.552  −0.0005
      −0.039   2.863  −0.005   1.316 ]
and b = (0.717, −1.0037, 2.111, 0.897)',
which gives X0 = (0.4227, −0.4839, −0.4299, 0.7246)'.
Minimum total costs:
Ŷ0 = 120.23 + 0.5 × (0.4557, −0.4839, −0.4299, 0.7246) (0.717, −1.0037, 2.111, 0.897)'
   = $120.26 million
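The stationary-point calculation can be checked with a few lines of NumPy, using the B matrix and b vector quoted above (a verification sketch only; small differences from the quoted values can arise from rounding of the published coefficients):

import numpy as np

# B matrix and b vector of the fitted second-order model (appendix C).
B = np.array([[ 0.689, -0.164,  1.637, -0.039 ],
              [-0.164,  1.109,  2.247,  2.863 ],
              [ 1.637,  2.247,  1.552, -0.0005],
              [-0.039,  2.863, -0.005,  1.316 ]])
b = np.array([0.717, -1.0037, 2.111, 0.897])

x0 = -0.5 * np.linalg.solve(B, b)   # stationary point X0 = -0.5 * B^-1 * b (equation 16)
print(x0.round(4))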
References
ADA, G. L., and NOSSAL, G. J. V., 1987, The Clonal selection theory. Scientific American,
257(2), 50-57.
ALDERSON, W., 1950, Marketing efficiency and the principle of postponement. Cost and Profit Outlook, September 1950.
Altiparmak, F., Gen, M., Lin, L., and Paksoy, T., 2006, A genetic algorithm approach for multi-
objective optimization of supply chain networks. Computers and Industrial Engineering,
51(1) 196-215.
ARNTZEN, B. C., BROWN, G. G., HARRISON, T. P. and TRAFTON, L. L., 1995, Global supply chain management at Digital Equipment Corporation. Interfaces, 25(1), 69-93.
BASS, F.M., 1969, A new product growth model for consumer durables. Management
science, 15, 215-227.
BEAMON, B. M. and CHEN, V. C. P., 2001, performance analysis of conjoined supply chains.
International Journal of production research, 39(14), 3195-3218.
BHASKARAN, S., 1998, Simulation analysis of manufacturing supply chain. Decision sciences,
29(3), 633-657.
CHOPRA, S. and MEINDL, P., 2001, Supply chain management: strategy, planning and
operation (Englewood cliffs, Prentice-Hall).
COHEN, M. A. and LEE, H. L., 1988, Strategic analysis of integrated production distribution
system: models and methods. Operations research, 36(2), 216- 228.
DE CASTRO, L. N., and VON ZUBEN, F. J., 2002, Learning and optimization using the clonal selection principle. IEEE Transactions on Evolutionary Computation, special issue on artificial immune systems, 6(3), 239-251.
GAONKAR, R., and VISWANADHAM, N., 2001, Collaboration and information sharing in
global contract manufacturing networks. IEEE/ASME on mechatronics, 6, 366-376.
GAVIRNENI, S., 1997, Inventories in supply chains under cooperation, PhD Thesis, Carnegie
Mellon University, Pittsburgh, PA, September.
GAVIRNENI, S., KAPUSCINSKI, R. and TAYUR, S., 1999, Value of information in
capacitated supply chains. Management science, 45(1).
HARIHARAN, R., and ZIPKIN, P., 1995, Customer-order information, lead times, and
inventories. Management science, 41(1), 1599-1607.
HEWITT, F., 1999, Information technology mediated business process management – lessons
from the supply chain. International journal of Technology management, Geneva, 17(1-2).
JERNE, N. K., 1974, Towards a network theory of the immune system. Ann. Immunol. (Inst. Pasteur), 125C, 373-389.
Kwak, T.C., Kim J.S., and Chiung M., 2006, Supplier–buyer models for the bargaining process
over a long-term replenishment contract. Computers and Industrial Engineering, 51(2), 219-
228.
LAWRENCE, K. D. and LAWTON, W.H., 1981, Applications of diffusion models:
some empirical results. In Y.Wind, V.Maha, and R.C. Cardozo, (eds.), New product
forecasting (Lexington, MA: Lexington Books), pp. 529-541
LEE, H., PADMANABHAN, V., and WHANG, S., 1994, Information distortion in a
supply chain: the bullwhip effect. Management sciences, 43(4), 546-558.
MAIER, N. R. F., 1965, Psychology in industry, 3rd edition, 417-419 (Houghton- Mifflin,
Boston, Massachusetts).
METTERS, R., 1997, Quantifying the bullwhip effect in supply chain. Journal of
operational management, 15, 89-100.
MONTGOMERY, D.C., 2001, Design and analysis of experiments (New York: Wiley).
NOSSAL, G.J.V., 1994, Negative selection of Lymphocytes. Cell, 76, 229-239.
SAHIN, F. and ROBINSON, E., 2002, Flow coordination and information sharing in
supply chains: review, implications and directions for future research. Decision sciences, 33,
505-536.
SHANG, J. S., LI, S., and TADIKAMALLA, P., 2004, Operational design of a supply chain
system using the Taguchi method, response surface methodology, simulation, and
optimization. International journal of production research, 42(18), 3823-3849.
Shunk, D.L., Kim, J.I., and Nam, H.Y., 2003, The application of an integrated enterprise
modeling methodology—FIDO—to supply chain integration modeling. Computers and
Industrial Engineering, 45(1) 167-193.
SPEKMAN, R. E., 1988, Strategic supplier selection: understanding long-term buyer relation-
ships. Business Horizons, July-August, pp. 80-81.
STERMAN, J.D., 1989, Modeling managerial behavior: misperceptions of feedback in a
dynamic decision making experiment. Management science, 35, 321-339.
STRADER, T. J., LIN, F. R. and SHAW, M., 1999, The impact of information sharing on order
fulfillment in divergent differentiation supply chains. Journal of Global Information
Management, Harrisburg; Jan-mar, 7(1).
SWAMINATHAN, J. M. and TAYUR, S. R., 1998, Managing broader product lines through delayed differentiation using vanilla boxes. Management Science, December, 161-172.
TAGUCHI, G., 1986, Introduction to quality engineering (New York: Asian Productivity Organization, UNIPUB, White Plains).
THOMAS, R. J., 1985, Estimating market growth for new products: an analogical
diffusion model approach. Journal of Product Innovation Management, 2(1), 45-55.
TIWARI, M. K., PRAKASH, KUMAR, A. and MILEHAM, A. R., 2004, Determination
of an optimal assembly sequence using the Psychoclonal algorithm. IMechE Journal
of engineering manufacture –part B, 219, 137-149.
TOMPKINS, J. A., 1998, Time to rise above supply chain management. Transportation and
Distribution,39(10).
TOWILL, D.R., NAIM, M.M., and Wikner, J., 1992, Industrial dynamics simulation
models in the design of supply chains. International Journal of Distribution and
Logistics management, 22, 3-13.
VAN HOUTUM, G., INDERFURTH, K., and ZIJM, W., 1996, Material coordination in stochastic multi-echelon systems. European Journal of Operational Research, 95, 1-23.
Zhang, D. Z., Anosike, A.I., Ming K.L., and Akanle O.M., 2006, An agent-based approach for e-
manufacturing and supply chain integration. Computers and Industrial Engineering, 51(2),
343-360.
ZHAO, X. and XIE, J., 2002, Forecasting errors and value of information sharing in a supply
chain. International Journal of Production Research, 40(2), 311-335.
ZHENG, Y., and ZIPKIN, P., 1990, A queuing model to analyze the value of centralized
inventory information. Operations Research, 38(2), 296-307.