


Linear Programming Computation

Second Edition

Ping-Qi PAN
Department of Mathematics
Southeast University
Nanjing, China

ISBN 978-981-19-0146-1 ISBN 978-981-19-0147-8 (eBook)


https://doi.org/10.1007/978-981-19-0147-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore
Pte Ltd. 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Dedicated to my parents, Chao Pan and
Yinyun Yan; my wife, Minghua Jiang; my son,
Yunpeng; my granddaughter, Heidi
Preface to First Edition

Linear programming (LP), founded by Dantzig (1948–1951b), might be one of the
best-known and most widely used mathematical tools in the world. As a branch
of optimization, it serves as the most important cornerstone of operations research,
decision science, and management science.
This branch emerged when the American mathematician George B. Dantzig
created the LP model and the simplex method in 1947. The computer, emerging
around the same period, propelled the development of LP and the simplex method
toward practical applications. As a basic branch, LP orchestrated the birth of
a number of new branches, such as nonlinear programming, network flow and
combinatorial optimization, stochastic programming, integer programming, and
complementarity theory, and activated the whole field of operations research.
A prominent feature of LP is its broad applicability. Closely related to LP,
many individuals have made pioneering contributions in their respective fields.
In the field of economics, in particular, the Russian-American economist Wassily
Leontief received the 1973 Nobel Memorial Prize in Economics for his epoch-making
contribution to the quantitative analysis of economic activities. Academician L. V.
Kantorovich of the former Soviet Academy of Sciences and the American economist
Professor T. C. Koopmans won the 1975 prize for their theory of optimal allocation
of resources using LP. The same prize was also awarded, decades later, to Professors
K. Arrow, P. Samuelson, H. Simon, and L. Hurwicz, all of whom had paid close
attention to LP in the early days of their professional careers.
The simplex method has also achieved great success in practice. As is well known,
its applications in many fields, such as economics, commerce, production, science
and technology, and defense and military affairs, have brought about astonishing
economic and social benefits. It is recognized as one of the top ten algorithms of
the twentieth century (IEEE 2002; see Cipra 2000).
More than 70 years after its birth, LP is now a relatively mature but still
developing branch. Nevertheless, there exist great challenges. The importance of
large-scale sparse LP models is nowadays enhanced further by globalization. The
everyday practice calls upon the research community to provide more powerful
solution tools just to keep up with the ever-increasing problem sizes. Therefore, this
book not only presents fundamental materials but also attempts to reflect the
state of the art of LP. It has been my long-lasting belief that research, in operations
research/management science, in particular, should be of practical value, at least
potentially. The author, therefore, focuses on theories, methods, and implementation
techniques that are closely related to LP computation.
This book consists of two parts. Part I mainly covers fundamental and conven-
tional materials, such as the geometry of the feasible region, the simplex method,
duality principle, and dual simplex method, implementation of the simplex method,
sensitivity analysis, parametric LP, variants of the simplex method, decomposition
method, and interior-point method. In addition, integer linear programming (ILP),
differing from LP in nature, is also discussed not only because ILP models can be
handled by solving a sequence of LP models but also because they are so rich in
practice and form a major application area of LP computations.
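As a side note for readers meeting this idea for the first time, the claim that ILP models can be handled by solving a sequence of LP models can be sketched in a few lines. The toy branch-and-bound routine below is purely illustrative (it is not the controlled-branch or controlled-cutting method discussed in this book) and assumes SciPy's linprog is available as the LP relaxation solver:

```python
import math
from scipy.optimize import linprog  # assumed available; any LP solver would do

def branch_and_bound(c, A_ub, b_ub, bounds, tol=1e-6):
    """Minimize c @ x subject to A_ub @ x <= b_ub with x integer,
    by solving a sequence of LP relaxations (branch-and-bound)."""
    best_val, best_x = float("inf"), None
    stack = [list(bounds)]
    while stack:
        bnds = stack.pop()
        # skip subproblems whose branching bounds became contradictory
        if any(l is not None and u is not None and l > u for l, u in bnds):
            continue
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds, method="highs")
        if not res.success or res.fun >= best_val - tol:
            continue  # infeasible relaxation, or pruned by the incumbent bound
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > tol]
        if not frac:  # relaxation solution is already integer: new incumbent
            best_val, best_x = res.fun, [int(round(v)) for v in res.x]
            continue
        i, v = frac[0], res.x[frac[0]]
        lo, hi = bnds[i]
        left, right = list(bnds), list(bnds)
        left[i], right[i] = (lo, math.floor(v)), (math.ceil(v), hi)
        stack += [left, right]
    return best_val, best_x

# maximize 5x + 4y s.t. 6x + 4y <= 24, x + 2y <= 6, x, y >= 0 integer
# (stated as minimizing the negated objective)
val, sol = branch_and_bound([-5, -4], [[6, 4], [1, 2]], [24, 6],
                            [(0, None), (0, None)])
```

Each subproblem is an ordinary LP; branching on a fractional component tightens the variable bounds until an all-integer optimum survives the pruning test.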
Part II presents published or unpublished new results achieved by the author
himself, such as the pivot rule, dual pivot rule, simplex phase-I method, dual simplex
phase-I method, reduced simplex method, generalized reduced simplex method,
deficient-basis method, dual deficient-basis method, face method, dual face method,
and pivotal interior-point method. The last chapter contains miscellaneous topics,
such as some special forms of the LP problem, approaches to intercepting primal
and dual optimal sets, practical pricing schemes, relaxation principle, local duality,
decomposition principle, and ILP method based on the generalized reduced simplex
framework.
To make the material easier to follow and understand, the algorithms in this book
were formulated explicitly, and illustrative examples were worked out wherever
possible. If the book is used as a textbook for an upper-level undergraduate or
graduate course, Part I may serve as the basic course material.

Acknowledgments

At this very moment, I deeply cherish the memory of Professor Xuchu He, my
former mentor at Nanjing University, who aroused my interest in optimization. I
honor the father of linear programming and the simplex method, Professor George
B. Dantzig, for the encouragement he gave me during the 16th International
Symposium on Mathematical Programming, EPFL, Lausanne, in August 1997. I am very grateful
to Professor Michael Saunders of Stanford University for his thoughtful comments
and suggestions given in the past. He selflessly offered the MINOS 5.51 package
(the latest version of MINOS at that time) and related materials, from which my
work and this book benefited greatly. MINOS has been the benchmark and platform
of our computational experiments. I thank Professor R. Tyrrell Rockafellar of the
University of Washington, Professor Thomas F. Coleman of Cornell University, and
Professor Weide Zheng of Nanjing University for their support and assistance. I
would also like to thank the following colleagues for their support and assistance:
Professor James V. Burke of the University of Washington, Professor Liqun
Qi of Hong Kong Polytechnic University, Professor Lizhi Liao of Hong Kong
Baptist University, Professor Leena Suhl and Dr. Achim Koberstein of Paderborn
University, Professor Uwe Suhl of Freie University, and Dr. Pablo Guerrero-Garcia
of the University of Malaga.
This book was partially supported by Projects 10871043 and 70971136 of the
National Natural Science Foundation of China.

Nanjing, China Ping-Qi Pan


December 2012
Preface to Second Edition

Since its birth, linear programming (LP) has achieved great success in many fields,
such as economics, commerce, production, science, and technology, and has brought
about amazing economic and social benefits. After 70 years of development, LP
has become a relatively mature branch of OR/MS. Nevertheless, the academic
community faces the major challenge of growing demand: more powerful and
robust tools are needed to deal with large-scale and difficult problems.
To meet the challenge, this book draws materials from a practical point of
view, focusing on theories, methods, and implementation techniques that are
closely related to LP computation. Its first edition consists of two parts. Roughly
speaking, Part I covers fundamental materials of LP, and Part II includes pub-
lished/unpublished new results achieved by the author.
The book received little attention via the author’s ResearchGate Web page until
Shi, Zhang, and Zhu published their book review in the European Journal of
Operational Research (June 2018). Since then, it has quickly attracted considerable
attention from academic communities around the world. As of November 6, 2022,
the top ten chapter reads are as follows:

Chapter Reads
1. Duality Principle and Dual Simplex Method (Part I) 15419
2. Simplex Feasible-Point Method (Part II) 7861
3. Integer Linear Programming (ILP) (Part I) 7340
4. Implementation of the Simplex Method (Part I) 3284
5. Dual Simplex Phase-I Method (Part II) 2613
6. Simplex Method (Part I) 2576
7. Variants of the Simplex Method (Part I) 2328
8. Geometry of the Feasible Region (Part I) 1951
9. Pivot Rule (Part II) 1844
10. Simplex Phase-I Method (Part I) 1506


The preceding data are consistent with corresponding downloads from the
Springer website.
Surprisingly enough, the chapter “Duality Principle and Dual Simplex Method” not
only received the most reads overall but also tops the list almost every week, even
though it is devoted to such a classical topic. This might be because its discussions
are more comprehensive than those in the popular literature.
Proposed for the first time, the Simplex Feasible-Point Method, which appeared
as the final section of the chapter “Pivotal Interior-Point Method,” received the
second-highest number of reads overall, achieving this within a short period of only
10 weeks. Indeed, supported by solid computational results, the method itself might
exceed all expectations.
The final chapter of Part I, “Integer Linear Programming (ILP),” usually receives
the second-highest number of reads each week. This happens, I guess, because many
researchers are interested in the newly proposed controlled-branch and controlled-
cutting methods, which have potential applications in developing new ILP solvers.
The author’s main contributions seem not to have been fully appreciated so far,
including most of the pivot rules, the reduced and D-reduced simplex methods, the
deficient-basis and dual deficient-basis methods, the face and dual face methods,
and the decomposition principle, all included in Part II. This is somewhat surprising,
since these methods are supported by solid computational results, except for the
newly introduced reduced and D-reduced simplex methods and the decomposition
principle. In the author’s view, the reduced simplex method is particularly
noteworthy. It is the first simplex method that searches along the objective-edge. In
contrast to existing simplex methods, it determines the pivot row first and the pivot
column afterward, and does so essentially without any selection. As its dual version,
the D-reduced simplex method shares similar attractive features, determining the
pivot column first and the pivot row afterward. As for the decomposition principle,
it allows for solving arbitrarily large-scale (even dense) LP problems and sheds a
glimmer of light on some other types of separable large-scale problems. Time will
likely clarify the value of these methods versus other LP solvers.
We are also optimistic about the face and dual face methods. Without exploiting
sparsity, the original methods were implemented using orthogonal transformations,
and favorable computational results were reported. In the second edition, we added
two chapters devoted to new methods with LU factorization for sparse computing.
Indeed, this was a very natural development that the author could not resist
pursuing.
Some changes to chapters and sections were made, with corrections and improve-
ments, and the whole second edition was organized into two parts. Roughly speak-
ing, Part I (Foundations) contains Part I of the first edition, and Part II (Advances)
includes Part II. The Simplex Feasible-Point Algorithm was improved and moved
out of the chapter “Pivotal Interior-Point Method” to form an independent chapter
with the new title “Simplex Interior-Point Method,” since it actually represents a
new class of interior-point algorithms, which can be transformed from the traditional
simplex algorithms. The title of the original chapter was changed to “Facial Interior-
Point Method,” since the remaining algorithms represent another new class of
interior-point algorithms, which can be transformed from the normal interior-point
algorithms. In particular, the chapter “Integer LP” was rewritten, with great gains
from the introduction of objective cutting. Another exciting improvement was the
reduced simplex method. Originally, the derivation of its prototype was presented
in a chapter of the same title and then transformed into the so-called “improved”
version in another chapter. Fortunately, inspired by the bisection simplex method,
we recently found a quite concise new derivation, so the method can now be
introduced in a single chapter.
Finally, I am grateful to the many individuals who offered valuable comments
and suggestions on the first edition of this book. In particular, I appreciate the
comments and encouragement given by Dr. Y.-Y. Shi, Professor L.-H. Zhang, and
Professor W.-X. Zhu.

Nanjing, China Ping-Qi Pan


November 2022

References

1. Cipra BA (2000) The best of the 20th century: editors name top 10 algorithms.
SIAM News 33:1–2
2. Murtagh BA, Saunders MA (1998) MINOS 5.5 user’s guide. Technical Report
SOL 83-20R, Department of Engineering-Economic Systems & Operations
Research, Stanford University, Stanford
3. Shi Y-Y, Zhang L-H, Zhu W-X (2018) A review of Linear Programming
Computation by Ping-Qi Pan. European Journal of Operational Research
267(3):1182–1183
Contents

Part I Foundations
1 Introduction .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 3
1.1 Error of Floating-Point Arithmetic . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 4
1.2 From Real-Life Issue to LP Model . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 6
1.3 Illustrative Applications . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 9
1.4 Standard LP Problem.. . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 14
1.5 Basis and Feasible Basic Solution .. . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 17
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 22
2 Geometry of Feasible Region .. . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 23
2.1 Feasible Region as Polyhedral Convex Set . . . . .. . . . . . . . . . . . . . . . . . . . 24
2.2 Interior Point and Relative Interior Point . . . . . . .. . . . . . . . . . . . . . . . . . . . 28
2.3 Face, Vertex, and Extreme Direction . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 31
2.4 Representation of Feasible Region . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 42
2.5 Optimal Face and Optimal Vertex . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 44
2.6 Graphic Approach .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 46
2.7 Heuristic Characteristic of Optimal Solution . . .. . . . . . . . . . . . . . . . . . . . 47
2.8 Feasible Direction and Active Constraint . . . . . . .. . . . . . . . . . . . . . . . . . . . 51
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 54
3 Simplex Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 57
3.1 Simplex Algorithm: Tableau Form . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 57
3.2 Getting Started.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 68
3.3 Simplex Algorithm .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 74
3.4 Degeneracy and Cycling . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 80
3.5 Finite Pivot Rule. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 84
3.6 Notes on Simplex Method . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 91
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 93
4 Implementation of Simplex Method .. . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 95
4.1 Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 95
4.2 Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 97

xv
xvi Contents

4.3 LU Factorization of Basis . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 99


4.4 Sparse LU Factorization of Basis . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 102
4.5 Updating LU Factors .. . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 106
4.6 Crash Procedure for Initial Basis . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 109
4.7 Harris Rule and Tolerance Expending.. . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 111
4.8 Pricing for Reduced Cost . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 113
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 117
5 Duality Principle and Dual Simplex Method . . . . . . . .. . . . . . . . . . . . . . . . . . . . 119
5.1 Dual LP Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 119
5.2 Duality Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 122
5.3 Optimality Condition.. . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 125
5.4 Dual Simplex Algorithm: Tableau Form . . . . . . . .. . . . . . . . . . . . . . . . . . . . 128
5.5 Dual Simplex Algorithm .. . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 132
5.6 Economic Interpretation of Duality: Shadow Price .. . . . . . . . . . . . . . . . 136
5.7 Dual Elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 137
5.8 Bilevel LP: Intercepting Optimal Set . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 142
5.9 Notes on Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 146
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 147
6 Primal-Dual Simplex Method . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 149
6.1 Mixed Two-Phase Simplex Algorithm . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 149
6.2 Primal-Dual Simplex Algorithm.. . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 152
6.3 Self-Dual Parametric Simplex Algorithm .. . . . . .. . . . . . . . . . . . . . . . . . . . 158
6.4 Criss-Cross Algorithm Using Most-Obtuse-Angle Rule . . . . . . . . . . . 162
6.5 Perturbation Primal-Dual Simplex Algorithm . .. . . . . . . . . . . . . . . . . . . . 166
6.6 Notes on Criss-Cross Simplex Algorithm .. . . . . .. . . . . . . . . . . . . . . . . . . . 170
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 170
7 Sensitivity Analysis and Parametric LP. . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 173
7.1 Change in Cost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 173
7.2 Change in Right-Hand Side .. . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 176
7.3 Change in Coefficient Matrix . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 178
7.3.1 Dropping Variable . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 178
7.3.2 Adding Variable . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 180
7.3.3 Dropping Constraint .. . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 182
7.3.4 Adding Constraint . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 184
7.3.5 Replacing Row/Column .. . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 189
7.4 Parameterizing Objective Function .. . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 192
7.5 Parameterizing Right-Hand Side . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 196
8 Generalized Simplex Method. . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 199
8.1 Generalized Simplex Algorithm .. . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 200
8.1.1 Generalized Phase-I . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 207
8.2 Generalized Dual Simplex Algorithm: Tableau Form . . . . . . . . . . . . . 207
8.2.1 Generalized Dual Phase-I . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 212
Contents xvii

8.3 Generalized Dual Simplex Algorithm .. . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 213


8.4 Generalized Dual Simplex Algorithm with Bound-Flipping .. . . . . . 220
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 223
9 Decomposition Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 225
9.1 D-W Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 226
9.1.1 Starting-Up of D-W Decomposition .. .. . . . . . . . . . . . . . . . . . . . 231
9.2 Illustration of D-W Decomposition . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 232
9.3 Economic Interpretation of D-W Decomposition.. . . . . . . . . . . . . . . . . . 238
9.4 Benders Decomposition .. . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 240
9.5 Illustration of Benders Decomposition .. . . . . . . . .. . . . . . . . . . . . . . . . . . . . 248
9.6 Dual Benders Decomposition.. . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 254
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 259
10 Interior-Point Method.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 261
10.1 Karmarkar Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 262
10.1.1 Projective Transformation .. . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 263
10.1.2 Karmarkar Algorithm . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 264
10.1.3 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 266
10.2 Affine Interior-Point Algorithm . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 271
10.2.1 Formulation of the Algorithm .. . . . . . . . .. . . . . . . . . . . . . . . . . . . . 272
10.2.2 Convergence and Starting-Up . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 274
10.3 Dual Affine Interior-Point Algorithm . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 276
10.4 Path-Following Interior-Point Algorithm . . . . . . .. . . . . . . . . . . . . . . . . . . . 279
10.4.1 Primal–Dual Interior-Point Algorithm.. . . . . . . . . . . . . . . . . . . . 281
10.4.2 Infeasible Primal–Dual Algorithm .. . . .. . . . . . . . . . . . . . . . . . . . 284
10.4.3 Predictor–Corrector Primal–Dual Algorithm . . . . . . . . . . . . . 286
10.4.4 Homogeneous and Self-Dual Algorithm . . . . . . . . . . . . . . . . . . 289
10.5 Notes on Interior-Point Algorithm . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 291
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 295
11 Integer Linear Programming (ILP) . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 299
11.1 Graphic Approach .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 300
11.1.1 Basic Idea Behind New ILP Solvers .. .. . . . . . . . . . . . . . . . . . . . 301
11.2 Cutting-Plane Method .. . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 302
11.3 Branch-and-Bound Method .. . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 306
11.4 Controlled-Cutting Method .. . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 314
11.5 Controlled-Branch Method . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 316
11.5.1 Depth-Oriented Strategy . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 316
11.5.2 Breadth-Oriented Strategy . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 319
11.6 ILP: with Reduced Simplex Framework . . . . . . . .. . . . . . . . . . . . . . . . . . . . 323
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 337
xviii Contents

Part II Advances
12 Pivot Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 341
12.1 Partial Pricing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 342
12.2 Steepest-Edge Rule. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 344
12.3 Approximate Steepest-Edge Rule . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 346
12.4 Largest-Distance Rule.. . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 348
12.5 Nested Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 350
12.6 Nested Largest-Distance Rule . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 352
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 353
13 Dual Pivot Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 355
13.1 Dual Steepest-Edge Rule. . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 355
13.2 Approximate Dual Steepest-Edge Rule . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 359
13.3 Dual Largest-Distance Rule . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 360
13.4 Dual Nested Rule .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 362
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 363
14 Simplex Phase-I Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 365
14.1 Infeasibility-Sum Algorithm .. . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 365
14.2 Single-Artificial-Variable Algorithm . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 370
14.3 Perturbation of Reduced Cost. . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 376
14.4 Using Most-Obtuse-Angle Column Rule . . . . . . .. . . . . . . . . . . . . . . . . . . . 380
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 383
15 Dual Simplex Phase-l Method . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 385
15.1 Dual Infeasibility-Sum Algorithm .. . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 385
15.2 Dual Single-Artificial-Variable Algorithm .. . . . .. . . . . . . . . . . . . . . . . . . . 389
15.3 Perturbation of the Right-Hand Side . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 395
15.4 Using Most-Obtuse-Angle Row Rule . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 398
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 401
16 Reduced Simplex Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 403
16.1 Reduced Simplex Algorithm.. . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 404
16.2 Dual Reduced Simplex Algorithm . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 412
16.2.1 Dual Reduced Simplex Phase-I:
Most-Obtuse-Angle.. . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 418
16.3 Perturbation Reduced Simplex Algorithm . . . . . .. . . . . . . . . . . . . . . . . . . . 422
16.4 Bisection Reduced Simplex Algorithm . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 426
16.5 Notes on Reduced Simplex Algorithm .. . . . . . . . .. . . . . . . . . . . . . . . . . . . . 430
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 430
17 D-Reduced Simplex Method. . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 433
17.1 D-Reduced Simplex Tableau . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 434
17.2 Dual D-Reduced Simplex Algorithm.. . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 437
17.2.1 Dual D-Reduced Phase-I . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 442
17.3 D-Reduced Simplex Algorithm .. . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 445
17.3.1 D-Reduced Phase-I: Most-Obtuse-Angle Rule. . . . . . . . . . . . 452
17.4 Bisection D-Reduced Simplex Algorithm . . . . . .. . . . . . . . . . . . . . . . . . . . 456
Reference .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 460
18 Generalized Reduced Simplex Method. . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 461
18.1 Generalized Reduced Simplex Algorithm . . . . . .. . . . . . . . . . . . . . . . . . . . 462
18.1.1 Generalized Reduced Phase-I: Single Artificial
Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 469
18.2 Generalized Dual Reduced Simplex Algorithm . . . . . . . . . . . . . . . . . . . . 476
18.2.1 Generalized Dual Reduced Phase-I .. . .. . . . . . . . . . . . . . . . . . . . 481
18.3 Generalized Dual D-Reduced Simplex Algorithm . . . . . . . . . . . . . . . . . 484
19 Deficient-Basis Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 497
19.1 Concept of Deficient Basis. . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 498
19.2 Deficient-Basis Algorithm: Tableau Form . . . . . .. . . . . . . . . . . . . . . . . . . . 500
19.3 Deficient-Basis Algorithm . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 504
19.3.1 Computational Results . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 508
19.4 On Implementation .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 509
19.4.1 Initial Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 509
19.4.2 LU Updating in Rank-Increasing Iteration . . . . . . . . . . . . . . . . 510
19.4.3 Phase-I: Single-Artificial-Variable .. . . .. . . . . . . . . . . . . . . . . . . . 511
19.5 Deficient-Basis Reduced Algorithm .. . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 516
19.5.1 Phase-I: Most-Obtuse-Angle Rule . . . . .. . . . . . . . . . . . . . . . . . . . 524
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 530
20 Dual Deficient-Basis Method . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 531
20.1 Dual Deficient-Basis Algorithm: Tableau Form . . . . . . . . . . . . . . . . . . . . 531
20.2 Dual Deficient-Basis Algorithm . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 536
20.3 Dual Deficient-Basis D-Reduced Algorithm: Tableau Form .. . . . . . 540
20.4 Dual Deficient-Basis D-Reduced Algorithm .. . .. . . . . . . . . . . . . . . . . . . . 549
20.5 Deficient-Basis D-Reduced Gradient Algorithm:
Tableau Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 552
20.6 Deficient-Basis D-Reduced Gradient Algorithm . . . . . . . . . . . . . . . . . . . 563
Reference .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 564
21 Face Method with Cholesky Factorization.. . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 565
21.1 Steepest Descent Direction . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 566
21.2 Updating of Face Solution . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 569
21.3 Face Contraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 570
21.4 Optimality Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 571
21.5 Face Expansion .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 572
21.6 Face Algorithm .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 573
21.6.1 Face Phase-I: Single-Artificial-Variable .. . . . . . . . . . . . . . . . . . 574
21.6.2 Computational Results . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 577
21.7 Affine Face Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 578
21.8 Generalized Face Algorithm .. . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 582
21.9 Notes on Face Method . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 584
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 585
22 Dual Face Method with Cholesky Factorization . . . .. . . . . . . . . . . . . . . . . . . . 587
22.1 Steepest Ascent Direction.. . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 588
22.2 Updating of Dual Face Solution . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 591
22.3 Dual Face Contraction . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 591
22.4 Optimality Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 593
22.5 Dual Face Expansion .. . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 594
22.6 Dual Face Algorithm .. . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 595
22.6.1 Dual Face Phase-I . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 596
22.6.2 Computational Results . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 600
22.7 Dual Face Algorithm via Updating (B T B)−1 . .. . . . . . . . . . . . . . . . . . . . 601
23 Face Method with LU Factorization .. . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 607
23.1 Decent Search Direction . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 607
23.2 Updating of Face Solution . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 609
23.3 Pivoting Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 612
23.4 Optimality Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 614
23.5 Face Algorithm: Tableau Form . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 615
23.6 Face Algorithm .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 618
23.7 Notes on Face Method with LU Factorization . .. . . . . . . . . . . . . . . . . . . . 624
Reference .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 624
24 Dual Face Method with LU Factorization . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 625
24.1 Key of Method.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 626
24.2 Ascent Search Direction . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 630
24.3 Updating of Dual Face Solution . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 632
24.4 Pivoting Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 634
24.5 Optimality Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 636
24.6 Dual Face Algorithm: Tableau Form . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 637
24.7 Dual Face Algorithm .. . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 638
24.8 Notes on Dual Face Method with LU Factorization .. . . . . . . . . . . . . . . 640
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 641
25 Simplex Interior-Point Method . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 643
25.1 Column Pivot Rule and Search Direction . . . . . . .. . . . . . . . . . . . . . . . . . . . 644
25.2 Row Pivot Rule and Stepsize . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 645
25.3 Optimality Condition and the Algorithm.. . . . . . .. . . . . . . . . . . . . . . . . . . . 647
25.4 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 649
26 Facial Interior-Point Method . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 651
26.1 Facial Affine Face Interior-Point Algorithm . . . .. . . . . . . . . . . . . . . . . . . . 651
26.2 Facial D-Reduced Interior-Point Algorithm . . . .. . . . . . . . . . . . . . . . . . . . 655
26.3 Facial Affine Interior-Point Algorithm . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 666
Reference .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 671
27 Decomposition Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 673
27.1 New Decomposition Method . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 673
27.2 Decomposition Principle: “Arena Contest” . . . . .. . . . . . . . . . . . . . . . . . . . 677
27.3 Illustration on Standard LP Problem . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 678
27.4 Illustration on Bounded-Variable LP Problem .. . . . . . . . . . . . . . . . . . . . . 680
27.5 Illustration on ILP Problem .. . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 681
27.6 Practical Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 683

Appendix A On the Birth of LP and More . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 685


Appendix B MPS File .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 689
Appendix C Test LP Problems .. . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 695
Appendix D Empirical Evaluation for Nested Pivot Rules . . . . . . . . . . . . . . . 703
Appendix E Empirical Evaluation for Primal and Dual Face
Methods with Cholesky Factorization .. . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 721
Appendix F Empirical Evaluation for Simplex Interior-Point
Algorithm .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 727
References .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 730

Index . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 733
About the Book

This book represents a real breakthrough in the field of linear programming (LP).
Being both thoughtful and informative, it focuses on reflecting and promoting the
state of the art by highlighting new achievements in LP. This new edition is organized into two parts. The first part addresses the foundations of LP, including geometry
of feasible region, simplex method, implementation of simplex method, duality
and dual simplex method, primal-dual simplex method, sensitivity analysis and
parametric LP, generalized simplex method, decomposition method, interior-point
method, and integer LP method. The second part mainly introduces contributions
of the author himself, such as efficient primal/dual pivot rules, primal/dual Phase-
I methods, reduced/D-reduced simplex methods, generalized reduced simplex
method, primal/dual deficient-basis methods, primal/dual face methods, and new
decomposition principle.
Many important improvements were made in this edition. The first part includes
new results, such as the mixed two-phase simplex algorithm, dual elimination, a fresh
pricing scheme for reduced costs, and the bilevel LP model with interception of the
optimal solution set. In particular, the chapter “Integer LP Method” was rewritten to
reflect the great gains from objective cutting in the new controlled-cutting and
controlled-branch ILP solvers, as well as an attractive implementation of the
controlled-branch method. In
the second part, the “simplex feasible-point algorithm” was rewritten and removed
from the chapter “Pivotal Interior-Point Method” to form an independent chapter
with the new title “Simplex Interior-Point Method,” as it represents a class of new
interior-point algorithms transformed from traditional simplex algorithms. The title
of the original chapter was then changed to “Facial Interior-Point Method,” as
the remaining algorithms represent another class of new interior-point algorithms
transformed from normal interior-point algorithms. Without exploiting sparsity, the
original primal/dual face methods were implemented using Cholesky factorization.
To cope with sparse computation, two new chapters on versions with LU factorization were
added to the second part. Another exciting improvement was the reduced simplex
method. In the first edition, the derivation of its prototype was presented in a chapter
with the same title, and then converted into the so-called “improved” one in another
chapter. Fortunately, the author recently found a quite concise new derivation, so

he can now introduce the distinctive and very promising new simplex method in a
single chapter.
This book is a rare work in LP. With a focus on computation, it contains many
novel ideas and methods, supported by ample numerical results. Clear and succinct,
its content develops in a fresh manner, from the simple to the profound. In
particular, a large number of examples are worked out to demonstrate the
algorithms. It is thus an indispensable tool for undergraduate/graduate students,
teachers, practitioners, and researchers in LP and related fields.
About the Author

Ping-Qi Pan is a professor and doctoral supervisor of the Department of Mathematics at Southeast University, Nanjing, China. He was a visiting scholar at the
University of Washington (1986–1987) and visiting scientist at Cornell University
(1987–1988). His research interest focuses on mathematical programming and
operations research, especially large-scale linear optimization. He was a standing
council member of the Mathematical Programming Society of China and a standing
council member of the Operations Research Society of China. Professor Pan has
received the honorary title of Outstanding Scientific-Technical Worker of Jiangsu
Province of China. He was nominated as Top 100 scientist of 2012 by the Interna-
tional Biographical Centre, Cambridge, England. He won the Lifetime Achievement
Award by Who’s Who in the World in 2017.

Notation

In this book, uppercase English letters generally denote matrices, lowercase
English letters denote vectors, and lowercase Greek letters denote reals. Sets are
designated by uppercase English or Greek letters. Unless indicated otherwise, all
vectors are column vectors.
LP Linear programming
ILP Integer linear programming
Rn Euclidean space of dimension n
R Euclidean space of dimension 1
0 Origin of Rn (or the zero vector)
ei Unit vector with the ith component 1 (dimension will be clear from the
context; the same below)
e Vector of all ones
I Unit matrix
A Coefficient matrix of the linear program; also stands for index set of A’s
columns, i.e., A = {1, . . . , n}
m Number of rows of A
n Number of columns of A
b Right-hand side of the equality constraint
c Cost vector, i.e., coefficient vector of the objective function
B Basis (matrix). Also the set of basic indices
N Nonbasic columns of A. Also the set of nonbasic indices
aj The jth column of A
aij The entry at the ith row and jth column of A
vj The jth component of vector v
‖v‖ Euclidean norm of vector v
max(v) The largest component of vector v
AJ Submatrix consisting of columns corresponding to index set J
vI Subvector consisting of components corresponding to row index set I
AT Transpose of A
X Diagonal matrix whose diagonal entries are components of vector x


∇f(x) Gradient of function f(x)
Ω ⊂ Λ Ω is a subset of Λ
Ω ∪ Λ Union set, i.e., {τ | τ ∈ Ω or τ ∈ Λ}
Ω ∩ Λ Intersection set, i.e., {τ | τ ∈ Ω, τ ∈ Λ}
Ω \ Λ Complementary set, i.e., {τ | τ ∈ Ω, τ ∉ Λ}
∅ Empty set
| Such that. For example, {x | Ax = b} means the set of all x such that
Ax = b holds
≪ Far less than
≫ Far greater than
O(α) Implies a number less than or equal to kα, where k is a fixed integer
constant independent of α
|τ | Absolute value of τ if τ is a real, or cardinality of τ if τ is a set
sign (t) Sign of real t
range H Column space of matrix H
null H Null space of matrix H
dim Ω Dimension of set Ω
Cnm Number of combinations of taking m from n elements
int P Interior of set P
⌊α⌋ The largest integer no more than α
⌈α⌉ The smallest integer no less than α
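As a quick illustration (not part of the book itself), a few of these notations can be evaluated in plain Python; all variable names below are made up for the example:

```python
# Illustrative sketch of some notations above, evaluated numerically.
import math

v = [3.0, -4.0]
norm_v = math.sqrt(sum(t * t for t in v))    # ‖v‖, the Euclidean norm
max_v = max(v)                               # max(v), the largest component

alpha = 2.7
floor_a = math.floor(alpha)                  # ⌊α⌋, largest integer no more than α
ceil_a = math.ceil(alpha)                    # ⌈α⌉, smallest integer no less than α

J = [0, 2]                                   # an index set
a = [[1, 2, 3], [4, 5, 6]]
A_J = [[row[j] for j in J] for row in a]     # A_J: submatrix of the columns in J
```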
This book involves the following two simplex FORTRAN codes:
MINOS The sophisticated smooth optimization package developed by Murtagh
and Saunders (1998) at Department of Management Science and Engi-
neering of Stanford University. Based on the simplex method, the LP
option of this sparsity-exploiting code was used as the benchmark and
platform for empirical evaluation of new LP methods
RSA The author’s private code based on the revised two-phase simplex
algorithm without exploiting sparsity. It uses the Harris two-pass row
rule
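To make the notation above concrete, the following is a hypothetical, minimal dense simplex sketch in pure Python, using the basis B, nonbasic set N, and reduced costs as defined here. It is emphatically not the book's RSA or the MINOS code: it uses Dantzig pricing rather than the Harris two-pass row rule, assumes a feasible starting basis (no Phase-I), and ignores sparsity and degeneracy; the function names are invented for this illustration.

```python
def gauss_solve(M, r):
    """Solve M y = r by Gaussian elimination with partial pivoting."""
    n = len(M)
    T = [row[:] + [float(r[i])] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(T[i][k]))
        T[k], T[p] = T[p], T[k]
        for i in range(k + 1, n):
            f = T[i][k] / T[k][k]
            for j in range(k, n + 1):
                T[i][j] -= f * T[k][j]
    y = [0.0] * n
    for k in range(n - 1, -1, -1):
        y[k] = (T[k][n] - sum(T[k][j] * y[j] for j in range(k + 1, n))) / T[k][k]
    return y

def dense_simplex(A, b, c, B):
    """min c^T x s.t. A x = b, x >= 0, from a feasible basic index set B."""
    m, n = len(A), len(A[0])
    while True:
        Bmat = [[A[i][j] for j in B] for i in range(m)]
        xB = gauss_solve(Bmat, b)                           # basic solution
        y = gauss_solve([list(col) for col in zip(*Bmat)],  # multipliers:
                        [c[j] for j in B])                  #   B^T y = c_B
        N = [j for j in range(n) if j not in B]
        red = {j: c[j] - sum(y[i] * A[i][j] for i in range(m)) for j in N}
        q = min(red, key=red.get)                           # Dantzig pricing
        if red[q] >= -1e-9:                                 # optimal: no negative
            x = [0.0] * n                                   # reduced cost remains
            for k, j in enumerate(B):
                x[j] = xB[k]
            return x
        d = gauss_solve(Bmat, [A[i][q] for i in range(m)])  # d = B^{-1} a_q
        ratios = [(xB[k] / d[k], k) for k in range(m) if d[k] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, p = min(ratios)                                  # ratio test
        B[p] = q                                            # basis change

# min -x1 - 2*x2  s.t.  x1 + x2 <= 4, x2 <= 2 (slacks x3, x4 already added)
x = dense_simplex([[1, 1, 1, 0], [0, 1, 0, 1]], [4, 2],
                  [-1, -2, 0, 0], [2, 3])
# x == [2, 2, 0, 0] with objective value -6
```

Production codes such as MINOS keep a factorization of B and update it across iterations instead of refactorizing from scratch; the sketch resolves each system anew only for brevity.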
