COMPUTER ALGORITHMS
Introduction to Design and Analysis (Third Edition)
Sara Baase, Allen Van Gelder
Pearson Education / Higher Education Press

English Reprint Copyright © 2001 by PEARSON EDUCATION NORTH ASIA LIMITED and HIGHER EDUCATION PRESS. Computer Algorithms: Introduction to Design and Analysis, 3rd edition, by Sara Baase and Allen Van Gelder, Copyright © 2000. All Rights Reserved. Published by arrangement with ADDISON WESLEY LONGMAN, a Pearson Education company. This edition is authorized for sale only in the People's Republic of China (excluding the Special Administrative Regions of Hong Kong and Macau). ISBN 7-04-010048-7.

[The back-cover description and the Chinese-language publisher's preface to the reprint edition appear here; the scanned text of both is too garbled to reconstruct.]

To Keith, always part of what I do. S.B.
To Jane, for her patience. A.V.G.

Preface

Purpose

This book is intended for an upper-division or graduate course in algorithms.
It has sufficient material to allow several choices of topics.

The purpose of the book is threefold. It is intended to teach algorithms for solving real problems that arise frequently in computer applications, to teach basic principles and techniques of computational complexity (worst-case and average behavior, space usage, and lower bounds on the complexity of a problem), and to introduce the areas of NP-completeness and parallel algorithms.

Another of the book's aims, which is at least as important as teaching the subject matter, is to develop in the reader the habit of always responding to a new algorithm with the questions: How good is it? Is there a better way? Therefore, instead of presenting a series of complete, "pulled-out-of-a-hat" algorithms with analysis, the text often discusses a problem first, considers one or more approaches to solving it (as a reader who sees the problem for the first time might), and then begins to develop an algorithm, analyzes it, and modifies or rejects it until a satisfactory result is produced. (Alternative approaches that are ultimately rejected are also considered in the exercises; it is useful for the reader to know why they were rejected.)

Questions such as: How can this be done more efficiently? What data structure would be useful here? Which operations should we focus on to analyze this algorithm? How must this variable (or data structure) be initialized? appear frequently throughout the text. Answers generally follow the questions, but we suggest readers pause before reading the ensuing text and think up their own answers. Learning is not a passive process.

We hope readers will also learn to be aware of how an algorithm actually behaves on various inputs: Which branches are followed? What is the pattern of growth and shrinkage of stacks? How does presenting the input in different ways (e.g., listing the vertices or edges of a graph in a different order) affect the behavior? Such questions are raised in some of the exercises, but are not emphasized in the text because they require carefully going through the details of many examples.

Most of the algorithms presented are of practical use; we have chosen not to emphasize those with good asymptotic behavior that are poor for inputs of useful sizes (though some important ones are included). Specific algorithms were chosen for a variety of reasons, including the importance of the problem, illustrating analysis techniques, illustrating techniques (e.g., depth-first search) that give rise to numerous algorithms, and illustrating the development and improvement of techniques and algorithms (e.g., Union-Find programs).

Prerequisites

The book assumes familiarity with data structures such as linked lists, stacks, and trees, and prior exposure to recursion. However, we include a review, with specifications, for the standard data structures and some specialized ones. We have also added a student-friendly review of recursion.

Analysis of algorithms uses simple properties of logarithms and some calculus (differentiation to determine the asymptotic order of a function and integration to approximate summations), though virtually no calculus is used beyond Chapter 4. We find many students intimidated when they see the first log or integral sign because a year or more has passed since they had a calculus course. Readers will need only a few properties of logs and a few integrals from first-semester calculus.
Section 1.3 reviews some of the necessary mathematics, and Section 1.5.4 provides a practical guide. Algorithm Design Techniques Several important algorithm design techniques reappear in many algorithms. These in- clude divide-and-conquer, greedy methods, depth-first search (far graphs), and dynamic programming. This edition puts more emphasis on algorithm design tectmiques than did the second edition. Dynamic programming, as before, has its own chapter ard depth-first search is presented with many applications in the chapter on graph traversals (Chapter 7). ‘Most chapters are orgenized by application area, rather than by design technigue, so we provide here a list of places where you will find algorithms using divide-and-conquer and greedy techniques. . The divide-and-conquer technique is described in Section 4.3. It is used in Binary Search (Section 1.6), most sorting methods (Chapter 4}, median finding and the generat selection problem (Section 5.4), hinary search trees (Section 6.4), polynomial evaluation {Section 12.2), matrix multiplication (Section 12.3), the Fast Fourier Transform (Sec- tion 12.4), approximate graph coloring (Section 13.7), and, in a slightly ditferent form, for parallel computation in Section 14.5, Greedy algorithms are used for finding minimum spanning trees and shortest paths in Chapter 8, and for various approximation algorithms for NP-hard optimization problems, such as bin packing, knapsack, graph coloring, and traveling salesperson (Sections 13.4 through 13.8). Changes from the Second Edition This edition has three new chapters and many new topics, Throughout the book, numerous sections have been extensively rewritten. A few {opics from the second edition have been moved to different chapters where we think they fit better. We added mare than 100 new exervises, many bibliographic entries, and an appendix with Java examples. Chapters 2, 3, and 6 are virtually all new.Preface Chapter 2 reviews abstract data types (ADTs) and includes specifications for several standard ADTs. The rvle of abstract data types in algorithm design is emphasized through- out the book. Chapter 3 reviews recursion and induction, emphasizing the connection between the ‘wo and their usefulness in designing and proving correctness of programs, The chaprer also develops recursion trees, which provide a visual and intuitive representation of recur- rence equations that arise in the analysis of recursive algorithms. Solutions for commonly ‘occurring patterns are summarized so they are available for use in later chapters. Chapter 6 covers hashing, red-black trees for balanced binary trees, advanced priority queues, and dynamic equivalence relations (Union-Find). The latter topic was moved from a different chapter in the second edition. We rewrote all algorithms in # Java-based pseudocode. Familiarity with Java is not required; the algorithms can he read easily by anyone familiar with C or C+. Chapter 1 fhas an introduction to the Java-hused pseudocode. ‘We significantly expanded the section on mathematical tools for algorithm analysis in Chapter | ta provide a better review and reference for some of the mathernatics used in the book. The discussion of the asymptotic order of functions in Section 1.5 was designed to help stadems gain a better mastery of the concepts and techniques for dealing with asymptotic order. 
We added rules, in informal language, that summarize the most common ceases (Section 1.5.4), Chapler 4 contains an accelerated version of Heapsort in which the number of key comparisins is cut nearly in half. For Quicksort, we use the Hoare partition algorithm it the main text. Lomuto's method is introduced in an exercise. (This is reversed from the second edition.) ‘We split the old graph chapter imto two chapters, and changed the order of some topics. Chapter 7 concentrates on (linear time} traversal algorithins. The presentation of depth-first search has been thoroughly revised to emphasize the general structure of the technique and show more applications. We added topological sorting and critical path analysis as appheations and because of their intrinsic value and their connection to dynamic programming. Sharir’s algorithm, rather than Turjan’s, is presented for strongly connected components Chapter & covers greedy algorithms for graph problems. The presentations of the Prim algorithm for minimum spanning trees and the Dijkstra algorithm for shortest paths were rewritten to emphasize the soles of priority queues and to illustrate how the use of abstract data types can lead the designer to efficient implementations, The asymaptotically optimal im + mlog n) implementation is mentioned, but is not covered in depth. We moved Kruskal’s algorithm for minimum spanning trees to this chapter. The presentation of dynamic programming (Chapter 10) was substantially revised to emphasize a general approach to finding dynamic programming solutions. We added a new application, a text-formatting problem, to reinforce the point that not all applications call for a two-dimensional array. We moved the approximate string matching application (hich was in this chapter in the second edition) io the string matching chapter (See- tion 11.5). The exercises include some other new applications.Preface Our teaching experience has pinpointed paricularareas where students had difficulties with concepts related to P and NP (Chapter 13), particularly nondeterministic algorithms and polynomial tcansformations. We rewrote some definitions and examples to make the ‘concepts clearer. We added a shirt section on approximation algorithms for the traveling salesperson problem and a section on DNA computing, Instructors who used the second edition may particularly want to note that we changed some conventions and terminology (usually 10 conform to common usage). Array indexes now often begin at 0 instead of I. (In some cases, where numbering from 1 was clearer, vwe left it that way.) We mow use the term depth rather than level for the depth of a node jn a tree. We use height instead of depth for the maximum depth of any node in a tee. In the second edition, a path in a graph was defined to he what is commonly called a simple path; we use the more general definition for path in this edition and define simple path separately. A directed graph may now contain a self-edge Exercises and Programs Some exercises are somewhat open-ended, For example, one might ask for a good lower bound for the complexity of a problem, rather than asking students t0 show that a given function is a lower bound. We did this for two reasons. One is to make the form of the question move realistic; a solution must be discovered as well as verified. 
The other is that it may be hard for some students to prove the best known lower bound (or find the most efficient algorithm for a problem), but there is still a range of solutions they can offer to show their mastery of the techniques studied. Some topics and interesting problems are introduced only in exercises. For example, the maximum independent set problem for a tree is an exercise in Chapter 3, the maximum subsequence sum problem is an exercise in Chapter 4, and the sink finding problem for a graph is an exercise in Chapter 7, Several NP-complete problems are introduced in exercises in Chapter 13, The abilities, background, and mathematical sophistication of students at different uni- versities vary considerably, making it difficult to decide exactly which exercises should be marked ("starred") as “hard” We starred exercises that use more than minimal mathemat- ies, require substantial creativity, or equire a long chain of reasoning. A few exercises have two stars. Some starred exercises have hints The algorithins presented in this book are not programs: that is, many details not important to the method of the analysis are omitted, Of course, students should know how to implement efficient algorithms in efficient, debugged programs. Many instructors may teach this course as a pure “theory” course without programming. For those who want to assign programming projects, mast chapters include a list of programming assignments, ‘These are brief suggestions that may need amplification by instructors who choose to use them Selecting Topics for Your Course Clearly the amouat of material and the particular selection of topics (o cover depend on the Particular course and student population, We present sample outlines far (wo undergraduate courses and one graduate course.Preface xd This outline corresponds approximately to the senior-level course Sara Base teaches at San Diego State University in a 15-week semester with 3 hours per week of leciure, Chapter |: The whole chapter is assigned as reading but T concentrate on Sections 1.4 and 1.3 in class Chapter 2: Sections 2.1 through 2.4 assigned as reading Chapter 3: Sections 3.1 through 3.4, 3.6, and 3.7 assigned as reading with light cover- age in class, Chapter 4: Sections 4.1 thraugh 4.9. Chapter 5: Sections 5.| through 5.2, 5.8, and some of 5.4 Chapter 7: Sections 7.1 through 7.4 and either 7.5 or 7.6 and 7.7. Chapter 8: Sections 8.1 through 8.3 and brief mention of 8.4. Chapter 1: Seetions 11.1 through #14, Chapter 13: Sections 13.1 through 13.5, 13.8, and 13.9. ‘The next outline is the jumior-tevel course Allen Van Gelder teaches at the University of California, Santa Cruz, in 4 10-weck quarter with 3.5 hours per week of lecture. Chapter 1: Sections 1.3 and 1.5, and remaining sections as reading, Chapter 2: Sections 2.1 through 2.3, and remaining sections as reading. Chapter 3: All sections are touched on; a lot is left for reading, Chapter 4: Sections 4.1 through 4.9, Chapter 5: Possibly Section 5.4, the average linear time algorithm only. Chapter 6: Sections 6.4 through 6.6. Chapter 7: Sections 7.1 through 7: ‘Chapter 8: The entire chapter. Chapter 9: Sections 9.1 through 9.4. Chapter 10; Possibly Sections 10.1 through 10.3, but usually no time. For the firs-year graduate course at the University of California, Santa Cruz {also 10 weeks, 3.5 hours of lecture), the above material is compressed and the following additional topics are covered. Chapter $: The entire chapter. 
Chapter 6: The remainder of the chapter, with emphasis on amortized analysis. Chapter 10: The entire chapter. Chapter 13: Sections 13.1 through 13.3, and possibly Section 13.9. ‘The primary dependencies among chapters are shown in the following diagram with solid lines; some secondary dependencies are indicated with dashed lines, A secondary dependency means that only a few topics in the earlier chapler are needed in the later chapter, or that only the more advanced sections of the later chapter require the earlier one,ati Preface While material in Chaplers 2 and 6 is important to have seen, a tot of it might have been covered in an earlier course. Some sections in Chapter 6 are important for the more advanced parts of Chapter 8. We tike to remind readers of common themes or techniques, so we often refer back to earlier sections, many of these references can be ignored if the earlier sections were not covered. Several chapters have @ section on lower bounds, which benefits from the ideas and examples in Chapter 5, but the diagram does not show thar dependency because many instructors do not caver lower bounds. We marked ("started") sections that contain more complicated mathematics or more ‘complex or sophisticated arguments than most others, but only where the material is not central 10 the book. We also starred one or two sections that contain optional digressions ‘We have not starred a few sections that we consider essential to a course for which the Book is used, even though they contain a lot of mathematics. For example, at lenst some of the material in Section 1.5 on the asymptotic growth rate of functions and in Section 3.7 on solutions of recurrence equations should be covered. Acknowledgments We are happy 10 take this opportunity to thank the peaple who helped in big and small ways in she preparation of the third edition of this book. Sara Baase acknowledges the influence and inspiration of Dick Karp, wo made the subject of computational complexicy exciting and beautiful in his superb leutures, Allen Van Gelder acknowledges the insights gained from Bob Floyd, Don Knuth, Emst Mayr, ‘Vaughan Pratt, and Jeff Ullman; they all teach more than is “in the book.” Alten also wishes to acknowledge colleagucs David Helmbold for many discussions on how to present algorithms effectively and on fine points of many algorithms, and Charlie McDowell for help on many of the aspects of Java that are covered in this book's appendix, We thank Lila Kari for reading an early draft of the section on DNA computing and answering our questions. OF course, we'd have nothing to write about without the many people who did the original research that provided the material we enjoy leaming and passing on to new generations of students, We thank them for their work. Jn the years since the second edition appeared, severat students and instructors who used the book sent in lists of errors, typos, and suggestions for changes. We don’t have complete list of names, but we appreciate the tine and thought that went into their letters ‘The surveys and manuscript reviews obtained by Addison-Wesley were especially helpful. Our shanks to Iliana Bjotling-Sachs (Lafayette College), Mohammad B. Dadfar (Bowling Green State University}, Daniel Hirschberg (University of Catifornia at irvine),NOES NLA Preface Mitsunori Ogihara (University of Rochester), R. W. Robinson (University of Georgia), ‘Yaakov L. Varol (University of Nevada, Reno), Wiliam W. 
White (Southern Iinois Uni- versity at Edwardsville), Dawn Wilkins (University of Mississippi), and Abdou Youssef (George Washington University) We thank our editors at Addison-Wesley, Maite Suare2-Rivas and Karen Wemnholm, for tii confideace and patience in working with us on this project that oftcn departed from standard production procedures and schedules. We thank Joan Flaherty for her painstak- ingly careful copy editing and valuable suggestions for improving the presentation. Brooke Albright’s careful proofreading detected many errors that had survived earlier scrutiny; of course, any that remain are the fault of the authors, We thank Keith Mayers for assisting us in various ways. Sara thanks him for not reminding her too often that she broke her wedding vow to work less than seven days a week, Sara Baase, San Diego, California hutp:/Awww-rohan.sdsu.edu/faculty/baase Allen Van Gelder, Sante Crez, California http: //wwww.cse.ucse.edu/personnel/faculty/avg html June, 1999 xiiiContents Preface Analyzing Algorithms and Problems: Principles and Examples 1 1.1 Induction 2 1.2 Javaas an Algorithm Language 3 1.3. Mathematical Background — 11 4.4 Analyzing Algorithms and Problems 30 1.3. Classifying Functions by Their Asymptotic Growth Rates 43 1.6 Searching an Ordered Array 53 Exercises 61 Notes and References 67 Data Abstraction and Basic Data Structures 69 2.1 Intreduction 70 2.2. ADT Specification and Design Techniques 71 23° Elementary ADTs—Lists and Trees 73 24° Stacks and Queues 86 25 ADTs for Dynamic Sets 89 Exercises 95 Notes and References 100 Recursion and Induction 101 3.1 Introduction — 102 3.2. Recursive Procedures 102 3.3. What IsuProof? 108 3.4 Induction Proofs. 111 3.5 Proving Comeciness of Procedures 118Contents 36 37 Recurrence Equations 130 Recursion Trees 134 Exercises 141 Notes and References 146 Sorting 149 al 42 43 44 AS 46 47 48 49 4.10 4.1 Introduction 150 Insertion Sort 151 Divide and Conquer 157 Quicksort 159 ‘Merging Sorted Sequences 171 Mergesort 174 Lower Bounds for Sorting by Comparison of Keys 178 Heapsort 182 ‘Comparison of Four Sorting Algorithms 197° Shellsort 197 Radix Sorting 201 Exercises 206 Programs 221 Notes and References 221 Selection and Adversary Arguments 223 SA $2 53 54 $5 5.6 Introduction 224 Finding max and min 226 Finding the Second-Largest Key 229 ‘The Selection Problem 233 A Lower Bound for Finding the Median 238 Designing Against an Adversary 240 Exercises 242 Notes and References 246 Dynamic Sets and Searching 249 61 62 63 64 65 66 67 Introduction — 250 Amray Doubling 250 Amortized Time Analysis 251 Red-Black Trees 253 Hashing 275 Dynamic Equivalence Relations and Union-Find Programs 283 Priority Queues with a Decrease Key Operation 295 Exercises 30210 Contents xvii Programs 309 Notes and References 309 Graphs and Graph Traversals 313 7.1 Intoduetion 314 7.2 Definitions and Representations 314 73° Traversing Graphs 328 7.4 ~~ Depth-First Search on Directed Graphs 336. 7.5 Strongly Connected Components of a Directed Graph 357 7.8 Depth-First Search on Undirected Graphs 364 7.1 Biconnected Components of an Undirected Graph 366 Exercises 375 Programs 384 Notes and References 385 Graph Optimization Problems and Greedy Algorithms 387 8.1 Introduction 388 8.2. Prim’s Minimum Spanning Tree Algorithm 388 83. 
Single-Source Shortest Paths 403 84° Kruskal’s Minimum Spanning Tree Algorithm 412 Faercises 416 Programs 421 Notes and References 422 Transitive Closure, All-Pairs Shortest Paths 425 9.1 Introduction 426 9.2 The Transitive Closure of a Binary Relation 426 9.3 Warshall’s Algorithm for Transitive Closure 430 9.4 AlbPairs Shortest Paths in Graphs 433 9.5 Computing Transitive Closure by Matrix Operations 436 9.6 Multiplying Bit Matrices—Kronrod’s Algorithm 439 Exercises 446 Programs 449 Notes and References 449 Dynamic Programming 451 10.1 Introduction 452 10.2 Subproblem Graphs and Their Traversal 453, 10.3, Multiplying a Sequence of Matrices 457xviii 11 12 13 Contents 10.4 Constructing Optimal Binary Search Trees 466 10.5. Separating Sequences af Words into Lines 471 10.6 Developing a Dynamic Programming Algorithm 474 Exercises 475 Programs 481 Noles and References 482 String Matching 483 11,1 Introduction 484 11.2 A Straightforward Solution 485 11.3 The Knuth-Morris-Pratt Algorithm 487 114 The BoyerMoore Algorithm 495 115. Approximate String Matching 504 Exercises 508 Programs 512 Notes and References $12 Polynomials and Matrices 515 12.1 Introduction 516 12.2 Evaluating Polynomial Functions 516 12.3. Vector and Matrix Multiplication 522 12.4 The Fast Fourier Transform and Convolution $28 Exercises $42 Programs 546 Notes and References 546 N?-Complete Problems 547 13.1, Introduction $48. 13.2 PandNP 548 13.3 NP-Complete Problems 559 13.4 Approximation Algorithms $70 13.5 BinPacking 372 13.6 The Knapsack and Subsct Sum Problems — 577 13.7 Graph Coloring 581 13.8 The Traveling Sclesperson Problem 589 13.9 Computing withDNA 592 Exercises 600 Notes and References 608Contents xix 14 Parallel Algorithms 611 141 142 143 144 145 146 147 Al AQ AS Ad AS AG AT Introduction 612 Parallelism, the PRAM, and Other Models 612 Some Simple PRAM Algorithms 616 Handling Write Conflicis 622 Merging and Sorting 624 Finding Connected Components 628 A Lower Bound for Adding n Integers 641 Exercises 643 Notes and References 647 Java Examples and Techniques 649 Tniroduetion 650 AJava Main Program 651 A Simple Input Library 656 Documenting Java Classes 658 Generic Order and the “Comparable” Interface 659 Subclasses Extend the Capability of Their Superclass 663 Copy via the “Cloneable” Interface 667 Bibliography 669 Index 679Analyzing Algorithms and Problems: Principles and Examples 1.1 Introduction 1.2 Javaas an Algorithm Language 13 Mathematical Background 14 Analyzing Algorithms and Problems 1.5 Classifying Functions by Their Asymptotic Growth Rates 1.6 Searching an Ordered Array14 Chapter 1 Analyzing Algorithms and Problems Principles and Examples Introduction To say that a problem is solvable algorithmically means, informally, that a computer program can be written that will produce the correct answer for any input if we let it run Tong enough and allow it as much storage space as it needs. In the 1930s, before the advent of computers, mathematicians worked very actively t formalize and study the notion of aan algorithm, which was then interpreted informally to mean a clearly specified set of simple instructions to be followed to solvea problem ar compute a function. Various formal ‘models of computation were devised and investigated. Much of the emphasis in the early work in this field, called computabulity theory, was on describing or characterizing those problems that could be solved algorithmically and on exhibiting some problems that could not be. 
One of the important negative results, establistied ky Alan Turing, was the proof of the unsolvability of the “halting problem.” The halting problem is to determine whether an arbitrary given algorithm (or computer program) will eventually halt (rather than, say, get into an infinite loop) while working on a given input. There cannot exist a computer program that solves this problem. Although compatability theory has obvious and fundamental implications for com- puter science, the knowledge that a problem can theoretically be solved on a computer is not sufficient to tell us whether itis practical 10 do so, For example, a perfect chess-playing program could be written. This would not be a very difficult task; there are only a finite number of ways to arrange the chess pieoes om the board, and under certain rules a game ‘must terminate after a finite number of moves. The program could consider each of the computer's possible moves, each of its opponent's possible responses, each of its possi- ble responses to those moves, and so on until ezch sequence of possible moves reaches an end. Then since it knows the uitimate result of each move. the computer can choose the best one. The number of distinct arrangements of pieces on the board that it is reasonable to consider (much less the number of sequences of moves) is roughly 10° hy some esti- mates, A program that examined them all would take several thousand years to run, Thus such a program has not been run. Numerous problems with practical applications can be solved—that is, programs can be written for them—bor the time and storage requirements are much too great for these programs to be of practical use. Clearly the time and space requirements of a program are of practical importance. They have become, therefore, the subject of theoretical study in the area of computer science called computational complesity. One branch of this study, swhich is not covered in this book, is concerned with sctting up a formal and somewhat abstract theory ofthe complexity of computable functions. (Solving a problem is equivalent to computing a function from the set of inputs to the set of outputs.) Axioms for measures of complexity have been formulated; they ate basic and general enough so that either the number of instructions executed or the number of storage bits used by a program can be taken as a complexity measure. Using these axioms, we can prove the existence of arbitarily complex problems and of problems for which there is no best program. ‘The branch of computational complexity studied in this bock is concerned with an- alyaing specific problems and specific algorithms. This book is intended to help readers build a repertoire of classic algorithms to solve common problems, some general design1.2 1.2 Java as an Algorithm Language techniques, tools and principles for analyzing algorithms and problems, and methods of proving correctacss. We will present, study, and analyze algorithms to solve a variety of problems for which computer programs are frequently used. We will analyze the amount of time the algorithms take to execute, and we will also often analyze the amount of space used by the algorithms. In the course of describing algorithms for a variety of problems, swe will see that several algorithm design techriques often prove useful. Thus we will pause now and then to talk about some general techniques. 
such as divide-and-conquer, greedy algorithms, depth-first search, and dynamic programming, We will also study the com- putational complexity of the problems themselves, that is, the time and space inherently required to solve the problem no matter what algorithm is used. We will study the class of NP-complete problems—problems for which no efficient algorithms are known—and consider some heuristics for getting useful results. We will also describe an approach for solving these problems using DNA instead of electronic computers. Finally, we will intro- duce the subject of algorithms for parallel compares. In the following sections we outline the algorithm language, review some background and tools that will be used throughout the book, and illustrate the main concepts involved in analyzing an algorithm. Java as an Algorithm Language We chose Java as the algorithm language for this book by balancing several criteria. The algorithms should be easy to read, We want to focus on the strategy and techniques of an al- gorithm, not declarations and syntax details of concern to a compiler. The language showkt support data ahstraction and problem decomposition, to make it easy to express algorithmic ideas clearly. The language should provide a practical pathway to implementation. It should be widely available and provide support for program development, Actually implementing and running algorithms can enhance the student’s understanding greatly, and should not tum into a frustrating battle with the compiler and debugger. Finally, because this book is teaching algorithms, not a programming language, it should be reasonably easy to trans- late an algorithm to a varicty of languages that readers might wist to use, and specialized language features should be minimized, Java showed up well by several of our criteria, although we would aot claim it is ideal. It supports data abstraction naturally. Its type-safe, meaning that objects of one type. cannot be used in operations intended for a different type; arbitrary type conversions (called casts”) are not permitted, either. There is an explicit boolean type, so if one types “ (the assignment operator) when “==" (the equality operator} was intended, the compiler catches it. Java does not permit pointer manipulations, which are a frequent source of obscure errors; in fact, poimters are hidden from the programmer and handled automatically behind the scones. At run time, Java checks for out-of-range array subscripts, and other incon- sistencies that might be other sources of obscure errors. It performs “garbage collection,” ‘which means that jt recycles the storage space of objects that are no longer referenced; this takes a big burden of space management off the programmerChapter 1 Analyzing Algorithms and Problems: Principles and Examples On the downside, Java has many of the same terse, cryptic syntax features of C. The object structure may force inefficiencies in time and space. Many Java constructs require greater verbosity than other languages, such as C, for instance. Although Java has many specialized features, the algorithms presented in this book avoid most of them, in the interest of being language-independent, In fact, some steps within an algorithm may be stated in pseudocode for easier readability. This section de- scribes a small subset of Java that we use for the book, and the pseudocode conventions that we Use to improve readability of the algorithms. 
The Java-specitic Appendia A gives some additional implementation details for readers who want to gel a Java progcam running, but these details are not pertinent (o understanding the main text. 1.2.1 A Usable Subset of Java A thorough acqueintance with Java is not important (0 understand the algorithms in this text. This section gives a brief overview ofthe Java features that do appear, for those readers who wish to follow the implementation issues closely, In some cases we point out cbject- oriented features of Javathat might be used, but which we avoid so that the text can be fairly Janguage-independent; this is mainly for the benefit of readers who are familiar with some other object-oriented language, such as C++, but who are not completely familiar with Java. A sample Java “main program” appears in Appendix A, Many books are available for in-depth coverage of the Java language. Readers who are well acquainted with Java will undoubtedly notice many instances in Which some nice Java feature could have been used, However, the concepis behind the algorithms do not require any special features, and we want these concepts ta be easy to grasp and apply in a vatiety of languages, so we leave it to the readers, once they have grasped the concepts, ta tailor the implementations to their favorite language, Readers familiar with C syntax will recognize many similarities in Java syntax: Blocks are delimited by curly braces, “{" and °}"; squace brackeis, “I” and “J, enclose array indexes. As in C and C++, a two-dimensional array is really a one-dimensional array whose elements are themselves one-dimensional arrays, so two pairs of square brackets are needed to access an element, as in “matrix{i]f]”. Operators “==”, "and “ re the keyboard versions of the mathematical relational operators “=", “~", “<", and “2”. respectively. In pseudocode the text usually prefers the mathematical versions. Text examples use the “++” and “--" operators to increment and decrement, but never use them embedded in other expressions. There are also the operators “4=", "==", “a=”, and “/=" adopted from C. For example, pt=q: /s Add q top. +/ y-= x; // Subtract x from y. As just illustrated, comments extend from “//” to end-of- CH. Functivn headers normally look the same in Java zs in C. The function header specifies the parameter eype signature in parentheses after the function name; it specifies the return ype before the function name, The combination of return type and parameter type signature is called the function's je type signanure. or protetype, Thus +. Or trom “4+ “toe”. as in1.2 Java as an Algorithm Language int getMin(PriorityQ pq} tells us that getMin takes one parameter of type (or class) PriorityQ and returns type int, Java has a few primitive zypes and all remaining types are called classes, The primitive types are logical (boolean) and numerical (byte, char, short, int, long, float, and double} types. All classes (nonprimitive types) in Java are reference classes. Behind the scenes, variables dectared in classes are “pointers”; their values are addresses. Instances of classes, are called objects. Declaring a variable does not create an object. Generally, objects are created with a “new” operstor, which retums a reference to the new object. The data fields of an abject are called instance fields in object-oriented terminology. The binary dot operator is used to access instance fields of un object. 
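As a concrete (if trivial) instance of the conventions just described, here is a complete static function of our own; it is not one of the book's algorithms, and in a real Java program it would be declared inside some class. Its return type is int and its parameter type signature is (int, int), so its prototype is int max2(int, int).

    static int max2(int x, int y)      // prototype: int max2(int, int)
    {
        if (x >= y)
            return x;
        return y;
    }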
Example 1.1 Creating and accessing Java objects

For this example, let's assume that date information has the following nested logical structure:

    year
        number
        isLeap
    month
    day

That is, using informal terminology, year is a compound attribute that consists of the boolean attribute isLeap and the integer attribute number, while month and day are simple integer attributes.

To reflect this nested structure, we have to define two classes in Java, one for the whole date and another for the year field. Assume we choose the names Date and Year, respectively, for these classes. Then we would declare number and isLeap as instance fields in the Year class and declare year, month, and day as instance fields in the Date class. Moreover, we would most likely define Year as an inner class of Date. The syntax is shown in Figure 1.1.

    class Date {
        public Year year;
        public int month;
        public int day;

        public static class Year {
            public int number;
            public boolean isLeap;
        }
    }

Figure 1.1 Java syntax for the Date class with an inner Year class

Without the public keyword, the instance fields would not be accessible outside of the Date and Year classes; for simplicity, we make them public here. The reason for declaring the inner class, Year, to be static is so we can create an instance of Year that is not associated with any particular Date object. All inner classes will be static in this book.

Suppose we have created a Date object that is referenced by variable dueDate. To access the instance field year in this object, the dot operator is used, as in "dueDate.year." If the instance field is in a class (as opposed to being in a primitive type), then further dot operators access its instance fields, as in "dueDate.year.isLeap." The assignment statement copies only the reference, or address, of an object in a class; it does not make a copy of the instance fields. For example, "noticeDate = dueDate" causes variable noticeDate to refer to the same object as variable dueDate. Therefore the following code fragment would probably be a logical error:

    noticeDate = dueDate;
    noticeDate.day = dueDate.day - 7;

See Section 1.2.2 for additional discussion. ∎

Control statements if, else, while, for, and break have the same meanings in Java as in C (and C++) and are used in this book. Several other control statements exist, but are not used. The syntax for while and for are

    while ( continuation condition )
        body

    for ( initializer ; continuation condition ; incrementer )
        body

where "initializer" and "incrementer" are simple statements (without "{ }"), "body" is an arbitrary statement, and "continuation condition" is a boolean expression. The break statement causes an immediate exit from the closest enclosing for or while loop. (It also exits from switch, but switch is not used in this book.)

All classes form a tree (also called a hierarchy), with the Object class being the root. When declaring a new class, it is possible to say it extends a previously defined class, and the new class becomes a child of the previously defined class in the class tree. We will not create such structures in this text, to keep the code as language-independent as possible; however, a few examples are given in Appendix A. When the new class is not declared to extend any class, then it extends Object by default. Complex class structures are not needed for the algorithms studied in this text.
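Pulling the pieces of Example 1.1 together, here is a minimal fragment of our own (not from the book; the field values are arbitrary) that creates a Date and its nested Year with new, fills the instance fields through the dot operator, and then shows the reference-copying behavior discussed above.

    Date dueDate = new Date();                // create the outer Date object
    dueDate.year = new Date.Year();           // the nested Year object must be created too
    dueDate.year.number = 2000;
    dueDate.year.isLeap = true;
    dueDate.month = 6;
    dueDate.day = 15;
    System.out.println(dueDate.year.number);  // prints 2000

    Date noticeDate = dueDate;                // copies the reference only; both variables
    noticeDate.day = 8;                       //   now name the same object, so this also
    System.out.println(dueDate.day);          //   changes dueDate.day; prints 8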
Operations on objects arc called methods in object-oriented terminology; however, ‘we will restrict ourselves to the use of static methods, which are simply procedures and functions. In our terminology a procedure is 4 named sequence of computation steps that may be called (with parameters); a function is a procedure that also retums a value to the caller. In Java a procedure that returns no value is declared as having retumn type void: C and C++ are similar in this respect. The term static is technical Java terminology, which means that the method can be applied to any abject or objects of the appropriate types (an object’s type is its class) according to the method's type signature (often called ' Iralso exits fram switch, but switch isnot used i this hack1.2 Java as an Algorithm Language its prototype). A static method is not “attached” to any particular object. Static methods behave like the usual functions and procedures af programming languages like C, Pascal, and so on, However, their names must he prefixed by the class in witich they are defined, 4s in “List irstlsd” to apply method first defined in class List to parameter x. In Java, instance fields of an object are private by defaull, which means that they can be accessed only by methods (Junctions and procedures) that are defined within the class. This is consistent with the theme of abstract data type (ADT) design that abjects should be accessed only through the operations defined for the ADT. The code that implemeats these ADT operations (or static methods, or funetions and procedures) exists within the class and is aware of the private instance fields and their types. Methods ase also private by default. buc usually are specified as “public.” so that methods defined in other classes may call them, However, “low-level” methods thet should be called only by other ruethods in the same class may also be private, ‘The clients of the ADT (procedures and functions that call the ADE) are implemented outside the class in which the ADT “lives,” so they have access only ta the public parts af the ADT class. The maintenance af private data is called encapsulation, or information hiding Instance fields of an object retain the values thatare assigned to them for the lifetime of the object, or until overwritten by a subsequent assignment. Here we can see the advantage of having them private to the class in which they are defined. A public instance field could be assigned an arbitrary value by any part of the overall program. A private instance field can be assigned a value only by going through a method for the ADT class that is designed for the purpose. This method can perform other computations and tests to be sure that the value assigned to an instance field is consistent with the ADT specifications, and is consistent with values stored in other instance fields of the same object. ‘A new object is created by the phrase “new className(),” for example: Date dueDate = new Date(); ‘This statement causes Java to invoke a default constructor for the Date clus. A constructor reserves storage for a new object (or instance) of the class and returns a reference (probably aan address) for accessing this object, The instance fields of this new chject might not be initialized, ava sidelight: The programmer may write additional constructor functions for a class, the bodies of which may initialize various instance fields and perform other computations. 
In the interest of language-independence, this text does not use such constructors, so details are omitted.

Arrays are declared somewhat differently in Java than in C and C++, and their properties are also slightly different. The Java syntax to declare an array of integers (more precisely, to declare a variable whose type is "array of integers") is "int[] x," whereas C might use "int x[]." This statement does not initialize x; that is accomplished with

    x = new int[howMany];
For readers acquainted with C++, it is worth pointing out that Java does not permit the programmer to define new meanings for operators. This text uses such operators for1.2 Java as an Algorithm Language readability in pseudocode (¢.g...x < y, where. and y are in some nonnumerie class, stch as String). However, if you define a class and you develop an actual Java program with it, you must write named functions (e.g., less()) and call them to compare abjects in your class. 1.2.2 Organizer Classes ‘We coin the term organizer class, which is not a standard Java term, to describe a very sim- ple class that merely groups several instance fields. This construct fuliills a role somewhat analogous to the C struct and the Pascal or Modula record; analogous constructs exist in Lisp, ML, and most other programming languages. Organizer classes are diametrically op- posite from abstract data types in their purpose; they merely organize some storage, but do not limit access to it and do not provide any customized operations on it. It is often conve- nient to define an organizer class within some other class; in this case, the organizer class is called an inner class in Java terminology. ‘An organizer class has just one method, called copy. Since variables are references to objects in Java, the assignment statement copies only the reference, not the fields of the object, as was illustrated in Example 1.1 with dueDate and noticeDate. If these variables ate declared in an organizer class named Date, then we could use the staternents noticeDate = Date.copy(dueDate); noticeDate.day ~ dueDate,day ~ 7; to copy the fields of dueDate into a new object referenced by noticeDate, then modify the day field of noticeDate only. Definition 1.1 The copy function for organizer classes ‘The general rule for how the copy function (or method) in an organizer class should assign values to the instance fields of the new object (illustrated by assuming object d is being copied into a new object d2) is as follows: 1. If the instance field (say year) is in another organizer class, then the copy method for that class is invoked, as in d2.year = Year.copy(d.vear), 2. Ifthe instance field (say day) is not in an organizer class, a simple assignment is used, as in d2.cay = d.day, ‘The complete example is given in Figure 1.2, = The programmer must ensure that cycles do not occur in the definitions of organizer classes, or else copy might not terminate. Of course, a new object in an organizer class can also be created in the usual way: Date someDate = new Dated; Java sidelight: Java provides facility for making a one-level copy of an object without having to write out each assignment statement, based on the elane method, but this will not handle nested structures such as Date automatically: you will still need to write some code for these cases. Appendix A gives the code for a “generic” copy! level function,10 Chapter 1 Analyzing Algorithms and Problems: Principles and Examples class Date { public Year year: public int day; public static class Year 1 public int number; public boolean isLeap; public static Year copy(Year y) {Year y2 = new Year(); y2.number = y.number; y2.isLea return y2; 4 5 public static Date copy(Date d) {Date d2 = new Dated; d2.year = Year.copyid.yeary: // organizer class d2,month = d.month; d2.day = d.day; return d2; i public static int_defaultCentury; i Figure 1.2. An organizer class Date with an inner organizer class Year An organizer class contains only public instance fields. 
If the stattc keyword also appears in the field declaration, the field is not associated with any particular object, but is, essentially a global variable, xample 1.2 Typical onganizer classes In Figure 1.2 the classes of Example 1.1 are embellished with copy functions, so they will qualify as organizer classes. As we see, the definition of copy is mechanical, though tedious. Is details will be omitied from future examples. For completeness, we included defaultCentury as an example of a “global variable,” although most organizer classes will not contain globai variables, m13 1.3 Mathematical Background ‘To summarize, we invented the term organizer class to denote a class that simply ‘groups together some instance fields and defines a function to make copi 1.2.3 Java-Based Pseudocode Conventions ‘Most algorithms in this book use Java-based pseudccode, rather than strict Java, for easier seadability. The following conventions ate used (except in the Java-specific Appendix A). 1. Block delimiters (“{" and “F") are omitted. Block boundaries are indicated by indenta- tion. 2. The keyword static is omitted from method (function and procedure) declarations Alll methods declared in the text are static. (Nonstatic built-in Java methods appear occasionally; in particular, s.leng:h0 is used to obtain the length of strings.) The keyword static does appear where needed for instance fields and inner classes. 3. Class name qualifiers are omitted from method (function and procedure) culls. For example, x = cons(z, x) might be written when the Java syntax requires x = IntList cons(z, x). (The IntList class is described in Section 2.3.2.) Class name qualifiers are required in Java whenever static methods are called from outside the class in which they are defined, 4, Keywords to control visibility, public, private, and protected, are omitted. Placing all files related to one Java program in the same directory eliminates the need to deal with visibility issues. 5. Mathematical relational operators "4." “S.” and “=” are usually written, instead of their keyboard versions. Relational operators are used on types where the meaning is clear, such as String, even though this would be invalid syntax in Jaya, 6, Keywords, which are either reserved words or standard parts of Java, are set in this font: Int, String. Comments ave set in this font. Code statements and program varlable names are set in this font. However, pseudocode statements are set in the regular font of the text, like this sentence. Occasional departures from this scheme occur when we are making a specific point about the Java language. Mathematical Background ‘We use a variety of mathematical concepts, tools, and techniques in this book. Most should already be familiar to you, although a few might be new, This section collects them to provide a ready reference, 2s well as a brief review, Proof concepts are covered in greater depth in Chapter 3. 1.3.1 Sets, Tuples, and Relations This section provides informal definidions and a few elementary properties of sets and related concepts. A set is a collection of distinct elements that we wish to treat as a single abject. Usually the elements are of the same “type” and have some additional common W2 Chapter 1 Analyzing Algorithms and Problems: Principles and Examples properties that make it useful to think of them as one object. The notation ¢ € $ is read “element ¢ is a member of set S” or, briefly, “eis in $.” Notice that ¢ and $ are different types in this case. For example. 
if ¢ is an integer, S is a set of integers, which is different from being an integer. A particular set is defined by listing or describing its elements between a pair of curly braces, Examples of this notation are Seabee}, 8 x|.xisanimegerpower of2}, $= {1,...em). The expression for S; is read “the set of all elements x such shat x is an integer power of 2" The “|" symbol is read “such that” in this context. Sometimes a colon (*:") is used in this place. The ellipsis “...” may be used when the implicit elements are cleat. all elements of one set, 51, are also in another set, So, then $; is said to be a subset of Szand Sp is said to be a superset of S1. The notations are Sy © Sy and S; > S). To denow. that 5; is a subset of Sp and is nor equal 10 $2, we write Sy $2 of Sy > 5). [is important not to confuse “e" with °C." The former means “is an element in” and the latter means “is a set of elements contained within.” The empty set, denoted by @, bas no elements, s0 it is a subset of every set A set has no inherent order. Thus, in the above examples, 5) could have been defined as (b, ¢, a} and Sy could have been defined as {| 1
(If k > n, it is impossible to make k distinct choices, so the result is 0.) Therefore there are n(n - 1) ··· (n - k + 1) distinct sequences of k distinct elements. But we saw that a specific set of k elements can be represented as k! sequences. So the number of distinct subsets of size k, drawn from a set of n, is

    C(n, k) = n(n - 1) ··· (n - k + 1) / k! = n! / (k! (n - k)!).    (1.1)

Since every subset must have some size from 0 through n, we arrive at the identity

    ∑_{k=0}^{n} C(n, k) = 2^n.    (1.2)

Tuples and the Cross Product

A tuple is a finite sequence whose elements often do not have the same type. For example, in a two-dimensional plane, a point can be represented by the ordered pair (x, y). If it is a geometric plane, x and y are both "length." But if it is a plot of running time vs. problem size, then y might be seconds and x might be an integer. Short tuples have special names: pair, triple, quadruple, quintuple, and so on. In the context of "tuple" these are understood to be ordered; in other contexts "pair" might mean "set of two" instead of "sequence of two," and so on. A k-tuple is a tuple of k elements.

The cross product of two sets, say S and T, is the set of pairs that can be formed by choosing an element of S as the first element of the tuple and an element of T as the second. In mathematical notation we have

    S × T = {(x, y) | x ∈ S, y ∈ T}.    (1.3)

Therefore |S × T| = |S| |T|. It often happens that S and T are the same set, but this is not necessary. We can define the iterated cross product to produce longer tuples. For example, S × T × U is the set of all triples formed by taking an element of S, followed by an element of T, followed by an element of U.

Relations and Functions

A relation is simply some subset of a (possibly iterated) cross product. This subset might be finite or infinite, and can be empty or the entire cross product. The most important case is a binary relation, which is simply some subset of a simple cross product. We are all familiar with many examples of binary relations, such as "less than" on the reals. Letting R denote the set of all reals, the "less than" relation can be defined formally as

    {(x, y) | x ∈ R, y ∈ R, x < y}.
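Equation (1.1) translates directly into code. The following strict-Java sketch (an illustration written for this discussion, not code from the text) computes C(n, k) from the product form n(n - 1) ··· (n - k + 1) / k!, and its main method checks the identity (1.2) for n = 10.

    public class Binomial
    {
        // Compute C(n, k) by the product form of Equation (1.1).
        // Dividing by i at every step keeps each intermediate value equal to a
        // binomial coefficient, so the division is always exact.
        public static long choose(int n, int k)
        {
            if (k < 0 || k > n)
                return 0;            // impossible to make k distinct choices
            long c = 1;
            for (int i = 1; i <= k; i++)
                c = c * (n - k + i) / i;
            return c;
        }

        public static void main(String[] args)
        {
            int n = 10;
            long sum = 0;
            for (int k = 0; k <= n; k++)
                sum += choose(n, k);
            System.out.println(sum == (1L << n));   // identity (1.2): prints true
        }
    }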
For b > 1 and x > 0, log_b x (read "log to the base b of x") is that real number L such that b^L = x; that is, log_b x is the power to which b must be raised to get x. ■

The following properties of logarithms follow easily from the definition.

Lemma 1.1 Let x and y be arbitrary positive real numbers, let a be any real number, and let b > 1 and c > 1 be real numbers.

1. log_b is a strictly increasing function; that is, if x > y, then log_b x > log_b y.
2. log_b is a one-to-one function; that is, if log_b x = log_b y, then x = y.
3. log_b 1 = 0.
4. log_b b^a = a.
5. log_b(xy) = log_b x + log_b y.
6. log_b(x^a) = a log_b x.
7. x^(log_b y) = y^(log_b x).
8. To convert from one base to another: log_c x = (log_b x) / (log_b c).  □

Since the log to the base 2 is used most often in computational complexity, there is a special notation for it, "lg"; that is, lg x = log_2 x. The natural logarithm (log to the base e) is denoted by "ln"; that is, ln x = log_e x. When log() is used without any base being mentioned, it means the statement is true for any base.

Sometimes the logarithm function is applied to itself. The notation lg lg(x) means lg(lg(x)). The notation lg^(p)(x) means p applications, so lg^(2)(x) is the same as lg lg(x). Note that lg^(3)(65536) = 2, which is quite different from (lg(65536))^3 = 4096.

Throughout the text we almost always take logs of integers, not arbitrary positive numbers, and we often need an integer value close to the log rather than its exact value. Let n be a positive integer. If n is a power of 2, say n = 2^k for some integer k, then lg n = k. If n is not a power of 2, then there is an integer k such that 2^k < n < 2^(k+1).
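Java's math library provides only the natural logarithm (and log base 10), so Lemma 1.1, part 8, is exactly what one uses in practice to compute lg. The following strict-Java sketch (an illustration, not code from the text) also checks the iterated-logarithm remark above.

    public class LogDemo
    {
        // lg x = log_2 x, computed by the base-conversion rule of Lemma 1.1, part 8.
        public static double lg(double x)
        {
            return Math.log(x) / Math.log(2.0);
        }

        public static void main(String[] args)
        {
            System.out.println(lg(65536.0));               // 16.0, since 65536 = 2^16
            System.out.println(lg(lg(lg(65536.0))));       // lg^(3)(65536): approximately 2.0
            System.out.println(Math.pow(lg(65536.0), 3));  // (lg 65536)^3: approximately 4096.0
        }
    }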
    E(f) = ∑_{e ∈ U} f(e) Pr(e).

This is often called the average value of f, also. The conditional expectation of f given an event S, denoted as E(f | S), is defined as

    E(f | S) = ∑_{e ∈ U} f(e) Pr(e | S) = ∑_{e ∈ S} f(e) Pr(e | S),

since the conditional probability of any event not in S is 0. Expectations are often easier to manipulate than the random variables themselves, particularly when several interrelated random variables are involved, due to the following important laws, which are easily proven from the definitions.

Lemma 1.2 (Laws of expectations) For random variables f(e) and g(e) defined on a set of elementary events e ∈ U, and any event S:

    E(f + g) = E(f) + E(g).
    E(f) = Pr(S) E(f | S) + Pr(not S) E(f | not S).  □

Example 1.5 Conditional probability and order

In Chapter 4 we will consider probabilities in connection with order information gained by doing comparisons. Let's look at an example of that type involving four elements A, B, C, D, which have distinct numerical values, but initially we know nothing about their values or relative values. We will write the letters in order to denote the elementary event that this is their relative order; that is, CBDA is the event that C < B < D < A.
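The laws of expectations in Lemma 1.2 can be checked on a tiny numeric example. In the following strict-Java sketch (invented for illustration, not code from the text) the elementary events are the six faces of a fair die, f(e) is the face value, and S is the event that the face shows an even number.

    public class ExpectationDemo
    {
        public static void main(String[] args)
        {
            double[] f = {1, 2, 3, 4, 5, 6};   // f(e) = face value
            double pr = 1.0 / 6.0;             // Pr(e) for each elementary event

            double ef = 0.0;                   // E(f) = sum over e of f(e) Pr(e)
            for (double v : f)
                ef += v * pr;

            // Conditional expectations given S = "face is even" and its complement.
            double prS = 3.0 / 6.0;
            double efGivenS = (2 + 4 + 6) / 3.0;     // each even face has Pr(e | S) = 1/3
            double efGivenNotS = (1 + 3 + 5) / 3.0;  // each odd face has Pr(e | not S) = 1/3

            System.out.println(ef);                                        // 3.5
            System.out.println(prS * efGivenS + (1 - prS) * efGivenNotS);  // also 3.5, by Lemma 1.2
        }
    }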
Familiar monotonic functions include x^2 for x ≥ 0, and e^x. Less familiar monotonic functions are ⌊x⌋ and ⌈x⌉, showing that monotonic functions need not be continuous. An antimonotonic example is 1/x for x > 0.

Definition 1.8 Linear interpolation function

The linear interpolation of a given function f(x) between two points u and v, u < v, is the function defined by

    f_{u,v}(x) = ((v - x)/(v - u)) f(u) + ((x - u)/(v - u)) f(v),

that is, the straight-line segment joining f(u) and f(v) (see Figure 1.3(a)). ■

[Figure 1.3 Illustrations for convexity discussion: (a) linear interpolation; (b) extension of f(n) to f*(x). The function f is different in parts (a) and (b); in part (b), f*(x) is convex.]

Definition 1.9 Convex functions

A function f(x) is said to be convex if for all u < v, f(x) ≤ f_{u,v}(x) in the interval (u, v). Informally, f(x) is convex if it never curves downward. ■

Thus functions like x, x^2, 1/x, and e^x are convex. The function in Figure 1.3(b) is convex (but not monotonic), whether interpreted on the reals or just on the integers; the function in Figure 1.3(a) is monotonic, but not convex. Also, log(x) and √x are not convex. What about x log(x)?

The following lemmas develop some practical tests for convexity. It is easy to see (and possible to prove) that a discontinuous function cannot be convex. Lemma 1.3 states that it is sufficient to consider equally spaced points to test for convexity, which simplifies things considerably. The proof is Exercise 1.16.

Lemma 1.3

1. Let f(x) be a continuous function defined on the reals. Then f(x) is convex if and only if, for any points x and y,

    f((x + y)/2) ≤ (f(x) + f(y)) / 2.

In words, f evaluated at the midpoint between x and y lies on or below the midpoint of the linear interpolation of f between x and y. Note that the midpoint of the linear interpolation is just the average of f(x) and f(y).

2. A function f(n) defined on integers is convex if and only if, for any n, n + 1, n + 2,

    f(n + 1) ≤ (f(n) + f(n + 2)) / 2.

In words, f(n + 1) is at most the average of f(n) and f(n + 2).  □

Lemma 1.4 summarizes several useful properties of monotonicity and convexity. It states that functions defined only on the integers can be extended to the reals by linear interpolation, preserving properties of monotonicity and convexity. Also, some properties involving derivatives are stated. The proofs are in Exercises 1.17 through 1.19.

Lemma 1.4

1. Let f(n) be defined only on integers. Let f*(x) be the extension of f to the reals by linear interpolation between consecutive integers (see Figure 1.3(b)).
   a. f(n) is monotonic if and only if f*(x) is monotonic.
   b. f(n) is convex if and only if f*(x) is convex.
2. If the first derivative of f(x) exists and is nonnegative, then f(x) is monotonic.
3. If the first derivative of f(x) exists and is monotonic, then f(x) is convex.
4. If the second derivative of f(x) exists and is nonnegative, then f(x) is convex. (This follows from parts 2 and 3.)  □
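Lemma 1.3, part 2, gives a finite test that is easy to run by machine over any range of interest. The following strict-Java sketch (an illustration, not code from the text) applies the midpoint test to f(n) = n lg n, the function asked about above, on a small sample range of integers; passing the test on a range is evidence, not a proof, of convexity.

    public class ConvexityDemo
    {
        // f(n) = n lg n, using the base-conversion rule for lg.
        public static double f(double n)
        {
            return n * Math.log(n) / Math.log(2.0);
        }

        public static void main(String[] args)
        {
            boolean passes = true;
            // Lemma 1.3, part 2: check f(n+1) <= (f(n) + f(n+2)) / 2 on 1..1000.
            for (int n = 1; n <= 1000; n++)
                if (f(n + 1) > (f(n) + f(n + 2)) / 2.0)
                    passes = false;
            System.out.println(passes);   // prints true: n lg n passes the test on this range
        }
    }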
Summations Using Integration

Several summations that arise often in the analysis of algorithms can be approximated (or bounded from above or below) using integration. First, let us review some useful integration formulas:

    ∫ x^c dx = x^(c+1) / (c + 1),  for c ≠ -1,

    ∫_a^b x^c dx = (b^(c+1) - a^(c+1)) / (c + 1),  for c ≠ -1,    (1.13)

    ∫_1^n (1/x) dx = ln n,

    ∫_1^n ln x dx = (x ln x - x) |_1^n = n ln n - n + 1.

If f(x) is monotonic (or nondecreasing), then

    ∫_{a-1}^{b} f(x) dx ≤ ∑_{i=a}^{b} f(i) ≤ ∫_{a}^{b+1} f(x) dx.    (1.16)

Similarly, if f(x) is antimonotonic (or nonincreasing), then

    ∫_{a}^{b+1} f(x) dx ≤ ∑_{i=a}^{b} f(i) ≤ ∫_{a-1}^{b} f(x) dx.    (1.17)

This situation for monotonic f(x) is illustrated in Figure 1.4. Here are two examples that are used later in the text.

[Figure 1.4 Approximating a sum of values of a monotonic (or nondecreasing) function, illustrated with f(x) = lg x: (a) overapproximation, ∑_{i=a}^{b} f(i) ≤ ∫_{a}^{b+1} f(x) dx; (b) underapproximation, ∑_{i=a}^{b} f(i) ≥ ∫_{a-1}^{b} f(x) dx.]

Example 1.7 An estimate for ∑ 1/i

    ∑_{i=1}^{n} 1/i ≤ 1 + ∫_1^n (1/x) dx = 1 + ln n = ln n + 1

by using Equation (1.17). Notice that we split off the first term of the sum and applied the integral approximation to the rest to avoid a divide-by-zero at the lower limit of integration. Similarly,

    ∑_{i=1}^{n} 1/i ≥ ∫_1^{n+1} (1/x) dx = ln(n + 1).

See Equation (1.11) for a closer approximation. ■

Example 1.8 A lower bound for ∑ lg i

    ∑_{i=1}^{n} lg i = 0 + ∑_{i=2}^{n} lg i ≥ ∫_1^n lg x dx

by Equation (1.16) (see Figure 1.4(b)). Now

    ∫_1^n lg x dx = ∫_1^n (lg e)(ln x) dx = lg e ∫_1^n ln x dx
                  = (lg e)(x ln x - x) |_1^n = (lg e)(n ln n - n + 1)
                  = n lg n - n lg e + lg e ≥ n lg n - n lg e.

Since lg e < 1.443,

    ∑_{i=1}^{n} lg i ≥ n lg n - 1.443 n.    (1.18)  ■

Using the ideas of the previous example, but with more precise mathematics, it is possible to derive Stirling's formula, giving bounds for n!:

    (n/e)^n √(2πn) ≤ n! ≤ (n/e)^n √(2πn) e^(1/(12n))  for n ≥ 1.    (1.19)

Manipulating Inequalities

These rules for combining inequalities are frequently useful.

    Transitivity:      If A ≤ B and B ≤ C, then A ≤ C.
    Addition:          If A ≤ B and C ≤ D, then A + C ≤ B + D.
    Positive Scaling:  If A ≤ B and α > 0, then αA ≤ αB.    (1.20)
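As a numerical sanity check on Examples 1.7 and 1.8, the following strict-Java sketch (an illustration, not code from the text) compares the two sums with their integral bounds for n = 1000.

    public class SumBounds
    {
        public static void main(String[] args)
        {
            int n = 1000;
            double harmonic = 0.0;   // sum of 1/i
            double sumLg = 0.0;      // sum of lg i
            for (int i = 1; i <= n; i++)
            {
                harmonic += 1.0 / i;
                sumLg += Math.log(i) / Math.log(2.0);
            }

            // Example 1.7:  ln(n+1) <= sum of 1/i <= 1 + ln n
            System.out.println(Math.log(n + 1) + " <= " + harmonic + " <= " + (1 + Math.log(n)));

            // Example 1.8:  sum of lg i >= n lg n - 1.443 n
            double lgn = Math.log(n) / Math.log(2.0);
            System.out.println(sumLg + " >= " + (n * lgn - 1.443 * n));
        }
    }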
"3 · 4 + 2 is an integer," and "x + 1 > x." Notice that a logical statement need not be true. The objective of a proof is to show that a logical statement is true.

The most familiar logical connectives are "∧" (and), "∨" (or), and "¬" (not), which are also called Boolean operators. The truth value of a complex statement is derived from the truth values of its atomic formulas, according to rules for the connectives. Let A and B be logical statements. Then,

1. A ∧ B is true if and only if A is true and B is true;
2. A ∨ B is true if and only if A is true or B is true, or both;
3. ¬A is true if and only if A is false.

Another important connective for reasoning is called "implies," which we denote with the symbol "⇒". (The symbol "→" is also seen.) The statement A ⇒ B is read as "A implies B," or "if A then B." (Notice that this statement has no "else" clause.) The "implies" operator can be represented with a combination of other operators, according to the following identity:

    A ⇒ B  is logically equivalent to  ¬A ∨ B.    (1.21)

This can be verified by checking all combinations of truth assignments to A and B. Another useful set of identities are called DeMorgan's laws:

    ¬(A ∧ B)  is logically equivalent to  ¬A ∨ ¬B,    (1.22)
    ¬(A ∨ B)  is logically equivalent to  ¬A ∧ ¬B.    (1.23)

Quantifiers

Another important kind of logical connective is the quantifier. The symbol ∀x is called the universal quantifier and is read "for all x," while the symbol ∃x is called the existential quantifier and is read "there exists x." These connectives can be applied to statements that contain the variable x. The statement ∀x P(x) is true if and only if P(x) is true for all x. The statement ∃x P(x) is true if and only if P(x) is true for some value of x. Most frequently, a universally quantified statement is conditional: ∀x(A(x) ⇒ B(x)). This can be read "For all x such that A(x) holds, B(x) holds." Quantified statements obey a variation on DeMorgan's laws:

    ∀x A(x)  is logically equivalent to  ¬∃x(¬A(x)),    (1.24)
    ∃x A(x)  is logically equivalent to  ¬∀x(¬A(x)).    (1.25)

Sometimes the translation from natural language into a quantified statement is troublesome. People don't speak in the stilted language of logic, usually. We need to realize that "for any x" usually means "for all x," although "any" and "some" are often interchangeable in normal speech. The best guideline is to try rephrasing a sentence in natural language to be more like the logical form, and then ask yourself if it means the same thing in natural language. For example, "Any person must breathe to live" might be the sentence you start with. Possible rephrasings are "For all people x, x must breathe to live" and "For some person x, x must breathe to live." Which means the same as the original sentence?

Negating a Quantified Statement, Counterexamples

What is necessary to prove that a general statement, say ∀x(A(x) ⇒ B(x)), is false? We can use the foregoing identities to clarify the goal. The first thing to realize is that it is not necessary to prove ∀x(A(x) ⇒ ¬B(x)). This is too strong a statement. The negation of ∀x(A(x) ⇒ B(x)) is ¬(∀x(A(x) ⇒ B(x))), which can be put through a series of transformations:

    ¬(∀x(A(x) ⇒ B(x)))  is logically equivalent to  ∃x ¬(A(x) ⇒ B(x))
                         is logically equivalent to  ∃x ¬(¬A(x) ∨ B(x))    (1.26)
                         is logically equivalent to  ∃x (A(x) ∧ ¬B(x)).

In words, if we can exhibit some object x for which A(x) is true and B(x) is false, then we have proven that ∀x(A(x) ⇒ B(x)) is false. Such an object x is called a counterexample.
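Identities such as (1.21) through (1.23) can be verified mechanically by enumerating all truth assignments, which is exactly what the following strict-Java sketch does (the class and method names are invented for this illustration).

    public class TruthTables
    {
        // "A implies B": true unless A is true and B is false.
        public static boolean implies(boolean a, boolean b)
        {
            return a ? b : true;
        }

        public static void main(String[] args)
        {
            boolean ok = true;
            boolean[] values = {false, true};
            for (boolean a : values)
                for (boolean b : values)
                {
                    ok = ok && (implies(a, b) == (!a || b));   // Equation (1.21)
                    ok = ok && (!(a && b) == (!a || !b));      // Equation (1.22)
                    ok = ok && (!(a || b) == (!a && !b));      // Equation (1.23)
                }
            System.out.println(ok);   // prints true: the identities hold for all assignments
        }
    }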
Contrapositives

When trying to prove a statement, it is often convenient to manipulate it into a logically equivalent form. One such form is the contrapositive. The contrapositive of A ⇒ B is (¬B) ⇒ (¬A). Equation (1.21) allows us to verify that the contrapositive of an implication is true exactly when the implication itself is true:

    A ⇒ B  is logically equivalent to  (¬B) ⇒ (¬A).    (1.27)

Sometimes, proving the contrapositive of a statement is called "proof by contradiction," but "proof by contraposition" is a more accurate description. The genuine "proof by contradiction" is described next.

Proof by Contradiction

Suppose the goal is to prove a statement of the form A ⇒ B. A genuine proof by contradiction adds an additional hypothesis of ¬B, and then proves B itself. That is, (A ∧ ¬B) ⇒ B is the full statement that is proved. The following identity justifies this method:

    A ⇒ B  is logically equivalent to  (A ∧ ¬B) ⇒ B.    (1.28)

A genuine proof by contradiction is rare in algorithm analysis. However, Exercise 1.21 calls for one. Most so-called proofs by contradiction are actually proofs by contraposition.

Rules of Inference

So far we have seen numerous pairs of logically equivalent statements, or logical identities: One statement is true if and only if the second statement is true. Identities are "reversible." Most proofs are directed at "irreversible" combinations of statements, however. The complete statement to be proved is of the form "if hypotheses, then conclusion." The reversal, "if conclusion, then hypotheses," is often not true. Logical identities are not flexible enough to prove such "if-then" statements. In these situations, we need rules of inference. A rule of inference is a general pattern that allows us to draw some new conclusion from a set of given statements. It can be stated, "If we know B_1, ..., B_k, then we can conclude C," where B_1, ..., B_k, and C are logical statements in their own right. Here are a few well-known rules:

    If we know                    then we can conclude
    A  and  A ⇒ B                 B                        (1.29)
    A ⇒ B  and  B ⇒ C             A ⇒ C                    (1.30)
    B ⇒ C  and  ¬B ⇒ C            C                        (1.31)

Some of these rules are known by their Greek or Latin names. Equation (1.29) is modus ponens, Equation (1.30) is syllogism, and Equation (1.31) is the rule of cases. These rules are not independent; in Exercise 1.21 you will prove the rule of cases using other rules of inference and logical identities.

1.4 Analyzing Algorithms and Problems

We analyze algorithms with the intention of improving them, if possible, and for choosing among several available for a problem. We will use the following criteria:

1. Correctness
2. Amount of work done
3. Amount of space used
4. Simplicity, clarity
5. Optimality

We will discuss each of these criteria at length and give several examples of their application. When considering the optimality of algorithms, we will introduce techniques for establishing lower bounds on the complexity of problems.

1.4.1 Correctness

There are three major steps involved in establishing the correctness of an algorithm. First, before we can even attempt to determine whether an algorithm is correct, we must have a clear understanding of what "correct" means. We need a precise statement about the characteristics of the inputs it is expected to work on (called the preconditions), and what result it is to produce for each input (called the postconditions). Then we can try to prove statements about the relationships between the input and the output, that is, that if the preconditions are satisfied, the postconditions will be true when the algorithm terminates.
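As a small illustration of pre- and postconditions (invented here, not a specification from the text), a simple search routine might be specified as follows; the proof obligation is then to show that whenever the precondition holds at the call, the postcondition holds at the return.

    public class SearchSpec
    {
        // Precondition:  E is not null.
        // Postcondition: the return value r satisfies either
        //                0 <= r < E.length and E[r] == K, or
        //                r == -1 and K does not occur anywhere in E.
        public static int find(int[] E, int K)
        {
            for (int i = 0; i < E.length; i++)
                if (E[i] == K)
                    return i;
            return -1;
        }
    }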
There are two aspects to an algorithm: the solution method and the sequence of instructions for carrying it out, that is, its implementation. Establishing the correctness of the method and/or formulas used may be easy or may require a long sequence of lemmas and theorems about the objects on which the algorithm works (e.g., graphs, permutations, matrices). For example, the validity of the Gauss elimination method for solving systems of linear equations depends on a number of theorems in linear algebra. Some of the methods used in algorithms in this book are not obviously correct; they must be justified by theorems.

Once the method is established, we implement it in a program. If an algorithm is fairly short and straightforward, we generally use some informal means of convincing ourselves that the various parts do what we expect them to do. We may check some details carefully (e.g., initial and final values of loop counters), and hand-simulate the algorithm on a few small examples. None of this proves that it is correct, but informal techniques may suffice for small programs. More formal techniques, such as loop invariants, may be used to verify correctness of parts of programs. Section 3.3 expands upon this topic.

Most programs written outside of classes are very large and very complex. To prove the correctness of a large program, we can try to break the program down into smaller modules; show that, if all of the smaller modules do their jobs properly, then the whole program is correct; and then prove that each of the modules is correct. This task is made easier if (it may be more accurate to say, "This task is possible only if") algorithms and programs are written in modules that are largely independent and can be verified separately. This is one of the many strong arguments for structured, modular programming. Most of the algorithms presented in this book are the small segments from which large programs are built, so we will not deal with the difficulties of proving the correctness of very long algorithms or programs.

We will not always do formal proofs of correctness in this book, though we will give arguments or explanations to justify complex or tricky parts of algorithms. Correctness can be proved, though indeed for long and complex programs it is a formidable task. In Chapter 3 we will introduce some techniques to help make proofs more manageable.

1.4.2 Amount of Work Done

How shall we measure the amount of work done by an algorithm? The measure we choose should aid in comparing two algorithms for the same problem so that we can determine whether one is more efficient than the other. It would be handy if our measure of work gave some indication of how the actual execution times of the two algorithms compare, but we will not use execution time as a measure of work for a number of reasons. First, of course, it varies with the computer used, and we don't want to develop a theory for one particular computer. We may instead count all the instructions or statements executed by a program, but this measure still has several of the other faults of execution time. It is highly dependent on the programming language used and on the programmer's style.
It would also require that we spend time and effort writing and debugging programs for each algorithm to be studied. We want a measure of work that tells us something about the efficiency of the method used by an algorithm independent of not only the computer, programming language, and programmer, but also of the many implementation details, such as overhead (or "bookkeeping") operations like incrementing loop indexes and computing