more information - www.cambridge.org/9781107027503
Performance Modeling and Design of Computer Systems
Computer systems design is full of conundrums:
• Given a choice between a single machine with speed s, or n machines each with speed s/n, which should we choose?
• If both the arrival rate and service rate double, will the mean response time stay the same?
• Should systems really aim to balance load, or is this a convenient myth?
• If a scheduling policy favors one set of jobs, does it necessarily hurt some other jobs, or are these “conservation laws” being misinterpreted?
• Do greedy, shortest-delay, routing strategies make sense in a server farm, or is what is good for the individual disastrous for the system as a whole?
• How do high job size variability and heavy-tailed workloads affect the choice of a scheduling policy?
• How should one trade off energy and delay in designing a computer system?
• If 12 servers are needed to meet delay guarantees when the arrival rate is 9 jobs/sec, will we need 12,000 servers when the arrival rate is 9,000 jobs/sec?
Tackling the questions that systems designers care about, this book brings queueing theory
decisively back to computer science. The book is written with computer scientists and
engineers in mind and is full of examples from computer systems, as well as manufacturing
and operations research. Fun and readable, the book is highly approachable, even for
undergraduates, while still being thoroughly rigorous and also covering a much wider span
of topics than many queueing books.
Readers benefit from a lively mix of motivation and intuition, with illustrations, examples,
and more than 300 exercises – all while acquiring the skills needed to model, analyze,
and design large-scale systems with good performance and low cost. The exercises are an
important feature, teaching research-level counterintuitive lessons in the design of computer
systems. The goal is to train readers not only to customize existing analyses but also to
invent their own.

Mor Harchol-Balter is an Associate Professor in the Computer Science Department at Carnegie Mellon University. She is a leader in the ACM Sigmetrics Conference on Measurement and Modeling of Computer Systems, having served as technical program committee chair in 2007 and conference chair in 2013.
Performance Modeling and
Design of Computer Systems

Queueing Theory in Action

Mor Harchol-Balter
Carnegie Mellon University, Pennsylvania
Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town,
Singapore, São Paulo, Delhi, Mexico City
Cambridge University Press
32 Avenue of the Americas, New York, NY 10013-2473, USA
www.cambridge.org
Information on this title: www.cambridge.org/9781107027503


© Mor Harchol-Balter 2013

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.

First published 2013

Printed in the United States of America

A catalog record for this publication is available from the British Library.

Library of Congress Cataloging in Publication Data


Harchol-Balter, Mor, 1966–
Performance modeling and design of computer systems : queueing theory in
action / Mor Harchol-Balter.
pages cm
Includes bibliographical references and index.
ISBN 978-1-107-02750-3
1. Transaction systems (Computer systems) – Mathematical models. 2. Computer
systems – Design and construction – Mathematics. 3. Queueing theory.
4. Queueing networks (Data transmission) I. Title.
QA76.545.H37 2013
519.8 2–dc23 2012019844

ISBN 978-1-107-02750-3 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party Internet websites referred to in this publication and
does not guarantee that any content on such websites is, or will remain, accurate or
appropriate.
To my loving husband Andrew, my awesome son Danny,
and my parents, Irit and Micha
I have always been interested in finding better designs for computer systems, designs
that improve performance without the purchase of additional resources. When I look
back at the problems that I have solved and I look ahead to the problems I hope to
solve, I realize that the problem formulations keep getting simpler and simpler, and my
footing less secure. Every wisdom that I once believed, I have now come to question:
If a scheduling policy helps one set of jobs, does it necessarily hurt some other jobs,
or are these “conservation laws” being misinterpreted? Do greedy routing strategies
make sense in server farms, or is what is good for the individual actually disastrous for
the system as a whole? When comparing a single fast machine with n slow machines,
each of 1/nth the speed, the single fast machine is typically much more expensive – but
does that mean that it is necessarily better? Should distributed systems really aim to
balance load, or is this a convenient myth? Cycle stealing, where machines can help
each other when they are idle, sounds like a great idea, but can we quantify the actual
benefit? How much is the performance of scheduling policies affected by variability
in the arrival rate and service rate and by fluctuations in the load, and what can we do
to combat variability? Inherent in these questions is the impact of real user behaviors
and real-world workloads with heavy-tailed, highly variable service demands, as
well as correlated arrival processes. Also intertwined in my work are the tensions
between theoretical analysis and the realities of implementation, each motivating the
other. In my search to discover new research techniques that allow me to answer
these and other questions, I find that I am converging toward the fundamental core
that defines all these problems, and that makes the counterintuitive more believable.
Contents

Preface xvii
Acknowledgments xxiii

I Introduction to Queueing
1 Motivating Examples of the Power of Analytical Modeling 3
1.1 What Is Queueing Theory? 3
1.2 Examples of the Power of Queueing Theory 5
2 Queueing Theory Terminology 13
2.1 Where We Are Heading 13
2.2 The Single-Server Network 13
2.3 Classification of Queueing Networks 16
2.4 Open Networks 16
2.5 More Metrics: Throughput and Utilization 17
2.6 Closed Networks 20
2.6.1 Interactive (Terminal-Driven) Systems 21
2.6.2 Batch Systems 22
2.6.3 Throughput in a Closed System 23
2.7 Differences between Closed and Open Networks 24
2.7.1 A Question on Modeling 25
2.8 Related Readings 25
2.9 Exercises 26

II Necessary Probability Background


3 Probability Review 31
3.1 Sample Space and Events 31
3.2 Probability Defined on Events 32
3.3 Conditional Probabilities on Events 33
3.4 Independent Events and Conditionally Independent Events 34
3.5 Law of Total Probability 35
3.6 Bayes Law 36
3.7 Discrete versus Continuous Random Variables 37
3.8 Probabilities and Densities 38
3.8.1 Discrete: Probability Mass Function 38
3.8.2 Continuous: Probability Density Function 41
3.9 Expectation and Variance 44
3.10 Joint Probabilities and Independence 47

3.11 Conditional Probabilities and Expectations 49
3.12 Probabilities and Expectations via Conditioning 53
3.13 Linearity of Expectation 54
3.14 Normal Distribution 57
3.14.1 Linear Transformation Property 58
3.14.2 Central Limit Theorem 61
3.15 Sum of a Random Number of Random Variables 62
3.16 Exercises 64
4 Generating Random Variables for Simulation 70
4.1 Inverse-Transform Method 70
4.1.1 The Continuous Case 70
4.1.2 The Discrete Case 72
4.2 Accept-Reject Method 72
4.2.1 Discrete Case 73
4.2.2 Continuous Case 75
4.2.3 Some Harder Problems 77
4.3 Readings 78
4.4 Exercises 78
5 Sample Paths, Convergence, and Averages 79
5.1 Convergence 79
5.2 Strong and Weak Laws of Large Numbers 83
5.3 Time Average versus Ensemble Average 84
5.3.1 Motivation 85
5.3.2 Definition 86
5.3.3 Interpretation 86
5.3.4 Equivalence 88
5.3.5 Simulation 90
5.3.6 Average Time in System 90
5.4 Related Readings 91
5.5 Exercise 91

III The Predictive Power of Simple Operational Laws: “What-If” Questions and Answers
6 Little’s Law and Other Operational Laws 95
6.1 Little’s Law for Open Systems 95
6.2 Intuitions 96
6.3 Little’s Law for Closed Systems 96
6.4 Proof of Little’s Law for Open Systems 97
6.4.1 Statement via Time Averages 97
6.4.2 Proof 98
6.4.3 Corollaries 100
6.5 Proof of Little’s Law for Closed Systems 101
6.5.1 Statement via Time Averages 101
6.5.2 Proof 102
6.6 Generalized Little’s Law 102
6.7 Examples Applying Little’s Law 103
6.8 More Operational Laws: The Forced Flow Law 106
6.9 Combining Operational Laws 107
6.10 Device Demands 110
6.11 Readings and Further Topics Related to Little’s Law 111
6.12 Exercises 111
7 Modification Analysis: “What-If” for Closed Systems 114
7.1 Review 114
7.2 Asymptotic Bounds for Closed Systems 115
7.3 Modification Analysis for Closed Systems 118
7.4 More Modification Analysis Examples 119
7.5 Comparison of Closed and Open Networks 122
7.6 Readings 122
7.7 Exercises 122

IV From Markov Chains to Simple Queues


8 Discrete-Time Markov Chains 129
8.1 Discrete-Time versus Continuous-Time Markov Chains 130
8.2 Definition of a DTMC 130
8.3 Examples of Finite-State DTMCs 131
8.3.1 Repair Facility Problem 131
8.3.2 Umbrella Problem 132
8.3.3 Program Analysis Problem 132
8.4 Powers of P: n-Step Transition Probabilities 133
8.5 Stationary Equations 135
8.6 The Stationary Distribution Equals the Limiting Distribution 136
8.7 Examples of Solving Stationary Equations 138
8.7.1 Repair Facility Problem with Cost 138
8.7.2 Umbrella Problem 139
8.8 Infinite-State DTMCs 139
8.9 Infinite-State Stationarity Result 140
8.10 Solving Stationary Equations in Infinite-State DTMCs 142
8.11 Exercises 145
9 Ergodicity Theory 148
9.1 Ergodicity Questions 148
9.2 Finite-State DTMCs 149
9.2.1 Existence of the Limiting Distribution 149
9.2.2 Mean Time between Visits to a State 153
9.2.3 Time Averages 155
9.3 Infinite-State Markov Chains 155
9.3.1 Recurrent versus Transient 156
9.3.2 Infinite Random Walk Example 160
9.3.3 Positive Recurrent versus Null Recurrent 162
9.4 Ergodic Theorem of Markov Chains 164
9.5 Time Averages 166
9.6 Limiting Probabilities Interpreted as Rates 168
9.7 Time-Reversibility Theorem 170
9.8 When Chains Are Periodic or Not Irreducible 171
9.8.1 Periodic Chains 171
9.8.2 Chains that Are Not Irreducible 177
9.9 Conclusion 177
9.10 Proof of Ergodic Theorem of Markov Chains∗ 178
9.11 Exercises 183
10 Real-World Examples: Google, Aloha, and Harder Chains∗ 190
10.1 Google’s PageRank Algorithm 190
10.1.1 Google’s DTMC Algorithm 190
10.1.2 Problems with Real Web Graphs 192
10.1.3 Google’s Solution to Dead Ends and Spider Traps 194
10.1.4 Evaluation of the PageRank Algorithm 195
10.1.5 Practical Implementation Considerations 195
10.2 Aloha Protocol Analysis 195
10.2.1 The Slotted Aloha Protocol 196
10.2.2 The Aloha Markov Chain 196
10.2.3 Properties of the Aloha Markov Chain 198
10.2.4 Improving the Aloha Protocol 199
10.3 Generating Functions for Harder Markov Chains 200
10.3.1 The z-Transform 201
10.3.2 Solving the Chain 201
10.4 Readings and Summary 203
10.5 Exercises 204
11 Exponential Distribution and the Poisson Process 206
11.1 Definition of the Exponential Distribution 206
11.2 Memoryless Property of the Exponential 207
11.3 Relating Exponential to Geometric via δ-Steps 209
11.4 More Properties of the Exponential 211
11.5 The Celebrated Poisson Process 213
11.6 Merging Independent Poisson Processes 218
11.7 Poisson Splitting 218
11.8 Uniformity 221
11.9 Exercises 222
12 Transition to Continuous-Time Markov Chains 225
12.1 Defining CTMCs 225
12.2 Solving CTMCs 229
12.3 Generalization and Interpretation 232
12.3.1 Interpreting the Balance Equations for the CTMC 234
12.3.2 Summary Theorem for CTMCs 234
12.4 Exercises 234
13 M/M/1 and PASTA 236
13.1 The M/M/1 Queue 236
13.2 Examples Using an M/M/1 Queue 239
13.3 PASTA 242
13.4 Further Reading 245
13.5 Exercises 245

V Server Farms and Networks: Multi-server, Multi-queue Systems


14 Server Farms: M/M/k and M/M/k/k 253
14.1 Time-Reversibility for CTMCs 253
14.2 M/M/k/k Loss System 255
14.3 M/M/k 258
14.4 Comparison of Three Server Organizations 263
14.5 Readings 264
14.6 Exercises 264
15 Capacity Provisioning for Server Farms 269
15.1 What Does Load Really Mean in an M/M/k? 269
15.2 The M/M/∞ 271
15.2.1 Analysis of the M/M/∞ 271
15.2.2 A First Cut at a Capacity Provisioning Rule for the M/M/k 272
15.3 Square-Root Staffing 274
15.4 Readings 276
15.5 Exercises 276
16 Time-Reversibility and Burke’s Theorem 282
16.1 More Examples of Finite-State CTMCs 282
16.1.1 Networks with Finite Buffer Space 282
16.1.2 Batch System with M/M/2 I/O 284
16.2 The Reverse Chain 285
16.3 Burke’s Theorem 288
16.4 An Alternative (Partial) Proof of Burke’s Theorem 290
16.5 Application: Tandem Servers 291
16.6 General Acyclic Networks with Probabilistic Routing 293
16.7 Readings 294
16.8 Exercises 294
17 Networks of Queues and Jackson Product Form 297
17.1 Jackson Network Definition 297
17.2 The Arrival Process into Each Server 298
17.3 Solving the Jackson Network 300
17.4 The Local Balance Approach 301
17.5 Readings 306
17.6 Exercises 306
18 Classed Network of Queues 311
18.1 Overview 311
18.2 Motivation for Classed Networks 311
18.3 Notation and Modeling for Classed Jackson Networks 314
18.4 A Single-Server Classed Network 315
18.5 Product Form Theorems 317
18.6 Examples Using Classed Networks 322
18.6.1 Connection-Oriented ATM Network Example 322
18.6.2 Distribution of Job Classes Example 325
18.6.3 CPU-Bound and I/O-Bound Jobs Example 326
18.7 Readings 329
18.8 Exercises 329
19 Closed Networks of Queues 331
19.1 Motivation 331
19.2 Product-Form Solution 333
19.2.1 Local Balance Equations for Closed Networks 333
19.2.2 Example of Deriving Limiting Probabilities 335
19.3 Mean Value Analysis (MVA) 337
19.3.1 The Arrival Theorem 338
19.3.2 Iterative Derivation of Mean Response Time 340
19.3.3 An MVA Example 341
19.4 Readings 343
19.5 Exercises 343

VI Real-World Workloads: High Variability and Heavy Tails


20 Tales of Tails: A Case Study of Real-World Workloads 349
20.1 Grad School Tales . . . Process Migration 349
20.2 UNIX Process Lifetime Measurements 350
20.3 Properties of the Pareto Distribution 352
20.4 The Bounded Pareto Distribution 353
20.5 Heavy Tails 354
20.6 The Benefits of Active Process Migration 354
20.7 Pareto Distributions Are Everywhere 355
20.8 Exercises 357
21 Phase-Type Distributions and Matrix-Analytic Methods 359
21.1 Representing General Distributions by Exponentials 359
21.2 Markov Chain Modeling of PH Workloads 364
21.3 The Matrix-Analytic Method 366
21.4 Analysis of Time-Varying Load 367
21.4.1 High-Level Ideas 367
21.4.2 The Generator Matrix, Q 368
21.4.3 Solving for R 370
21.4.4 Finding π0 371
21.4.5 Performance Metrics 372
21.5 More Complex Chains 372
21.6 Readings and Further Remarks 376
21.7 Exercises 376
22 Networks with Time-Sharing (PS) Servers (BCMP) 380
22.1 Review of Product-Form Networks 380
22.2 BCMP Result 380
22.2.1 Networks with FCFS Servers 381
22.2.2 Networks with PS Servers 382
22.3 M/M/1/PS 384
22.4 M/Cox/1/PS 385
22.5 Tandem Network of M/G/1/PS Servers 391
22.6 Network of PS Servers with Probabilistic Routing 393
22.7 Readings 394
22.8 Exercises 394
23 The M/G/1 Queue and the Inspection Paradox 395
23.1 The Inspection Paradox 395
23.2 The M/G/1 Queue and Its Analysis 396
23.3 Renewal-Reward Theory 399
23.4 Applying Renewal-Reward to Get Expected Excess 400
23.5 Back to the Inspection Paradox 402
23.6 Back to the M/G/1 Queue 403
23.7 Exercises 405
24 Task Assignment Policies for Server Farms 408
24.1 Task Assignment for FCFS Server Farms 410
24.2 Task Assignment for PS Server Farms 419
24.3 Optimal Server Farm Design 424
24.4 Readings and Further Follow-Up 428
24.5 Exercises 430
25 Transform Analysis 433
25.1 Definitions of Transforms and Some Examples 433
25.2 Getting Moments from Transforms: Peeling the Onion 436
25.3 Linearity of Transforms 439
25.4 Conditioning 441
25.5 Distribution of Response Time in an M/M/1 443
25.6 Combining Laplace and z-Transforms 444
25.7 More Results on Transforms 445
25.8 Readings 446
25.9 Exercises 446
26 M/G/1 Transform Analysis 450
26.1 The z-Transform of the Number in System 450
26.2 The Laplace Transform of Time in System 454
26.3 Readings 456
26.4 Exercises 456
27 Power Optimization Application 457
27.1 The Power Optimization Problem 457
27.2 Busy Period Analysis of M/G/1 459
27.3 M/G/1 with Setup Cost 462
27.4 Comparing ON/IDLE versus ON/OFF 465
27.5 Readings 467
27.6 Exercises 467

VII Smart Scheduling in the M/G/1


28 Performance Metrics 473
28.1 Traditional Metrics 473
28.2 Commonly Used Metrics for Single Queues 474
28.3 Today’s Trendy Metrics 474
28.4 Starvation/Fairness Metrics 475
28.5 Deriving Performance Metrics 476
28.6 Readings 477
29 Scheduling: Non-Preemptive, Non-Size-Based Policies 478
29.1 FCFS, LCFS, and RANDOM 478
29.2 Readings 481
29.3 Exercises 481
30 Scheduling: Preemptive, Non-Size-Based Policies 482
30.1 Processor-Sharing (PS) 482
30.1.1 Motivation behind PS 482
30.1.2 Ages of Jobs in the M/G/1/PS System 483
30.1.3 Response Time as a Function of Job Size 484
30.1.4 Intuition for PS Results 487
30.1.5 Implications of PS Results for Understanding FCFS 487
30.2 Preemptive-LCFS 488
30.3 FB Scheduling 490
30.4 Readings 495
30.5 Exercises 496
31 Scheduling: Non-Preemptive, Size-Based Policies 499
31.1 Priority Queueing 499
31.2 Non-Preemptive Priority 501
31.3 Shortest-Job-First (SJF) 504
31.4 The Problem with Non-Preemptive Policies 506
31.5 Exercises 507
32 Scheduling: Preemptive, Size-Based Policies 508
32.1 Motivation 508
32.2 Preemptive Priority Queueing 508
32.3 Preemptive-Shortest-Job-First (PSJF) 512
32.4 Transform Analysis of PSJF 514
32.5 Exercises 516
33 Scheduling: SRPT and Fairness 518
33.1 Shortest-Remaining-Processing-Time (SRPT) 518
33.2 Precise Derivation of SRPT Waiting Time∗ 521
33.3 Comparisons with Other Policies 523
33.3.1 Comparison with PSJF 523
33.3.2 SRPT versus FB 523
33.3.3 Comparison of All Scheduling Policies 524
33.4 Fairness of SRPT 525
33.5 Readings 529

Bibliography 531
Index 541
Preface

The ad hoc World of Computer System Design

The design of computer systems is often viewed very much as an art rather than a
science. Decisions about which scheduling policy to use, how many servers to run,
what speed to operate each server at, and the like are often based on intuitions rather
than mathematically derived formulas. Specific policies built into kernels are often
riddled with secret “voodoo constants,”1 which have no explanation but seem to “work
well” under some benchmarked workloads. Computer systems students are often told
to first build the system and then make changes to the policies to improve system
performance, rather than first creating a formal model and design of the system on
paper to ensure the system meets performance goals.
Even when trying to evaluate the performance of an existing computer system, students
are encouraged to simulate the system and spend many days running their simulation
under different workloads waiting to see what happens. Given that the search space of
possible workloads and input parameters is often huge, vast numbers of simulations
are needed to properly cover the space. Despite this fact, mathematical models of the
system are rarely created, and we rarely characterize workloads stochastically. There is
no formal analysis of the parameter space under which the computer system is likely to
perform well versus that under which it is likely to perform poorly. It is no wonder that
computer systems students are left feeling that the whole process of system evaluation
and design is very ad hoc. As an example, consider the trial-and-error approach to
updating resource scheduling in the many versions of the Linux kernel.

Analytical Modeling for Computer Systems

But it does not have to be this way! These same systems designers could mathematically
model the system, stochastically characterize the workloads and performance goals,
and then analytically derive the performance of the system as a function of workload
and input parameters. The fields of analytical modeling and stochastic processes have
existed for close to a century, and they can be used to save systems designers huge
numbers of hours in trial and error while improving performance. Analytical modeling
can also be used in conjunction with simulation to help guide the simulation, reducing
the number of cases that need to be explored.

1 The term “voodoo constants” was coined by Prof. John Ousterhout during his lectures at the University of
California, Berkeley.

Unfortunately, of the hundreds of books written on stochastic processes, almost none deal with computer systems. The examples in those books and the material covered are
oriented toward operations research areas such as manufacturing systems, or human
operators answering calls in a call center, or some assembly-line system with different
priority jobs.
In many ways the analysis used in designing manufacturing systems is not all that
different from computer systems. There are many parallels between a human operator
and a computer server: There are faster human operators and slower ones (just as
computer servers); the human servers sometimes get sick (just as computer servers
sometimes break down); when not needed, human operators can be sent home to save
money (just as computer servers can be turned off to save power); there is a startup
overhead to bringing back a human operator (just as there is a warmup cost to turning
on a computer server); and the list goes on.
However, there are also many differences between manufacturing systems and com-
puter systems. To start, computer systems workloads have been shown to have ex-
tremely high variability in job sizes (service requirements), with squared coefficients
of variation upward of 100. This is very different from the low-variability service times
characteristic of job sizes in manufacturing workloads. This difference in variability
can result in performance differences of orders of magnitude. Second, computer work-
loads are typically preemptible, and time-sharing (Processor-Sharing) of the CPU is
extremely common. By contrast, most manufacturing workloads are non-preemptive
(first-come-first-serve service order is the most common). Thus most books on stochas-
tic processes and queueing omit chapters on Processor-Sharing or more advanced pre-
emptive policies like Shortest-Remaining-Processing-Time, which are very much at
the heart of computer systems. Processor-Sharing is particularly relevant when analyz-
ing server farms, which, in the case of computer systems, are typically composed of
Processor-Sharing servers, not First-Come-First-Served ones. It is also relevant in any
computing application involving bandwidth being shared between users, which typi-
cally happens in a processor-sharing style, not first-come-first-serve order. Performance
metrics may also be different for computer systems as compared with manufacturing
systems (e.g., power usage, an important metric for computer systems, is not mentioned
in stochastic processes books). Closed-loop architectures, in which new jobs are not
created until existing jobs complete, and where the performance goal is to maximize
throughput, are very common in computer systems, but are often left out of queueing
books. Finally, the particular types of interactions that occur in disks, networking pro-
tocols, databases, memory controllers, and other computer systems are very different
from what has been analyzed in traditional queueing books.

The Goal of This Book

Many times I have walked into a fellow computer scientist’s office and was pleased to
find a queueing book on his shelf. Unfortunately, when questioned, my colleague was
quick to answer that he never uses the book because “The world doesn’t look like an
M/M/1 queue, and I can’t understand anything past that chapter.” The problem is that
the queueing theory books are not “friendly” to computer scientists. The applications
are not computer-oriented, and the assumptions used are often unrealistic for computer
systems. Furthermore, these books are abstruse and often impenetrable by anyone who
has not studied graduate-level mathematics. In some sense this is hard to avoid: If one
wants to do more than provide readers with formulas to “plug into,” then one has to
teach them to derive their own formulas, and this requires learning a good deal of math.
Fortunately, as one of my favorite authors, Sheldon Ross, has shown, it is possible to
teach a lot of stochastic analysis in a fun and simple way that does not require first
taking classes in measure theory and real analysis.
My motive in writing this book is to improve the design of computer systems by intro-
ducing computer scientists to the powerful world of queueing-theoretic modeling and
analysis. Personally, I have found queueing-theoretic analysis to be extremely valuable
in much of my research including: designing routing protocols for networks, designing
better scheduling algorithms for web servers and database management systems, disk
scheduling, memory-bank allocation, supercomputing resource scheduling, and power
management and capacity provisioning in data centers. Content-wise, I have two goals
for the book. First, I want to provide enough applications from computer systems to
make the book relevant and interesting to computer scientists. Toward this end, almost
half the chapters of the book are “application” chapters. Second, I want to make the
book mathematically rich enough to give readers the ability to actually develop new
queueing analysis, not just apply existing analysis. As computer systems and their
workloads continue to evolve and become more complex, it is unrealistic to assume
that they can be modeled with known queueing frameworks and analyses. As a designer
of computer systems myself, I am constantly finding that I have to invent new queueing
concepts to model aspects of computer systems.

How This Book Came to Be

In 1998, as a postdoc at MIT, I developed and taught a new computer science class,
which I called “Performance Analysis and Design of Computer Systems.” The class
had the following description:
In designing computer systems one is usually constrained by certain performance
goals (e.g., low response time or high throughput or low energy). On the other hand,
one often has many choices: One fast disk, or two slow ones? What speed CPU will
suffice? Should we invest our money in more buffer space or a faster processor?
How should jobs be scheduled by the processor? Does it pay to migrate active jobs?
Which routing policy will work best? Should one balance load among servers? How
can we best combat high-variability workloads? Often answers to these questions are
counterintuitive. Ideally, one would like to have answers to these questions before
investing the time and money to build a system. This class will introduce students
to analytic stochastic modeling, which allows system designers to answer questions
such as those above.
Since then, I have further developed the class via 10 more iterations taught within
the School of Computer Science at Carnegie Mellon, where I taught versions of the
class to both PhD students and advanced undergraduates in the areas of computer
science, engineering, mathematics, and operations research. In 2002, the Operations
Management department within the Tepper School of Business at Carnegie Mellon
made the class a qualifier requirement for all operations management students.
As other faculty, including my own former PhD students, adopted my lecture notes in
teaching their own classes, I was frequently asked to turn the notes into a book. This is
“version 1” of that book.

Outline of the Book

This book is written in a question/answer style, which mimics the Socratic style that
I use in teaching. I believe that a class “lecture” should ideally be a long sequence
of bite-sized questions, which students can easily provide answers to and which lead
students to the right intuitions. In reading this book, it is extremely important to try
to answer each question without looking at the answer that follows the question. The
questions are written to remind the reader to “think” rather than just “read,” and to
remind the teacher to ask questions rather than just state facts.
There are exercises at the end of each chapter. The exercises are an integral part of the
book and should not be skipped. Many exercises are used to illustrate the application
of the theory to problems in computer systems design, typically with the purpose of
illuminating a key insight. All exercises are related to the material covered in the
chapter, with early exercises being straightforward applications of the material and
later exercises exploring extensions of the material involving greater difficulty.
The book is divided into seven parts, which mostly build on each other.
Part I introduces queueing theory and provides motivating examples from computer
systems design that can be answered using basic queueing analysis. Basic queueing
terminology is introduced including closed and open queueing models and performance
metrics.
Part II is a probability refresher. To make this book self-contained, we have included
in these chapters all the probability that will be needed throughout the rest of the book.
This includes a summary of common discrete and continuous random variables, their
moments, and conditional expectations and probabilities. Also included is some mate-
rial on generating random variables for simulation. Finally we end with a discussion of
sample paths, convergence of sequences of random variables, and time averages versus
ensemble averages.
Part III is about operational laws, or “back of the envelope” analysis. These are
very simple laws that hold for all well-behaved queueing systems. In particular, they
do not require that any assumptions be made about the arrival process or workload
(like Poisson arrivals or Exponential service times). These laws allow us to quickly
reason at a high level (averages only) about system behavior and make design decisions
regarding what modifications will have the biggest performance impact. Applications
to high-level computer system design are provided throughout.
Part IV is about Markov chains and their application toward stochastic analysis of
computer systems. Markov chains allow a much more detailed analysis of systems
by representing the full space of possible states that the system can be in. Whereas
the operational laws in Part III often allow us to answer questions about the overall
mean number of jobs in a system, Markov chains allow us to derive the probability
of exactly i jobs being queued at server j of a multi-server system. Part IV includes
both discrete-time and continuous-time Markov chains. Applications include Google’s
PageRank algorithm, the Aloha (Ethernet) networking protocol, and an analysis of
dropping probabilities in finite-buffer routers.
Part V develops the Markov chain theory introduced in Part IV to allow the analysis of
more complex networks, including server farms. We analyze networks of queues with
complex routing rules, where jobs can be associated with a “class” that determines
their route through the network (these are known as BCMP networks). Part V also
derives theorems on capacity provisioning of server farms, such as the “square-root
staffing rule,” which determines the minimum number of servers needed to provide
certain delay guarantees.
The fact that Parts IV and V are based on Markov chains necessitates that certain
“Markovian” (memoryless) assumptions are made in the analysis. In particular, it is
assumed that the service requirements (sizes) of jobs follow an Exponential distribu-
tion and that the times between job arrivals are also Exponentially distributed. Many
applications are reasonably well modeled via these Exponential assumptions, allowing
us to use Markov analysis to get good insights into system performance. However,
in some cases, it is important to capture the high-variability job size distributions or
correlations present in the empirical workloads.
Part VI introduces techniques that allow us to replace these Exponential distributions
with high-variability distributions. Phase-type distributions are introduced, which allow
us to model virtually any general distribution by a mixture of Exponentials, leverag-
ing our understanding of Exponential distributions and Markov chains from Parts IV
and V. Matrix-analytic techniques are then developed to analyze systems with phase-
type workloads in both the arrival process and service process. The M/G/1 queue
is introduced, and notions such as the Inspection Paradox are discussed. Real-world
workloads are described including heavy-tailed distributions. Transform techniques
are also introduced that facilitate working with general distributions. Finally, even
the service order at the queues is generalized from simple first-come-first-served ser-
vice order to time-sharing (Processor-Sharing) service order, which is more common
in computer systems. Applications abound: Resource allocation (task assignment) in
server farms with high-variability job sizes is studied extensively, both for server farms
with non-preemptive workloads and for web server farms with time-sharing servers.
Power management policies for single servers and for data centers are also studied.
Part VII, the final part of the book, is devoted to scheduling. Smart scheduling is
extremely important in computer systems, because it can dramatically improve system
performance without requiring the purchase of any new hardware. Scheduling is at the
heart of operating systems, bandwidth allocation in networks, disks, databases, memory
hierarchies, and the like. Much of the research being done in the computer systems
area today involves the design and adoption of new scheduling policies. Scheduling can
be counterintuitive, however, and the analysis of even basic scheduling policies is far
from simple. Scheduling policies are typically evaluated via simulation. In introducing
the reader to analytical techniques for evaluating scheduling policies, our hope is that
more such policies might be evaluated via analysis.
We expect readers to mostly work through the chapters in order, with the following
exceptions: First, any chapter or section marked with a star (*) can be skipped without
disturbing the flow. Second, the chapter on transforms, Chapter 25, is purposely moved
to the end, so that most of the book does not depend on knowing transform analysis.
However, because learning transform analysis takes some time, we recommend that
any teacher who plans to cover transforms introduce the topic a little at a time, starting
early in the course. To facilitate this, we have included a large number of exercises at
the end of Chapter 25 that do not require material in later chapters and can be assigned
earlier in the course to give students practice manipulating transforms.
Finally, we urge readers to please check the following websites for new errors/software:
http://www.cs.cmu.edu/~harchol/PerformanceModeling/errata.html
http://www.cs.cmu.edu/~harchol/PerformanceModeling/software.html
Please send any additional errors to [email protected].
Acknowledgments

Writing a book, I quickly realized, is very different from writing a research paper, even
a very long one. Book writing actually bears much more similarity to teaching a class.
That is why I would like to start by thanking the three people who most influenced my
teaching. Manuel Blum, my PhD advisor, taught me the art of creating a lecture out
of a series of bite-sized questions. Dick Karp taught me that you can cover an almost
infinite amount of material in just one lecture if you spend enough time in advance
simplifying that material into its cleanest form. Sheldon Ross inspired me by the depth
of his knowledge in stochastic processes (a knowledge so deep that he never once
looked at his notes while teaching) and by the sheer clarity and elegance of both his
lectures and his many beautifully written books.
I would also like to thank Carnegie Mellon University, and the School of Computer
Science at Carnegie Mellon, which has at its core the theme of interdisciplinary re-
search, particularly the mixing of theoretical and applied research. CMU has been the
perfect environment for me to develop the analytical techniques in this book, all in
the context of solving hard applied problems in computer systems design. CMU has
also provided me with a never-ending stream of gifted students, who have inspired
many of the exercises and discussions in this book. Much of this book came from the
research of my own PhD students, including Sherwin Doroudi, Anshul Gandhi, Varun
Gupta, Yoongu Kim, David McWherter, Takayuki Osogami, Bianca Schroeder, Adam
Wierman, and Timothy Zhu. In addition, Mark Crovella, Mike Kozuch, and particu-
larly Alan Scheller-Wolf, all longtime collaborators of mine, have inspired much of
my thinking via their uncanny intuitions and insights.
A great many people have proofread parts of this book or tested out the book and
provided me with useful feedback. These include Sem Borst, Doug Down, Erhun
Ozkan, Katsunobu Sasanuma, Alan Scheller-Wolf, Thrasyvoulos Spyropoulos, Jarod
Wang, and Zachary Young. I would also like to thank my editors, Diana Gillooly and
Lauren Cowles from Cambridge University Press, who were very quick to answer my
endless questions, and who greatly improved the presentation of this book. Finally, I am
very grateful to Miso Kim, my illustrator, a PhD student at the Carnegie Mellon School
of Design, who spent hundreds of hours designing all the fun figures in the book.
On a more personal note, I would like to thank my mother, Irit Harchol, for making
my priorities her priorities, allowing me to maximize my achievements. I did not know
what this meant until I had a child of my own. Lastly, I would like to thank my
husband, Andrew Young. He won me over by reading all my online lecture notes and
doing every homework problem – this was his way of asking me for a first date. His
ability to understand it all without attending any lectures made me believe that my
lecture notes might actually “work” as a book. His willingness to sit by my side every
night for many months gave me the motivation to make it happen.

PART I

Introduction to Queueing

Part I serves as an introduction to analytical modeling.


We begin in Chapter 1 with a number of paradoxical examples that come up in the
design of computer systems, showing off the power of analytical modeling in making
design decisions.
Chapter 2 introduces the reader to basic queueing theory terminology and notation
that is used throughout the rest of the book. Readers are introduced to both open and
closed queueing networks and to standard performance metrics, such as response time,
throughput, and the number of jobs in the system.

CHAPTER 1

Motivating Examples of the Power of Analytical Modeling

1.1 What Is Queueing Theory?

Queueing theory is the theory behind what happens when you have lots of jobs,
scarce resources, and subsequently long queues and delays. It is literally the “theory
of queues”: what makes queues appear and how to make them go away.
Imagine a computer system, say a web server, where there is only one job. The job
arrives, it uses certain resources (some CPU, some I/O), and then it departs. Given the
job’s resource requirements, it is very easy to predict exactly when the job will depart.
There is no delay because there are no queues. If every job indeed got to run on its own
computer, there would be no need for queueing theory. Unfortunately, that is rarely the
case.

Figure 1.1. Illustration of a queue, in which customers wait to be served, and a server. The picture shows one customer being served at the server and five others waiting in the queue.

Queueing theory applies anywhere that queues come up (see Figure 1.1). We all have
had the experience of waiting in line at the bank, wondering why there are not more
tellers, or waiting in line at the supermarket, wondering why the express lane is for 8
items or less rather than 15 items or less, or whether it might be best to actually have two
express lanes, one for 8 items or less and the other for 15 items or less. Queues are also
at the heart of any computer system. Your CPU uses a time-sharing scheduler to serve
a queue of jobs waiting for CPU time. A computer disk serves a queue of jobs waiting
to read or write blocks. A router in a network serves a queue of packets waiting to be
routed. The router queue is a finite capacity queue, in which packets are dropped when
demand exceeds the buffer space. Memory banks serve queues of threads requesting
memory blocks. Databases sometimes have lock queues, where transactions wait to
acquire the lock on a record. Server farms consist of many servers, each with its own
queue of jobs. The list of examples goes on and on.
The goals of a queueing theorist are twofold. The first is predicting the system perfor-
mance. Typically this means predicting mean delay or delay variability or the proba-
bility that delay exceeds some Service Level Agreement (SLA). However, it can also
mean predicting the number of jobs that will be queueing or the mean number of servers being utilized (e.g., total power needs), or any other such metric. Although prediction
is important, an even more important goal is finding a superior system design to im-
prove performance. Commonly this takes the form of capacity planning, where one
determines which additional resources to buy to meet delay goals (e.g., is it better to
buy a faster disk or a faster CPU, or to add a second slow disk). Many times, however,
without buying any additional resources at all, one can improve performance just by
deploying a smarter scheduling policy or different routing policy to reduce delays.
Given the importance of smart scheduling in computer systems, all of Part VII of this
book is devoted to understanding scheduling policies.

Queueing theory is built on a much broader area of mathematics called stochastic modeling and analysis. Stochastic modeling represents the service demands of jobs and
the interarrival times of jobs as random variables. For example, the CPU requirements
of UNIX processes might be modeled using a Pareto distribution [84], whereas the
arrival process of jobs at a busy web server might be well modeled by a Poisson
process with Exponentially distributed interarrival times. Stochastic models can also
be used to model dependencies between jobs, as well as anything else that can be
represented as a random variable.
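
For readers who like to see things concretely, here is a small Python sketch (illustrative only; the parameter values are made up) of generating such a workload: Poisson arrivals, i.e., Exponential interarrival times, together with Pareto job sizes. Techniques for generating random variables for simulation are covered in Chapter 4.

```python
import random

rng = random.Random(42)

def poisson_arrival_times(lam, n):
    """First n arrival times of a Poisson process with rate lam:
    interarrival times are i.i.d. Exponential(lam)."""
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(lam)
        times.append(t)
    return times

def pareto_job_size(alpha=1.5, k=1.0):
    """A Pareto(alpha) job size with minimum value k, drawn via the
    inverse-transform method (parameter values are illustrative)."""
    return k / (1.0 - rng.random()) ** (1.0 / alpha)

arrivals = poisson_arrival_times(lam=3.0, n=10)   # 3 jobs/sec on average
sizes = [pareto_job_size() for _ in arrivals]     # highly variable job sizes
```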

Although it is generally possible to come up with a stochastic model that adequately represents the jobs or customers in a system and its service dynamics, these stochastic
models are not always analytically tractable with respect to solving for performance.
As we discuss in Part IV, Markovian assumptions, such as assuming Exponentially
distributed service demands or a Poisson arrival process, greatly simplify the analysis;
hence much of the existing queueing literature relies on such Markovian assumptions.
In many cases these are a reasonable approximation. For example, the arrival process of
book orders on Amazon might be reasonably well approximated by a Poisson process,
given that there are many independent users, each independently submitting requests
at a low rate (although this all breaks down when a new Harry Potter book comes
out). However, in some cases Markovian assumptions are very far from reality; for
example, in the case in which service demands of jobs are highly variable or are
correlated.

While many queueing texts downplay the Markovian assumptions being made, this
book does just the opposite. Much of my own research is devoted to demonstrating the
impact of workload assumptions on correctly predicting system performance. I have
found many cases where making simplifying assumptions about the workload can lead
to very inaccurate performance results and poor system designs. In my own research,
I therefore put great emphasis on integrating measured workload distributions into the
analysis. Rather than trying to hide the assumptions being made, this book highlights
all assumptions about workloads. We will discuss specifically whether the workload
models are accurate and how our model assumptions affect performance and design,
as well as look for more accurate workload models. In my opinion, a major reason
why computer scientists are so slow to adopt queueing theory is that the standard
Markovian assumptions often do not fit. However, there are often ways to work around
these assumptions, many of which are shown in this book, such as using phase-type
distributions and matrix-analytic methods, introduced in Chapter 21.

1.2 Examples of the Power of Queueing Theory

The remainder of this chapter is devoted to showing some concrete examples of the
power of queueing theory. Do not expect to understand everything in the examples. The
examples are developed in much greater detail later in the book. Terms like “Poisson
process” that you may not be familiar with are also explained later in the book. These
examples are just here to highlight the types of lessons covered in this book.
As stated earlier, one use of queueing theory is as a predictive tool, allowing one to
predict the performance of a given system. For example, one might be analyzing a
network, with certain bandwidths, where different classes of packets arrive at certain
rates and follow certain routes throughout the network simultaneously. Then queueing
theory can be used to compute quantities such as the mean time that packets spend
waiting at a particular router i, the distribution on the queue buildup at router i, or the
mean overall time to get from router i to router j in the network.
We now turn to the usefulness of queueing theory as a design tool in choosing the
best system design to minimize response time. The examples that follow illustrate that
system design is often a counterintuitive process.

Design Example 1 – Doubling Arrival Rate


Consider a system consisting of a single CPU that serves a queue of jobs in First-Come-
First-Served (FCFS) order, as illustrated in Figure 1.2. The jobs arrive according to some
random process with some average arrival rate, say λ = 3 jobs per second. Each job
has some CPU service requirement, drawn independently from some distribution of job
service requirements (we can assume any distribution on the job service requirements
for this example). Let’s say that the average service rate is μ = 5 jobs per second (i.e.,
each job on average requires 1/5 of a second of service). Note that the system is not
in overload (3 < 5). Let E [T ] denote the mean response time of this system, where
response time is the time from when a job arrives until it completes service, a.k.a.
sojourn time.

Figure 1.2. A system with a single CPU that serves jobs in FCFS order (λ = 3, μ = 5). The annotation in the figure asks: if λ → 2λ, by how much should μ increase?

Question: Your boss tells you that starting tomorrow the arrival rate will double. You
are told to buy a faster CPU to ensure that jobs experience the same mean response
time, E [T ]. That is, customers should not notice the effect of the increased arrival
rate. By how much should you increase the CPU speed? (a) Double the CPU speed;
(b) More than double the CPU speed; (c) Less than double the CPU speed.
Answer: (c) Less than double.
Question: Why not (a)?
Answer: It turns out that doubling CPU speed together with doubling the arrival
rate will generally result in cutting the mean response time in half! We prove this in
Chapter 13. Therefore, the CPU speed does not need to double.
Question: Can you immediately see a rough argument for this result that does not
involve any queueing theory formulas? What happens if we double the service rate and
double the arrival rate?
Answer: Imagine that there are two types of time: Federation time and Klingon time.
Klingon seconds are faster than Federation seconds. In fact, each Klingon second is
equivalent to a half-second in Federation time. Now, suppose that in the Federation,
there is a CPU serving jobs. Jobs arrive with rate λ jobs per second and are served at
some rate μ jobs per second. The Klingons steal the system specs and reengineer the
same system in the Klingon world. In the Klingon system, the arrival rate is λ jobs
per Klingon second, and the service rate is μ jobs per Klingon second. Note that both
systems have the same mean response time, E [T ], except that the Klingon system
response time is measured in Klingon seconds, while the Federation system response
time is measured in Federation seconds. Consider now that Captain Kirk is observing
both the Federation system and the Klingon reengineered system. From his perspective,
the Klingon system has twice the arrival rate and twice the service rate; however, the
mean response time in the Klingon system has been halved (because Klingon seconds
are half-seconds in Federation time).
Question: Suppose the CPU employs time-sharing service order (known as Processor-
Sharing, or PS for short), instead of FCFS. Does the answer change?
Answer: No. The same basic argument still works.
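
As a concrete sanity check, the M/M/1 queue of Chapter 13 has mean response time E[T] = 1/(μ − λ). Here is a minimal sketch using that formula (the argument above does not depend on the M/M/1 assumptions; this is simply the easiest case to compute):

```python
def mm1_mean_response_time(lam, mu):
    """Mean response time E[T] = 1/(mu - lam) of an M/M/1 queue (Chapter 13)."""
    assert lam < mu, "system must not be overloaded"
    return 1.0 / (mu - lam)

print(mm1_mean_response_time(3, 5))    # original system: E[T] = 0.5 sec
print(mm1_mean_response_time(6, 10))   # double both rates: E[T] = 0.25 sec (halved)
print(mm1_mean_response_time(6, 8))    # mu = 8 (< 10) already restores E[T] = 0.5 sec
```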

Design Example 2 – Sometimes “Improvements” Do Nothing


Consider the batch system shown in Figure 1.3. There are always N = 6 jobs in this
system (this is called the multiprogramming level). As soon as a job completes service,
a new job is started (this is called a “closed” system). Each job must go through the
“service facility.” At the service facility, with probability 1/2 the job goes to server 1,
and with probability 1/2 it goes to server 2. Server 1 services jobs at an average rate
of 1 job every 3 seconds. Server 2 also services jobs at an average rate of 1 job every 3
seconds. The distribution on the service times of the jobs is irrelevant for this problem.
Response time is defined as usual as the time from when a job first arrives at the service
facility (at the fork) until it completes service.
Figure 1.3. A closed batch system with N = 6 jobs: at the fork, each job is routed with probability 1/2 to Server 1 (μ = 1/3) and with probability 1/2 to Server 2 (μ = 1/3).

Question: You replace server 1 with a server that is twice as fast (the new server
services jobs at an average rate of 2 jobs every 3 seconds). Does this “improvement”
affect the average response time in the system? Does it affect the throughput? (Assume
that the routing probabilities remain constant at 1/2 and 1/2.)
Answer: Not really. Both the average response time and throughput are hardly affected.
This is explained in Chapter 7.
Question: Suppose that the system had a higher multiprogramming level, N . Does the
answer change?
Answer: No. The already negligible effect on response time and throughput goes to
zero as N increases.
Question: Suppose the system had a lower value of N . Does the answer change?
Answer: Yes. If N is sufficiently low, then the “improvement” helps. Consider, for
example, the case N = 1.
Question: Suppose the system is changed into an open system, rather than a closed
system, as shown in Figure 1.4, where arrival times are independent of service com-
pletions. Now does the “improvement” reduce mean response time?
Answer: Absolutely!

Figure 1.4. An open system: arriving jobs are routed with probability 1/2 to Server 1 (μ = 1/3) and with probability 1/2 to Server 2 (μ = 1/3).
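
For readers who want to experiment, below is a minimal event-driven simulation sketch of the closed batch system of Figure 1.3, assuming Exponential service times purely for convenience (the text notes that the service distribution is irrelevant here). Running it with server 1 at rate 1/3 and again at rate 2/3 shows that throughput and mean response time barely move when N = 6.

```python
import random

def simulate_closed_batch(N, rate1, rate2, n_completions=500_000, seed=1):
    """Closed batch system of Figure 1.3: N jobs circulate; each completion is
    immediately replaced by a new job that flips a fair coin between the servers.
    Returns (throughput X, mean response time N/X via Little's Law, Chapter 6)."""
    rng = random.Random(seed)
    rates = (rate1, rate2)
    q = [0, 0]                                   # jobs at server 1 and server 2
    next_done = [float("inf"), float("inf")]     # next completion time per server
    clock = 0.0

    def route_one():
        # A job passes the fork: coin flip, join a queue, start service if idle.
        s = 0 if rng.random() < 0.5 else 1
        q[s] += 1
        if q[s] == 1:
            next_done[s] = clock + rng.expovariate(rates[s])

    for _ in range(N):
        route_one()
    done = 0
    while done < n_completions:
        s = 0 if next_done[0] <= next_done[1] else 1
        clock = next_done[s]                     # advance to the next completion
        q[s] -= 1
        done += 1
        next_done[s] = clock + rng.expovariate(rates[s]) if q[s] else float("inf")
        route_one()                              # the completed job is replaced
    X = done / clock
    return X, N / X

for r1 in (1/3, 2/3):                            # original vs. "improved" server 1
    X, T = simulate_closed_batch(N=6, rate1=r1, rate2=1/3)
    print(f"server 1 rate {r1:.2f}: throughput {X:.3f} jobs/sec, E[T] {T:.2f} sec")
```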

Design Example 3 – One Machine or Many?


You are given a choice between one fast CPU of speed s, or n slow CPUs each of speed
s/n (see Figure 1.5). Your goal is to minimize mean response time. To start, assume
that jobs are non-preemptible (i.e., each job must be run to completion).

Figure 1.5. Which is better for minimizing mean response time: many slow servers (four servers, each with μ = 1) or one fast server (μ = 4)?

Question: Which is the better choice: one fast machine or many slow ones?
Hint: Suppose that I tell you that the answer is, “It depends on the workload.” What
aspects of the workload do you think the answer depends on?
Answer: It turns out that the answer depends on the variability of the job size distribu-
tion, as well as on the system load.
Question: Which system do you prefer when job size variability is high?
Answer: When job size variability is high, we prefer many slow servers because we
do not want short jobs getting stuck behind long ones.
Question: Which system do you prefer when load is low?
Answer: When load is low, not all servers will be utilized, so it seems better to go with
one fast server.
These observations are revisited many times throughout the book.
Question: Now suppose we ask the same question, but jobs are preemptible; that is,
they can be stopped and restarted where they left off. When do we prefer many slow
machines as compared to a single fast machine?
Answer: If your jobs are preemptible, you could always use a single fast machine
to simulate the effect of n slow machines. Hence a single fast machine is at least as
good.
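As a rough illustration of the load effect, the sketch below (my own, not from the book) compares one fast server against k slow servers of the same total capacity, using the standard M/M/k mean response time formula; that is, it assumes Poisson arrivals and exponential job sizes, which are only moderately variable. Under that assumption the single fast server wins at both low and high load; the case for many slow servers arises when job size variability is far higher than exponential, which this formula does not capture.

    from math import factorial

    def mmk_mean_response(lam, mu, k):
        # Mean response time E[T] in an M/M/k queue: Poisson arrivals at rate lam,
        # k servers each of rate mu (requires lam < k*mu).
        rho = lam / (k * mu)
        # Erlang-C: probability that an arriving job has to queue
        top = (k * rho) ** k / (factorial(k) * (1 - rho))
        bottom = sum((k * rho) ** n / factorial(n) for n in range(k)) + top
        p_queue = top / bottom
        return 1 / mu + p_queue / (k * mu - lam)   # service time + expected wait

    k, mu_slow = 4, 1.0              # four slow servers, each of rate 1
    mu_fast = k * mu_slow            # one fast server with the same total capacity
    for lam in (0.5, 3.8):           # low load vs high load
        one_fast  = mmk_mean_response(lam, mu_fast, 1)
        many_slow = mmk_mean_response(lam, mu_slow, k)
        print(f"lambda = {lam}: one fast E[T] = {one_fast:.2f}, "
              f"{k} slow E[T] = {many_slow:.2f}")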
The question of many slow servers versus a few fast ones has huge applicability in a
wide range of areas, because anything can be viewed as a resource, including CPU,
power, and bandwidth.
For an example involving power management in data centers, consider the problem
from [69] where you have a fixed power budget P and a server farm consisting of
n servers. You have to decide how much power to allocate to each server, so as
to minimize overall mean response time for jobs arriving at the server farm. There
is a function that specifies the relationship between the power allocated to a server
and the speed (frequency) at which it runs – generally, the more power you allocate
to a server, the faster it runs (the higher its frequency), subject to some maximum
possible frequency and some minimum power level needed just to turn the server on.
To answer the question of how to allocate power, you need to think about whether
you prefer many slow servers (allocate just a little power to every server) or a few fast
ones (distribute all the power among a small number of servers). In [69], queueing
theory is used to optimally answer this question under a wide variety of parameter
settings.
As another example, if bandwidth is the resource, we can ask when it pays to partition
bandwidth into smaller chunks and when it is better not to. The problem is also
interesting when performance is combined with price. For example, it is often cheaper
(financially) to purchase many slow servers than a few fast servers. Yet in some cases,
many slow servers can consume more total power than a few fast ones. All of these
factors can further influence the choice of architecture.

Design Example 4 – Task Assignment in a Server Farm


Consider a server farm with a central dispatcher and several hosts. Each arriving job is
immediately dispatched to one of the hosts for processing. Figure 1.6 illustrates such
a system.

Figure 1.6. A distributed server system with a central dispatcher: arrivals enter the dispatcher (load balancer), which routes each job to Host 1, Host 2, or Host 3.

Server farms like this are found everywhere. Web server farms typically deploy a
front-end dispatcher like Cisco’s Local Director or IBM’s Network Dispatcher. Super-
computing sites might use LoadLeveler or some other dispatcher to balance load and
assign jobs to hosts.
For the moment, let’s assume that all the hosts are identical (homogeneous) and that
all jobs only use a single resource. Let’s also assume that once jobs are assigned to a
host, they are processed there in FCFS order and are non-preemptible.
There are many possible task assignment policies that can be used for dispatching jobs to hosts. Here are a few (a small dispatcher sketch implementing several of them appears after this list):

Random: Each job flips a fair coin to determine where it is routed.
Round-Robin: The ith job goes to host i mod n, where n is the number of hosts, and hosts are numbered 0, 1, . . . , n − 1.
Shortest-Queue: Each job goes to the host with the fewest number of jobs.
Size-Interval-Task-Assignment (SITA): "Short" jobs go to the first host, "medium" jobs go to the second host, "long" jobs go to the third host, etc., for some definition of "short," "medium," and "long."
Least-Work-Left (LWL): Each job goes to the host with the least total remaining work, where the "work" at a host is the sum of the sizes of the jobs there.
Central-Queue: Rather than have a queue at each host, jobs are pooled at one central queue. When a host is done working on a job, it grabs the first job in the central queue to work on.
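Here is a small sketch of how a dispatcher might implement several of these rules (the names and data structures are my own, purely for illustration). Each policy only needs some per-host state: a job count for Shortest-Queue and the total remaining work for Least-Work-Left. SITA and Central-Queue are omitted, since they additionally need size cutoffs and a shared queue, respectively.

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Host:
        jobs: list = field(default_factory=list)   # sizes of the jobs queued or in service

        @property
        def count(self):          # number of jobs at this host (for Shortest-Queue)
            return len(self.jobs)

        @property
        def work_left(self):      # total remaining work at this host (for LWL)
            return sum(self.jobs)

    def dispatch_random(hosts, i):
        return random.randrange(len(hosts))        # each job picks a host uniformly

    def dispatch_round_robin(hosts, i):
        return i % len(hosts)                      # the i-th job goes to host i mod n

    def dispatch_shortest_queue(hosts, i):
        return min(range(len(hosts)), key=lambda h: hosts[h].count)

    def dispatch_least_work_left(hosts, i):
        return min(range(len(hosts)), key=lambda h: hosts[h].work_left)

    # Example: route five jobs of varying size to three hosts under Least-Work-Left.
    hosts = [Host() for _ in range(3)]
    for i, size in enumerate([10.0, 1.0, 1.0, 7.0, 2.0]):
        hosts[dispatch_least_work_left(hosts, i)].jobs.append(size)
    print([h.work_left for h in hosts])            # [10.0, 8.0, 3.0]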

Question: Which of these task assignment policies yields the lowest mean response
time?

Answer: Given the ubiquity of server farms, it is surprising how little is known about
this question. If job size variability is low, then the LWL policy is best. If job size
variability is high, then it is important to keep short jobs from getting stuck behind long
ones, so a SITA-like policy, which affords short jobs isolation from long ones, can be
far better. In fact, for a long time it was believed that SITA is always better than LWL
when job size variability is high. However, it was recently discovered (see [90]) that
SITA can be far worse than LWL even under job size variability tending to infinity. It
turns out that other properties of the workload, including load and fractional moments
of the job size distribution, matter as well.
Question: For the previous question, how important was it to know the size of jobs?
For example, how does LWL, which requires knowing job size, compare with Central-
Queue, which does not?
Answer: Actually, most task assignment policies do not require knowing the size of
jobs. For example, it can be proven by induction that LWL is equivalent to Central-
Queue. Even policies like SITA, which by definition are based on knowing the job size,
can be well approximated by other policies that do not require knowing the job size;
see [82].
Question: Now consider a different model, in which jobs are preemptible. Specifically,
suppose that the servers are Processor-Sharing (PS) servers, which time-share among
all the jobs at the server, rather than serving them in FCFS order. Which task assignment
policy is preferable now? Is the answer the same as that for FCFS servers?
Answer: The task assignment policies that are best for FCFS servers are often a
disaster under PS servers. For PS servers, the Shortest-Queue policy is near optimal
([79]), whereas that policy is pretty bad for FCFS servers if job size variability is high.
There are many open questions with respect to task assignment policies. The case of
server farms with PS servers, for example, has received almost no attention, and even
the case of FCFS servers is still only partly understood. There are also many other
task assignment policies that have not been mentioned. For example, cycle stealing
(taking advantage of a free host to process jobs in some other queue) can be combined
with many existing task assignment policies to create improved policies. There are also
other metrics to consider, like minimizing the variance of response time, rather than
mean response time, or maximizing fairness. Finally, task assignment can become even
more complex, and more important, when the workload changes over time.
Task assignment is analyzed in great detail in Chapter 24, after we have had a chance
to study empirical workloads.

Design Example 5 – Scheduling Policies


Suppose you have a single server. Jobs arrive according to a Poisson process. Assume
anything you like about the distribution of job sizes. The following are some possible
service orders (scheduling orders) for serving jobs:

First-Come-First-Served (FCFS): When the server completes a job, it starts working on the job that arrived earliest.
Non-Preemptive Last-Come-First-Served (LCFS): When the server completes a job, it starts working on the job that arrived last.
Random: When the server completes a job, it starts working on a random job.

Question: Which of these non-preemptive service orders will result in the lowest mean
response time?
Answer: Believe it or not, they all have the same mean response time.
Question: Suppose we change the non-preemptive LCFS policy to a Preemptive-LCFS
policy (PLCFS), which works as follows: Whenever a new arrival enters the system,
it immediately preempts the job in service. How does the mean response time of this
policy compare with the others?
Answer: It depends on the variability of the job size distribution. If the job size
distribution is at least moderately variable, then PLCFS will be a huge improvement.
If the job size distribution is hardly variable (basically constant), then PLCFS policy
will be up to a factor of 2 worse.
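The effect can be quantified using two standard M/G/1 results: the Pollaczek–Khinchine formula for FCFS, E[T] = E[S] + λE[S²]/(2(1 − ρ)), and the fact that Preemptive-LCFS has E[T] = E[S]/(1 − ρ), which is insensitive to the job size distribution beyond its mean. The sketch below (my own numbers, at load ρ = 0.8) compares the two for a constant, an exponential, and a highly variable job size distribution.

    def fcfs_mean_response(lam, ES, ES2):
        # M/G/1 FCFS mean response time via the Pollaczek-Khinchine formula.
        rho = lam * ES
        return ES + lam * ES2 / (2 * (1 - rho))

    def plcfs_mean_response(lam, ES):
        # M/G/1 Preemptive-LCFS mean response time: E[S]/(1 - rho), which depends
        # on the job size distribution only through its mean.
        rho = lam * ES
        return ES / (1 - rho)

    lam, ES = 0.8, 1.0                    # load rho = lam * E[S] = 0.8
    for name, ES2 in [("constant job sizes    (E[S^2] = 1)", 1.0),
                      ("exponential job sizes (E[S^2] = 2)", 2.0),
                      ("highly variable sizes (E[S^2] = 20)", 20.0)]:
        print(f"{name}: FCFS E[T] = {fcfs_mean_response(lam, ES, ES2):5.1f}, "
              f"PLCFS E[T] = {plcfs_mean_response(lam, ES):.1f}")

With constant job sizes PLCFS is worse (5.0 versus 3.0 here, approaching a factor of 2 as load grows), while with highly variable job sizes it is far better (5.0 versus 41.0).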
We study many counterintuitive scheduling theory results toward the very end of the
book, in Chapters 28 through 33.

More Design Examples


There are many more questions in computer systems design that lend themselves to a
queueing-theoretic solution.
One example is the notion of a setup cost. It turns out that it can take both significant time
and power to turn on a server that is off. In designing an efficient power management
policy, we often want to leave servers off (to save power), but then we have to pay the
setup cost to get them back on when jobs arrive. Given performance goals, both with
respect to response time and power usage, an important question is whether it pays to
turn a server off. If so, one can then ask exactly how many servers should be left on.
These questions are discussed in Chapters 15 and 27.
There are also questions involving optimal scheduling when jobs have priorities (e.g.,
certain users have paid more for their jobs to have priority over other users’ jobs, or
some jobs are inherently more vital than others). Again, queueing theory is very useful
in designing the right priority scheme to maximize the value of the work completed.

Figure 1.7. Example of a difficult problem: The M/G/2 queue consists of a single queue and
two servers. When a server completes a job, it starts working on the job at the head of the
queue. Job sizes follow a general distribution, G.

However, queueing theory (and more generally analytical modeling) is not currently
all-powerful! There are lots of very simple problems that we can at best only analyze
approximately. As an example, consider the simple two-server network shown in
Figure 1.7, where job sizes come from a general distribution. No one knows how to
derive mean response time for this network. Approximations exist, but they are quite
poor, particularly when job size variability gets high [76]. We mention many such open
problems in this book, and we encourage readers to attempt to solve these!
CHAPTER 2

Queueing Theory Terminology

2.1 Where We Are Heading

Queueing theory is the study of queueing behavior in networks and systems. Figure 2.1
shows the solution process.

Figure 2.1. Solution process: a real-world system with a question ("Should we buy a faster disk or a faster CPU?") is modeled as a queueing network, the network is analyzed, and the result is translated back into an answer about the real system.

In Chapter 1, we looked at examples of the power of queueing theory as a design tool.


In this chapter, we start from scratch and define the terminology used in queueing
theory.

2.2 The Single-Server Network

A queueing network is made up of servers.


The simplest example of a queueing network is the single-server network, as shown
in Figure 2.2. The discussion in this section is limited to the single-server network with
First-Come-First-Served (FCFS) service order. You can think of the server as being a
CPU.
Figure 2.2. Single-server network with FCFS service order: jobs arrive at rate λ = 3 jobs/sec and are served at rate μ = 4 jobs/sec.


There are several parameters associated with the single-server network:

Service Order This is the order in which jobs will be served by the server. Unless
otherwise stated, assume First-Come-First-Served (FCFS).
Average Arrival Rate This is the average rate, λ, at which jobs arrive to the server
(e.g., λ = 3 jobs/sec).
Mean Interarrival Time This is the average time between successive job arrivals
(e.g., 1/λ = 1/3 sec).
Service Requirement, Size The “size” of a job is typically denoted by the random
variable S . This is the time it would take the job to run on this server if there were
no other jobs around (no queueing). In a queueing model, the size (a.k.a. service
requirement) is typically associated with the server (e.g., this job will take 5 seconds
on this server).
Mean Service Time This is the expected value of S , namely the average time required
to service a job on this CPU, where “service” does not include queueing time. In
Figure 2.2, E[S] = 1/4 sec.
Average Service Rate This is the average rate, μ, at which jobs are served (e.g., μ = 4 jobs/sec = 1/E[S]).

Observe that this way of speaking is different from the way we normally talk about
servers in conversation. For example, nowhere have we mentioned the absolute speed
of the CPU; rather we have only defined the CPU’s speed in terms of the set of jobs
that it is working on.
In normal conversation, we might say something like the following:
- The average arrival rate of jobs is 3 jobs per second.
- Jobs have different service requirements, but the average number of cycles required by a job is 5,000 cycles per job.
- The CPU speed is 20,000 cycles per second.

That is, an average of 15,000 cycles of work arrive at the CPU each second, and the
CPU can process 20,000 cycles of work a second.
In the queueing-theoretic way of talking, we would never mention the word “cycle.”
Instead, we would simply say
- The average arrival rate of jobs is 3 jobs per second.
- The average rate at which the CPU can service jobs is 4 jobs per second.

This second way of speaking suppresses some of the detail and thus makes the problem
a little easier to think about. You should feel comfortable going back and forth between
the two.
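For concreteness, here is the arithmetic that connects the two ways of speaking, using the numbers above:

    arrival_rate   = 3          # jobs per second
    cycles_per_job = 5_000      # average cycles of work per job
    cpu_speed      = 20_000     # cycles per second

    mu = cpu_speed / cycles_per_job                 # service rate: 4 jobs/sec
    mean_service_time = 1 / mu                      # E[S] = 0.25 sec
    work_in = arrival_rate * cycles_per_job         # 15,000 cycles of work arrive per sec
    print(mu, mean_service_time, work_in, cpu_speed)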
We consider these common performance metrics in the context of a single-server
system:

Response Time, Turnaround Time, Time in System, or Sojourn Time (T) We define a job's response time by T = tdepart − tarrive, where tdepart is the time when the job leaves the system, and tarrive is the time when the job arrived to the system. We are interested in E[T], the mean response time; Var(T), the variance in response time; and the tail behavior of T, P{T > t}.
Waiting Time or Delay (TQ ) This is the time that the job spends in the queue, not
being served. It is also called the “time in queue” or the “wasted time.” Notice that
E [T ] = E [TQ ] + E [S]. Under FCFS service order, waiting time can be defined as
the time from when a job arrives to the system until it first receives service.
Number of Jobs in the System (N ) This includes those jobs in the queue, plus the
one being served (if any).
Number of Jobs in Queue (NQ ) This denotes only the number of jobs waiting (in
queue).

There are some immediate observations that we can make about the single-server
network. First, observe that as λ, the mean arrival rate, increases, all the performance
metrics mentioned earlier increase (get worse). Also, as μ, the mean service rate,
increases, all the performance metrics mentioned earlier decrease (improve).
We require that λ ≤ μ (we always assume λ < μ).
Question: If λ > μ what happens?
Answer: If λ > μ the queue length goes to infinity over time.
Question: Can you provide the intuition?
Answer: Consider a large time t. Then, if N (t) is the number of jobs in the system
at time t, and A(t) (respectively, D(t)) denotes the number of arrivals (respectively,
departures) by time t, then we have:

E[N (t)] = E[A(t)] − E[D(t)] ≥ λt − μt = t(λ − μ).

(The inequality comes from the fact that the expected number of departures by time t
is actually smaller than μt, because the server is not always busy). Now observe that if
λ > μ, then t(λ − μ) → ∞, as t → ∞.
Throughout the book we assume λ < μ, which is needed for stability (keeping queue
sizes from growing unboundedly). Situations where λ ≥ μ are touched on in Chapter 9.
Question: Given the previous stability condition (λ < μ), suppose that the interarrival
distribution and the service time distribution are Deterministic (i.e., both are constants).
What is TQ ? What is T ?
Answer: TQ = 0, and T = S .
Therefore queueing (waiting) results from variability in service time and/or interarrival
time distributions. Here is an example of how variability leads to queues: Let’s discretize
time. Suppose at each time step, an arrival occurs with probability p = 1/6. Suppose at
each time step, a departure occurs with probability q = 1/3. Then there is a non-zero
probability that the queue will build up (temporarily) if several arrivals occur without
a departure.
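Here is a minimal simulation sketch of that discretized example (my own illustration). Even though the departure probability (1/3) is twice the arrival probability (1/6), randomness alone keeps the queue non-empty a noticeable fraction of the time.

    import random

    def discretized_queue(p=1/6, q=1/3, steps=1_000_000, seed=0):
        # Each time step: an arrival occurs with probability p, and then a departure
        # occurs with probability q if a job is present. Returns the fraction of
        # steps with at least one job present and the time-average number in system.
        random.seed(seed)
        n = busy = area = 0
        for _ in range(steps):
            if random.random() < p:
                n += 1                      # an arrival
            if n > 0 and random.random() < q:
                n -= 1                      # a departure
            busy += (n > 0)
            area += n
        return busy / steps, area / steps

    print(discretized_queue())   # roughly (0.4, 0.67): non-empty about 40% of the time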

2.3 Classification of Queueing Networks

Queueing networks can be classified into two categories: open networks and closed
networks. Stochastic processes books (e.g., [149, 150]) usually limit their discussion
to open networks. By contrast, the systems performance analysis books (e.g., [117,
125]) almost exclusively discuss closed networks. Open networks are introduced in
Section 2.4. Closed networks are introduced in Section 2.6.

2.4 Open Networks

An open queueing network has external arrivals and departures. Four examples of open
networks are illustrated in this section.

Example: The Single-Server System


This was shown in Figure 2.2.

Example: Network of Queues with Probabilistic Routing


This is shown in Figure 2.3. Here server i receives external arrivals (“outside arrivals”)
with rate ri . Server i also receives internal arrivals from some of the other servers. A
packet that finishes service at server i is next routed to server j with probability pij .
We can even allow the probabilities to depend on the “class” of the packet, so that not
all packets have to follow the same routing scheme.

Figure 2.3. Network of queues with probabilistic routing: servers 1, 2, 3 with rates μ1, μ2, μ3; outside arrival rates r1, r2, r3; routing probabilities p12, p13, p23, p31; and exit probabilities p1,out and p2,out.

Application: In modeling packet flows in the Internet, for example, one could make
the class of the packet (and hence its route) depend on its source and destination IP
addresses. In modeling delays, each wire might be replaced by a server that would be
used to model the wire latency. The goal might be to predict mean round-trip times for
packets on a particular route, given the presence of the other packets. We solve this
problem in Chapter 18.
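In such a network the total arrival rate into each server (outside arrivals plus internal arrivals) satisfies a set of linear equations, often called the traffic equations: λi = ri + Σj λj pji. The sketch below solves them for a 3-server network shaped like Figure 2.3; the rates and routing probabilities are made-up illustrative values, not the book's.

    import numpy as np

    r = np.array([1.0, 0.5, 0.2])     # outside arrival rates r1, r2, r3 (illustrative)
    P = np.array([                    # P[i][j] = probability of going from server i to j
        [0.0, 0.3, 0.4],              # from server 1 (with probability 0.3 the job exits)
        [0.0, 0.0, 0.5],              # from server 2 (with probability 0.5 the job exits)
        [0.2, 0.0, 0.0],              # from server 3 (with probability 0.8 the job exits)
    ])
    # lambda = r + P^T lambda   =>   (I - P^T) lambda = r
    lam = np.linalg.solve(np.eye(3) - P.T, r)
    print(lam)                        # total arrival rate into each server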

Example: Network of Queues with Non-Probabilistic Routing


This is shown in Figure 2.4. Here all jobs follow a predetermined route: CPU to disk 1
to disk 2 to disk 1 to disk 2 to disk 1 and out.

Figure 2.4. Network of queues with non-probabilistic routing: arriving jobs (rate λ) visit the CPU, then follow the fixed route Disk 1, Disk 2, Disk 1, Disk 2, Disk 1, and leave.

Example: Finite Buffer


An example of a single-server network with finite buffer is shown in Figure 2.5. Any
arrival that finds no room is dropped.

Figure 2.5. Single-server (CPU) network with finite buffer capacity: arrival rate λ, service rate μ, with space for 9 jobs in the queue plus 1 in service.

2.5 More Metrics: Throughput and Utilization

We have already seen four performance metrics: E [N ], E [T ], E [NQ ], and E [TQ ].


Although these were applied to a single-server system, they can also be used to describe
performance in a multi-server, multi-queue system. For example, E [T ] would denote
the mean time a job spends in the whole system, including all time spent in various
queues and time spent receiving service at various servers, whereas E [TQ ] refers to just
the mean time the job “wasted” waiting in various queues. If we want to refer to just the
ith queue in such a system, we typically write E [Ni ] to denote the expected number
of jobs both queueing and in service at server i, and E [Ti ] to denote the expected time
a job spends queueing and in service at server i.
Now we introduce two new performance metrics: throughput and utilization. Through-
put is arguably the performance metric most used in conversation. Everyone wants
higher throughput! Let’s see why.
Question: How does maximizing throughput relate to minimizing response time? For
example, in Figure 2.6, which system has higher throughput?

Figure 2.6. Comparing throughput of two systems: both have the same arrival rate λ; one server has rate μ = 1/3 jobs/sec, the other server is faster.

Answer: We will see soon.


Let’s start by defining utilization.
Device Utilization (ρi ) is the fraction of time device i is busy. Note our current
definition of utilization applies only to a single device (server). When the device is
implied, we simply write ρ (omitting the subscript).
Suppose we watch a device i for a long period of time. Let τ denote the length of the
observation period. Let B denote the total time during the observation period that the
device is non-idle (busy). Then
ρi = B/τ.
Device Throughput (Xi ) is the rate of completions at device i (e.g., jobs/sec). The
throughput (X) of the system is the rate of job completions in the system.
Let C denote the total number of jobs completed at device i during time τ . Then
Xi = C/τ.
So how does Xi relate to ρi? Well,

Xi = C/τ = (C/B) · (B/τ).

Question: So what is C/B?

Answer: Well, B/C = E[S], the average service time per job. So C/B = 1/E[S] = μi.

So we have

Xi = μi · ρi.

Here is another way to derive this expression by conditioning:

Xi = Mean rate of completion at server i


= E [Rate of completion at server i | server i is busy] · P {server i is busy}
+ E [Rate of completion at server i | server i is idle] · P {server i is idle}
= μi · P {server i is busy} + 0
= μi · ρi

Or, equivalently,
ρi = Xi · E [S] .
This latter formulation has a name: the Utilization Law.
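A quick numeric check of these relationships, with made-up observation data:

    # Watch a device for tau = 100 seconds; suppose it was busy for B = 75 seconds
    # and completed C = 300 jobs over that period (made-up numbers).
    tau, B, C = 100.0, 75.0, 300

    rho = B / tau          # utilization: 0.75
    X   = C / tau          # throughput: 3 jobs/sec
    ES  = B / C            # average service time: 0.25 sec
    mu  = 1 / ES           # service rate: 4 jobs/sec

    print(X, mu * rho)     # both 3.0: X = mu * rho, equivalently rho = X * E[S]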

Example: Single-Server Network: What Is the Throughput?


In Figure 2.7 we have a single-server system.
Figure 2.7. Single-server model with FCFS service: arrival rate λ, service rate μ = 1/3 jobs/sec.

Question: What is X ?
Answer: X = ρ · μ. But what is ρ? In Chapter 6, we will prove that ρ = λ/μ. For now, here is a hand-wavy but intuitive way to see this (it is not a proof!):

ρ = Fraction of time server is busy
  = (Average service time required by a job) / (Average time between arrivals)
  = (1/μ) / (1/λ)
  = λ/μ.
So, this leaves us with

X = ρ · μ = (λ/μ) · μ = λ.
So the throughput does not depend on the service rate whatsoever!
In particular, in the example shown in Figure 2.6, repeated again in Figure 2.8, both
systems have the same throughput of 1/6 jobs/sec. In the case of the faster processor,
the response time drops and the queue length drops, but X does not change. Therefore
lower response time is not related to higher throughput.

Figure 2.8. Same model, but different values of μ (one server has μ = 1/3, the other is faster). Throughput, X, is the same in both.

Question: Explain why X does not change.


Answer: No matter how high we make μ, the completion rate is still bounded by the arrival rate: "rate in = rate out." Changing μ affects the maximum possible X, but as long as λ < μ, the actual throughput stays equal to λ.
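A small simulation sketch (my own; the faster rate of 1/2 is just an illustrative value) confirms the point: as long as λ < μ, the measured completion rate equals λ, regardless of how fast the server is.

    import random

    def measured_throughput(lam, mu, num_arrivals=200_000, seed=0):
        # Single FCFS server: Poisson arrivals at rate lam, exponential service at
        # rate mu. Returns completions per unit of simulated time.
        random.seed(seed)
        t_arrive = free_at = 0.0
        for _ in range(num_arrivals):
            t_arrive += random.expovariate(lam)        # next arrival time
            start = max(t_arrive, free_at)             # wait if the server is busy
            free_at = start + random.expovariate(mu)   # this job's completion time
        return num_arrivals / free_at

    lam = 1/6
    for mu in (1/3, 1/2):              # slower vs faster server
        print(f"mu = {mu:.3f}: measured X ~ {measured_throughput(lam, mu):.4f} "
              f"(lambda = {lam:.4f})")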
Exploring the Variety of Random
Documents with Different Content
pleased. Let this become a fact. The writer Sirdar Nizam-ul-
Mulk, Tuesday the 11th of Shevvál, from Turikoh to London.
May it be received!

APPENDIX III.
FABLES, LEGENDS, AND SONGS OF CHITRAL[113]
(called Chitrár by the natives).

Collected by H. H. Sirdar Nizám-ul-Mulk, Raja of Yasin, etc., and by Dr. G. W.


Leitner, and translated from Persian or Chitráli.

I. Fables.

1. The Vindictive Fowl.

A fowl sat near a thistle, and opened a rag, in which corals were tied up.
Suddenly one fell into the thistle; the fowl said, “O thistle, give me my coral.”
The thistle said, “This is not my business.” The fowl said, “Then I will burn
thee.” The thistle agreed. The fowl then begged the fire to burn the thistle.
The fire replied, “Why should I burn this weak thorn?” The fowl thereupon
threatened to extinguish the fire by appealing to water: “O water, kill this fire
for my sake.” The water asked, “What is thy enmity with the fire, that I should
kill it?” The fowl said, “I will bring a lean cow to drink thee up.” The water
said, “Well”; but the cow refused, as it was too lean and weak to do so. Then
the fowl threatened to bring the wolf to eat the cow. The wolf refused, as he
could feed better on fat sheep. The fowl threatened the wolf with the
huntsman, as he would not eat the lean cow. The huntsman refused to shoot
the wolf, as it was not fit to eat. The fowl then threatened the huntsman with
the mouse. The huntsman replied, “Most welcome.” But the mouse said that it
was feeding on almonds and other nice things, and had no need to gnaw the
leather-skin of the huntsman. The fowl then said, “I will tell the cat to eat
thee.” The mouse said, “The cat is my enemy in any case, and will try to catch
and eat me, wherever it comes across me, so what is the use of your telling
the cat?” The fowl then begged the cat to eat the mouse. The cat agreed to
do so whenever it was hungry: “Now,” it added, “I do not care to do so.” The
fowl then became very angry, and threatened to bring little boys to worry the
cat. The cat said, “Yes.” The fowl then begged the little boys to snatch the cat
one from the other, so that it might know what it was to be vexed. The boys,
however, just then wanted to play and fight among themselves, and did not
care to interrupt their own game. The fowl then threatened to get an old man
to beat the boys. The boys said, “By all means.” But the old man refused to
beat the boys without any cause, and called the fowl a fool. The fowl then
said to the Pîr (old man), “I will tell the wind to carry away thy wool.” The old
man acquiesced; and the wind, when ordered by the fowl, with its usual
perverseness, obeyed the fowl, and carried off the old man’s wool. Then the
old man beat the boys, and the boys worried the cat, and the cat ran after the
mouse, and the mouse bit the huntsman in the waist, and the huntsman went
after the wolf, and the wolf bit the cow, and the cow drank the water, and the
water came down on the fire, and the fire burnt the thistle, and the thistle
gave the coral to the fowl, and the fowl took back its coral.

2. The Story of the Golden Mouse who tells the Story of a Mouse and a Frog.

There was a kind of mice that had a golden body. They never went out of
their hole. One day one of them thought: “I will go out and see the wonders
of God’s creation.” So it did; and when thirty or forty yards from its hole, a
cat, prowling for game, saw it come out from the hole. The cat, that was full
of wiles, plotted to get near the hole, awaiting the return of the mouse, who,
after its peregrinations, noticed the mouth of the hole closed by the wicked
cat. The mouse then wished to go another way, and turned to the left,
towards a tree, on which sat concealed a crow, expecting to devour the
mouse when it should run away from the cat. The crow then pounced on the
mouse, who cried out to God, “O God, why have these misfortunes overtaken
such a small being as myself? My only help is in thee, to save me from these
calamities.” The mouse was confused, and ran hither and thither, in vain
seeking a refuge, when it saw another cat stealthily approaching it; and, in its
perplexity, the mouse nearly ran into the cat’s paws; but that cat had been
caught in a hunter’s net, and could do nothing. The crow, and the cat which
was watching at the hole, saw that the mouse had got near another cat
between the two. They thought that the mouse had fallen a victim to the
second cat, and that it was no use remaining. It was the fortune of the mouse
that they should be so deceived. The trembling mouse saw that the two
enemies had gone. It thanked the Creator for having escaped from the cat
and the crow, and it said, “It would be most unmanly of me not to deliver the
cat in the net, as it has been the instrument of my safety; but then, if I set it
free, it will eat me.” The mouse was immersed in thought, and came to the
conclusion to gnaw the net at a distance from the cat, and that as soon as the
hunter should come in sight, the cat then, being afraid of the hunter, would
seek its own safety, and not trouble itself about the mouse. “Thus I will free
the cat from the hunter and the net, and deliver my own life from the cat,”
was the thought of the mouse. It then began to gnaw the net at a distance.
The cat then said to the mouse, “If you want to save me, for God’s sake, then
gnaw the net round my throat, and not at a distance; that is no use to me
when the hunter will come. You err if you think that I will eat you as soon as I
get out. For all the faults, hitherto, have been on the side of cats, which you
mice have never injured, so that, if you are magnanimous and release me,
there is no such ungrateful monster in the world as would return evil for the
unmerited good that I implore you to bestow on me.” The golden mouse,
which was very wise, did not attend to this false speech, but continued to
gnaw the net at a distance, so that, when the hunter came, there only
remained the threads round the neck of the cat, which the mouse bit asunder
at the last moment and then ran back into its hole. The cat bolted up the tree
where the crow had sat, the huntsman saw that the cat had escaped, and
that his net was gnawed in several places, so he took the net to get it
repaired in the Bazaar.
Then the cat descended from the tree and said to herself, “The time of
meals is over, it is no use to go home; I had better make friends with the
mouse, entice it out of the hole, and eat it.” This she did, and going to the
hole, called out: “O faithful companion and sympathizing friend, although
there has been enmity between cats and mice for a long time, thou hast, by
God’s order, been the cause of my release, therefore come out of the hole,
and let us lay the foundation of our friendship.” The mouse replied: “I once
tried to come out, and then I fell from one danger into another. Now it is
difficult for me to comply with your request. I have cut the threads encircling
your throat, not out of friendship for you, but out of gratitude to God. Nor is
our friendship of any use in this world, as you will gather from the story of

3. “The Frog and the Mouse.”

The mouse then narrated: “There was once a mouse that went out for a
promenade, and going into people’s houses, found food here and there, and
in the dawn of the next morning it was returning to its home. It came to a
place where there was a large tank, round which there were flowers and
trees; and a voice was heard from out of the tank. Coming near, it saw that it
emanated from a being that had no hair on its body, no tail, and no ear. The
mouse said to itself: ‘What is this ill-formed being?’ and thanked God that it
was not the ugliest of creatures. With this thought the mouse, that was
standing still, shook its head to and fro. The frog, however, thought that the
mouse was smitten with astonishment at his beauty and entranced with
pleasure at his voice, and jumping out of the corner of the tank came near: ‘I
know, beloved, that you are standing charmed with my voice; we ought to lay
the firm basis of our friendship, but you are sharper than I am, therefore go
to the house of an old woman and steal from it a thread, and bring it here.’
The mouse obeyed the order. The frog then said: ‘Now tie one end to your tail
and I will tie the other end to my leg, because I want to go to your house,
where you have a large family and there are many other mice, so that I may
know you from the others. If again you visit me, the tank is large, my friends
many, and you too ought to distinguish me from the rest. Again, when I want
to see you I will follow the thread to your hole, and when you want to see me
you will follow it to the tank.’ This being settled, they parted. One day the frog
wanted to see the mouse. Coming out of the tank he was going to its hole,
when he saw the mouse-hawk, who pounced upon the frog as he was limping
along, and flew up with him in its claws. This pulled the end to which the
mouse was tied. It thought that its lover had come to the place and wanted to
see it; so it came out, only to be dragged along in the air under the mouse-
hawk. As the unfortunate mouse passed a Bazaar it called out: ‘O ye
Mussulmans, learn from my fate what happens to whoever befriends beings of
a different species.’
“Now,” said the golden mouse to the cat, “this is the story which teaches
me what to do; and that is, to decline your friendship and to try never again
to see your face.”

4. The Quail and the Fox.

The Quail said: I teach thee art.


Night and day I work at art;
Whoever lies, the shame is on his neck.

A quail and a fox were friends. The fox said: “Why should you not make me
laugh some day?” The quail replied, “This is easy.” So they went to a Bazaar,
where the quail, looking through the hole in the wall of a house, saw a man
sitting, and his wife turning up and down the “samanak” sweetmeat with a big
wooden ladle (much in the same way as the Turkish rakat lokum, or lumps of
delight, are made). The quail then settled on the head of the man. The
woman said to him, “Don’t stir; I will catch it.” Then the quail sat on the
woman’s head, so the man asked the woman to be quiet, as he would catch
the quail, which, however, then flew back to the head of the man. This
annoyed the wife, who struck at the quail with the wooden ladle, but hit
instead the face of her husband, whose eye and beard were covered with the
sweetmeat, and who thereupon beat his wife. When the fox saw this, he
rejoiced and laughed greatly; and both the fox and quail returned to their
home. After a time the fox said to the quail: “It is true that you have made
me laugh, but could you feed me?” This the quail undertook to do, and with
the fox went to a place where a woman was carrying a plate of loaves of
bread to her husband in the fields. Then the quail repeated her tactics, and
sat on the head of the woman, who tried to catch it with one hand. The quail
escaped and settled on one shoulder, then on another, and so on till the
woman became enraged, put the plate of bread on the ground, and ran after
the quail, who, by little leaps, attracted her further and further away till she
was at a considerable distance from it, when the fox pounced on the bread
and appeased his hunger.
Some time after, the fox wanted to put the cleverness of the quail again to
the test, and said: “You have made me laugh, you have fed me, now make
me weep.” The quail replied, “Why, this is the easiest task of all,” so she took
the fox to the gate of the town and called out: “O ye dogs of the Bazaar,
come ye as many as ye are, for a fox has come to the gate!” So all the dogs,
hearing this good news, assembled to hunt the fox, which, seeing the
multitude of its enemies, fled till he reached a high place. Turning round, he
saw the dogs following, so he jumped down and broke his back. The fox
therefore helplessly sat down and said to the approaching quail: “O
sympathizing companion, see how my mouth has become filled with mud and
blood, and how my back has been broken. This is my fate in this world; now,
could you kindly clean my mouth from mud and blood, as my end is near?”
The intention of the fox was, that he should take the opportunity of this
artifice to swallow the quail in revenge of her being the cause of its death.
The quail, in her unwise friendship, began to clean the fox’s mouth. The
accursed fox caught her in his mouth; but the quail, which was intelligent and
clever, said, “O beloved friend, your eating me is lawful, because I forgive you
my blood, on condition that you pronounce my name, otherwise you will
suffer an injury.” The base fox, although full of wiles, clouded by approaching
death, fell into the trap, and as soon as he said “O quail,” his teeth separated,
and the quail flew away from him and was safe, whilst the fox died.

II. Stories and Legends.

There is a story which seems to illustrate the fact that private hatred is
often the cause of the injury that is ascribed to accident. A man slaughtered a
goat, and kept it over-night in an outhouse. His enemy put a number of cats
through the airhole, and when their noise awoke the master of the house he
only found the bones of his goat. But he took their bones, and scattered them
over the field of his enemy the same night; and the dogs came, smelling the
bones, searched for them, and destroyed the wheat that was ripe for reaping.
One blamed the cats, the other blamed the dogs; but both had the reward of
their own actions.

Sulei was a man well known on the frontier of Chitrál for his eloquence.
One day, as he was travelling, he met a man from Badakhshan, who asked
him whether he knew Persian. Sulei said, “No.” “Then,” replied the Badakhshi,
“you are lost” [nobody, worthless]. Sulei at once rejoined, “Do you know
Khowár?” (the language of Chitrál). “No,” said the Badakhshi. “Then you too
are lost,” wittily concluded Sulei (to show that personal worth or eloquence
does not depend on knowing any particular language).

It is related that beyond Upper Chitrár there is a country called Shin or


Rashan. It is very beautiful, and its plains are gardens, and its trees bear
much fruit, and its chunars (plane trees) and willows make it a shaded land.
Its earth is red, and its water is white and tasty. They say that in ancient
times the river of that district for a time flowed with milk without the dashing
(of the waves) of water.

Besir is a place near Ayin towards Kafiristan. The inhabitants were formerly
savage Kafirs, but are now subjects of the Mehter (Prince) of Chitrár. They
carry loads of wood, and do not neglect the work of the Mehter. They are
numerous and peaceful, and in helplessness like fowls, but they are still
Kafirs; though in consequence of their want of energy and courage they are
called “Kalàsh.” The people of Ayin say that in ancient times five savages fled
into the Shidi Mount and concealed themselves there.
Shidi is below Ayin opposite Gherát on the east (whence Shidi is on the
west). Between them is a river. It is said that these savages had to get their
food by the chase. One day word came to them from God that “to-day three
troops of deer will pass; don’t interfere with the first, but do so with the
others.” When, however, the troops came, the savages forgot the injunctions
of God, and struck the first deer. Now there was a cavern in the mountain
where they lived, into which they took the two or three deer that they had
killed and were preparing to cook, two being sent out to fetch water. By God’s
order the lips of the cavern were closed, and the three men imprisoned in it.
God converted the three into bees, whilst the two who had gone to fetch
water fled towards Afghanistan. Thus were created the first honey-bees, who,
finding their way out of the cavern, spread themselves and their sweet gift all
over the world. This is a story told by the Kalàsh, who credit that the bees are
there still; but it is difficult to get there, as the mountains are too steep, but
people go near it and, pushing long rods into the hole of the cavern, bring
them back covered with honey.

Shah Muhterim is the name of a Mehter (prince), the grandfather of the


present Ruler of Chitrár. This Mehter was renowned as a descendant of fairies,
who all were under his command. Whatever he ordered the fairies did. Thus
some time passed. From among them he married a fairy, with whom he made
many excursions. She bore him a daughter. Seven generations have passed
since that time. This daughter is still alive, and her sign among the fairies is
that her hair is white, which does not happen to ordinary fairies. Whenever a
descendant of the Shah Muhterim leaves this transitory world for the region of
permanence, all the fairies, who reside in the mountains of Chitrár, together
with that white-haired lady, weep and lament, and their voices are clearly
heard. This statement is sure and true, and all the men on the frontiers of
Chitrár are aware of the above fact.

The People of Aujer (the Bœotia of Chitral).

There is a country “Aujer,” on the frontier of Chitrár (or Chitrāl as we call it),
the inhabitants of which in ancient times were renowned for their stupidity.
One had taken service at Chitrár, and at a certain public dinner noticed that
the King (Padishah) ate nothing. So he thought that it was because the others
had not given anything to the king. This made him very sorry. He left the
assembly, and reached home towards evening; there he prepared a great
amount of bread, and brought it next day to the council enclosure, beckoning
to the king with his finger to come secretly to him. The king could not make
this out, and sent a servant to inquire what was the matter; but the man
would not say anything except that the king should come himself. On this the
king sent his confidant to find out what all this meant. The man answered the
inquiries of the confidant by declaring that he had no news or claim, but “as
they all ate yesterday and gave nothing to the king, my heart has become
burnt, and I have cooked all this bread for him.” The messenger returned and
told the king, who told the meeting, causing them all to laugh. The king, too,
smiled, and said: “As this poor man has felt for my need, I feel for his;” and
ordered the treasurer to open for him the door of the treasury, so that he
might take from it what he liked. The treasurer took him to the gate, next to
which was the treasurer’s own house, where he had put a big water-melon,
on which fell the eye of that stupid man from Aujer. He had never seen such a
thing, and when he asked, “What is it?” the treasurer, knowing what a fool he
had to deal with, said, “This is the egg of a donkey.” Then he showed him the
gold, silver, jewels, precious cloths, and clean habiliments of the treasury from
which to select the king’s present. The man was pleased with nothing, and
said, “I do not want this; but, if you please, give me the egg of the donkey,
then I shall indeed be glad.” The treasurer and the king’s confidant, consulting
together, came to the conclusion that this would amuse the king to hear, and
gave him the melon, with the injunction not to return to the king, but to take
the egg to his house, and come after some nights (days). The fool was
charmed with this request, went towards his home, but climbing a height, the
melon fell out of his hand, rolled down towards a tree and broke in two
pieces. Now there was a hare under that tree, which fled as the melon
touched the tree. The fool went to his house full of grief, said nothing to his
wife and children, but sat mournfully in a corner. The wife said, “O man, why
art thou sorry? and what has happened?” The man replied: “Why do you ask?
there is no necessity.” Finally, on the woman much cajoling him, he said:
“From the treasury of the prince (mehter) I had brought the egg of the
donkey; it fell from me on the road, broke, and the young one fled out from
its midst. I tried my utmost, but could not catch it.” The woman said: “You
silly fellow! had you brought it, we might have put loads on it.” The man
replied, “You flighty thing! how could it do so, when it was still so young?
Why, its back would have been broken.” So he got into a great rage, took his
axe, and cut down his wife, who died on the spot.
Once, a donkey having four feet, in this country of donkeys having two feet,
put his head into a jar of jáo (barley), but could not extricate it again. So the
villagers assembled, but could not hit on a plan to effect this result. But there
was a wise man in that land, and he was sent for and came. He examined all
the circumstances of the case, and finally decided that they should do him
“Bismillah”; that is to say, that they should cut his throat with the formula, “in
the name of God,” which makes such an act lawful. When they had done this
to the poor donkey, the head remained in the jar, and the wise man ordered
them now to break the jar. This they did, and brought out the head of the
donkey. The wise man then said: “If I had not been here, in what manner
could you have been delivered of this difficulty?” This view was approved by
all, even by the owner of the donkey.

Two brothers in that country of idiots, being tired of buying salt every day,
decided on sowing it over their fields, so that it may bring forth salt
abundantly. The grass grew up, and the grasshoppers came; and the
brothers, fearing that their crop of salt would be destroyed, armed themselves
with bows and arrows to kill the grasshoppers. But the grasshoppers jumped
hither and thither, and were difficult to kill; and one of the brothers hit the
other by mistake with an arrow instead of a grasshopper, and he got angry,
and shot back and killed his brother.

A penknife once fell into the hands of this people, so they held a council in
order to consider what it was. Some thought it was the young one of a sword,
the others that it was the baby of an axe, but that its teeth had not yet come
out. So the argument waxing hot, they fell to fight one another, and many
were wounded and killed.

A number of these people, considering that it was not proper that birds
alone should fly, and that they were able to do so, clad themselves in
posteens (some of which are made from the light down of the Hindukush
eagle), and threw themselves down from a great height, with the result that
they reached the ground killed and mangled.
III. Songs.

A Song (of evidently recent date, as the influence on it of Persian poetry is


obvious).
The Confession of the Soul.
1. (He.) If thy body be as lithe as (the letter) Alif (‎‫‏ا‏‬‎), thy eye is as full as (the letter) Nûn (‎‫‏ن‏‬‎).
If thou art Laila, this child (or lover) is Majnûn (referring to the well-known story of these true
lovers).

2. (She.) If thou art the Prince of the Sultan of Rûm (Turkey)


Come and sit by me, free from constraint;
My eye has fallen on thee, and I now live.

3. My friend had scarcely come near me—why, alas, has he left?


My flesh has melted from these broken limbs.

4. How could I guard against the enmity of a friend?


May God now save me from such grief!

5. (He.) Were I to see 200 Fairies and 100,000 Houris,


I should be a Káfir (infidel), O my beloved!
If my thoughts then even strayed from thee.

6. Yea, not the Houri nightingale, nor my own soul and eyes as Houris,
Would, on the day of judgment, divert my thought from thee.

7. I envy the moth, for it can fly


Into the fire in which it is burnt (whereas I cannot meet thee).

8. (She.) My friend, who once came nigh me, suddenly left me—to weep.
My grief should move the very highest heaven.
A coral bed with its root has been torn out and gone.

9. A ship of pilgrims (Calendárs) has sunk, and yet the world does not care.
The end of all has been a bad name to me.

10. (He.) On this black earth how can I do (sing) thy praise?
Imbedded in the blue heaven (of my heart) thou wilt find it;
And yet, O child (himself), how great a failure (and below thy merits)!

11. Before thy beauty the very moon is nothing,


For sometimes she is full and sometimes half.
May God give thee to me, my perfect universe!

12. (She.) If an angel were a mortal like myself,


It would be ashamed to see my fate (unmoved).

13. (He.) O angel! strangely without pity,


Thou hast written her good with my evil (linked our fates).

14. (Both.) All have friends, but my friend is the Chief (God),
And of my inner grief that friend is cognizant;
His light alone loves our eyes and soul.

15. Break with the world, its vanities, its love;


Leave ignorance, confess, and let thy goal be heaven!
The following is an attempt to render the pretty tune of a more worldly Laila and Majnûn song, which
reminds one of the “Yodeln” of the Tyrolese. It was sung to me by Taighûn Shah, the poet-minstrel of
the Raja, to the accompaniment of a kind of guitar. The Chitráli language, it will be perceived, is
musical.

Shin·djùr is-prûo sar ma bul-bul hut bó·wor Tsá·ren-tu ru-pé

dūr thu mor lo - lé gam - - bū - - ro shūnn donn do - sé

Lai - lī - ki ha - rōsh o - ré Majnun o lo - - lé!

APPENDIX IV.
THE RACES AND LANGUAGES OF THE HINDU-KUSH.

By Dr. G. W. Leitner.
GROUP OF DARDS AND CENTRAL ASIATICS WITH DR. LEITNER.

Standing Nos. 1 2 3 4 5 6 (see next page.)


Sitting Nos. 7 8 9 10 11 (see next page.)
Standing—1. Khundayar, son of a Shiah Akhun
(priest) at Nagyr; 2. Maulvi Najmuddin, a poet from
Kolab; 3 and 4. Khudadad and Hatamu, pilgrims from
Nagyr; 5. A Chitrali soldier; 6. Matavalli, of Hunza.
Sitting—7. Mir Abdullah, a famous Arabic scholar and
jurist from Gabrial; 8. Hakim Habibullah, a Tajik, a
physician from Badakshan; 9. Ghulam Muhammad, Dr.
Leitner’s Gilgit retainer; 10. Ibrahim Khan, a Shiah,
Rono (highest official caste), of Nagyr; 11. Sultan Ali
Yashkun, of Nagyr.
The accompanying illustration was autotyped some years ago from a photograph taken in 1881, and
is now published for the first time. Following the numbers on each figure represented we come first to
No. 1, the tall Khudayár, the son of an Akhun or Shiah priest of Nagyr, a country ruled by the old and
wise Tham or Raja Zafar Ali Khan, whose two sons, Alidád Khan in 1866, and Habib ulla Khan in 1886,
instructed me in the Khajuná language, which is spoken alike in gentle but brave Nagyr and in its
hereditary rival country, the impious and savage Hunza “Hun-land,” represented by figure 6, Matavalli,
the ex-kidnapper whom I took to England, trained to some Muhammadan piety, and sent to Kerbelá a
year ago. No. 2 was an excellent man, an Uzbek visitor from Koláb, one Najmuddin, a poet and
theologian, who gave me an account of his country. Nos. 3 and 4 are pilgrims from Nagyr to the distant
Shiah shrine in Syria of the martyrdom of Husain at Kerbelá; No. 5 is a Chitráli soldier, whilst No. 7 is a
distinguished Arabic Scholar from Gabriál, from whom much of my information was derived regarding a
peaceful and learned home, now, alas! threatened by European approach, which my travels in 1866 and
1872, and my sympathetic intercourse with the tribes of the Hindu Kush, have unfortunately facilitated.
The Jalkóti, Dareyli, and others, who are referred to in the course of the present narrative, will either
figure on other illustrations or must be “taken as read.” No. 8 is the Sunni Moulvi Habibulla, a Tájik of
Bukhara and a Hakîm (physician). No. 9 is my old retainer, Ghulám Muhammad, a Shiah of Gilgit, a Shîn
Dard (highest caste), who was prevented by me from cutting down his mother, which he was
attempting to do in order “to save her the pain of parting from him.” 10. Ibrahim Khan, a Shiah, Rôno
(highest official caste) of Nagyr, pilgrim to Kerbelá. 11. Sultan Ali Yashkun (2nd Shîn caste) Shiah, of
Nagyr, pilgrim to Kerbelá. The word “Yashkun” is, perhaps, connected with “Yuechi.”
The languages spoken by these men are: Khajuná by the Hunza-Nagyr men; Arnyiá by the Chitráli;
Turki by the Uzbek from Koláb; Shiná by the Gilgiti; Pakhtu and Shuthun, a dialect of Shiná, by the
Gabriáli. The people of Hunza are dreaded robbers and kidnappers; they, together with the people of
Nagyr, speak a language, Khajuná, which philologists have not yet been able to classify, but which I
believe to be a remnant of a pre-historic language. They are great wine-drinkers and most licentious.
They are nominally Muláis, a heresy within the Shiah schism from the orthodox Sunni Muhammadan
faith, but they really only worship their Chief or Raja, commonly called “Thàm.” The present ruler’s
name is Mohammad Khan. They are at constant feud with the people of Nagyr, who have some
civilization, and are now devoted Shiahs (whence the number of pilgrims, four, from one village). They
are generally fair, and taller than the people of Hunza, who are described as dark skeletons. The Nagyris
have fine embroideries, and are said to be accomplished musicians. Their forts confront those of Hunza
on the other side of the same river. The people of Badakhshán used to deal largely in kidnapped slaves.
A refugee, Shahzada Hasan, from the former royal line (which claims descent from Alexander the
Great), who has been turned out by the Afghan faction, was then at Gilgit with a number of retainers on
fine Badakhshi horses, awaiting the fortunes of war, or, perhaps, the support of the British. He was a
younger brother of Jehandár Shah, who used to infest the Koláb road, after being turned out by a
relative, Mahmûd Shah, with the help of the Amir of Kabul. Koláb is about eleven marches from
Faizabád, the capital of Badakhshán. The Chitráli is from Shogòt, the residence of Adam Khor (man-
eater), brother of Aman-ul-Mulk, of Chitrál, who used to sell his Shiah subjects regularly into slavery
and to kidnap Bashgeli Kafirs. The man from Gabriál was attracted to Lahore by the fame of the Oriental
College, Lahore, as were also several others in this group; and there can be no doubt that this
institution may still serve as a nucleus for sending pioneers of our civilization throughout Central Asia.
Gabriál is a town in Kandiá, or Kiliá, which is a secluded Dard country, keeping itself aloof from tribal
wars. Gilgit and its representative have been described in my “Dardistan,” to which refer, published in
parts between 1866 and 1877.

I. POLO IN HUNZA-NAGYR.

Although our first practical knowledge of “Polo” was derived from the Manipuri game as played at
Calcutta, it is not Manipur, but Hunza and Nagyr, that maintain the original rules of the ancient
“Chaughán-bazi,” so famous in Persian history. The account given by J. Moray Brown for the “Badminton
Library” of the introduction of Polo into England (Longmans, Green & Co., 1891), seems to me to be at
variance with the facts within my knowledge, for it was introduced into England in 1867, not 1869, by
one who had played the Tibetan game as brought to Lahore by me in 1866, after a tour in Middle and
Little Tibet. Since then it has become acclimatized not only in England, but also in Europe. The Tibet
game, however, does not reach the perfection of the Nagyr game, although it seems to be superior to
that of Manipur. Nor is Polo the only game in Hunza-Nagyr. “Shooting whilst galloping” at a gourd filled
with ashes over a wooden scaffold rivals the wonderful performances of “archery on horseback,” in
which the people of Hunza and Nagyr (not “Nagar,” or the common Hindi word for “town,” as the
telegram has it) are so proficient. Nor are European accompaniments wanting to these Central Asian
games; for prizes are awarded, people bet freely in Hunza as they do here, they drink as freely, listen to
music, and witness the dancing of lady charmers, the Dayál, who, in Hunza, are supposed to be
sorceresses, without whom great festivities lose their main attraction. The people are such keen
sportsmen that it is not uncommon for the Tham, or ruler, to confiscate the house of the unskilful
hunter who has allowed a Markhôr (Ibex) that he might have shot to escape him. Indeed, this even
happens when a number of Markhôrs are shut up in an enclosure, “tsá,” as a preserve for hunting. The
following literally translated dialogue regarding Polo and its rules tells an attentive reader more
“between the lines” than pages of instructions:—

Poló = Bolá.—The Raja has ordered many people: To-morrow Polo I will play. To
the musicians give notice they will play.
Hast thou given notice, O (thou)?
Yes, I have given notice, O Nazúr; let me be thy offering (sacrifice).
Well, we will come out, that otherwise it will become (too) hot.
The Raja has gone out for Polo; go ye, O (ye); the riders will start.
Now divided will be, O ye! (2) goals nine nine (games) we will do (play). Tola-
half (= 4 Rupees) a big sheep bet we will do.
Now bet we have made. To the Raja the ball give, O ye, striking (whilst
galloping) he will take.
O ye, efforts (search) make, young men, to a man disgrace is death; you your
own party abandon not; The Raja has taken the ball to strike; play up, O ye
musicians!
Now descend (from your horses) O ye; Tham has come out (victorious); now
again the day after to-morrow, he (from fatigue) recovering Poló we will strike
(play).
Rules:—The musical instruments of Polo; the ground for the game; the riders;
the goals; 9, 9 games let be (nine games won); the riders nine one side; nine one
(the other) side; when this has become (the case) the drum (Tsagàr) they will
strike.
First the Tham takes the ball (out into the Maidan to strike whilst galloping at full
speed).
The Tham’s side upper part will take.
The rest will strike from the lower part (of the ground).
Those above the goal when becoming will take to the lower part.
Those below the goal when becoming to above taking the ball will send it flying.
Thus being (or becoming) whose goal when becoming, the ball will be sent flying
and the musicians will play.
Whose nine goals when has become, they issue (victorious).

No. 1. Dareyli. No. 2. Gabriali.

No. 3. Hunza Man. No. 4. Nagyri.

II. THE KOHISTÁN OF THE INDUS, INCLUDING GABRIÁL.

Account of Mir Abdulla.

The real native place of Mir Abdulla is in the territory of Nandiyar; but his uncle migrated to, and
settled in, Gabriál. The Mir narrates:—
“In the country of Kunar there is a place called Pusht, where lives a Mulla who is famous for his
learning and sanctity. I lived for a long time as his pupil, studying Logic, Philosophy, and Muhammadan
Law, the subjects in which the Mulla was particularly proficient. When my absence from my native place
became too long, I received several letters and messages from my parents, asking me to give up my
studies and return home. At last I acceded to their pressing demands and came to my native village.
There I stayed for a long time with my parents; but as I was always desirous to pursue my studies, I
was meditating on my return to Pusht, or to go down to India.
In the meantime I met one Abdulquddūs of Kohistan, who was returning from India. He told me that
a Dár-ul-u’lûm (House of Sciences) had been opened at Lahore, the capital of the Punjab, where every
branch of learning was taught, and that it was superintended by Dr. L., who, being himself a proficient
scholar of Arabic and Persian, was a patron of learning and a warm supporter of students from foreign
countries. I was accompanied by two pupils of mine, named Sher Muhammad and Burhánuddin; and I
started together with them from my native village. We passed through the territory of Dir, which is
governed by Nawab Rahmatulla Khan. The Qazi of that place was an old acquaintance of mine, and he
persuaded me to stop my journey, and promised to introduce me to the Nawab, and procure for me a
lucrative and honourable post. I declined his offer, and continued my journey. The next territory we
entered was that of Nawab Tore Mian Khan, who reigns over eight or nine hundred people. After
staying there some days we reached Kanan Gharin, which was governed jointly by Nawabs Fazl Ahmad
and Bayazid Khan. After two days’ march we came to Chakesur, which was under a petty chief named
Suhe Khan. Here we were told that there are two roads to India from this place—one, which is the
shorter, is infested with robbers; and the other, the longer one, is safe; but we were too impatient to
waste our time, and decided at once to go by the shorter way, and proceeded on our journey. We met,
as we were told, two robbers on the road, who insisted on our surrendering to them all our baggage.
But we made up our minds to make a stand, though we were very imperfectly armed, having only one
“tamancha” among three persons. In the conflict which ensued, one of the robbers fell, and the other
escaped; but Burhanuddin, one of our party, was also severely wounded, and we passed the night on
the banks of a neighbouring stream, and reached next day Ganagar Sirkol Jatkol, where we halted for
eight or nine days. In this place the sun is seen only three or four times a year, when all the dogs of the
village, thinking him an intruding stranger, begin to bark at him. Burhanuddin, having recovered there,
went back to his home, and I, with the other companion, proceeded to the Punjab, and passing through
the territory of a chief, named Shálkhan, entered the British dominions. On arriving at Lahore we were
told that Dr. L. was not there, and my companion, too impatient to wait, went down to Rampur, and I
stayed at Lahore.” He then gave an account of—

THE KOHISTÁN (OR MOUNTAINOUS COUNTRY).


(A Different Country from one of the Same Name near Kabul.)

Boundaries.—It is bounded on the north by Chitrál, Yasin, and Hunza; on the east by Chilas, Kashmir, and a part of Hazara; on the south by Yaghistán (or wild country); and on the west by Swat and Yaghistán.
It is surrounded by three mountainous ranges running parallel to each other, dividing the country into
two parts (the northern part is called Gabriál). The Indus flows down through the country, and has a
very narrow bed here, which is hemmed in by the mountains.
The northern part, which is called Gabriál, has only two remarkable villages—Kandyá, on the western
side of the river, and Siwa on the eastern; and the southern part contains many towns and villages:—

On the eastern side of the river,—


Name of Town.                  Influential Malak (Landowner).
(1) Ladai                      Machú.
(2) Kolai                      Shah Said.
(3) Palas (9,000 pop.)         Lachur.
(4) Marín                      Karm Khán.

On the western side of the river,—

Name of Town.                  Influential Malak (Landowner).
(5) Batera
(6) Patan (8,000 pop.)         Qudrat Ali.
(7) Chakarga
(8) Ranotia

That part of Yaghistán which bounds Kohistan on the west is divided into (1) Thakot, which is
governed by Shalkhán, and (2) Dishán, which is under Ram Khan; and that part of Yaghistán which
bounds it on the south is divided into three valleys,—

(1) Alahi, governed by Arsalan Khan.
(2) Nandiyar, governed by Zafar Khan.
(3) Tikráí, governed by Ghaffar Khan (has also two cannons).

Between the southern part of Kohistan and Alahi, in the eastern corner, there is a plain, of a circular
form, surrounded on all sides by mountains. This plain is always covered with grass, and streams of
clear and fresh water run through it. Both the grass and the water of this vast meadow are remarkable
for their nourishing and digestive qualities. This plain is called “Chaur,” and is debatable ground
between the Kohistanis of Ladai, Kolai, and Palas, and the Afghans of Alahi.
People.—The people of this country are not allied to the Afghans, as their language shows, but have
the same erect bearing and beautiful features.
Language.—Their language is altogether different from that of their neighbours, the Afghans, as will
be shown by the following comparison:—

Kohistani. Pushto (the Afghan Language).

1. To-morrow night to Lahore I will go. 1. To-morrow night to Lahore I will go.
Douche rate Lahore bajanwa. Saba shapa ba Lahore shazam.

2. Thou silent be. 2. Thou silent be.
Tohe chut guda. Tah chup shai.

3. Prepare, ye young men. 3. Prepared be, O young men.
Jubti masha. Saubhal she zalmú.

There is a song very current in Kohistan which begins,—


Palas kulal mariga, Patane jirga hotiga, Johle johal madado propár asáli = “In Palas a potter was
killed, in Patan the jirga (or tribal assembly) sat.”
“The corrupted (Jirga of Malaks) took a bribe, and retaliation was ignored.” The Afghans are called
Pathans.
Religion.—They were converted to Islám four or five generations ago, and they have forsaken their old religion so completely that no tinge of it now remains; when a Kohistani is told that his people are “nau-Muslims,” that is, “new Muhammadans,” he becomes angry.
Muslim learning and the building of mosques have become common in Kohistan, and now we find twenty or thirty learned mullas in every considerable town, besides hundreds of students studying in mosques.
Dress.—Their national dress consists of a woollen hat, brimmed like that of Europeans, and a loose
woollen tunic having a long ‎‫‏ جاكى‏‬‎ along the right breast, so that one can easily get out the right hand
to wield one’s arms in a fight. Their trousers are also made of wool and are very tight. In the summer
they wear a kind of leathern shoes borrowed from the Afghans, but in the winter they wear a kind of
boots made of grass (the straw of rice) reaching to the knees. They call it “pájola.”
Till very lately their only arms were a small “khanjar” (dagger), bows and arrows; but they have
borrowed the use of guns and long swords from the Afghans.
The dress of their women consists of a loose woollen head-dress with silken fringes, a woollen tunic
and blue or black trousers of cotton cloth, which they call “shakara.” Generally their women work with
their husbands in the corn-fields, and do not live confined to their houses.
Government.—They have no chiefs like the Afghans, but influential Malaks, who are paid no tribute, salary, etc., lead them to battle.
When an enemy enters their country they whistle so sharply that the sound is heard for miles; then
the whole tribe assembles in one place for the defence of their country, with their respective Malaks at
their heads.
Mode of Living, and other Social Customs.—In winter they live in the valleys, in houses made of wood
and stones; but in summer they leave their houses in the valleys for those on the peaks of mountains,
and the mass of the population spends the summer in the cooler region; but those who cultivate the
land live the whole day in the valley, and when night comes go up to their houses on the heights. Their
food is the bread of wheat, and milk furnished by their herds of cattle (gaómesh, cows, goats, and
sheep), which is their sole property. There are no regular Bazárs even in the large villages; but the
arrival of a merchant from India is generally hailed throughout the country. The woollen cloth which
they generally use is manufactured by themselves.
Marriage.—Until very lately it was the custom amongst them that a young man was allowed to court any girl he wished; but now, from their contact with the Afghans, the system of “betrothal” at a very early age has been introduced, and until his marriage the boy does not go to that part of the village in which the girl betrothed to him lives. The Kohistanis say that they have learned three things from the Afghans:—
(1) The use of leathern shoes,
(2) The use of long swords and guns,
(3) The system of betrothal.

III. A ROUGH SKETCH OF KHATLÁN (KOLÁB) AND ADJOINING COUNTRIES.[114]

By Maulvi Najmuddin, a Theologian and Poet from Koláb.

Names of Manzils (Stations) From Kolab to the Punjab.

(1) Kolab (کولاب).
(2) Sayad (صیاد). Situated on this side of the Amoo, and belongs to Badakhshan.
(3) Yan-Qalá (ین قلع).
(4) Chahyáb (چاھیاب). Governed then (18 years ago) by Sultan Azdahar, son of Yusuf Ali Khán.
(5) Dashti-sabz (دشت سبز). A halting-place.
(6) Rustáq (رستاق). Governed then by Ismail Khán, son of Yusuf Ali Khán.
(7) Kizil Dara (قزل درہ).
(8) Elkáshán (ال‌کاشان). The Himalaya begins.
(9) Átin Jalab (اتن جلب). Here the river Kokcha[115] is crossed.
(10) Dasht-e-sufed (دشت سفید).
(11) Faízabad (فیض اباد). Capital of Badakhshan; governed then by Jahandár Shah; is situated on the river Kokchá.
(11) Rubát (رباط).
(12) Dashti Farákh (دشت فراخ).
(13) Wardúj (وردوج). Contains a mine of sulphur.
(14), (15) Names are forgotten.
(16) Zibáq (زیباق). Peopled by Shi’as (or rather Muláis).
(17) Deh Gôl (دہ گول). The frontier village of Badakhshán; only a kind of inn.
(18) Sanghar (سنگر). A halting-place.
(19) Chitrál (چترال). Governed then by Aman-ul-mulk (as now).
(20) Sarghál (سرغال).
(21) Rubatak (رباَطك).
(22) Dír (دیر). Governed then by Ghazan Khán.
(23) Swat (سوات).
(24) Peshawar (پشاور).

That part of the country lying at the foot of the Hindu Kush mountains, which is bounded on the
north by Kokand and Karatigan, on the east by Durwaz, on the south by Badakhshan and the Amu, on
the west by Sherabad and Hissar (belonging to Bukhara), is called Khatlan ختلان. Koláb, a considerable
town containing a population of about ten thousand, is situated at the distance of five miles from the
northern bank of the Amu, and is the capital of the province. The other towns of note are Muminabad
‎‫‏مؤمن اباد‏‬‎, Daulatabad ‎‫‏دولتاباد‏‬‎, Khawaling ‎‫‏خوالنگ‏‬‎, Baljawan ‎‫‏بلجوان‏‬‎, and Sarchashmá ‎‫‏سرچشمہ‏‬‎.
The country, being situated at the foot of mountains, and being watered by numerous streams, is
highly fertile. The most important products are rice, wheat, barley, kharpazá, etc.; and the people
generally are agricultural.
There is a mine of salt in the mountains of ‎‫‏خواجه مؤمن‏‬‎ Khawaja Mumin; and the salt produced
resembles the Lahori salt, though it is not so pure and shining, and is very cheap.
Cattle-breeding is carried on upon a great scale, and the wealth of a man is estimated by the number of cattle he possesses. There is a kind of goat in this country which yields a very soft kind of wool (called Tibit); and the people of Koláb prepare from it hose and a kind of turban, called Shamali (from shamal, the northern wind, from which it gives shelter).
Religion.—Generally the whole of the population belongs to the Sunni sect (according to the Hanafi
rite).
Tribes.—The population of the country is divided into Laqai, Battash, and Tajiks. The Laqais live in
movable tents (khargah) like the Kirghiz, and lead a roving life, and are soldiers and thieves by
profession. The Battashes live in villages, which are generally clusters of kappás (thatched cottages),
and are a peaceful and agricultural people. The Tajiks live in the towns, and are mostly artisans.
Language.—Turki is spoken in the villages and a very corrupt form of Persian in the towns. Most of
the words are so twisted and distorted that a Persian cannot understand the people of the country
without effort.
Government.—The country is really a province of Bukhárá; but a native of Kolab, descended from the
Kapchaqs by the father’s and from the Laqais by the mother’s side, became independent of Bukhará.
After his death, his four sons, Sayer Khan, Sara Khan, Qamshin Khan, Umra Khan, fought with one
another for the crown; and Sara Khan, having defeated the other three, came to be the Chief of the
province, but was defeated by an army from Bukhará and escaped to Kabul.
When Najmuddin left his country, it was governed by a servant of the court of Bukhárá.
The houses are generally built of mud, cut into smooth and symmetrical walls, and are plastered with a kind of lime called guch. Burnt bricks are very rare, and only the palace of the governor is made partially of them. The walls are roofed with thatch made of “damish” (reeds), which grow abundantly on the banks of the Amoo.
The dress consists of long, flowing choghás (stuffed with cotton) and woollen turbans. The Khatlanis
wear a kind of full boot which they call chamush; but lately a kind of shoe, called nughai, has been introduced from Russia.
The country is connected with Yarkand by two roads, one running through Kokand and the other
through the Pamir.

The above and following accounts were in answer to questions by Dr. Leitner, whose independent
researches regarding Kandiá in 1866-72 were thus corroborated in 1881, and again in 1886, when the
photographs which serve as the basis of our illustrations were taken.

IV. THE LANGUAGE, CUSTOMS, SONGS, AND PROVERBS OF GABRIÁL.

Position.—A town in Kandiá, a part of Yaghistan (the independent, or wild, country) situated beyond
the river Indus (Hawā-sinn), which separates it from Chilás. The country of Kandiá extends along both
sides of the Kheri Ghá, a tributary of the Indus, and is separated from Tangir by a chain of mountains.
The town of Gabriál is situated three days’ march from Jalkôt, in a north-west direction, and is one
day’s march from Patan, in a northerly direction. Patan is the chief city of Southern Kandiá.
Inhabitants.—The whole tract of Kandiá can send out 20,000 fighting men. They are divided into the
following castes:—
(1) Shîn, the highest, who now pretend to be Quraishes, the Arabs of the tribe to which the Prophet
Muhammad belonged. (Harif Ullá, the Gabriáli, and Ghulam Mohammad, of Gilgit, call themselves
Quraishes.)
(2) Yashkun, who now call themselves Mughals, are inferior to the Shîn. A Yashkun man cannot marry
a Shîn woman. Ahmad Shah, the Jalkoti, belonged to this caste.

(3) Doeúzgar, carpenters.
(4) Jolá, weavers.
(5) Akhár, blacksmiths.
(6) Dôm, musicians.
(7) Kámìn, lowest class.
(In reality these people constitute no distinct castes, but all belong to a third, the Kamin, caste.)

The people of Northern Kandiá (Gabriál) are called Bunzárî, and of the southern part (i.e., Patan)
Maní, as the Chilasis are called Boté. A foreigner is called Raráwi, and fellow-countryman, Muqámi.
Religion.—The Gabriális, as well as all the people of Chilás, Patan, and Palas, are Sunnis, and are very intolerant of the Shias, who are kidnapped and kept in slavery (Ghulam Mohammad, the Gilgiti, has been for many years a slave in Chilás, as Ahmad Shah reports). The Gabriális were converted to Muhammadanism by a saint named Bâbâji, whose shrine in Gabriál is one of the places most frequented by pilgrims. The Gabriális say that this saint lived six or seven generations ago. Mir Abdulla (who is really of Afghanistan, but now lives in Gabriál) says that the Gabriális were converted to
Islám about 150 years ago. Lately, this religion has made great progress among the people of Kandiá
generally. Every little village has a mosque, and in most of the towns there are numerous mosques with
schools attached to them, which are generally crowded by students from every caste. In Gabriál, the
Mullahs or priests are, for the most part, of the Shîn caste, but men of every caste are zealous in giving
education to their sons. Their education is limited to Muhammadan law (of the Hanifite school), and
Arabian logic and philosophy. Very little attention is paid to Arabic or Persian general literature and
caligraphy, that great Oriental art; so little, indeed, that Harifullah and Mir Abdulla, who are scholars of
a very high standard, are wholly ignorant of any of the caligraphic forms, and their handwriting is
scarcely better than that of the lowest primary class boys in the schools of the Punjab.
The most accomplished scholar in Kandiá is the high priest and chief of Patan, named Hazrat Ali, who
is a Shîn.
The people generally are peaceful, and have a fair complexion and erect bearing. Their social and
moral status has lately been raised very high. Robbery and adultery are almost unknown, and the usual
punishment for these crimes is death. Divorce is seldom practised; polygamy is not rare among the rich
men (wadán), but is seldom found among the common people.
Government.—Every village or town is governed by a Council of elders, chosen from among every tribe
or “taífa.” The most influential man among these elders for the time being is considered as the chief of
the Council. These elders are either Shîns or Yashkuns. No Kamìn can be elected an elder; he may become a Mullá, but even a Mulla-kamìn cannot be admitted to the Council.
The reigning Council of Gabriál consists of 12 persons, of whom 9 are Shins and 3 Yashkuns. Patshé
Khân is the present chief of the Council. The post of Chief of the Council is not hereditary, but the
wisest and the most influential of the elders is elected to that post. Justice is administered by the
Mullahs without the interference of the Council, whose operation is limited to inter-tribal feuds.
Customs and Manners.—Hockey on horseback, which is called “lughât” in Gabriál, is played on holidays;
and the place where they meet for the sport is called “lughât-kárin-jha.”
Guns are called “nâli” in Gabriál, and are manufactured in the town by blacksmiths.
Dancing is not practised generally, as in the other Shin countries. Only “Doms” dance and sing, as this
is their profession; they play on the “surúi” (pipe), rabáb (harp), and shaṇdo (drum).
The “purdá” system, or “veiling” women, is prevalent among the gentry, but it is only lately that the
system was introduced into this country.
When a son is born, a musket is fired off, and the father of the newborn son gives an ox as a present
to the people, to be slaughtered for a general festival.
Infanticide is wholly unknown.
Marriage.—The father of the boy does not go himself, as in Gilgit, to the father of the girl, but sends a
man with 5 or 6 rupees, which he offers as a present. If the present is accepted, the betrothal (lóli) is
arranged. As far as the woman is concerned the “lóli” is inviolable. The usual sum of dowry paid in cash
is 80 rupees.
A bride is called “zhiyán,” and the bridegroom “zhiyán lo.”
Language.—On account of the want of intercourse between the tribes the language of Kohistan is
broken into numerous dialects; thus the structure of the dialects spoken in Kandiá, i.e., in Gabriál and
Patan, differs from that of the language spoken in Chilás and Palus, i.e., in the countries situated on this
side of the Indus. Harifullah, a Gabriáli, did not understand any language except his own; but Ahmad
Shah, an inhabitant of Jalkôt (situated in the southern part of Chilás), understood Gabriáli, as he had
been there for a time. Ghulam Mohammad, our Gilgiti man, who had been captured in an excursion,
and had lived as a slave in Chilás, also thoroughly understood Jalkóti.
The language of Kohistan (as Chilás, Kandiá, etc., are also called) is divided into two dialects, called
Shéná and Shúthun respectively. In the countries situated on that side of the Indus, that is in Kandiá,
Shúthun is spoken.
The following pages are devoted to Ballads, Proverbs, Riddles, and Dialogues in the Shúthun dialect.
Songs = Gíla. Meshón gíla = men’s songs; Gharón gíla = women’s songs.

1. An Elegy.

Fifteen years ago a battle was fought between Arslán Khan of Kali, and Qamar Ali Khan of Pálus, in
which 300 men were killed on both sides. Phaju, on whose death the elegy is written by his sister, was
one of the killed. The inhabitants of Palus are called “Sikhs,” in reproach.
i.
Rugé níle, jimátyán-kachh-dúkánt,
In a green place, next a mosque, in a sitting (resting) place,
Chá chápár gála mazé, shahzada marégil
In a surrounded fort within, the prince was killed
Rugé níle, jimátyán kachh, dúkánt
In a green place, next a mosque, in a resting place
Sheú wále, bathrí, sóh viráti walégil.
Bring the bier, lay it down, (so that) that heirless one may be brought to his home.
ii.
Rúge níle, wo Shérkot shar hogaé,
In the green place, that Sherkot, where the halting-places of guests
Diri Sikáno qatle karégil.
Are deserted, the Sikhs (infidels, that is the Pálusis) slaughter committed (did).
Rúge níle, Shérkot, barí bigá hojowo,
In the green place, in Sherkot, a great fight happened to be,
Kali Khel, Phajú dasgír marégil.
O Kalikhel (a tribe of Kohistan) Phajú is captured and killed.

Translation.

1. In a green place, next the mosque, in a place of rest,
Within an enclosure the prince was killed.
In a green place, next a mosque, in a spot of rest,
Bring the bier and lay it down, to bring him home who has no heir.
2. In the green place, that Sherkôt, where the halting-place of guests
Is deserted, the Sikhs committed slaughter.
In the green place, in Sherkot, a great fight took place,
Oh, Kalikhel tribe, Phajú was captured and killed.

2. The following song is a chârbait, or quatrain, composed by Qamrán, a Gabriali poet. The song
treats of the love between Saif-ul-mulk, a prince of Rúm, and Shahparì (the Fairy-queen).
The first line of a charbait is called Sarnâmâh, and the remaining poem is divided into stanzas or
“Khhàṛáo,” each consisting of four lines. At the end of every stanza the burden of the song is repeated:
Sarnamah.—Ma húga musfar, mi safár hugâe Hindustan waín
I became a stranger, my travel became towards Hindustan.
Mí duâ’ salám, duâ’ salámi ahl Kohistan waín
My prayer-compliments, prayer-compliments, to the inhabitants of Kohistan (may go
forth).
Malá Malúkh thû, O Badrái tou ínê haragilua
I myself am Malukh (name of the Prince Saif-ul-mulk), O Badra, thou didst lose me.
Burden.—Hái, Malá Malúkh thû, O Badrái, ché Malúkh tîṇ tâó bar zíthu
Woe, I am Malukh, O Badra, now thy Malukh from thy sorrow has lost his senses.
i.
Stanzas.—1. Mala Malukh thu, O Badrai, Malúkh tîṇ, tâó thú dazélo
I myself am Malukh, O Badra, thy Malukh burnt has been from thy heat.
2. Hyó níeṇ nidhéto qarâré, Malúkh Badré wátbe thú harzélo
In the heart there is no ease, which Malukh after Badra has lost.
3. Be tí áṇs yârâúâ, mah pai-mukhé á’ṇs soh wéloṇ
Ours, yours, was friendship, I beardless at that time.
4. Gini kirí thi, háê háê, mi Azli qalam zikzithu
Why dost thou ... woe! woe! the pen of Eternity wrote so.
Burden.—5. Hái, Malá Malúkh thu, O Badrai, Ché Malukh tîṇ tâó harzi thu.
Woe, I am Malukh, O Badra, etc., etc.
ii.
1. Gini kiri the, hae hae, mi azló mazé lìkh taqdîr thú
Why dost thou ... woe, woe! in Eternity did Fate write so.
2. Darwázoṇ mazá galáchhe dhuî Mato tiṇ daráṇ faqîr thu
On thy gate I lit fire (like Jôgís), I a boy was the beggar of thy door.
3. To hikmat biu báz-shâî thi kishéu lûṇgo maza zanzîr thu
By thy stratagem thou takest the eagle a prisoner in the chain of thy black locks.
4. Kisheu lûngá, narai narai, panar mûṇla bé the zetdu
Black locks, in strings, on thy bright face are twined.
5. Hae Mala Malukh thu....
Woe, I am Malukh, etc....
iii.
1. Kisheu lûngá narai narai, panar mûṇ la âwizâṇ thu
Black locks in strings on thy bright face are hanging.
2. Mi laṛmûṇ mazá karáé, tiu makhchúe gi mi armâṇ thu
In my body is the knife, thine is this deed which was my desire.
3. A’khir dhar héṇti nímgaré shoṇ fáni na, malá rawâṇ thu
At length will remain unfinished this waning (world), I now depart.
4. Hyó mi kir súraí súraí, Jandun giná thu, ma mari thu
My heart didst thou pierce in holes, where is my life, I am dead.
5. Hae Hae....
Woe, I am Malukh, etc.
iv.
1. Hyó mi kir súraí súraí térubir, teṇ shon niázah ghiu
My heart didst thou pierce throughout, by this thy spear.
2. Mála thu muṛé, ti dalbaráṇ, lailo bá mi janázah ghiu
I am thy dead boy, thy lover, O dearest, go off from my bier.
3. Khún tiu gḥaṛ hoga, ghi tulá nibháé ansi khévah ghiu
My blood is on thy neck, alas! thou didst not sit with me, being engaged in thy
toilet.
4. Khévah kirethi zhare tin soh khiyál mudá chaizbithú
Thy toilet do now, now that thy remembrance of me is slackened by Time.
Matal (Masl = Proverbs).

Proverbs.—(1) Zánda chapélo razan bhiyáṇt.
One who is struck by all, fears even a rope.
(2) Zoṛoṇ waé nhálé khurá zhiká.
Looking towards (the length of) the sheet, extend your feet.
(3) Háte ché rachhélú darwáze aṛat kara.
Elephant if you keep, make your door wide.
(4) Kaṛotál ghutágir, láwáṇ na hol kir.
The Lion attacks, the Jackal makes water.
(5) Qá mil tillu gûṇ kaáṇt, báz mil tillu máséu khánt.
With crow went, ate dung; with eagle went, ate flesh.
i.e. In the company of the crow you will learn to eat dung and in that of the eagle,
you will eat flesh.
(6) Taṇgá gatam karé rupaé balyúṇ.
A penny, for collecting went, lost rupee.
(7) Aíṇ tale kaṇwalé déthé, mazé háṛ shárá túṇ.
Big mouth flattery does, inwardly (in mind) breaks bones.
(8) Dúṇí lawáṇo karú márch.
Two Jackals a lion kill.
(9) Dhon mazé ek bakrí budi agalu, bûtoṇ bakroṇ ethi.
In a flock, if a contagious disease to one goat come, it comes to all goats.
(10) Gúṇ khuch táṇt soṇ, gháṇo cháí hont.
Dung is spread out however much, bad smell so much more becomes.
(11) Zhá zhui dárú.
Brother’s remedy is brother.
(12) Tálaiṇ uthi, kozá dishál, tiu dú boṇdi.
A sieve rose, to pot said, “You have two holes.”
(13) Zar bádshah tamam hotoṇ, hiyá bandgár shilát.
Money of the king is spent, heart of the treasurer pains.

Isholá (Question).

Riddles.—(1) Shúṇ ghélá chíz thuṇ, che naháláṇt tasi wáiṇ pasháṇt amá?
Such what thing is, which they see towards it, they see themselves in it?
Answer: Mirror. Shúṇ áhan thi. = Such mirror is.

(2) Shúṇ gheḷá chíz thúṇ che surat záné thi, tilháṇt nai?
Such what thing is, whose figure serpent-like is, does not move?
Answer: Rope. Shúṇ rás thi. = Such rope is.

(3) Shúṇ ghelá chíz thúṇ, aṇgár dheráni gellú, dhúaṇ darya bau nikáṇt?
Such what thing is, fire is applied to dry grass, the river of smoke flows from it.
Answer: Hookâh.

(4) Shúṇ ghélá chíz thúṇ, che mut surté waré nahále? hasáṇt, khuroṇ we nahále roṇt?
Such what thing is, who seeing towards other body laughs, seeing towards feet,
weeps?
Answer: Peacock.
SHÚTHUN.
WORDS AND DIALOGUES.

Words.

God, Khávaṇd.
fairy, kháperé.
demon, div.
female demon, balái.
paradise, janat.
fire, aṇgár.
earth, uzmuk.
water, wí.
heaven, asmán.
moon, yúṇ.
star, tará.
darkness, tamáí.
shadow, chhoṇl.
day, des.
light, láwar.
night, rál.
midday, mazardi.
midnight, áṛ-rál.
evening, nosháṇ.
to-day, ázuk des.
yesterday, bayaluk des.
to-morrow, rályaṇk des.
heat, taó, tát.
cold, hewán.
flame, lám.
smoke, dhúáṇ.
thunder, hagá-dazi-gé.
lightning, mili.
rain, ájo.
drop, ájo-tìpo.
rainbow, bijonṛ.
snow, hiṇ yúṇ.
ice, kambuk.
hail, mékh.
dew, palús.
earthquake, bhúnál.
dust, udhún.
pebbles, lakh-bato.
sand, sighál.
mud, chichál.
plain, maidán, meráh.
valley, dará.
mount, kháu.
foot of mountain, múndh.
river, sín.
wooden bridge, síú.
rivulet, uchhu.
streamlet, kháṛ.
avalanche, hiṇál.
lake, dhám.
pond, dhamkalú.
confluence, milil.
banks, sin-kaí.
yonder bank, pír sinkai.
this bank, ár sinkai.
a well, kohi.
country, watau.
village, gáụ.

place, zhaí.
army, kauár.
leader, kauár sardár.
lumberdár, malak.
tax-gatherer, jám kai.
policeman, zeitú.
cannon, tof.
gun, náli.
sword, tarwál.
dagger, karái.
lance, naizá, shel.
powder, náláṇ daru.
ball, goli.
ditches, kahe.
war, kali.
thief, lú.
sentinel, ráth.
guard, chár.
guide, pan-pasháṇtuk.
coward, khiá to.
traitor, fatandár.
bribe, baṛi.
prisoner, bandi.
slave, dim.
master, maulá.
servant, naukar.
drum, shaudo.
sheath, káti.
grip, kauzá.
bottom of sheath, kundi.
hatchet, ckháí.
file, soán.
smoothing iron, rambi.
scythe, liṇzh.
tongs, ochhúṇ.
razor, chhúr.
mirror, áhin.
plough, hól.
oar, phiyá.
yoke, úṇ.
ladle, tagú.
kneading roller, chhagór.
kettle, chati.
little kettle, chedin.
stone kettle, botá-bháṇ.
pan, to.
coal, phúthe.
key, kunji.
lion, khará.
shawl, shíyúṇ.
bedding, bathár.
lock, sáṛ.
bolt, hul.
vineyard, dháṇgá.
stable, ghozai.
” for cattle, gáṇ zai.
” for sheep, bakroṇ-ghuzál.
water mill, yáṇzh.
iron peg, kili.
bullet-bag, koti.
powder-flask, darú kothi.
iron and flint, tíz.
tinder, khú.
bow, sháe.
arrow, káṇó.
quiver, káṇó bhaṇ.
ship, jaház.
boat, heṛi.

century, shol kála.
year, kála.
half-year, aṛa-kála.
three months, sha-yúṇ.
week, sát-dés.
spring, basáṇ.
summer, barish.
autumn, sharal.

Lunar Muhammadan Months.

Khudá tálá yúṇ, Rajab.
Shahqadar, Shaaban.
Rozoṇ yúṇ, Ramazan.
Lukut (smaller) eed yúṇ, Shawal.
Kháli yúṇ, Zi Qáad.
Gháíṇ eed yúṇ, Zi Haj.
Hasan Husain yúṇ, Muharram.
Chár bheyáṇ (the four sisters), the four months Rabi-ul-awwal, Rabi 2, Jamadi 1, and Jamadi 2.

man, máṇsho.
male, mésh.
woman, gharoṇ.
new-born child, chinot.
girl, mati.
virgin, bikra-mati.
bachelor, cháur.
old man, zárá.
old woman, zírí.
puberty, zuáni.
life, zhigi.
death, máreg.
sickness, ráṇs.
sick, najúr.
health, mith ráhat.
relation, zhává.
brotherhood, sak zhá.
friend, yár.
aunt, máfi.
father, abá.
paternal uncle, pichá.
mother, yá.
brother, zhá.
sister, bhiyúṇ.
son, púsh.
daughter, dhí.
daughter’s husband, zamá zhú.
grandson, pázho.
granddaughter, pozhi.
nephew, zhá-lichh.
husband, baryú.
wife’s brother, shábri.
wife’s mother, ichosh.
wife’s father, shor.
pregnancy, ghaleíṇ.
nurse, razáí mahal.
priest, moláṇ.
mosque, jamáat.
pupil, shágar.
sportsman, dháuzír.
goldwasher, keryáṇ.
peasant, déqán.
horse-stealer, gálwáṇ.
robber, lú.
brick-baker, ustá kár.
butcher, qasábi.
shepherd, payál.
cowherd, go-chár.
groom, kharbal.

body, surté adúmá.
skin, chám.
bones, hár.
marrow, métho.
flesh, maséṇ.
fat, miyún.
blood, rát.
veins, rage.
head, shish.
occiput, shisháṇ-kokar.
brain, metho.
curls, chaṇdú.
tresses, pétú.
forehead, tál.
eyes, aṇchhi.
eyebrow, ruzí.
eyelids, papáíṇ.
pupil, machhá.
tears, áṇchhe.
ears, kaná.
hearing, shúoṇ.
cheeks, hargel.
chin, dáí.
nose, nathúr.
nostrils, shúli.
odour, gháṇ.
sneezing, zhitá.
upper lip, bul-dhút.
nether lip, múṇ-dhút.
mouth, áiṇ.
taste, khoṇd.
licking, chara.
