PARALLEL COMPUTERS
Architecture and Programming
SECOND EDITION

V. RAJARAMAN
Honorary Professor
Supercomputer Education and Research Centre
Indian Institute of Science Bangalore
C. SIVA RAM MURTHY
Richard Karp Institute Chair Professor
Department of Computer Science and Engineering
Indian Institute of Technology Madras
Chennai

Delhi-110092
2016
PARALLEL COMPUTERS: Architecture and Programming, Second Edition
V. Rajaraman and C. Siva Ram Murthy

© 2016 by PHI Learning Private Limited, Delhi. All rights reserved. No part of this book may be reproduced in any form, by
mimeograph or any other means, without permission in writing from the publisher.
ISBN-978-81-203-5262-9

The export rights of this book are vested solely with the publisher.

Eleventh Printing (Second Edition), July 2016

Published by Asoke K. Ghosh, PHI Learning Private Limited, Rimjhim House, 111, Patparganj Industrial Estate, Delhi-
110092 and Printed by Mohan Makhijani at Rekha Printers Private Limited, New Delhi-110020.
To
the memory of my dear nephew Dr. M.R. Arun
— V. Rajaraman
To
the memory of my parents, C. Jagannadham and C. Subbalakshmi
— C. Siva Ram Murthy
Table of Contents
Preface
1. Introduction
1.1 WHY DO WE NEED HIGH SPEED COMPUTING?
1.1.1 Numerical Simulation
1.1.2 Visualization and Animation
1.1.3 Data Mining
1.2 HOW DO WE INCREASE THE SPEED OF COMPUTERS?
1.3 SOME INTERESTING FEATURES OF PARALLEL COMPUTERS
1.4 ORGANIZATION OF THE BOOK
EXERCISES
Bibliography
2. Solving Problems in Parallel
2.1 UTILIZING TEMPORAL PARALLELISM
2.2 UTILIZING DATA PARALLELISM
2.3 COMPARISON OF TEMPORAL AND DATA PARALLEL PROCESSING
2.4 DATA PARALLEL PROCESSING WITH SPECIALIZED PROCESSORS
2.5 INTER-TASK DEPENDENCY
2.6 CONCLUSIONS
EXERCISES
Bibliography
3. Instruction Level Parallel Processing
3.1 PIPELINING OF PROCESSING ELEMENTS
3.2 DELAYS IN PIPELINE EXECUTION
3.2.1 Delay Due to Resource Constraints
3.2.2 Delay Due to Data Dependency
3.2.3 Delay Due to Branch Instructions
3.2.4 Hardware Modification to Reduce Delay Due to Branches
3.2.5 Software Method to Reduce Delay Due to Branches
3.3 DIFFICULTIES IN PIPELINING
3.4 SUPERSCALAR PROCESSORS
3.5 VERY LONG INSTRUCTION WORD (VLIW) PROCESSOR
3.6 SOME COMMERCIAL PROCESSORS
3.6.1 ARM Cortex A9 Architecture
3.6.2 Intel Core i7 Processor
3.6.3 IA-64 Processor Architecture
3.7 MULTITHREADED PROCESSORS
3.7.1 Coarse Grained Multithreading
3.7.2 Fine Grained Multithreading
3.7.3 Simultaneous Multithreading
3.8 CONCLUSIONS
EXERCISES
BIBLIOGRAPHY
4. Structure of Parallel Computers
4.1 A GENERALIZED STRUCTURE OF A PARALLEL COMPUTER
4.2 CLASSIFICATION OF PARALLEL COMPUTERS
4.2.1 Flynn’s Classification
4.2.2 Coupling Between Processing Elements
4.2.3 Classification Based on Mode of Accessing Memory
4.2.4 Classification Based on Grain Size
4.3 VECTOR COMPUTERS
4.4 A TYPICAL VECTOR SUPERCOMPUTER
4.5 ARRAY PROCESSORS
4.6 SYSTOLIC ARRAY PROCESSORS
4.7 SHARED MEMORY PARALLEL COMPUTERS
4.7.1 Synchronization of Processes in Shared Memory Computers
4.7.2 Shared Bus Architecture
4.7.3 Cache Coherence in Shared Bus Multiprocessor
4.7.4 MESI Cache Coherence Protocol
4.7.5 MOESI Protocol
4.7.6 Memory Consistency Models
4.7.7 Shared Memory Parallel Computer Using an Interconnection Network
4.8 INTERCONNECTION NETWORKS
4.8.1 Networks to Interconnect Processors to Memory or Computers to
Computers
4.8.2 Direct Interconnection of Computers
4.8.3 Routing Techniques for Directly Connected Multicomputer Systems
4.9 DISTRIBUTED SHARED MEMORY PARALLEL COMPUTERS
4.9.1 Cache Coherence in DSM
4.10 MESSAGE PASSING PARALLEL COMPUTERS
4.11 Computer Cluster
4.11.1 Computer Cluster Using System Area Networks
4.11.2 Computer Cluster Applications
4.12 Warehouse Scale Computing
4.13 Summary and Recapitulation
EXERCISES
BIBLIOGRAPHY
5. Core Level Parallel Processing
5.1 Consequences of Moore’s law and the advent of chip multiprocessors
5.2 A generalized structure of Chip Multiprocessors
5.3 MultiCore Processors or Chip MultiProcessors (CMPs)
5.3.1 Cache Coherence in Chip Multiprocessor
5.4 Some commercial CMPs
5.4.1 ARM Cortex A9 Multicore Processor
5.4.2 Intel i7 Multicore Processor
5.5 Chip Multiprocessors using Interconnection Networks
5.5.1 Ring Interconnection of Processors
5.5.2 Ring Bus Connected Chip Multiprocessors
5.5.3 Intel Xeon Phi Coprocessor Architecture [2012]
5.5.4 Mesh Connected Many Core Processors
5.5.5 Intel Teraflop Chip [Peh, Keckler and Vangal, 2009]
5.6 General Purpose Graphics Processing Unit (GPGPU)
EXERCISES
BIBLIOGRAPHY
6. Grid and Cloud Computing
6.1 GRID COMPUTING
6.1.1 Enterprise Grid
6.2 Cloud computing
6.2.1 Virtualization
6.2.2 Cloud Types
6.2.3 Cloud Services
6.2.4 Advantages of Cloud Computing
6.2.5 Risks in Using Cloud Computing
6.2.6 What has Led to the Acceptance of Cloud Computing
6.2.7 Applications Appropriate for Cloud Computing
6.3 CONCLUSIONS
EXERCISES
BIBLIOGRAPHY
7. Parallel Algorithms
7.1 MODELS OF COMPUTATION
7.1.1 The Random Access Machine (RAM)
7.1.2 The Parallel Random Access Machine (PRAM)
7.1.3 Interconnection Networks
7.1.4 Combinational Circuits
7.2 ANALYSIS OF PARALLEL ALGORITHMS
7.2.1 Running Time
7.2.2 Number of Processors
7.2.3 Cost
7.3 PREFIX COMPUTATION
7.3.1 Prefix Computation on the PRAM
7.3.2 Prefix Computation on a Linked List
7.4 SORTING
7.4.1 Combinational Circuits for Sorting
7.4.2 Sorting on PRAM Models
7.4.3 Sorting on Interconnection Networks
7.5 SEARCHING
7.5.1 Searching on PRAM Models
Analysis
7.5.2 Searching on Interconnection Networks
7.6 MATRIX OPERATIONS
7.6.1 Matrix Multiplication
7.6.2 Solving a System of Linear Equations
7.7 PRACTICAL MODELS OF PARALLEL COMPUTATION
7.7.1 Bulk Synchronous Parallel (BSP) Model
7.7.2 LogP Model
7.8 CONCLUSIONS
EXERCISES
BIBLIOGRAPHY
8. Parallel Programming
8.1 MESSAGE PASSING PROGRAMMING
8.2 MESSAGE PASSING PROGRAMMING WITH MPI
8.2.1 Message Passing Interface (MPI)
8.2.2 MPI Extensions
8.3 SHARED MEMORY PROGRAMMING
8.4 SHARED MEMORY PROGRAMMING WITH OpenMP
8.4.1 OpenMP
8.5 HETEROGENEOUS PROGRAMMING WITH CUDA AND OpenCL
8.5.1 CUDA (Compute Unified Device Architecture)
8.5.2 OpenCL (Open Computing Language)
8.6 PROGRAMMING IN BIG DATA ERA
8.6.1 MapReduce
8.6.2 Hadoop
8.7 CONCLUSIONS
EXERCISES
BIBLIOGRAPHY
9. Compiler Transformations for Parallel Computers
9.1 ISSUES IN COMPILER TRANSFORMATIONS
9.1.1 Correctness
9.1.2 Scope
9.2 TARGET ARCHITECTURES
9.2.1 Pipelines
9.2.2 Multiple Functional Units
9.2.3 Vector Architectures
9.2.4 Multiprocessor and Multicore Architectures
9.3 DEPENDENCE ANALYSIS
9.3.1 Types of Dependences
9.3.2 Representing Dependences
9.3.3 Loop Dependence Analysis
9.3.4 Subscript Analysis
9.3.5 Dependence Equation
9.3.6 GCD Test
9.4 TRANSFORMATIONS
9.4.1 Data Flow Based Loop Transformations
9.4.2 Loop Reordering
9.4.3 Loop Restructuring
9.4.4 Loop Replacement Transformations
9.4.5 Memory Access Transformations
9.4.6 Partial Evaluation
9.4.7 Redundancy Elimination
9.4.8 Procedure Call Transformations
9.4.9 Data Layout Transformations
9.5 FINE-GRAINED PARALLELISM
9.5.1 Instruction Scheduling
9.5.2 Trace Scheduling
9.5.3 Software Pipelining
9.6 Transformation Framework
9.6.1 Elementary Transformations
9.6.2 Transformation Matrices
9.7 PARALLELIZING COMPILERS
9.8 CONCLUSIONS
EXERCISES
BIBLIOGRAPHY
10. Operating Systems for Parallel Computers
10.1 RESOURCE MANAGEMENT
10.1.1 Task Scheduling in Message Passing Parallel Computers
10.1.2 Dynamic Scheduling
10.1.3 Task Scheduling in Shared Memory Parallel Computers
10.1.4 Task Scheduling for Multicore Processor Systems
10.2 PROCESS MANAGEMENT
10.2.1 Threads
10.3 Process Synchronization
10.3.1 Transactional Memory
10.4 INTER-PROCESS COMMUNICATION
10.5 MEMORY MANAGEMENT
10.6 INPUT/OUTPUT (DISK ARRAYS)
10.6.1 Data Striping
10.6.2 Redundancy Mechanisms
10.6.3 RAID Organizations
10.7 CONCLUSIONS
EXERCISES
BIBLIOGRAPHY
11. Performance Evaluation of Parallel Computers
11.1 BASICS OF PERFORMANCE EVALUATION
11.1.1 Performance Metrics
11.1.2 Performance Measures and Benchmarks
11.2 SOURCES OF PARALLEL OVERHEAD
11.2.1 Inter-processor Communication
11.2.2 Load Imbalance
11.2.3 Inter-task Synchronization
11.2.4 Extra Computation
11.2.5 Other Overheads
11.2.6 Parallel Balance Point
11.3 SPEEDUP PERFORMANCE LAWS
11.3.1 Amdahl’s Law
11.3.2 Gustafson’s Law
11.3.3 Sun and Ni’s Law
11.4 SCALABILITY METRIC
11.4.1 Isoefficiency Function
11.5 PERFORMANCE ANALYSIS
11.6 CONCLUSIONS
EXERCISES
BIBLIOGRAPHY
Appendix
Index
Preface

There is a surge of interest today in parallel computing. A general consensus is emerging
among professionals that the next generation of processors as well as computers will work in
parallel. In fact, all new processors are multicore processors in which several processors are
integrated in one chip. It is therefore essential for all students of computing to understand the
architecture and programming of parallel computers. This book is an introduction to this
subject and is intended for the final year undergraduate engineering students of Computer
Science and Information Technology. It can also be used by students of MCA who have an
elective subject in parallel computing. Working IT professionals will find this book very
useful to update their knowledge about parallel computers and multicore processors.
Chapter 1 is introductory and explains the need for parallel computers. Chapter 2 discusses
at length the idea of partitioning a job into many tasks which may be carried out in parallel by
several processors. The concept of job partitioning, allocating and scheduling and their
importance when attempting to solve problems in parallel is explained in this chapter. In
Chapter 3 we deal with instruction level parallelism and how it is used to construct modern
processors which constitute the heart of parallel computers as well as multicore processors.
Starting with pipelined processors (which use temporal parallelism), we describe superscalar
pipelined processors and multithreaded processors.
Chapter 4 introduces the architecture of parallel computers. We start with Flynn’s
classification of parallel computers. After a discussion of vector computers and array
processors, we present in detail the various implementation procedures of MIMD
architecture. We also deal with shared memory, CC-NUMA architectures, and the important
problem of cache coherence. This is followed by a section on message passing computers and
the design of Cluster of Workstations (COWs) and Warehouse Scale parallel computers used
in Cloud Computing.
Chapter 5 is a new chapter in this book which describes the use of “Core level parallelism”
in the architecture of current processors which incorporate several processors on one
semiconductor chip. The chapter begins by describing the developments in both
semiconductor technology and processor design which have inevitably led to multicore
processors. The limitations of increasing clock speed, instruction level parallelism, and
memory size are discussed. This is followed by the architecture of multicore processors
designed by Intel, ARM, and AMD. The variety of multicore processors and their application
areas are described. In this chapter we have also introduced the design of chips which use
hundreds of processors.
Chapter 6 is also new. It describes Grid and Cloud Computing which will soon be used by
most organizations for their routine computing tasks. The circumstances which have led to
the emergence of these new computing environments, their strengths and weaknesses, and the
major differences between grid computing and cloud computing are discussed.
Chapter 7 starts with a discussion of various theoretical models of parallel computers such
as PRAM and combinational circuits, which aid in designing and analyzing parallel
algorithms. This is followed by parallel algorithms for prefix computation, sorting, searching,
and matrix operations. Complexity issues have been always kept in view while developing
parallel algorithms. It also presents some practical models of parallel computation such as
BSP, Multi-BSP, and LogP.
Chapter 8 is about programming parallel computers. It presents in detail the development
of parallel programs for message passing parallel computers using MPI, shared memory
parallel computers using OpenMP, and heterogeneous (CPU-GPU) systems using CUDA and
OpenCL. This is followed by a simple and powerful MapReduce programming model that
enables easy development of scalable parallel programs to process big data on large clusters
of commodity machines.
In Chapter 9 we show the importance of compiler transformations to effectively use
pipelined processors, vector processors, superscalar processors, multicore processors, and
SIMD and MIMD computers. The important topic of dependence analysis is discussed at
length. It ends with a discussion of parallelizing compilers.
Chapter 10 deals with the key issues in parallel operating systems—resource (processor)
management, process/thread management, synchronization mechanisms including
transactional memory, inter-process communication, memory management, and input/output
with particular reference to RAID secondary storage system.
The last chapter is on performance evaluation of parallel computers. This chapter starts
with a discussion of performance metrics. Various speedup performance laws, namely,
Amdahl’s law, Gustafson’s law and Sun and Ni’s law are explained. The chapter ends with a
discussion of issues involved in developing tools for measuring the performance of parallel
computers.
This book is designed as a textbook with a number of worked examples and exercises at the end of
each chapter; there are over 200 exercises in all. The book has been classroom tested at the
Indian Institute of Science, Bangalore and the Indian Institute of Technology Madras,
Chennai. The examples and exercises, together with the References at the end of each
chapter, have been planned to enable students to have an extensive as well as an intensive
study of parallel computing.
In writing this book, we gained a number of ideas from numerous published papers and
books on this subject. We thank all those authors, too numerous to acknowledge individually.
Many of our colleagues and students generously assisted us by reading drafts of the book and
suggested improvements. Among them we thank Prof. S.K. Nandy and Dr. S. Balakrishnan
of Supercomputer Education and Research Centre, Indian Institute of Science, Bangalore,
Prof. Mainak Chaudhuri of IIT, Kanpur, and Arvind, Babu Shivnath, Bharat Chandra,
Manikantan, Rajkarn Singh, Sudeepta Mishra and Sumant Kowshik of Indian Institute of
Technology Madras, Chennai. We thank Ms. T. Mallika of Indian Institute of Science,
Bangalore, and Mr. S. Rajkumar, a former project staff member of Indian Institute of
Technology Madras, Chennai, for word processing.
The first author thanks the Director, and the Chairman, Supercomputer Education and
Research Centre, Indian Institute of Science, Bangalore, for providing the facilities for
writing this book. He also thanks his wife Dharma for proofreading the book and for her
support which enabled him to write this book. The second author thanks the members of his
family—wife Sharada, son Chandrasekhara Sastry and daughter Sarita—for their love and
constant support of his professional endeavors.
We have taken reasonable care in eliminating any errors that might have crept into the
book. We will be happy to receive comments and suggestions from readers at our respective
email addresses: [email protected], [email protected].
V. Rajaraman
C. Siva Ram Murthy
Introduction

Of late there has been a great deal of interest all over the world in parallel processors and
parallel computers. This is because all current microprocessors are parallel
processors. Each processor in a microprocessor chip is called a core and such a
microprocessor is called a multicore processor. Multicore processors have an on-chip
memory of a few megabytes (MB). Before trying to answer the question “What is a parallel
computer?”, we will briefly review the structure of a single processor computer (Fig. 1.1). It
consists of an input unit which accepts (or reads) the list of instructions to solve a problem (a
program) and data relevant to that problem. It has a memory or storage unit in which the
program, data and intermediate results are stored, a processing element which we will
abbreviate as PE (also called a Central Processing Unit (CPU)) which interprets and executes
instructions, and an output unit which displays or prints the results.

Figure 1.1 Von Neumann architecture computer.


This structure of a computer was proposed by John Von Neumann in the mid 1940s and is
known as the Von Neumann Architecture. In this architecture, a program is first stored in the
memory. The PE retrieves one instruction of this program at a time, interprets it and executes
it. The operation of this computer is thus sequential. At a time, the PE can execute only one
instruction. The speed of this sequential computer is thus limited by the speed at which a PE
can retrieve instructions and data from the memory and the speed at which it can process the
retrieved data. To increase the speed of processing of data one may increase the speed of the
PE by increasing the clock speed. The clock speed increased from a few hundred kHz in the
1970s to 3 GHz in 2005. Processor designers found it difficult to increase the clock speed
further as the chip was getting overheated. The number of transistors which could be
integrated in a chip could, however, be doubled every two years. Thus, processor designers
placed many processing “cores” inside the processor chip to increase its effective throughput.
The processor retrieves a sequence of instructions from the main memory and stores them in
an on-chip memory. The “cores” can then cooperate to execute these instructions in parallel.
Even though the speed of single processor computers is continuously increasing, problems
which are required to be solved nowadays are becoming more complex as we will see in the
next section. To further increase the processing speed, many such computers may be
interconnected to work cooperatively to solve a problem. A computer which consists of a number
of interconnected computers which cooperatively execute a single program to solve a
problem is called a parallel computer. Rapid developments in electronics have led to the
emergence of processors which can process over 5 billion instructions per second. Such
processors cost only around $100. It is thus possible to economically construct parallel
computers which use around 4000 such multicore processors to carry out ten trillion (10^13)
instructions per second assuming 50% efficiency.
The more difficult problem is to perceive parallelism in algorithms and develop a software
environment which will enable application programs to utilize this potential parallel
processing power.
1.1 WHY DO WE NEED HIGH SPEED COMPUTING?
There are many applications which can effectively use computing speeds in the trillion
operations per second range. Some of these are:

Numerical simulation to predict the behaviour of physical systems.
High performance graphics—particularly visualization and animation.
Big data analytics for strategic decision making.
Synthesis of molecules for designing medicines.

1.1.1 Numerical Simulation


Of late numerical simulation has emerged as an important method in scientific research and
engineering design complementing theoretical analysis and experimental observations.
Numerical simulation has many advantages. Some of these are:

1. Numerical modelling is versatile. A wide range of problems can be simulated on a
computer.
2. It is possible to change many parameters and observe their effects when a system is
modelled numerically. Experiments do not allow easy change of many parameters.
3. Numerical simulation is interactive. The results of simulation may be visualized
graphically. This facilitates refinement of models. Such refinement provides a
better understanding of physical problems which cannot be obtained from
experiments.
4. Numerical simulation is cheaper than conducting experiments on physical systems
or building prototypes of physical systems.

The role of experiments, theoretical models, and numerical simulation is shown in Fig.
1.2. A theoretically developed model is used to simulate the physical system. The results of
simulation allow one to eliminate a number of unpromising designs and concentrate on those
which exhibit good performance. These results are used to refine the model and carry out
further numerical simulation. Once a good design on a realistic model is obtained, it is used
to construct a prototype for experimentation. The results of experiments are used to refine the
model, simulate it and further refine the system. This repetitive process is used until a
satisfactory system emerges. The main point to note is that experiments on actual systems are
not eliminated but the number of experiments is reduced considerably. This reduction leads to
substantial cost saving. There are, of course, cases where actual experiments cannot be
performed such as assessing damage to an aircraft when it crashes. In such a case simulation
is the only feasible method.
Figure 1.2 Interaction between theory, experiments and computer simulation.
With advances in science and engineering, the models used nowadays incorporate more
details. This has increased the demand for computing and storage capacity. For example, to
model global weather, we have to model the behaviour of the earth’s atmosphere. The
behaviour is modelled by partial differential equations in which the most important variables
are the wind speed, air temperature, humidity and atmospheric pressure. The objective of
numerical weather modelling is to predict the status of the atmosphere at a particular region
at a specified future time based on the current and past observations of the values of
atmospheric variables. This is done by solving the partial differential equations numerically
in regions or grids specified by using lines parallel to the latitude and longitude and using a
number of atmospheric layers. In one model (see Fig. 1.3), the regions are demarcated by
using 180 latitudes and 360 longitudes (meridian circles) equally spaced around the globe. In
the vertical direction 12 layers are used to describe the atmosphere. The partial differential
equations are solved by discretizing them to difference equations which are in turn solved as
a set of simultaneous algebraic equations. For each region one point is taken as representing
the region and this is called a grid point. At each grid point in this problem, there are 5
variables (namely air velocity, temperature, pressure, humidity, and time) whose values are
stored. The simultaneous algebraic equations are normally solved using an iterative method.
In an iterative method several iterations (100 to 1000) are needed for each grid point before
the results converge. The calculation of each trial value normally requires around 100 to 500
floating point arithmetic operations. Thus, the total number of floating point operations
required for each simulation is approximately given by:
Number of floating point operations per simulation
= Number of grid points × Number of values per grid point × Number of trials × Number
of operations per trial
Figure 1.3 Grid for numerical weather model for the Earth.
In this example we have:
Number of grid points = 180 × 360 × 12 = 777600
Number of values per grid point = 5
Number of trials = 500
Number of operations per trial = 400
Thus, the total number of floating point operations required per simulation = 777600 × 5 ×
500 × 400 = 7.776 × 10^11. If each floating point operation takes 100 ns, the total time taken
for one simulation = 7.8 × 10^4 s = 21.7 h. If we want to predict the weather at intervals of
6 h there is no point in computing for 21.7 h for a prediction! If we want to simulate this
problem, a floating point arithmetic operation on 64-bit operands should be complete within
10 ns. This time is too short for a computer which does not use any parallelism and we need a
parallel computer to solve such a problem. In general the complexity of a problem of this
type may be described by the formula:
Problem complexity = G × V × T × A
where
G = Geometry of the grid system used
V = Variables per grid point
T = Number of steps per simulation for solving the problem
A = Number of floating point operations per step
For the weather modelling problem,
G = 777600, V = 5, T = 500 and A = 400, giving problem complexity = 7.8 × 10^11.
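As a quick check of these figures, the short C sketch below simply evaluates the G × V × T × A formula with the values assumed in the example above and converts the result to a run time at 100 ns per floating point operation.

#include <stdio.h>

int main(void) {
    /* Values assumed in the weather modelling example in the text */
    double G = 180.0 * 360.0 * 12.0;  /* number of grid points */
    double V = 5.0;                   /* variables stored per grid point */
    double T = 500.0;                 /* iterations (trials) per grid point */
    double A = 400.0;                 /* floating point operations per trial */

    double ops = G * V * T * A;       /* problem complexity */
    double seconds = ops * 100e-9;    /* at 100 ns per operation */

    printf("Operations per simulation: %.3e\n", ops);                   /* ~7.78e11 */
    printf("Time at 100 ns/operation: %.1f hours\n", seconds / 3600.0); /* ~21.6 h  */
    return 0;
}

Compiled with any C compiler, this reproduces the roughly 7.8 × 10^11 operations and the run time of about 21.6 hours (7.8 × 10^4 s) quoted above.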
There are many other problems whose complexity is of the order of 10^12 to 10^20. For
example, the complexity of numerical simulation of turbulent flows around aircraft wings and
body is around 10^15. Some other areas where numerically intensive simulation is required
are:

Charting ocean currents
Exploring greenhouse effect and ozone depletion
Exploration geophysics, in particular, seismology
Simulation of fusion and fission reactions to design hydrogen and atomic devices
Designing complex civil and mechanical structures
Design of drugs by simulating the effect of drugs at the molecular level
Simulations in solid state and structural chemistry
Simulation in condensed matter physics
Analyzing data obtained from the large hadron collider experiment
Protein folding
Plate tectonics to forecast earthquakes

The range of applications is enormous and increasing all the time.


The use of computers in numerical simulation is one of the earliest applications of high
performance computers. Of late two other problems have emerged whose complexity is in the
range of 10^15 to 10^18 arithmetic operations. They are called petascale and exascale
computing. We describe them as follows:

1.1.2 Visualization and Animation


In visualization and animation, the results of computation are to be realistically rendered on a
high resolution terminal. In this case, the number of area elements where the picture is to be
rendered is represented by G. The number of picture elements (called pixels) to be processed
in each area element is represented by R and the time to process a pixel by T. The
computation should be repeated at least 60 times a second for animation. Thus, GR pixels
should be processed in 1/60 s. Thus, time to process a pixel = 1/(60 × G × R). Typically G =
10^5, R = 10^7. Thus, a pixel should be processed within 10^-14 s. If N instructions are required
to process a pixel then the computer should be able to carry out N × 10^14 instructions per
second. In general the computational complexity in this case is:
G×R×P×N
where G represents the complexity of the geometry (i.e., number of area elements in the
picture), R the number of pixels per area element, P the number of repetitions per second (for
animation) and N the number of instructions needed to process a pixel. This problem has a
complexity exceeding 10^15.
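The per-pixel time budget quoted above can be verified with a few lines of C; the values of G and R and the 60 repetitions per second are the illustrative ones used in the text.

#include <stdio.h>

int main(void) {
    double G = 1e5;    /* area elements in the picture */
    double R = 1e7;    /* pixels per area element */
    double P = 60.0;   /* repetitions (frames) per second */

    double time_per_pixel = 1.0 / (P * G * R);  /* seconds available per pixel */
    printf("Time budget per pixel: %.2e s\n", time_per_pixel);       /* ~1.67e-14 s */

    /* With N instructions per pixel, the machine must sustain N / time_per_pixel */
    printf("Required rate: N x %.1e instructions/s\n", 1.0 / time_per_pixel); /* ~6e13 */
    return 0;
}

The budget works out to about 1.7 × 10^-14 s per pixel, i.e. a required rate of roughly N × 6 × 10^13 instructions per second, which the text rounds to N × 10^14.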
The third major application requiring intensive computation is data analytics or data
mining which we describe next.

1.1.3 Data Mining


There are large databases of the order of petabytes (10^15 bytes) in the data archives
of many organizations. Some experiments such as the Large Hadron Collider (LHC)
generate petabytes and exabytes of data which are to be analyzed. With the availability of high
capacity disks and high speed computers, organizations have been trying to analyze data in
the data archive to discover some patterns or rules. Consumer product manufacturers may be
able to find seasonal trends in sales of some product or the effect of certain promotional
advertisements on the sale of related products from archival data. In general, the idea is to
hypothesize a rule relating data elements and test it by retrieving these data elements from the
archive. The complexity of this processing may be expressed by the formula:
PC = S × P × N
where S is the size of the database, P the number of instructions to be executed to check a
rule and N the number of rules to be checked. In practice the values of these quantities are:
S = 10^15
P = 100, N = 10
giving a value of PC (Problem Complexity) of 10^18. This problem can be solved effectively
only if a computer with a speed of 10^15 instructions per second is available.
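A similar back-of-the-envelope sketch in C evaluates PC = S × P × N with the illustrative values above; the 10^15 instructions per second machine speed is the one the text cites, and the resulting 1000 s run time is only meant to show why anything much slower would be impractical.

#include <stdio.h>

int main(void) {
    /* Illustrative values from the data mining example above */
    double S = 1e15;   /* size of the database */
    double P = 100.0;  /* instructions executed to check one rule per element */
    double N = 10.0;   /* number of rules to be checked */

    double PC = S * P * N;          /* problem complexity: ~1e18 instructions */
    double machine_speed = 1e15;    /* instructions per second */

    printf("Problem complexity PC = %.1e instructions\n", PC);
    printf("Time on a 1e15 instructions/s machine: %.0f s\n", PC / machine_speed);
    return 0;
}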
These are just three examples of compute intensive problems. There are many others
which are emerging such as realistic models of the economy, computer generated movies,
and video database search, which require computers which can carry out tens of tera
operations per second for their solution.
1.2 HOW DO WE INCREASE THE SPEED OF COMPUTERS?
There are two methods of increasing the speed of computers. One method is to build
processing elements using faster semiconductor components and the other is to improve the
architecture of computers using the increasing number of transistors available in processing
chips. The rate of growth of speed using better device technology has been slow. For
example, the basic clock of high performance processors in 1980 was 50 MHz and it reached
3 GHz by 2003. The clock speed could not be increased further without special cooling
methods as the chips got overheated. However, the number of transistors which could be
packed in a microprocessor continued to double every two years. In 1972, the number of
transistors in a microprocessor chip was 4000 and it increased to more than a billion in 2014.
The extra transistors available have been used in many ways. One method has been to put more than
one arithmetic unit in a processing unit. Another has been to increase the size of on-chip memory.
The latest is to place more than one processor in a microprocessor chip. The extra processing
units are called processing cores. The cores share an on-chip memory and work in parallel to
increase the speed of the processor. There are also processor architectures which have a
network of “cores”, each “core” with its own memory that cooperate to execute a program.
The number of cores has been increasing in step with the increase in the number of transistors
in a chip. Thus, the number of cores is doubling almost every two years. Soon (2015) there
will be 128 simple “cores” in a processor chip. The processor is one of the units of a
computer. As we saw at the beginning of this chapter, a computer has other units, namely,
memory, and I/O units. We can increase the speed of a computer by increasing the speed of
its units and also by improving the architecture of the computer. For example, while the
processor is computing, data which may be needed later could be fetched from the main
memory and simultaneously an I/O operation can be initiated. Such an overlap of operations
is achieved by using both software and hardware features.
Besides overlapping operations of various units of a computer, each processing unit in a
chip may be designed to overlap operations of successive instructions. For example, an
instruction can be broken up into five distinct tasks as shown in Fig. 1.4. Five successive
instructions can be overlapped, each doing one of these tasks (in an assembly line model)
using different parts of the CPU. The arithmetic unit itself may be designed to exploit
parallelism inherent in the problem being solved. An arithmetic operation can be broken
down into several tasks, for example, matching exponents, shifting mantissas and aligning
them, adding them, and normalizing. The components of two arrays to be added can be
streamed through the adder and the four tasks can be performed simultaneously on four
different pairs of operands thereby quadrupling the speed of addition. This method is said to
exploit temporal parallelism and will be explained in greater detail in the next chapter.
Another method is to have four adders in the CPU and add four pairs of operands
simultaneously. This type of parallelism is called data parallelism. Yet another method of
increasing the speed of computation is to organize a set of computers to work simultaneously
and cooperatively to carry out tasks in a program.
Figure 1.4 The tasks performed by an instruction and overlap of successive instructions.
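As a concrete illustration of data parallelism (the four-adders idea above), the following minimal OpenMP sketch in C divides the element-wise addition of two arrays among the available cores. OpenMP itself is introduced in Chapter 8; the array size and the use of static arrays here are arbitrary choices made only to keep the example self-contained.

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {   /* initialize the operand arrays */
        a[i] = i;
        b[i] = 2.0 * i;
    }

    /* Each core adds a different block of element pairs at the same time */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f, computed with %d threads available\n",
           c[N - 1], omp_get_max_threads());
    return 0;
}

Compiled with, for example, gcc -fopenmp, the loop iterations are split across the cores, mirroring the way four adders could each work on a different pair of operands at the same time.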
All the methods described above are called architectural methods; they are the ones
that have contributed to a ten billion-fold increase in the speed of computation in the last
two decades. We summarize these methods in Table 1.1.
TABLE 1.1 Architectural Methods Used to Increase the Speed of Computers

* Use parallelism in a single processor computer
— Overlap execution of a number of instructions by pipelining, or by using multiple functional units, or multiple processor “cores”.
— Overlap operation of different units of a computer.
— Increase the speed of the arithmetic logic unit by exploiting data and/or temporal parallelism.
* Use parallelism in the problem to solve it on a parallel computer
— Use a number of interconnected computers to work cooperatively to solve the problem.
1.3 SOME INTERESTING FEATURES OF PARALLEL COMPUTERS
Even though higher speed obtainable with parallel computers is the main motivating force for
building parallel computers, there are some other interesting features of parallel computers
which are not obvious but nevertheless important. These are:
Better quality of solution. When arithmetic operations are distributed to many computers,
each one does a smaller number of arithmetic operations. Thus, rounding errors are lower
when parallel computers are used.
Better algorithms. The availability of many computers that can work simultaneously leads
to different algorithms which are not relevant for purely sequential computers. It is possible
to explore different facets of a solution simultaneously using several processors and these
give better insight into solutions of several physical problems.
Better storage distribution. Certain types of parallel computing systems provide much
larger storage which is distributed. Access to the storage is faster in each computer. This
feature is of special interest in many applications such as information retrieval and computer
aided design.
Greater reliability. In principle a parallel computer will work even if a processor fails. We
can build a parallel computer’s hardware and software for better fault tolerance.
1.4 ORGANIZATION OF THE BOOK
This book is organized as follows. In the next chapter we describe various methods of solving
problems in parallel. In Chapter 3 we examine the architecture of processors and how
instruction level parallelism is exploited in the design of modern microprocessors. We
explain the structure of parallel computers and examine various methods of interconnecting
processors and how they influence their cooperative functioning in Chapter 4. This is
followed by a chapter titled Core Level Parallel Processing. With the improvement of
semiconductor technology, it has now become feasible to integrate several billion transistors
in an integrated circuit. Consequently a large number of processing elements, called “cores”,
may be integrated in a chip to work cooperatively to solve problems. In this chapter we
explain the organization of multicore processors on a chip including what are known as
General Purpose Graphics Processing Units (GPGPU). Chapter 6 is on the emerging area of
Grid and Cloud Computing. We explain how these computer environments in which
computers spread all over the world are interconnected and cooperate to solve problems
emerged and how they function. We also describe the similarities and differences between
grid and cloud computing. The next part of the book concentrates on the programming
aspects of parallel computers. Chapter 7 discusses parallel algorithms including prefix
computation algorithms, sorting, searching, and matrix algorithms for parallel computers.
These problems are natural candidates for use of parallel computers. Programming parallel
computers is the topic of the next chapter. We discuss methods of programming different
types of parallel machines. It is important to be able to port parallel programs across
architectures. The solution to this problem has been elusive. In Chapter 8 four different
explicit parallel programming models (a programming model is an abstraction of a computer
system that a user sees and uses when developing programs) are described. These are: MPI
for programming message passing parallel computers, OpenMP for programming shared
memory parallel computers, CUDA and OpenCL for programming GPGPUs, and
MapReduce programming for large scale data processing on clouds. Compilers for high level
languages are cornerstones of computing. It is necessary for compilers to take cognizance of
the underlying parallel architecture. Thus in Chapter 9 the important topics of dependence
analysis and compiler transformations for parallel computers are discussed. Operating
Systems for parallel computers is the topic of Chapter 10. The last chapter is on the
evaluation of the performance of parallel computers.
EXERCISES
1.1 The website http://www.top500.org lists the 500 fastest computers in the world. Find
out the top 5 computers in this list. How many processors do each of them use and what
type of processors do they use?
1.2 LINPACK benchmarks that specify the speed of parallel computers for solving 1000 ×
1000 linear systems of equations can be found in the website http://performance.netlib.org
and are updated by Jack Dongarra at the University of
Tennessee, [email protected]. Look this up and compare peak speed of parallel
computers listed with their speed.
1.3 SPEC marks that specify individual processor’s performance are listed at the web site
http://www.specbench.org. Compare the SPEC marks of individual processors of the
top 5 fastest computers (distinct processors) which you found from the website
mentioned in Exercise 1.1.
1.4 Estimate the problem complexity for simulating turbulent flows around the wings and
body of a supersonic aircraft. Assume that the number of grid points is around 10^11.
1.5 What are the different methods of increasing the speed of computers? Plot the clock
speed increase of Intel microprocessors between 1975 and 2014. Compare this with the
number of transistors in Intel microprocessors between 1975 and 2014. From these
observations, can you state your own conclusions?
1.6 List the advantages and disadvantages of using parallel computers.
1.7 How do parallel computers reduce rounding error in solving numeric intensive
problems?
1.8 Are parallel computers more reliable than serial computers? If yes explain why.
1.9 Find out the parallel computers which have been made in India by CDAC, NAL, CRL,
and BARC by searching the web. How many processors are used by them and what are
the applications for which they are used?
BIBLIOGRAPHY
Barney, B., Introduction to Parallel Computing, Lawrence Livermore National Laboratory,
USA, 2011.
(A short tutorial accessible from the web).
Computing Surveys published by the Association for Computing Machinery (ACM), USA is
a rich source of survey articles about parallel computers.
Culler, D.E., Singh, J.P. and Gupta, A., Parallel Computer Architecture and Programming,
Morgan Kaufmann, San Francisco, USA, 1999.
(A book intended for postgraduate Computer Science students has a wealth of information).
DeCegama, A.L., The Technology of Parallel Processing, Vol. l: Parallel Processing,
Architecture and VLSI Hardware, Prentice-Hall Inc., Englewood Cliffs, NJ, USA, 1989.
(A good reference for parallel computer architectures).
Denning, P.J., “Parallel Computing and its Evolution”, Communications of the ACM, Vol. 29,
No. 12, Dec. 1986, pp. 1363–1367.
Dubois, M., Annavaram, M., and Stenstrom, P., Parallel Computer Organization and Design,
Cambridge University Press, UK, 2010. (A good textbook).
Grama, A., Gupta, A., Karypis, G., and Kumar, V., An Introduction to Parallel Computing,
2nd ed., Pearson, Delhi, 2004.
Hennessy, J.L., and Patterson, D.A., Computer Architecture—A Quantitative Approach, 5th
ed., Morgan Kaufmann-Elsevier, USA, 2012.
(A classic book which describes both latest parallel processors and parallel computers).
Hwang, K. and Briggs, F.A., Computer Architecture and Parallel Processing, McGraw-Hill,
New York, 1984.
(This is an 846 page book which gives a detailed description of not only parallel computers
but also high performance computer architectures).
Keckler, S.W., Olukotun, K. and Hofstee, H.P., (Eds.), Multicore Processors and Systems,
Springer, USA, 2009.
(A book with several authors discussing recent developments in single chip multicore
systems).
Lipovski, G.J. and Malak, M., Parallel Computing: Theory and Practice, Wiley, New York,
USA, 1987.
(Contains descriptions of some commercial and experimental parallel computers).
Satyanarayanan, M., “Multiprocessing: An Annotated Bibliography,” IEEE Computer, Vol.
13, No. 5, May 1980, pp. 101–116.
Shen, J.P., and Lipasti, M.H., Modern Processor Design, Tata McGraw-Hill, Delhi, 2010.
(Describes the design of superscalar processors).
The Magazines: Computer, Spectrum and Software published by the Institute of Electrical
and Electronics Engineers (IEEE), USA, contain many useful articles on parallel
computing.
Wilson, G.V., “The History of the Development of Parallel Computing”, 1993.
webdocs.cs.ualberta.ca/~paullu/c681/parallel.time.line.html.
Solving Problems in Parallel

In this chapter we will explain with examples how simple jobs can be solved in parallel in
many different ways. The simple examples will illustrate many important points in perceiving
parallelism, and in allocating tasks to processors for getting maximum efficiency in solving
problems in parallel.
2.1 UTILIZING TEMPORAL PARALLELISM
Suppose 1000 candidates appear in an examination. Assume that there are answers to 4
questions in each answer book. If a teacher is to correct these answer books, the following
instructions may be given to him:
Procedure 2.1 Instructions given to a teacher to correct an answer book
Step 1: Take an answer book from the pile of answer books.
Step 2: Correct the answer to Q1 namely, A1.
Step 3: Repeat Step 2 for answers to Q2, Q3, Q4, namely, A2, A3, A4.
Step 4: Add marks given for each answer.
Step 5: Put answer book in a pile of corrected answer books.
Step 6: Repeat Steps 1 to 5 until no more answer books are left in the input.
A teacher correcting 1000 answer books using Procedure 2.1 is shown in Fig. 2.1. If a
paper takes 20 minutes to correct, then 20,000 minutes will be taken to correct 1000 papers.
If we want to speedup correction, we can do it in the following ways:

Figure 2.1 A single teacher correcting answer books.

Method 1: Temporal Parallelism


Ask four teachers to co-operatively correct each answer book. To do this the four teachers sit
in one line. The first teacher corrects answer to Q1, namely, A1 of the first paper and passes
the paper to the second teacher who starts correcting A2. The first teacher immediately takes
the second paper and corrects A1 in it. The procedure is shown in Fig. 2.2.

Figure 2.2 Four teachers working in a pipeline or assembly line.
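The benefit of this arrangement can be estimated with a short calculation. Assuming, purely for illustration, that each of the four answers takes 5 minutes to correct (so that a whole paper still takes 20 minutes), the C sketch below compares a single teacher with the four-teacher pipeline using the standard pipeline formula (number of stages + number of papers − 1) × time per stage.

#include <stdio.h>

int main(void) {
    int papers = 1000;        /* answer books to correct */
    int stages = 4;           /* teachers in the pipeline, one per question */
    double stage_time = 5.0;  /* assumed minutes to correct one answer */

    /* One teacher alone corrects all four answers of every paper */
    double sequential = papers * stages * stage_time;       /* 20000 minutes */

    /* Pipeline: first paper emerges after 4 stages, then one per stage time */
    double pipelined = (stages + papers - 1) * stage_time;  /* 5015 minutes */

    printf("One teacher   : %.0f minutes\n", sequential);
    printf("Four teachers : %.0f minutes (speedup %.2f)\n",
           pipelined, sequential / pipelined);
    return 0;
}

The speedup is close to 4, the number of teachers, because the extra 15 minutes needed to fill the pipeline is negligible compared with the total correction time.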


the sound of bleating, and thereby creates a name for that animal.
Thus the lamb to him is ‘the bleater,’ and nouns are created from
verbs, whereas, according to Herder, if language had been the
creation of God it would inversely have begun with nouns, as that
would have been the logically ideal order of procedure. Another
characteristic trait of primitive languages is the crossing of various
shades of feeling and the necessity of expressing thoughts through
strong, bold metaphors, presenting the most motley picture. “The
genetic cause lies in the poverty of the human mind and in the
flowing together of the emotions of a primitive human being.”
Another consequence is the wealth of synonyms in primitive
language; “alongside of real poverty it has the most unnecessary
superfluity.”
When Herder here speaks of primitive or ‘original’ languages, he is
thinking of Oriental languages, and especially of Hebrew. “We should
never forget,” says Edward Sapir,[1] “that Herder’s time-perspective
was necessarily very different from ours. While we unconcernedly
take tens or even hundreds of thousands of years in which to allow
the products of human civilization to develop, Herder was still
compelled to operate with the less than six thousand years that
orthodoxy stingily doled out. To us the two or three thousand years
that separate our language from the Old Testament Hebrew seems a
negligible quantity, when speculating on the origin of language in
general; to Herder, however, the Hebrew and the Greek of Homer
seemed to be appreciably nearer the oldest conditions than our
vernaculars—hence his exaggeration of their ursprünglichkeit.”
Herder’s chief influence on the science of speech, to my mind, is not
derived directly from the ideas contained in his essay on the actual
origin of speech, but rather indirectly through the whole of his life’s
work. He had a very strong sense of the value of everything that had
grown naturally (das naturwüchsige); he prepared the minds of his
countrymen for the manysided receptiveness of the Romanticists,
who translated and admired the popular poetry of a great many
countries, which had hitherto been terræ incognitæ; and he was one
of the first to draw attention to the great national value of his own
country’s medieval literature and its folklore, and thus was one of
the spiritual ancestors of Grimm. He sees the close connexion that
exists between language and primitive poetry, or that kind of
spontaneous singing that characterizes the childhood or youth of
mankind, and which is totally distinct from the artificial poetry of
later ages. But to him each language is not only the instrument of
literature, but itself literature and poetry. A nation speaks its soul in
the words it uses. Herder admires his own mother-tongue, which to
him is perhaps inferior to Greek, but superior to its neighbours. The
combinations of consonants give it a certain measured pace; it does
not rush forward, but walks with the firm carriage of a German. The
nice gradation of vowels mitigates the force of the consonants, and
the numerous spirants make the German speech pleasant and
endearing. Its syllables are rich and firm, its phrases are stately, and
its idiomatic expressions are emphatic and serious. Still in some
ways the present German language is degenerate if compared with
that of Luther, and still more with that of the Suabian Emperors, and
much therefore remains to be done in the way of disinterring and
revivifying the powerful expressions now lost. Through ideas like
these Herder not only exercised a strong influence on Goethe and
the Romanticists, but also gave impulses to the linguistic studies of
the following generation, and caused many younger men to turn
from the well-worn classics to fields of research previously
neglected.

I.—§ 4. Jenisch.

Where questions of correct language or of the best usage are dealt
with, or where different languages are compared with regard to their
efficiency or beauty, as is done very often, though more often in
dilettante conversation or in casual remarks in literary works than in
scientific linguistic disquisitions, it is no far cry to the question, What
would an ideal language be like? But such is the matter-of-factness
of modern scientific thought, that probably no scientific Academy in
our own days would think of doing what the Berlin Academy did in
1794 when it offered a prize for the best essay on the ideal of a
perfect language and a comparison of the best-known languages of
Europe as tested by the standard of such an ideal. A Berlin pastor, D.
Jenisch, won the prize, and in 1796 brought out his book under the
title Philosophisch-kritische vergleichung und würdigung von
vierzehn ältern und neuern sprachen Europens—a book which is
even now well worth reading, the more so because its subject has
been all but completely neglected in the hundred and twenty years
that have since intervened. In the Introduction the author has the
following passage, which might be taken as the motto of Wilhelm v.
Humboldt, Steinthal, Finck and Byrne, who do not, however, seem to
have been inspired by Jenisch: “In language the whole intellectual
and moral essence of a man is to some extent revealed. ‘Speak, and
you are’ is rightly said by the Oriental. The language of the natural
man is savage and rude, that of the cultured man is elegant and
polished. As the Greek was subtle in thought and sensuously refined
in feeling—as the Roman was serious and practical rather than
speculative—as the Frenchman is popular and sociable—as the
Briton is profound and the German philosophic—so are also the
languages of each of these nations.”
Jenisch then goes on to say that language as the organ for
communicating our ideas and feelings accomplishes its end if it
represents idea and feeling according to the actual want or need of
the mind at the given moment. We have to examine in each case the
following essential qualities of the languages compared, (1) richness,
(2) energy or emphasis, (3) clearness, and (4) euphony. Under the
head of richness we are concerned not only with the number of
words, first for material objects, then for spiritual and abstract
notions, but also with the ease with which new words can be formed
(lexikalische bildsamkeit). The energy of a language is shown in its
lexicon and in its grammar (simplicity of grammatical structure,
absence of articles, etc.), but also in “the characteristic energy of the
nation and its original writers.” Clearness and definiteness in the
same way are shown in vocabulary and grammar, especially in a
regular and natural syntax. Euphony, finally, depends not only on the
selection of consonants and vowels utilized in the language, but on
their harmonious combination, the general impression of the
language being more important than any details capable of being
analysed.
These, then, are the criteria by which Greek and Latin and a number
of living languages are compared and judged. The author displays
great learning and a sound practical knowledge of many languages,
and his remarks on the advantages and shortcomings of these are
on the whole judicious, though often perhaps too much stress is laid
on the literary merits of great writers, which have really no intrinsic
connexion with the value of a language as such. It depends to a
great extent on accidental circumstances whether a language has
been or has not been used in elevated literature, and its merits
should be estimated, so far as this is possible, independently of the
perfection of its literature. Jenisch’s prejudice in that respect is
shown, for instance, when he says (p. 36) that the endeavours of
Hickes are entirely futile, when he tries to make out regular
declensions and conjugations in the barbarous language of Wulfila’s
translation of the Bible. But otherwise Jenisch is singularly free from
prejudices, as shown by a great number of passages in which other
languages are praised at the expense of his own. Thus, on p. 396,
he declares German to be the most repellent contrast to that most
supple modern language, French, on account of its unnatural word-
order, its eternally trailing article, its want of participial constructions,
and its interminable auxiliaries (as in ‘ich werde geliebt werden, ich
würde geliebt worden sein,’ etc.), with the frequent separation of
these auxiliaries from the main verb through extraneous
intermediate words, all of which gives to German something
incredibly awkward, which to the reader appears as lengthy and
diffuse and to the writer as inconvenient and intractable. It is not
often that we find an author appraising his own language with such
severe impartiality, and I have given the passage also to show what
kind of problems confront the man who wishes to compare the
relative value of languages as wholes. Jenisch’s view here forms a
striking contrast to Herder’s appreciation of their common mother-
tongue.
Jenisch’s book does not seem to have been widely read by
nineteenth-century scholars, who took up totally different problems.
Those few who read it were perhaps inclined to say with S. Lefmann
(see his book on Franz Bopp, Nachtrag, 1897, p. xi) that it is difficult
to decide which was the greater fool, the one who put this problem
or the one who tried to answer it. This attitude, however, towards
problems of valuation in the matter of languages is neither just nor
wise, though it is perhaps easy to see how students of comparative
grammar were by the very nature of their study led to look down
upon those who compared languages from the point of view of
æsthetic or literary merits. Anyhow, it seems to me no small merit to
have been the first to treat such problems as these, which are
generally answered in an off-hand way according to a loose general
judgement, so as to put them on a scientific footing by examining in
detail what it is that makes us more or less instinctively prefer one
language, or one turn or expression in a language, and thus lay the
foundation of that inductive æsthetic theory of language which has
still to be developed in a truly scientific spirit.
CHAPTER II
BEGINNING OF NINETEENTH CENTURY
§ 1. Introduction. Sanskrit. § 2. Friedrich von Schlegel.
§ 3. Rasmus Rask. § 4. Jacob Grimm. § 5. The Sound Shift.
§ 6. Franz Bopp. § 7. Bopp continued. § 8. Wilhelm von
Humboldt. § 9. Grimm once more.

II.—§ 1. Introduction. Sanskrit.

The nineteenth century witnessed an enormous growth and
development of the science of language, which in some respects
came to present features totally unknown to previous centuries. The
horizon was widened; more and more languages were described,
studied and examined, many of them for their own sake, as they
had no important literature. Everywhere a deeper insight was gained
into the structures even of such languages as had been for centuries
objects of study; a more comprehensive and more incisive
classification of languages was obtained with a deeper
understanding of their mutual relationships, and at the same time
linguistic forms were not only described and analysed, but also
explained, their genesis being traced as far back as historical
evidence allowed, if not sometimes further. Instead of contenting
itself with stating when and where a form existed and how it looked
and was employed, linguistic science now also began to ask why it
had taken that definite shape, and thus passed from a purely
descriptive to an explanatory science.
The chief innovation of the beginning of the nineteenth century was
the historical point of view. On the whole, it must be said that it was
reserved for that century to apply the notion of history to other
things than wars and the vicissitudes of dynasties, and thus to
discover the idea of development or evolution as pervading the
whole universe. This brought about a vast change in the science of
language, as in other sciences. Instead of looking at such a
language as Latin as one fixed point, and instead of aiming at fixing
another language, such as French, in one classical form, the new
science viewed both as being in constant flux, as growing, as
moving, as continually changing. It cried aloud like Heraclitus “Pánta
reî,” and like Galileo “Eppur si muove.” And lo! the better this
historical point of view was applied, the more secrets languages
seemed to unveil, and the more light seemed also to be thrown on
objects outside the proper sphere of language, such as ethnology
and the early history of mankind at large and of particular countries.
It is often said that it was the discovery of Sanskrit that was the real
turning-point in the history of linguistics, and there is some truth in
this assertion, though we shall see on the one hand that Sanskrit
was not in itself enough to give to those who studied it the true
insight into the essence of language and linguistic science, and on
the other hand that real genius enabled at least one man to grasp
essential truths about the relationships and development of
languages even without a knowledge of Sanskrit. Still, it must be
said that the first acquaintance with this language gave a mighty
impulse to linguistic studies and exerted a lasting influence on the
way in which most European languages were viewed by scholars,
and it will therefore be necessary here briefly to sketch the history of
these studies. India was very little known in Europe till the mighty
struggle between the French and the English for the mastery of its
wealth excited a wide interest also in its ancient culture. It was but
natural that on this intellectual domain, too, the French and the
English should at first be rivals and that we should find both nations
represented in the pioneers of Sanskrit scholarship. The French
Jesuit missionary Cœurdoux as early as 1767 sent to the French
Institut a memoir in which he called attention to the similarity of
many Sanskrit words with Latin, and even compared the flexion of
the present indicative and subjunctive of Sanskrit asmi, ‘I am,’ with
the corresponding forms of Latin grammar. Unfortunately, however,
his work was not printed till forty years later, when the same
discovery had been announced independently by others. The next
scholar to be mentioned in this connexion is Sir William Jones, who
in 1786 uttered the following memorable words, which have often
been quoted in books on the history of linguistics: “The Sanscrit
language, whatever be its antiquity, is of a wonderful structure;
more perfect than the Greek, more copious than the Latin and more
exquisitely refined than either; yet bearing to both of them a
stronger affinity, both in the roots of verbs and in the forms of
grammar, than could possibly have been produced by accident; so
strong, indeed, that no philologer could examine them all three
without believing them to have sprung from some common source,
which, perhaps, no longer exists. There is a similar reason, though
not quite so forcible, for supposing that both the Gothic and the
Celtic ... had the same origin with the Sanscrit; and the old Persian
might be added to the same family.” Sir W. Jones, however, did
nothing to carry out in detail the comparison thus inaugurated, and
it was reserved for younger men to follow up the clue he had given.

II.—§ 2. Friedrich von Schlegel.

One of the books that exercised a great influence on the
development of linguistic science in the beginning of the nineteenth
century was Friedrich von Schlegel’s Ueber die sprache und weisheit
der Indier (1808). Schlegel had studied Sanskrit for some years in
Paris, and in his romantic enthusiasm he hoped that the study of the
old Indian books would bring about a revolution in European thought
similar to that produced in the Renaissance through the revival of
the study of Greek. We are here concerned exclusively with his
linguistic theories, but to his mind they were inseparable from Indian
religion and philosophy, or rather religious and philosophic poetry.
He is struck by the similarity between Sanskrit and the best-known
European languages, and gives quite a number of words from
Sanskrit found with scarcely any change in German, Greek and Latin.
He repudiates the idea that these similarities might be accidental or
due to borrowings on the side of the Indians, saying expressly that
the proof of original relationship between these languages, as well
as of the greater age of Sanskrit, lies in the far-reaching
correspondences in the whole grammatical structure of these as
opposed to many other languages. In this connexion it is noticeable
that he is the first to speak of ‘comparative grammar’ (p. 28), but,
like Moses, he only looks into this promised land without entering it.
Indeed, his method of comparison precludes him from being the
founder of the new science, for he says himself (p. 6) that he will
refrain from stating any rules for change or substitution of letters
(sounds), and require complete identity of the words used as proofs
of the descent of languages. He adds that in other cases, “where
intermediate stages are historically demonstrable, we may derive
giorno from dies, and when Spanish so often has h for Latin f, or
Latin p very often becomes f in the German form of the same word,
and c not rarely becomes h [by the way, an interesting
foreshadowing of one part of the discovery of the Germanic sound-
shifting], then this may be the foundation of analogical conclusions
with regard to other less evident instances.” If he had followed up
this idea by establishing similar ‘sound-laws,’ as we now say,
between Sanskrit and other languages, he would have been many
years ahead of his time; as it is, his comparisons are those of a
dilettante, and he sometimes falls into the pitfalls of accidental
similarities while overlooking the real correspondences. He is also led
astray by the idea of a particularly close relationship between
Persian and German, an idea which at that time was widely
spread[2]—we find it in Jenisch and even in Bopp’s first book.
Schlegel is not afraid of surveying the whole world of human
languages; he divides them into two classes, one comprising
Sanskrit and its congeners, and the second all other languages. In
the former he finds organic growth of the roots as shown by their
capability of inner change or, as he terms it, ‘flexion,’ while in the
latter class everything is effected by the addition of affixes (prefixes
and suffixes). In Greek he admits that it would be possible to believe
in the possibility of the grammatical endings (bildungssylben) having
arisen from particles and auxiliary words amalgamated into the word
itself, but in Sanskrit even the last semblance of this possibility
disappears, and it becomes necessary to confess that the structure
of the language is formed in a thoroughly organic way through
flexion, i.e. inner changes and modifications of the radical sound,
and not composed merely mechanically by the addition of words and
particles. He admits, however, that affixes in some other languages
have brought about something that resembles real flexion. On the
whole he finds that the movement of grammatical art and perfection
(der gang der bloss grammatischen kunst und ausbildung, p. 56)
goes in opposite directions in the two species of languages. In the
organic languages, which represent the highest state, the beauty
and art of their structure is apt to be lost through indolence; and
German as well as Romanic and modern Indian languages show this
degeneracy when compared with the earlier forms of the same
languages. In the affix languages, on the other hand, we see that
the beginnings are completely artless, but the ‘art’ in them grows
more and more perfect the more the affixes are fused with the main
word.
As to the question of the ultimate origin of language, Schlegel thinks
that the diversity of linguistic structure points to different
beginnings. While some languages, such as Manchu, are so
interwoven with onomatopœia that imitation of natural sounds must
have played the greatest rôle in their formation, this is by no means
the case in other languages, and the perfection of the oldest organic
or flexional languages, such as Sanskrit, shows that they cannot be
derived from merely animal sounds; indeed, they form an additional
proof, if any such were needed, that men did not everywhere start
from a brutish state, but that the clearest and intensest reason
existed from the very first beginning. On all these points Schlegel’s
ideas foreshadow views that are found in later works; and it is
probable that his fame as a writer outside the philological field gave
to his linguistic speculations a notoriety which his often loose and
superficial reasonings would not otherwise have acquired for them.
Schlegel’s bipartition of the languages of the world carries in it the
germ of a tripartition. On the lowest stage of his second class he
places Chinese, in which, as he acknowledges, the particles denoting
secondary sense modifications consist in monosyllables that are
completely independent of the actual word. It is clear that from
Schlegel’s own point of view we cannot here properly speak of
‘affixes,’ and thus Chinese really, though Schlegel himself does not
say so, falls outside his affix languages and forms a class by itself.
On the other hand, his arguments for reckoning Semitic languages
among affix languages are very weak, and he seems also somewhat
inclined to say that much in their structure resembles real flexion. If
we introduce these two changes into his system, we arrive at the
threefold division found in slightly different shapes in most
subsequent works on general linguistics, the first to give it being
perhaps Schlegel’s brother, A. W. Schlegel, who speaks of (1) les
langues sans aucune structure grammaticale—under which
misleading term he understands Chinese with its unchangeable
monosyllabic words; (2) les langues qui emploient des affixes; (3)
les langues à inflexions.
Like his brother, A. W. Schlegel places the flexional languages
highest and thinks them alone ‘organic.’ On the other hand, he
subdivides flexional languages into two classes, synthetic and
analytic, the latter using personal pronouns and auxiliaries in the
conjugation of verbs, prepositions to supply the want of cases, and
adverbs to express the degrees of comparison. While the origin of
the synthetic languages loses itself in the darkness of ages, the
analytic languages have been created in modern times; all those that
we know are due to the decomposition of synthetic languages.
These remarks on the division of languages are found in the
Introduction to the book Observations sur la langue et la littérature
provençale (1818) and are thus primarily meant to account for the
contrast between synthetic Latin and analytic Romanic.

II.—§ 3. Rasmus Rask.


We now come to the three greatest names among the initiators of
linguistic science in the beginning of the nineteenth century. If we
give them in their alphabetical order, Bopp, Grimm and Rask, we
also give them in the order of merit in which most subsequent
historians have placed them. The works that constitute their first
claims to the title of founder of the new science came in close
succession, Bopp’s Conjugationssystem in 1816, Rask’s Undersøgelse
in 1818, and the first volume of Grimm’s Grammatik in 1819. While
Bopp is entirely independent of the two others, we shall see that
Grimm was deeply influenced by Rask, and as the latter’s
contributions to our science began some years before his chief work
just mentioned (which had also been finished in manuscript in 1814,
thus two years before Bopp’s Conjugationssystem), the best order in
which to deal with the three men will perhaps be to take Rask first,
then to mention Grimm, who in some ways was his pupil, and finally
to treat of Bopp: in this way we shall also be enabled to see Bopp in
close relation with the subsequent development of Comparative
Grammar, on which he, and not Rask, exerted the strongest
influence.
Born in a peasant’s hut in the heart of Denmark in 1787, Rasmus
Rask was a grammarian from his boyhood. When a copy of the
Heimskringla was given him as a school prize, he at once, without
any grammar or dictionary, set about establishing paradigms, and
so, before he left school, acquired proficiency in Icelandic, as well as
in many other languages. At the University of Copenhagen he
continued in the same course, constantly widened his linguistic
horizon and penetrated into the grammatical structure of the most
diverse languages. Icelandic (Old Norse), however, remained his
favourite study, and it filled him with enthusiasm and national pride
that “our ancestors had such an excellent language,” the excellency
being measured chiefly by the full flexional system which Icelandic
shared with the classical tongues, partly also by the pure, unmixed
state of the Icelandic vocabulary. His first book (1811) was an
Icelandic grammar, an admirable production when we consider the
meagre work done previously in this field. With great lucidity he
reduces the intricate forms of the language into a consistent system,
and his penetrating insight into the essence of language is seen
when he explains the vowel changes, which we now comprise under
the name of mutation or umlaut, as due to the approximation of the
vowel of the stem to that of the ending, at that time a totally new
point of view. This we gather from Grimm’s review, in which Rask’s
explanation is said to be “more astute than true” (“mehr scharfsinnig
als wahr,” Kleinere schriften, 7. 515). Rask even sees the reason of
the change in the plural blöð as against the singular blað in the
former having once ended in -u, which has since disappeared. This
is, so far as I know, the first inference ever drawn to a prehistoric
state of language.
In 1814, during a prolonged stay in Iceland, Rask sent down to
Copenhagen his most important work, the prize essay on the origin
of the Old Norse language (Undersøgelse om det gamle nordiske
eller islandske sprogs oprindelse) which for various reasons was not
printed till 1818. If it had been published when it was finished, and
especially if it had been printed in a language better known than
Danish, Rask might well have been styled the founder of the modern
science of language, for his work contains the best exposition of the
true method of linguistic research written in the first half of the
nineteenth century and applies this method to the solution of a long
series of important questions. Only one part of it was ever translated
into another language, and this was unfortunately buried in an
appendix to Vater’s Vergleichungstafeln, 1822. Yet Rask’s work even
now repays careful perusal, and I shall therefore give a brief résumé
of its principal contents.
Language according to Rask is our principal means of finding out
anything about the history of nations before the existence of written
documents, for though everything may change in religion, customs,
laws and institutions, language generally remains, if not unchanged,
yet recognizable even after thousands of years. But in order to find
out anything about the relationship of a language we must proceed
methodically and examine its whole structure instead of comparing
mere details; what is here of prime importance is the grammatical
system, because words are very often taken over from one language
to another, but very rarely grammatical forms. The capital error in
most of what has been written on this subject is that this important
point has been overlooked. That language which has the most
complicated grammar is nearest to the source; however mixed a
language may be, it belongs to the same family as another if it has
the most essential, most material and indispensable words in
common with it; pronouns and numerals are in this respect most
decisive. If in such words there are so many points of agreement
between two languages that it is possible to frame rules for the
transitions of letters (in other passages Rask more correctly says
sounds) from the one language to the other, there is a fundamental
kinship between the two languages, more particularly if there are
corresponding similarities in their structure and constitution. This is a
most important thesis, and Rask supplements it by saying that
transitions of sounds are naturally dependent on their organ and
manner of production.
Next Rask proceeds to apply these principles to his task of finding
out the origin of the Old Icelandic language. He describes its position
in the ‘Gothic’ (Gothonic, Germanic) group and then looks round to
find congeners elsewhere. He rapidly discards Greenlandic and
Basque as being too remote in grammar and vocabulary; with regard
to Keltic languages he hesitates, but finally decides in favour of
denying relationship. (He was soon to see his error in this; see
below.) Next he deals at some length with Finnic and Lapp, and
comes to the conclusion that the similarities are due to loans rather
than to original kinship. But when he comes to the Slavonic
languages his utterances have a different ring, for he is here able to
disclose so many similarities in fundamentals that he ranges these
languages within the same great family as Icelandic. The same is
true with regard to Lithuanian and Lettic, which are here for the first
time correctly placed as an independent sub-family, though closely
akin to Slavonic. The comparisons with Latin, and especially with
Greek, are even more detailed; and Rask in these chapters really
presents us with a succinct, but on the whole marvellously correct,
comparative grammar of Gothonic, Slavonic, Lithuanian, Latin and
Greek, besides examining numerous lexical correspondences. He
does not yet know any of the related Asiatic languages, but throws
out the hint that Persian and Indian may be the remote source of
Icelandic through Greek. Greek he considers to be the ‘source’ or
‘root’ of the Gothonic languages, though he expresses himself with a
degree of uncertainty which forestalls the correct notion that these
languages have all of them sprung from the same extinct and
unknown language. This view is very clearly expressed in a letter he
wrote from St. Petersburg in the same year in which his
Undersøgelse was published; he here says: “I divide our family of
languages in this way: the Indian (Dekanic, Hindostanic), Iranic
(Persian, Armenian, Ossetic), Thracian (Greek and Latin), Sarmatian
(Lettic and Slavonic), Gothic (Germanic and Skandinavian) and Keltic
(Britannic and Gaelic) tribes” (SA 2. 281, dated June 11, 1818).
This is the fullest and clearest account of the relationships of our
family of languages found for many years, and Rask showed true
genius in the way in which he saw what languages belonged
together and how they were related. About the same time he gave a
classification of the Finno-Ugrian family of languages which is
pronounced by such living authorities on these languages as Vilhelm
Thomsen and Emil Setälä to be superior to most later attempts.
When travelling in India he recognized the true position of Zend,
about which previous scholars had held the most erroneous views,
and his survey of the languages of India and Persia was thought
valuable enough in 1863 to be printed from his manuscript, forty
years after it was written. He was also the first to see that the
Dravidian (by him called Malabaric) languages were totally different
from Sanskrit. In his short essay on Zend (1826) he also incidentally
gave the correct value of two letters in the first cuneiform writing,
and thus made an important contribution towards the final
deciphering of these inscriptions.
His long tour (1816-23) through Sweden, Finland, Russia, the
Caucasus, Persia and India was spent in the most intense study of a
great variety of languages, but unfortunately brought on the illness