
Proceedings of the 15th CISL Winter Workshop · Kyushu, Japan · February 2002

A Survey on Reflective Memory Systems

Il Joo Baek

Control Information Systems Lab., School of Electrical Engineering and Computer Science,
Seoul National University, Seoul, 151-742, Korea

Abstract: In this paper, several reflective memory systems (RMS) are surveyed. Differences between reflective memory systems and shared memory systems are described, and a brief overview of the architecture, advantages, disadvantages, history, and general features of each system is provided. The various RMS products are compared in terms of complexity, scalability, and compatibility, and recent research on each system is reviewed. Finally, methods to improve RMS performance are suggested.

Keywords: Reflective Memory Systems; Distributed Shared Memory

1 Introduction

Research and development aimed at achieving high computing power with multiple computers over a network have shown significant progress recently. However, an efficient interconnection among message-passing multicomputers is still hard to achieve, for several reasons. First, operating-system overhead increases the communication latency. Second, the layered protocol software further consumes processor time. Finally, predicting the communication latency is difficult. As the number of processors and the speed of the network increase, these problems become more critical. The reflective memory system (RMS) is one of the well-known solutions to these problems. It is based on automatic updates of remote shared-memory copies: in an RMS, writes to memory are automatically distributed to the other connected systems, while memory reads are executed on the local memory [1].

For the past ten years, many RMS systems have been proposed, developed, and commercialized. Each of them was designed to satisfy specific requirements and purposes. Therefore, it is difficult for users to choose a suitable and efficient RMS for their purpose, and researchers have had to spend much time gathering information on the many RMS systems and comparing them with one another. At this point, an investigation of present RMS technologies and a comparison of the systems is very important, since it gives a helpful guideline for user decision-making. This survey also suggests directions in which developers can propose better solutions and improve performance.

In this paper, a brief overview of the architecture, advantages, disadvantages, history, and general features of the following systems is provided: the Encore RMS [2], reflective memory/memory channel (RM/MC++) [1], the mirror memory multiprocessor (MMM) [3], the University of Tokyo's RMS [4], the memory channel for peripheral component interconnect (MC for PCI) [5], network shared memory (NSM) [6], the shared common RAM network (SCRAMNET) [7], VME Microsystems International Corporation's (VMIC) RM network [8], and finally the scalable high performance really inexpensive multiprocessor (SHRIMP) [9]. These RMSs are compared by complexity, scalability, and compatibility, and recent research on each system is reviewed. Finally, methods to improve RMS performance are suggested.

This paper is organized as follows. Section 2 gives a conceptual overview of distributed shared memory (DSM) and RMS, and provides a brief overview of the architecture, general features, advantages, disadvantages, and history of each system. In Section 3, the RMSs are compared by complexity, scalability, and compatibility. Section 4 discusses and predicts possible areas for future work. Section 5 presents concluding remarks.
2 Reflective Memory System

2.1 Overview

Multiprocessor systems fall into two large classes: shared-memory systems and distributed-memory systems. A shared-memory system is a tightly coupled multiprocessor consisting of multiple CPUs and a single global physical memory. Such systems offer a general and simple programming model, and users can readily emulate other programming models on them. However, shared-memory multiprocessors typically suffer from increased contention and longer latencies in accessing the shared memory, which degrades peak performance and limits scalability compared to distributed-memory multiprocessors. In contrast, a distributed-memory system (often called a multicomputer) consists of multiple independent processing nodes with local memory modules, connected by a general interconnection network. Consequently, systems with very high computing power become possible, and the hardware implementation is easier. However, the software is more complex and requires explicit use of send/receive primitives, because communication between nodes follows a message-passing model. To overcome these shortcomings, distributed shared memory (DSM) was invented, combining the advantages of the two approaches: a DSM system logically implements the shared-memory model on a physically distributed-memory system [10][11][12][13][4].

RMS is a branch of distributed shared memory (DSM) and therefore has advantages of both shared memory and distributed memory. An RMS is defined as a distributed shared-memory system based on automatic updates of remote shared-memory copies. In other words, whenever shared data may be reused, every processor's local memory keeps an exact copy of it; shared reads are therefore always satisfied from the local memory. RMS is sometimes called a mirror system or a replicated memory system, but the term reflective memory system is used most commonly [14].

The characteristic feature of an RMS is that, although each computer physically has its own local memory, the result is the same as if all the computers were attached to one large common memory [2]. The reflective memory is composed of dual-ported memory that is physically distributed and logically mapped into a global, shared address space.

Figure 1: A Reflective Memory/Memory Channel Node

RM updates can occur over different types of interconnection networks: bus, bus hierarchy, ring, mesh, or crossbar. The unit of sharing also varies: page, word, segment, or block. Shared-memory regions can be mapped dynamically or statically, and an RM's memory consistency model (MCM) can be strict, sequential, processor, release, or entry consistency [15][16].

There are several advantages of RMS over other DSM systems. First, computation typically overlaps with communication. Second, memory access time is usually constant and deterministic. Third, RM supports a multiple-reader/multiple-writer algorithm. Fourth, it has good fault tolerance. Finally, RM systems have been commercially available for decades.

On the other hand, there are also disadvantages. First, an RM system may produce unnecessary update traffic for applications characterized by long sequences of writes to the same word. Second, the interconnection medium may become a bottleneck because of frequent data transfers. Third, one-to-all broadcast communication must be supported. Fourth, the access time is slightly longer than for plain local memory, because the RM is typically implemented on a separate board.
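The write-broadcast, read-local behaviour described in this overview can be illustrated with a small software model. The following C sketch is not taken from any of the surveyed products; it is a minimal, assumed simulation in which every node holds a complete copy of the reflective region (rm_copy), a write updates the local copy and is then propagated to every other node's copy, and a read never leaves the local node.

    #include <stdint.h>

    #define NODES     4      /* assumed cluster size            */
    #define RM_WORDS  1024   /* assumed size of reflective area */

    /* Each node keeps its own complete copy of the reflective region. */
    static uint32_t rm_copy[NODES][RM_WORDS];

    /* A write is applied locally, then reflected to every other node.
     * In a real RMS the propagation is done by the RM hardware, not by
     * the writing CPU as it is here.                                   */
    void rm_write(int node, int offset, uint32_t value)
    {
        rm_copy[node][offset] = value;          /* local update     */
        for (int n = 0; n < NODES; n++) {       /* broadcast update */
            if (n != node)
                rm_copy[n][offset] = value;
        }
    }

    /* A read is always satisfied from the reader's local copy. */
    uint32_t rm_read(int node, int offset)
    {
        return rm_copy[node][offset];
    }

Because rm_read only touches the caller's own copy, read latency is that of local memory; the cost of sharing is paid entirely on the write path, which is exactly the trade-off the surveyed systems exploit in hardware.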

2.2 RM/MC

The first RMS was designed and patented by Gould Electronics in 1985 [17]. After Encore Computer Corporation acquired Gould in 1989, an RMS for real-time applications was implemented in 1990 [14][1].

The Reflective Memory/Memory Channel system (RM/MC) [2] is an improved system compared to previous RMSs and was introduced in 1993. RM/MC was initially designed to satisfy the needs of online transaction processing applications [18].

RM/MC is bus based and consists of up to eight processing nodes connected by the multiplexed, synchronous 64-bit RM/MC bus. RM/MC bus arbitration is centralized and uses a round-robin synchronous arbitration algorithm. Local memory pages are configured as reflective (shared) or private (nonshared) using translation windows on the transmit and receive sides. The RM/MC provides receive and transmit FIFO buffers to allow asynchronous transfers between the RM/MC bus and the host.

The advantages of the RM/MC system are as follows. While previous RMSs support word updates only, RM/MC supports both word and block updates. RM/MC combines potentially high bandwidth with low latency. The broadcast mechanism is easy to implement and update messages are small. The propagation time for word-update messages is also small, because the interconnection medium spends only two bus cycles to transfer an update message to all nodes [14]. Finally, RM/MC tolerates node failures without service disruption, so it can support numerous real-time applications.

The disadvantages of the RM/MC system are as follows. Because of a structural defect, RM/MC does not scale well and disables caching. It also has high cabling complexity. When short messages (words) and long messages (blocks) share the same FIFO buffers, short messages must wait until long ones are sent, so nodes may suffer from starvation.

A University of Belgrade group and Encore introduced an upgraded RMS for personal-computer (PC) networks. The RMS for PC has better scalability than RM/MC; it reduces RM bus traffic and alleviates the memory-contention problem [19].

RM/MC++ is another project carried out in cooperation between Encore and the University of Belgrade group [19]. It was proposed to enhance RM/MC system performance. The main idea of RM/MC++ is to minimize the delays that occur when short messages wait until long ones are sent from the same FIFO buffers.

This system supports numerous real-time applications, including vehicle simulation, telemetry, instrumentation, and nuclear power plant simulation and control. In particular, RM/MC was designed to satisfy the needs of online transaction processing (OLTP) applications.
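The RM/MC++ improvement described above is essentially a queueing change: short word updates should not be blocked behind long block transfers in a single transmit FIFO. The C sketch below is a hypothetical illustration of that idea, not Encore's actual design; the names tx_fifos, send_word_on_bus, and send_block_on_bus are assumptions. Word and block updates are queued separately, and the transmit logic drains the word queue first so that a block in flight cannot starve short updates.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical update descriptors: a short (word) update and a long
     * (block) update, queued separately on the transmit side.          */
    struct word_update  { uint32_t addr; uint32_t value; };
    struct block_update { uint32_t addr; const uint8_t *data; size_t len; };

    #define QDEPTH 64

    struct tx_fifos {
        struct word_update  words[QDEPTH];
        size_t              w_head, w_tail;
        struct block_update blocks[QDEPTH];
        size_t              b_head, b_tail;
    };

    /* Assumed hooks into the RM transmit hardware (not a real API). */
    void send_word_on_bus(uint32_t addr, uint32_t value);
    void send_block_on_bus(uint32_t addr, const uint8_t *data, size_t len);

    static bool words_pending(const struct tx_fifos *tx)  { return tx->w_head != tx->w_tail; }
    static bool blocks_pending(const struct tx_fifos *tx) { return tx->b_head != tx->b_tail; }

    /* One scheduling step: always prefer the word FIFO, so a short update
     * never waits behind a long block transfer (the RM/MC++ goal).       */
    void tx_step(struct tx_fifos *tx)
    {
        if (words_pending(tx)) {
            struct word_update u = tx->words[tx->w_head % QDEPTH];
            tx->w_head++;
            send_word_on_bus(u.addr, u.value);
        } else if (blocks_pending(tx)) {
            struct block_update b = tx->blocks[tx->b_head % QDEPTH];
            tx->b_head++;
            send_block_on_bus(b.addr, b.data, b.len);
        }
    }

A production design would also have to bound how long block updates can be deferred so that the block queue itself is not starved; the sketch only captures the separation that removes the short-behind-long blocking.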
Figure 2: The Mirror Memory Multiprocessor

2.3 MMM

MMM stands for the mirror memory multiprocessor. This system was proposed by MODCOMP in 1997 [3]. It is designed for time-critical applications that require both high computational performance and real-time performance.

MMM consists of a host computer, which controls the overall system operation, and up to eight nodes. Each node consists of a single-board computer and a memory board. These boards are connected to two buses: the VERSAmodule Eurocard bus (VME) and the VME subsystem bus (VSB). The system implements data and interrupt broadcast mechanisms using the VME bus slave function and location monitors. The bus slave function is created by logic in a dual-port memory and supports the broadcast mechanism. The location monitors are contained in memory and generate interrupts across the VSB bus to the target computer.

The system is strong for hard and soft real-time applications: all tasks complete within their deadlines. In addition, it supports a variety of high-performance industry-standard I/O buses, interfaces, and protocols. However, MMM does not scale well compared to other bus-based multiprocessors. An architectural weakness is the VME bus's dual function: the VME bus serves both as the medium for maintaining RM coherence and as the system bus.

MMM supports numerous hard and soft real-time applications, for example factory automation, process control, supervisory control and data acquisition, data communication, and simulation and trainer applications.

Figure 3: A Widely Distributed RMS

2.4 A Widely Distributed RMS

A University of Tokyo group proposed a widely distributed replicated shared memory [4] in 1994. It is designed for a small number of nodes distributed over a wide area.

Each node maintains a copy of the global shared-memory space in local memory, and a multicast server broadcasts write accesses to all nodes. This RSM's memory consistency model (MCM) is looser than that of a typical RM system. Because the multicast server is realized completely in software, it creates a software-overhead problem.

The system is especially suitable for applications that require real-time operation in a widely distributed environment, because other latency-hiding techniques such as context switching or prefetching are not always effective for real-time operation.
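Because the Tokyo system realizes its multicast server entirely in software, the server's core can be pictured as an ordinary relay loop. The sketch below is an assumed illustration rather than the published implementation; recv_write and send_write stand in for the real wide-area transport. Every incoming write access is forwarded to every node so that all replicas of the shared region receive the same updates.

    #include <stdint.h>

    #define NODES 4

    /* A write access as it travels through the software multicast server. */
    struct write_msg {
        int      src_node;   /* node that performed the write   */
        uint32_t offset;     /* offset inside the shared region */
        uint32_t value;      /* new value                       */
    };

    /* Assumed transport hooks; in the real system these would be network
     * receive/send calls over the wide-area links.                       */
    int  recv_write(struct write_msg *out);   /* returns 0 on success */
    void send_write(int dst_node, const struct write_msg *msg);

    /* Core relay loop of the software multicast server: every incoming
     * write is forwarded to every node so that each replica applies the
     * same update.                                                       */
    void multicast_server_loop(void)
    {
        struct write_msg msg;

        while (recv_write(&msg) == 0) {
            for (int node = 0; node < NODES; node++) {
                send_write(node, &msg);
            }
        }
    }

Every write passes through this user-level loop, so the per-write processing and the extra network hop are exactly the software overhead noted above.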
Figure 4: The Memory Channel Network for PCI Architecture

2.5 Memory Channel Network for PCI

Digital Equipment Corporation designed a Memory Channel for the peripheral component interconnect (PCI) bus in 1996 [5]. It was designed to enhance a cluster's parallel performance and availability.

The basic network primitive is a memory-mapped circuit that provides a write-only connection between a page of virtual address space on a transmitting node and a page of physical memory on a receiving node [20]. The network uses a crossbar interconnect and supports page-level connection granularity.

MC supports several connection models, including point-to-point, multicast, and broadcast. In MC, a server can transmit data directly to the requesting nodes without affecting its own local memory. It also lets receiver nodes send acknowledgments, and it implements an innovative remote-read primitive as two write transfers, without software intervention.

MC was designed for homogeneous clusters and does not support heterogeneous computing, in spite of the fact that all computers incorporating the PCI bus can be connected using this approach. Also, the crossbar interconnection network limits the number of system nodes.
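The write-only page connection that forms MC's basic primitive can be modeled in software as a mapping table consulted on stores to transmit pages. The sketch below is a conceptual model only, using assumed types, sizes, and the hypothetical helper deliver_remote_write rather than DEC's actual interface: a transmit page is bound to a (receiving node, receive page) pair, and a store into that page is reproduced at the same offset in the receiver's physical page.

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 8192     /* assumed page size                    */
    #define MAX_CONNS 64       /* assumed number of mapped connections */

    /* One write-only connection: stores into tx_page on this node are
     * reproduced in rx_page's physical memory on node rx_node.         */
    struct mc_connection {
        uintptr_t tx_page;     /* local virtual page (transmit side)   */
        int       rx_node;     /* receiving node                       */
        uintptr_t rx_page;     /* remote physical page (receive side)  */
    };

    static struct mc_connection conn_table[MAX_CONNS];
    static size_t conn_count;

    /* Assumed transport hook that carries one word to the remote page. */
    void deliver_remote_write(int node, uintptr_t phys_addr, uint32_t value);

    /* Model of a store hitting a mapped transmit page: look up the
     * connection and forward the write at the same offset in the page. */
    void mc_store(uintptr_t vaddr, uint32_t value)
    {
        for (size_t i = 0; i < conn_count; i++) {
            uintptr_t base = conn_table[i].tx_page;
            if (vaddr >= base && vaddr < base + PAGE_SIZE) {
                uintptr_t offset = vaddr - base;
                deliver_remote_write(conn_table[i].rx_node,
                                     conn_table[i].rx_page + offset, value);
                return;
            }
        }
        /* Not a transmit page: an ordinary local store in the real system. */
    }

The remote-read primitive mentioned above fits the same model: the requester writes a request into a page mapped to the responder, and the responder writes the data back into a page mapped to the requester, so the read completes as two such forwarded writes.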
     
Figure 5: Network Shared Memory

2.6 Network Shared Memory

Network shared memory (NSM) was proposed by Architecture Technology Corporation (ATC) in 1994 [6]. It is a low-cost approach for clustering workstations into a single, shared-memory, mid-range parallel computer.

Nodes are connected in a unidirectional slotted ring by high-speed optical links, and a Network Memory Interface (NMI) is used to interconnect the processing elements of the workstations into a parallel computer. The memory update size is a word, and the MCM is sequential consistency. The system scales successfully up to 60 processors.

The main advantage of this system is that it provides hardware support for synchronization by implementing a separate synchronization ring. However, NSM does not support overlapping computation with communication, in order to ensure memory consistency. It is being used in CAD, weather data processing, and parallel databases.

2.7 SCRAMNET+

The shared common RAM network (SCRAMNET) [21] was developed by Systran Corporation in 1989 and later advanced into SCRAMNET+ [7].

It is a ring-based RM product for real-time applications. Up to 256 nodes can be connected via a single fiber-optic ring, and each node keeps its own copy of the entire RM space. Data can be transmitted up to 3500 meters over fiber-optic cable and up to 30 meters over coaxial cable [22].

This system has various advantages. To improve the effective network throughput, SCRAMNET+ provides variable-length packets. The network board is also designed to be medium independent. Another advantage is data filtering, in which only writes that change a data value are broadcast to the other nodes.

While it has many advantages, there are a few disadvantages. Its reflective memory is limited to 8 Mbytes, because the standard specifies a fixed number of address lines. Moreover, each node must keep a copy of the entire RM space.

SCRAMNET+ is being applied in many real-time applications, including aircraft, land vehicle, and missile simulation, robotics, data acquisition, and virtual reality.
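SCRAMNET+'s data filtering can be expressed as a simple guard on the transmit path. The C sketch below is an assumed illustration of the idea, not Systran's hardware logic; broadcast_update is a hypothetical hook for putting an update packet on the ring. A write is broadcast only when it actually changes the locally stored value, so repeated writes of the same value generate no network traffic.

    #include <stdbool.h>
    #include <stdint.h>

    #define RM_WORDS 2048        /* assumed size of the shared RM space */

    static uint32_t local_rm[RM_WORDS];

    /* Assumed hook that puts one update packet on the ring. */
    void broadcast_update(uint32_t offset, uint32_t value);

    /* Filtered write: update local memory, but only broadcast the write
     * when the stored value really changed (SCRAMNET+ data filtering).  */
    bool rm_write_filtered(uint32_t offset, uint32_t value)
    {
        if (local_rm[offset] == value)
            return false;        /* identical value: no network traffic */

        local_rm[offset] = value;
        broadcast_update(offset, value);
        return true;             /* update was reflected to other nodes */
    }

For workloads that repeatedly store unchanged values, such as periodic state refreshes, this check removes exactly the kind of redundant update traffic identified as an RMS weakness in Section 2.1.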
Figure 6: A VMIC RM Network

2.8 VMIC RM Network

VME Microsystems International Corporation (VMIC) has produced the VMIC RM network family since 1995 [8]. The VMIC network consists of five product families: the 5550, 5560, 5570, 5580, and 5590 families.

Each RM board is configured with on-board SDRAM. Writes are stored in the local SDRAM and broadcast over a high-speed fiber-optic data path to the other RM nodes. Data can be shared between up to 256 independent nodes at rates of up to 174 Mbyte/s.

The VMIC RM network supports a number of popular system buses, and programmable logic controllers (PLCs) can be networked in this system. It also uses static random access memory to implement the on-board RM, so it can provide fast read access to stored data.

There are a few disadvantages with this system. First, each node keeps its own copy of the entire RM space. Second, the VMIC RM network does not provide data filtering. Third, the system interrupt used to read just-received data increases software overhead.

This RM network is designed for many real-time applications, including aircraft, ship, submarine, and power plant simulators, and data acquisition.

2.9 SESAME

MERLIN stands for the memory routed, logic interconnection network, which was developed by Sandia National Laboratories and the State University of New York at Stony Brook. To improve the performance of the MERLIN system, the scalable eagerly shared memory (SESAME) was proposed in 1991 [14].

SESAME's type of interconnection is not fixed; however, the prototype implements a fiber-optic 2D mesh. The path of an update is not carried with the data but is determined by the global virtual address. The update size is a word. The system permits multiple writes within a multicast group, and sequence numbers are assigned to packets to enforce arrival in the original write order [23].

The system is designed for heterogeneous networks. Only 10 percent of the system's node interface is processor dependent, so it can easily be ported to numerous different processors. To gain higher bandwidth, the system can merge single-word packets with consecutive addresses. SESAME determines routes for update messages statically; however, based on compile-time analysis, it can dynamically disable sharing for temporary data changes.

SESAME does not support real block transfers generated by a direct memory access (DMA) unit. It also cannot correct transmission errors successfully, because data is transformed into bit-serial streams for the fiber-optic channel.

2.10 SHRIMP

In 1994, Princeton University introduced the scalable high performance really inexpensive multiprocessor (SHRIMP) [9]. SHRIMP is based on a custom-designed virtual-memory-mapped network interface. The network interface maps the virtual memory of a sending process to the virtual memory of a receiving process, and a network-interface page table holds information about the mapping [24][25].

The system supports both automatic and deliberate updates. Consecutive writes buffered in the transmit FIFO are merged into the same automatically updated block packet, and caching is supported in the system. The main disadvantage of SHRIMP is that it does not support broadcast and multicast communication models [26][27].
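SHRIMP's merging of consecutive writes into a single block packet is a form of write combining in the transmit path. The sketch below is a simplified, assumed model of that behaviour, not the SHRIMP hardware; emit_block_packet and the block size are hypothetical. Writes that extend the current contiguous run are appended to a pending block, and any non-contiguous write (or a full block) flushes the pending block as one packet.

    #include <stdint.h>

    #define MAX_BLOCK_WORDS 16      /* assumed maximum merged packet size */

    /* Pending block being assembled in the transmit path. */
    static struct {
        uint32_t base;                      /* address of first word  */
        uint32_t words[MAX_BLOCK_WORDS];    /* merged data            */
        int      count;                     /* number of words merged */
    } pending;

    /* Assumed hook that emits one block packet onto the network. */
    void emit_block_packet(uint32_t base, const uint32_t *words, int count);

    static void flush_pending(void)
    {
        if (pending.count > 0) {
            emit_block_packet(pending.base, pending.words, pending.count);
            pending.count = 0;
        }
    }

    /* Write combining: consecutive addresses grow the current block,
     * anything else flushes it and starts a new one.                   */
    void tx_write(uint32_t addr, uint32_t value)
    {
        int contiguous = pending.count > 0 &&
                         addr == pending.base + 4u * (uint32_t)pending.count;

        if (!contiguous || pending.count == MAX_BLOCK_WORDS)
            flush_pending();

        if (pending.count == 0)
            pending.base = addr;

        pending.words[pending.count++] = value;
    }

In the real interface a pending block is also flushed on other events, for example when an update must become visible promptly; the sketch captures only the merging rule described above.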
3 Comparison

The complexity of a system indicates how hard it is to implement in both hardware and software. This factor is very important for commercialization because it is directly related to cost. The kind of interconnection network, the shape of the cable connections, and the number of chips required on the board were considered when deciding the complexity of each system. Scalability influences the range of possible application areas both directly and indirectly. In this paper, low scalability means that the system can use fewer than 8 processors, medium scalability means that more than 32 processors can be attached to the network, and high scalability means that the system can hold over 256 processors. Compatibility is also a critical factor when users adapt a system to their environment, since it can reduce the extra cost of adapting the system to different purposes and system environments. Table 1 compares the complexity, scalability, and compatibility of each RMS.

Table 1: Complexity, Scalability and Compatibility of RMSs

System      Complexity  Scalability  Compatibility
RM/MC++     high        low          high
MMM         low         low          low
RSM         low         low          med
MC          low         low          low
NSM         high        med          high
ScramNet+   low         high         high
VMIC        low         high         high
Shrimp      med         high         high

Table 2: Reflective Memory Systems

System      Network    Application    Company
RM/MC++     Bus        OLTP (1)       Encore
MMM         Bus        Real time      Modcomp
RSM         Bus        Real time      Tokyo
MC          Crossbar   Client-server  DEC
NSM         Ring       S&E (2)        ATC
ScramNet+   Ring       Real time      Systran
VMIC        Ring       Real time      VMIC
Shrimp      Mesh       Client-server  Princeton

(1) OLTP: online transaction processing
(2) S&E: scientific and engineering
4 Discussion

Table 2 shows each RMS's interconnection network and main application area. It also identifies which company or university was involved in proposing and developing each RMS. As surveyed so far, various RMSs have been proposed and commercialized over the past decade. However, there are still topics that deserve further investigation.

Data filtering would reduce update traffic and memory contention and increase interconnection-network utilization. Node prioritization could lower transient overload on both the transmit and receive sides and reduce unnecessary waiting; it is therefore able to increase system performance and usability. A hierarchy of RM buses would increase system scalability. Dynamic RM mapping could remove the need for a copy of the entire RM space and bring better utilization of the available memory. Caching the RM region might significantly decrease memory access time. Hardware support for heterogeneous computing would improve system usability. A hardware broadcast mechanism can improve system performance and ease MCM implementation. Finally, a separate system bus reserved for RM updates only might reduce the propagation time of update messages.

5 Conclusion

This paper surveyed the history, architecture, general features, advantages, disadvantages, and application areas of recent RMS technologies. The complexity, scalability, and compatibility of each RMS were also compared. The results of this paper can be used by users who need to set up an RMS for their purpose and by researchers who study RMSs. Since RMSs designed for industrial environments have a short history, this paper did not provide a comparison of the performance of each RMS. This survey of RMSs can be used to propose an improved RM model for the industrial environment.

References

[1] Encore Computer Corporation, "Reflective Memory Specifics," www.encore.com/products/hardware/reflecive.

[2] S. Lucci and I. Gertner, "Reflective-memory multiprocessor," in Proceedings of the Twenty-Eighth Hawaii International Conference on System Sciences, vol. 1, pp. 85–94, 1995.

[3] B. Furht, "Architecture and performance evaluation of the MMM," IEEE TC on Computer Architecture Newsletter, vol. 1, pp. 66–75, Mar. 1997.

[4] M. Oguchi and H. Aida, "A proposal for a DSM architecture suitable for a widely distributed environment and its evaluation," in Proceedings of the Fourth IEEE International Symposium on High Performance Distributed Computing, pp. 32–39, 1995.

[5] R. Gillett, M. Collins, and D. Pimm, "Overview of memory channel network for PCI," in Proc. Compcon '96: Technologies for the Information Superhighway, pp. 244–249, 1996.

[6] R. S. Ramanujan, J. C. Bonney, and K. J. Thurber, "Network shared memory: a new approach for clustering workstations for parallel processing," in Proceedings of the Fourth IEEE International Symposium on High Performance Distributed Computing, pp. 48–56, 1995.

[7] Systran Corporation, "SCRAMNet+ Shared Memory – Speed, Determinism, Reliability, and Flexibility for Distributed Real-Time Systems," www.systran.com.

[8] VME Microsystems International Corporation, "VMIC's Reflective Memory Network," www.vmic.com.

[9] M. A. Blumrich et al., "Virtual memory mapped network interface for the SHRIMP multicomputer," in Proc. of the 21st Annual International Symposium on Computer Architecture, pp. 142–153, 1994.

[10] B. Nitzberg and V. Lo, "Distributed shared memory: a survey of issues and algorithms," Computer, vol. 24, pp. 52–60, 1991.

[11] J. Protic and M. Tomasevic, "Distributed shared memory: concepts and systems," IEEE Parallel and Distributed Technology: Systems and Applications, vol. 4, pp. 63–71, 1996.

[12] A. Judge, P. Nixon, V. Cahill, B. Tangney, and S. Weber, "Overview of distributed shared memory," Tech. Rep., 1998.

[13] C. Amza, A. L. Cox, S. Dwarkadas, P. Keleher, H. Lu, R. Rajamony, W. Yu, and W. Zwaenepoel, "TreadMarks: Shared memory computing on networks of workstations," IEEE Computer, vol. 29, no. 2, pp. 18–28, 1996.

[14] M. Jovanovic and V. Milutinovic, "An overview of reflective memory systems," IEEE Concurrency, vol. 7, pp. 56–64, 1999.

[15] J. Protic and V. Milutinovic, "Reflective memory system based on a grid of buses," in Proc. of the 21st International Conference on Microelectronics, vol. 2, Sep. 1997.

[16] K. Li and P. Hudak, "Memory coherence in shared virtual memory systems," in Proceedings of the 5th ACM Symposium on Principles of Distributed Computing (PODC), New York, NY, pp. 229–239, ACM Press, 1986.

[17] C. Wilks, "SCI-Clone/32 – A Distributed Real Time Simulation System," in Computing in High Energy Physics, edited by Hertzberger and Hoogland, North-Holland Press, vol. 1, pp. 416–422, 1986.

[18] A. Leff and C. Pu, "A classification of transaction processing systems," Computer, vol. 24, pp. 63–76, 1991.

[19] M. Jovanovic, M. Tomasevic, and V. Milutinovic, "A simulation-based comparison of two reflective memory approaches," in Proceedings of the Twenty-Eighth Hawaii International Conference on System Sciences, vol. 1, pp. 140–149, 1995.

[20] R. B. Gillett, "Memory channel network for PCI," IEEE Micro, vol. 16, no. 1, pp. 12–18, 1996.

[21] Systran Corporation, "Shared-Memory Networking Architectures – Simplicity and Elegance," www.systran.com.

[22] Systran Corporation, "SCRAMNet+ Overview," www.systran.com.

[23] C. Maples, "A high-performance memory-based interconnection system for multicomputer environments," 1992.

[24] M. A. Blumrich and R. D. Alpert, "Design choices in the SHRIMP system: An empirical study," in Proc. of the 25th Annual International Symposium on Computer Architecture, 1998.

[25] A. Bilas and E. W. Felten, "Fast RPC on the SHRIMP virtual memory mapped network interface," Journal of Parallel and Distributed Computing, vol. 40, no. 1, pp. 138–146, 1997.

[26] E. W. Felten, R. D. Alpert, A. Bilas, M. A. Blumrich, D. W. Clark, S. M. Damianakis, C. Dubnicki, L. Iftode, and K. Li, "Early experience with message-passing on the SHRIMP multicomputer," in Proc. of the 23rd Annual International Symposium on Computer Architecture (ISCA '96), pp. 296–307, 1996.

[27] S. N. Damianakis, C. Dubnicki, and E. W. Felten, "Stream sockets on SHRIMP," in Communication, Architecture, and Applications for Network-Based Parallel Computing, pp. 16–30, 1997.
