
Lock-Free Linked Lists and Skip Lists

Mikhail Fomitchev and Eric Ruppert
Department of Computer Science, York University

ABSTRACT

Lock-free shared data structures implement distributed objects without the use of mutual exclusion, thus providing robustness and reliability. We present a new lock-free implementation of singly-linked lists. We prove that the worst-case amortized cost of the operations on our linked lists is linear in the length of the list plus the contention, which is better than in previous lock-free implementations of this data structure. Our implementation uses backlinks that are set when a node is deleted so that concurrent operations visiting the deleted node can recover. To avoid performance problems that would arise from traversing long chains of backlink pointers, we introduce flag bits, which indicate that a deletion of the next node is underway. We then give a lock-free implementation of a skip list dictionary data structure that uses the new linked list algorithms to implement individual levels. Our algorithms use the single-word C&S synchronization primitive.

Categories and Subject Descriptors

E.1 [Data]: Data Structures—Distributed Data Structures; D.1.3 [Software]: Programming Techniques—Concurrent Programming; F.2.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity

General Terms

Algorithms, Performance, Design, Reliability, Theory

Keywords

distributed, fault-tolerant, lock-free, linked list, skip list, efficient, analysis, amortized analysis.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
PODC'04, July 25–28, 2004, St. John's, Newfoundland, Canada.
Copyright 2004 ACM 1-58113-802-4/04/0007 ...$5.00.

1. INTRODUCTION

A common way to implement shared data structures in distributed systems is to use mutual exclusion locks. However, this approach has a major weakness: when one process holds a lock, no other processes can modify the locked part. Thus, a delay of one process can cause performance degradation and priority inversion. When halting failures can occur, this becomes particularly important, because the entire system can stop making progress if one process fails. By contrast, an implementation of a shared-memory object is lock-free (or non-blocking) if a finite number of steps taken by any process guarantees the completion of some operation. If an implementation is lock-free, delays or failures of individual processes do not block the progress of other processes in the system. Lock-free data structures also have the potential to have better performance, because several processes are allowed to modify a data structure at the same time.

Herlihy [4, 5] introduced the first universal constructions for designing lock-free data structures using the Compare&Swap (C&S) synchronization primitive. Others followed, but they suffer from several flaws, such as inefficiency, low parallelism, excessive copying, and generally high overhead, which often make them impractical. To achieve adequate performance, original algorithms, specific to a particular data structure, are usually required.

Implementing linked lists efficiently is very important, as they act as building blocks for many other data structures. We present a new lock-free implementation of a sorted singly-linked list, which handles all dictionary operations with a better average complexity than any prior implementation. Most recent implementations of lock-free linked lists [3, 8] were evaluated only by doing experimental testing. We believe that there is a certain lack of theoretical development in this area, and our work addresses this problem. A skip list [12] is a dictionary data structure that provides randomized algorithms for searches, insertions, and deletions that run in O(log n) expected time, where n is the number of elements in the skip list. The expectation is taken over random choices made by the algorithms. We also give a lock-free implementation of a skip list that is based on using our linked list algorithms to maintain each level of the skip list. Recently, other lock-free skip list designs have been given independently of this work [2, 14, 15].

Our model is an asynchronous shared-memory distributed system of several processes, where an arbitrary number of process halting failures are allowed. Our algorithms use atomic single-word C&S synchronization primitives. The implementations that we present are linearizable [6].

Lock-free implementations allow individual operations to take arbitrarily many steps, so one generally cannot evaluate their worst-case cost. It is natural to analyze the average cost of operations instead, because this evaluates the performance of the system as a whole. To calculate the average

cost of operations in our linked list implementation, we use an amortized analysis that relies on a fairly complex technique of billing part of the cost of each operation S to concurrent operations that slow S down by modifying the data structure. The amortized cost of an operation S, denoted t̂(S), is equal to the actual cost of S plus the total cost billed to S from other operations minus the total cost billed from S to other operations. We measure the cost of operations as a function of the size of the list and the contention. The point contention at time T is the number of processes running concurrently at T. We define the contention of operation S, denoted c(S), to be the maximum point contention during the execution of S. We prove that t̂(S) ∈ O(n(S) + c(S)), where n(S) is the number of elements in the list when S is invoked and c(S) is the contention of S. The O(n(S)) term comes from the cost of traversing the list, while the overhead that comes from concurrency is bounded by O(c(S)). It then follows that for any execution E, the average cost of an operation in E is

    t̄E ∈ O( (Σ_{S∈E} (n(S) + c(S))) / mE ) = O(n̄E + c̄E),

where the sum is taken over all operations S invoked during E, and mE is the total number of these operations. The values n̄E and c̄E are the average number of elements in the list during E and the average operation contention during E, which are defined as follows: n̄E = (Σ_{S∈E} n(S)) / mE and c̄E = (Σ_{S∈E} c(S)) / mE.

The rest of the paper is organized as follows. In Section 2 we discuss related work. We give our implementation of lock-free linked lists, including a sketch of the proof of correctness and analysis, in Section 3. We briefly present our implementation of lock-free skip lists in Section 4.

2. RELATED WORK

The first implementation designed for lock-free linked lists was presented by Valois [17]. The main idea of his approach was to maintain auxiliary nodes in between normal nodes of the list in order to resolve the problems that arise because of interference between concurrent operations. Also, each node in his list had a backlink pointer which was set to point to the predecessor when the node was deleted. These backlinks were then used to backtrack through the list when there was interference from a concurrent deletion. (A similar idea was used in an earlier, lock-based implementation of linked lists by Pugh [11].) Another lock-free implementation of linked lists was given by Harris [3]. His main idea was to mark a node before deleting it in order to prevent concurrent operations from changing its right pointer. We look at this implementation in detail in Section 3.1. Harris's algorithms are simpler than Valois's, and his experimental results show that generally they also perform better. Yet another implementation of a lock-free linked list was proposed by Michael [8]. He used Harris's design to implement the underlying data structure, but his algorithms, unlike Harris's, were compatible with efficient memory management techniques, such as IBM freelists [7, 16] and the safe memory reclamation method [9].

Our linked lists are built by combining the techniques of marking nodes [3] and using backlink pointers [11, 17], and also new ideas, such as the flag bits described in Section 3.1, which are introduced to improve the worst-case performance. We show that for any execution E, the average cost of an operation in the execution is O(n̄E + c̄E), where n̄E and c̄E were defined in the introduction. To compare, the average cost per operation in Valois's implementation can be Ω(mE), where mE is the total number of operations invoked during E. This is possible even when n̄E and c̄E are O(1) [17]. It is not hard to see that n̄E + c̄E ≤ mE (because mE includes both completed operations and operations that are currently in progress), and the difference can be quite significant. As we show in Section 3.1, the average cost of operations in Harris's implementation can be Ω(n̄E c̄E), which is also strictly worse than in our implementation.

Pugh's skip list data structure, originally designed for sequential accesses [12], is a natural candidate for concurrent dictionary implementations, since it has good expected performance without requiring any explicit, centralized balancing. Lock-based concurrent implementations have been given by Pugh [11] and by Lotan and Shavit [13]. Valois claimed that his lock-free linked list can easily be used to obtain lock-free skip lists [17], but it is not clear how: for example, a process traversing his linked list must maintain a collection of pointers called a cursor, and it is difficult to do so when one descends through the levels of a skip list. Sundell and Tsigas recently gave the first lock-free implementation of a skip list [14]. Their implementation supports the Insert, Update and DeleteMin operations. They later extended it to implement the full range of dictionary operations [15]. Another recent implementation of lock-free skip lists using single-word C&S's was presented by Fraser [2]. Although both of these designs were done independently of ours and of each other, there are some similarities between the three resulting skip list algorithms. All use the marking technique [3] to implement deletions on the individual levels of the skip list. Fraser's algorithms use Harris's design style, where an operation restarts if it detects interference from a concurrent operation. Sundell and Tsigas's design allows processes to overcome the interference in some cases by using backlink pointers [11, 17]. Our design employs backlink pointers and flag bits in order to ensure that processes can always recover efficiently from such interference. All implementations use helping (in different ways) to complete deletions that could block the progress of other operations. Sundell and Tsigas incorporate a reference counting scheme to handle memory management.

Fraser gives other skip list designs that use more powerful primitives, such as multi-word C&S and software transactional memory [2]. Experimental results on lock-free linked lists [3, 8] and skip lists [2, 14, 15] suggest that they can be a practical alternative to lock-based implementations.

3. LINKED LISTS

We now present our singly-linked list implementation. Our algorithms use the C&S primitive, which atomically executes the following code.

C&S (Word *address, Word old_val, Word new_val) : Word
1  value = *address
2  if (value == old_val)
3      *address = new_val
4  return value

3.1 Linked List Design

The basic problem in designing a lock-free linked list is that when a process is deleting a node X by performing a
Figure 1: Harris's two-step deletion of a node. (Panels: initial configuration; Step 1: marking; Step 2: physical deletion.)

Figure 2: Three-step deletion of a node used in our implementation. (Panels: initial configuration; Step 1: flagging; Step 2: setting the backlink and marking; Step 3: physical deletion.)

C&S on X's predecessor, there must be a guarantee that X's right pointer is not changed by a concurrent operation. Otherwise, incorrect executions can be constructed (see [17] or [3]). One of the ways to deal with this issue was given by Harris [3]. Our linked list implementation uses a similar technique, so we will look at Harris's implementation in more detail.

Harris replaced the right pointer of each node with a composite field, which we will call a successor field. The successor field consists of a right pointer and a mark bit.¹ When a process needs to change the right pointer of a node, it applies a C&S to the successor field of that node. A mark bit acts as a toggle that is used to control when the right pointer of the node can be changed. Normally, the mark bit is 0. To delete a node B, a process uses two C&S's: the first marks B's successor field by setting its mark bit to 1, and the second removes B from the list, as illustrated in Figure 1, where marked successor fields are crossed. A node is logically deleted after the first step, and physically deleted after the second step. All of the C&S's performed by the algorithms modify only unmarked successor fields. Therefore, once the successor field of a node is marked, it never changes.

¹ In many modern architectures, a 32-bit word that stores a pointer has two unused bits. One of those can be used to store the mark bit and the other can be used to store the flag bit that we introduce later.

Harris's approach, however, has certain performance-related problems. Consider two processes P1 and P2 performing concurrent operations: P1 attempts to insert a new node after node X, and P2 attempts to delete node X. Suppose that, just before P1 is about to execute a C&S, P2 marks node X, and so P1's C&S fails. When this happens, Harris's algorithms require P1 to restart from the beginning of the list, which can lead to poor performance. Consider an execution E in a system of q processes. First insert n keys into the list. Then make one process Pq repeatedly delete the last node of the list, while the rest of the processes P1, ..., Pq−1 attempt to insert new nodes at the end of the list. In each round of the execution, Pq marks a node right after processes P1, ..., Pq−1 have located the correct insertion position, but before any of them perform a C&S. Each time P1, ..., Pq−1 attempt to insert the keys at the end of the list, they have to search through the whole list to locate the appropriate insertion position, and therefore the total work done by the system is Ω(q · (n + (n − 1) + ... + 1)) = Ω(qn²). If we make n > q, then the average cost of an operation in this execution is Ω(qn) = Ω(n̄E c̄E). (The variables n̄E and c̄E were defined in the introduction.)

Our implementation achieves better worst-case performance by making processes recover from failures instead of restarting. We augment each node of our data structure with an additional pointer field called backlink. When a node X gets deleted, its backlink is set to X's predecessor. If some process P then fails a C&S because X is marked, P follows X's backlink to X's predecessor. If the predecessor is also marked, P follows the predecessor's backlink, and so on, until it reaches an unmarked node U. Then P resumes its operation from U rather than from the beginning of the list. The sequence of backlinks that P traverses before reaching U is called a chain of backlinks. The introduction of backlinks alone, however, does not guarantee the desired operation complexity. The problem is that long chains of backlinks can be traversed by the same process many times. This happens when these chains grow towards the right, i.e. when backlink pointers are set to marked nodes, and thus nodes are linked to the right end of the chains. We eliminate this possibility by introducing flag bits.

The flag bit can be thought of as a warning that a deletion of the next node is in progress. Like the mark bit, the flag bit is part of the successor field, and is initially set to 0. When a node is flagged (i.e. when its flag bit is set to 1), its successor field is fixed and cannot be marked or otherwise changed until the flag is removed. Also, a marked node can never get flagged, and therefore no node can be both flagged and marked. Before marking a node B, a process flags the predecessor node A, thus ensuring that when B's backlink is set to point to A, it will not be pointing to a marked node. Figure 2 illustrates how deletions are performed in our data structure. Shaded boxes denote flagged successor fields, and crossed boxes denote marked successor fields. The deletion of node B consists of three steps: (1) flagging the predecessor node A by applying a C&S to its successor field (Figure 2, Step 1); (2) setting B's backlink to point to its predecessor A and then marking B by applying a C&S to its successor field (Figure 2, Step 2); (3) performing a physical deletion of node B and removing A's flag by applying a C&S to A's successor field (Figure 2, Step 3).

To preserve the lock-freedom property, we allow processes to help one another with deletions. For example, if a process

Search (Key k) : Node
// Searches for a node with the supplied key.
1  (curr_node, next_node) = SearchFrom(k, head)
2  if (curr_node.key == k)
3      return curr_node
4  else
5      return NO_SUCH_KEY

SearchFrom (Key k, Node *curr_node) : (Node, Node)
// Finds two consecutive nodes n1 and n2
// such that n1.key ≤ k < n2.key.
1  next_node = curr_node.right
2  while (next_node.key ≤ k)
       // Ensure that either next_node is unmarked,
       // or both curr_node and next_node are
       // marked and curr_node was marked earlier.
3      while (next_node.mark == 1 and
               (curr_node.mark == 0 or curr_node.right ≠ next_node))
4          if (curr_node.right == next_node)
5              HelpMarked(curr_node, next_node)
6          next_node = curr_node.right
7      if (next_node.key ≤ k)
8          curr_node = next_node
9          next_node = curr_node.right
10 return (curr_node, next_node)

HelpMarked (Node *prev_node, Node *del_node)
// Attempts to physically delete the marked
// node del_node and unflag prev_node.
1  next_node = del_node.right
2  c&s(prev_node.succ, (del_node, 0, 1), (next_node, 0, 0))

Figure 3: Search, SearchFrom, and HelpMarked.

Delete (Key k) : Node
// Attempts to delete a node with the supplied key.
1  (prev_node, del_node) = SearchFrom(k − ε, head)
2  if (del_node.key ≠ k)    // k is not found in the list.
3      return NO_SUCH_KEY
4  (prev_node, result) = TryFlag(prev_node, del_node)
5  if (prev_node ≠ null)
6      HelpFlagged(prev_node, del_node)
7  if (result == false)
8      return NO_SUCH_KEY
9  return del_node

HelpFlagged (Node *prev_node, Node *del_node)
// Attempts to mark and physically delete node del_node,
// which is the successor of the flagged node prev_node.
1  del_node.backlink = prev_node
2  if (del_node.mark == 0)
3      TryMark(del_node)
4  HelpMarked(prev_node, del_node)

TryMark (Node del_node)
// Attempts to mark the node del_node.
1  repeat
2      next_node = del_node.right
3      result = c&s(del_node.succ, (next_node, 0, 0), (next_node, 1, 0))
4      if (result == (∗, 0, 1))    // Failure due to flagging.
5          HelpFlagged(del_node, result.right)
6  until (del_node.mark == 1)

Figure 4: Delete, HelpFlagged, and TryMark.

cannot complete its operation because of a flagged node, it will try to complete the corresponding deletion, thus removing the flag, and then continue with its own operation.

3.2 Algorithms

The nodes in our linked list are ordered by their keys, and for simplicity our data structure does not allow users to insert duplicate keys. Each node has the following fields: key, element, backlink, and successor. The successor field is denoted succ in our pseudocode, and it is composed of three parts: a right pointer, a mark bit, and a flag bit. So, for each node n, n.succ = (n.right, n.mark, n.flag). The head node and the tail node of the list contain dummy keys −∞ and +∞, and are referenced by the shared variables head and tail respectively. The pseudocode for our algorithms is shown in Figures 3 to 5. The routines Search, Insert, and Delete implement the corresponding dictionary operations.

The SearchFrom routine is used to perform searches in our data structure. It traverses the list starting from the specified node, and returns pointers to two nodes n1 and n2 that satisfy the following condition at some time during the execution of SearchFrom: n1.right = n2 and n1.key ≤ k < n2.key. SearchFrom also deletes any marked nodes that it sees by calling the HelpMarked routine (line 5). We could also write a SearchFrom2 routine, identical to SearchFrom, except that "≤" in lines 2 and 7 would be replaced with "<". In our pseudocode, we use SearchFrom(k − ε, n) to denote SearchFrom2(k, n). The two nodes that SearchFrom(k − ε, head) returns satisfy n1.key < k ≤ n2.key (and n1.right = n2).

The Search(k) routine simply uses SearchFrom to find the node with key k in the list, if it exists. The Insert routine starts by calling SearchFrom to find where to insert the new key. Then it verifies that the new key is not a duplicate, creates a new node, and enters the loop in lines 5–22, from which it can exit only if it successfully inserts the new node or another process inserts a node with the same key (lines 20–22). In each iteration of the loop, it attempts to insert the new node between prev_node and next_node by performing a C&S in line 11. If the C&S fails, Insert detects the reason, recovers from the failure, and enters the next iteration. The reason for the failure can only be a change of prev_node's successor field. There are several possible ways in which this successor field can change: it can get redirected to another node, flagged, marked, or any two of the above, except that it cannot be both marked and flagged. If prev_node got flagged, it means that another process was performing a deletion of the successor node. In this case Insert calls the HelpFlagged routine (lines 15–16), which helps to complete that deletion and remove the flag from prev_node. If prev_node got marked, Insert traverses the backlinks until it finds an unmarked node and then sets prev_node to point to it (lines 17–18). In any case, in line 19 Insert invokes SearchFrom starting from prev_node to find the correct location for the insertion in the updated list, and updates its prev_node and next_node pointers. Then Insert enters the next iteration of the loop.

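As a concrete illustration, the SearchFrom/HelpMarked logic of Figure 3 can be transcribed into a minimal, single-threaded Python sketch. This is not the paper's implementation: the `cas_succ` helper merely simulates the atomic single-word C&S on the composite succ field (right, mark, flag), and the class and helper names are invented for this example.

```python
import math

class Node:
    def __init__(self, key):
        self.key = key
        self.succ = (None, 0, 0)   # composite successor field: (right, mark, flag)
        self.backlink = None

    @property
    def right(self):
        return self.succ[0]

    @property
    def mark(self):
        return self.succ[1]

def cas_succ(node, old, new):
    # Simulated single-word C&S on a node's successor field.
    if node.succ == old:
        node.succ = new
        return True
    return False

def help_marked(prev_node, del_node):
    # Physically delete the marked node del_node and unflag
    # prev_node with a single C&S (Figure 3, HelpMarked).
    next_node = del_node.right
    cas_succ(prev_node, (del_node, 0, 1), (next_node, 0, 0))

def search_from(k, curr_node):
    # Figure 3, SearchFrom: find consecutive nodes n1, n2 with
    # n1.key <= k < n2.key, helping to physically delete any
    # marked nodes encountered along the way.
    next_node = curr_node.right
    while next_node.key <= k:
        while (next_node.mark == 1 and
               (curr_node.mark == 0 or curr_node.right is not next_node)):
            if curr_node.right is next_node:
                help_marked(curr_node, next_node)
            next_node = curr_node.right
        if next_node.key <= k:
            curr_node = next_node
            next_node = curr_node.right
    return curr_node, next_node

# head(-inf) -> 10 -> 20 -> tail(+inf), where node 10 is mid-deletion:
# its predecessor is flagged and 10 itself is marked (logically deleted).
head, n10, n20, tail = Node(-math.inf), Node(10), Node(20), Node(math.inf)
head.succ = (n10, 0, 1)        # flagged predecessor
n10.succ = (n20, 1, 0)         # marked (logically deleted) node
n10.backlink = head
n20.succ = (tail, 0, 0)

n1, n2 = search_from(20, head)
assert (n1, n2) == (n20, tail)     # n1.key <= 20 < n2.key
assert head.succ == (n20, 0, 0)    # node 10 was physically deleted in passing
```

The demo starts from exactly the intermediate state after Step 2 of Figure 2: the search both returns the correct pair of nodes and completes the pending physical deletion on its way past the marked node.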
TryFlag (Node *prev_node, Node *target_node) : (Node, Boolean)
// Attempts to flag the predecessor of target_node.
// prev_node is the last node known to be the predecessor.
1   while (true)
2       if (prev_node.succ == (target_node, 0, 1))    // Predecessor is already flagged. Report
3           return (prev_node, false)                 //   the failure, return a pointer to prev_node.
4       result = c&s(prev_node.succ, (target_node, 0, 0), (target_node, 0, 1))   // Flagging attempt.
5       if (result == (target_node, 0, 0))            // Successful flagging. Report the success,
6           return (prev_node, true)                  //   return a pointer to prev_node.
7       if (result == (target_node, 0, 1))            // Failure due to flagging by a concurrent operation.
8           return (prev_node, false)                 //   Report the failure, return a pointer to prev_node.
9       while (prev_node.mark == 1)                   // Possibly a failure due to marking. Traverse
10          prev_node = prev_node.backlink            //   a chain of backlinks to reach an unmarked node.
11      (prev_node, del_node) = SearchFrom(target_node.key − ε, prev_node)
12      if (del_node ≠ target_node)                   // target_node got deleted.
13          return (null, false)                      //   Report the failure, return no pointer.

Insert (Key k, Element e) : Node
// Attempts to insert a new node with the supplied key.
1   (prev_node, next_node) = SearchFrom(k, head)      // prev_node.key ≤ k < next_node.key
2   if (prev_node.key == k)
3       return DUPLICATE_KEY
4   newNode = new Node(key = k, element = e)
5   while (true)
6       prev_succ = prev_node.succ
7       if (prev_succ.flag == 1)                      // If the predecessor is flagged, help
8           HelpFlagged(prev_node, prev_succ.right)   //   the corresponding deletion to complete.
9       else
10          newNode.succ = (next_node, 0, 0)
11          result = c&s(prev_node.succ, (next_node, 0, 0), (newNode, 0, 0))   // Insertion attempt.
12          if (result == (next_node, 0, 0))          // Successful insertion.
13              return newNode
14          else                                      // Failure.
15              if (result == (∗, 0, 1))              // Failure due to flagging.
16                  HelpFlagged(prev_node, result.right)   // Help complete the corresponding deletion.
17      while (prev_node.mark == 1)                   // Possibly a failure due to marking. Traverse a
18          prev_node = prev_node.backlink            //   chain of backlinks to reach an unmarked node.
19      (prev_node, next_node) = SearchFrom(k, prev_node)   // prev_node.key ≤ k < next_node.key
20      if (prev_node.key == k)
21          free newNode
22          return DUPLICATE_KEY

Figure 5: TryFlag and Insert.
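To make the control flow of Figures 2–5 concrete, here is a single-threaded Python sketch of the three C&S steps that TryFlag, HelpFlagged/TryMark, and HelpMarked perform on a quiescent list. The `cas_succ` helper and the `delete_between` driver are invented names that simulate, rather than implement, the atomic C&S; with no concurrency, every C&S succeeds on the first attempt.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.succ = (None, 0, 0)   # composite successor field: (right, mark, flag)
        self.backlink = None

def cas_succ(node, old, new):
    # Simulated single-word C&S on the composite successor field.
    if node.succ == old:
        node.succ = new
        return True
    return False

def delete_between(prev_node, del_node):
    # Three-step deletion (Section 3.1), assuming no concurrent
    # interference in this single-threaded sketch.
    next_node = del_node.succ[0]
    # Step 1 (TryFlag): flag the predecessor, freezing its successor field.
    assert cas_succ(prev_node, (del_node, 0, 0), (del_node, 0, 1))
    # Step 2 (HelpFlagged/TryMark): set the backlink, then mark del_node.
    del_node.backlink = prev_node
    assert cas_succ(del_node, (next_node, 0, 0), (next_node, 1, 0))
    # Step 3 (HelpMarked): physically delete del_node and unflag prev_node.
    assert cas_succ(prev_node, (del_node, 0, 1), (next_node, 0, 0))

# Build A -> B -> C and delete B.
a, b, c = Node('A'), Node('B'), Node('C')
a.succ, b.succ = ((b, 0, 0), (c, 0, 0))
delete_between(a, b)

assert a.succ == (c, 0, 0)      # B is physically deleted and A is unflagged
assert b.succ == (c, 1, 0)      # B stays marked forever
assert b.backlink is a          # an operation stuck at B can resume from A
```

Because the predecessor is flagged before the node is marked, the backlink written in step 2 is guaranteed to point to an unmarked node at that moment, which is what prevents chains of backlinks from growing to the right.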

The Delete routine performs a three-step deletion of the node, as discussed in Section 3.1. Delete starts by calling SearchFrom, and then calls TryFlag to perform the first deletion step (flagging the predecessor). TryFlag repeatedly attempts to flag del_node's predecessor, until the flag is placed or del_node gets deleted. TryFlag returns two values: a node pointer prev_node and a boolean result value. There are three ways the TryFlag routine can return. If TryFlag itself flags del_node's predecessor, it returns a pointer to the predecessor and result = true. If TryFlag detects that another process flagged del_node's predecessor (which means that another process is performing a deletion of del_node), it returns a pointer to the predecessor and result = false. If TryFlag detects that del_node got deleted from the list, it returns null and result = false. If the prev_node returned by TryFlag is not null, Delete proceeds by calling the HelpFlagged routine, which performs the second and the third deletion steps by calling TryMark and HelpMarked. If TryFlag also returned result = true, Delete returns a pointer to the deleted node in line 9 (i.e. reports success). If result = false, it means that either del_node got deleted, or another process flagged del_node's predecessor (and is going to report success). In this case Delete returns NO_SUCH_KEY.

3.3 Correctness

We will now present a sketch of the proof of correctness. The complete proof is available in [1]. We first prove several invariants. To state these invariants we classify the nodes into three categories as follows.

Def 1. A node is regular if it was inserted into the list, and it is unmarked.

Def 2. A node is logically deleted if it is marked and has a regular node linked to it, i.e. n is logically deleted if n.mark = 1 and there exists a regular node m such that m.right = n.

Def 3. A node is physically deleted if it is marked and there is no regular node linked to it.

At any time, each node that was ever inserted into the list fits into exactly one of these three categories. We prove that the following invariants apply to all regular, logically deleted, and physically deleted nodes of the list.

Inv 1. Keys are strictly sorted: for any two nodes n1, n2, if n1.right = n2, then n1.key < n2.key.

Inv 2. The union of regular and logically deleted nodes forms a linked list structure, i.e. if n is a regular or a logically deleted node and n ≠ head, then there is exactly one regular or logically deleted node m such that m.right = n. Node m is called n's predecessor. If n ≠ tail, then node n.right is regular or logically deleted, and it is called n's successor. The head node has no predecessor, and the tail node has no successor.

Inv 3. For any logically deleted node, its predecessor is flagged (and unmarked), and its successor is not marked, i.e. if n is logically deleted, and m is a node of the list such that m is not physically deleted and m.right = n, then m.succ = (n, 0, 1) and (n.right).mark = 0.

Inv 4. For any logically deleted node, its backlink points to its predecessor, i.e. if n is logically deleted, and m is a node of the list such that m is not physically deleted and m.right = n, then n.backlink = m.

Inv 5. No node can be both marked and flagged at the same time.

It follows from Inv 3 that if two marked nodes are adjacent, then at least one of them is physically deleted.

The proof of the invariants goes as follows. Inv 5 is trivial. Inv 1–3 are proved by induction on the number of successful C&S's. This proof is lengthy, but fairly straightforward. After this we use the proved invariants to show that once a node's backlink is set, it never changes. This fact is used to prove Inv 4 by induction on the number of successful C&S's. We then prove two important properties of our algorithms. First, we show that deletions in our data structure work as intended, i.e. they are performed in three steps: first flagging the predecessor, then marking the node, and finally physically deleting the node. The second proposition states the SearchFrom postconditions: if SearchFrom(k, n) returns (n1, n2) and if n.key ≤ k, then (1) n1.key ≤ k < n2.key, (2) there exists a time during the execution of SearchFrom when n1.right = n2, and (3) if n is unmarked at some time T′ before SearchFrom is invoked, then there exists a time T between T′ and the moment SearchFrom returns when n1 is unmarked and n1.right = n2.

Finally, we use all these facts to prove the correctness of our implementation. At any time, we say that the set of

• Each successful insertion is linearized when it successfully performs a C&S (line 11 in the Insert routine) that inserts the node created in line 4. Each unsuccessful insertion is linearized at the time T when the third postcondition holds for the last SearchFrom routine it invokes (line 1 or 19 in the Insert routine). At that time there is a regular node with the same key in the list.

• We linearize a successful deletion when the node it returns becomes marked (and therefore logically deleted). Unsuccessful deletions are linearized as follows. If the SearchFrom called by Delete in line 2 found no node with key k, linearize the deletion at the time T specified by postcondition (3) for that SearchFrom. If the TryFlag called by Delete returned in line 3, 8, or 13 (which means that another process was executing a concurrent deletion of the same node, and performed at least the first step of the deletion, flagging the predecessor), then we linearize the deletion immediately after del_node gets marked. Note that lines 5–6 of Delete ensure that del_node gets marked (and then physically deleted) before Delete returns in line 8, so this linearization is valid. Also note that the concurrent deletion that flagged del_node's predecessor reports success when it returns.

3.4 Performance Analysis

Here we present a sketch of the amortized analysis of our linked list data structure. We start by explaining our billing scheme, first giving a general intuition behind it, and then defining it formally using the mapping β in Def 4. We then explain how we use this billing scheme to prove the bound on the amortized cost of operations. The full version of our amortized analysis is available in [1].

It is not hard to show that in order to calculate the cost of our algorithms, it is only essential to count the number of C&S attempts, the number of backlink pointer traversals (line 10 in TryFlag and line 18 in Insert), and the number of next_node and curr_node pointer updates by searches (lines 6 and 8 in SearchFrom respectively). Counting these steps gives an accurate picture of the required time (up to a constant factor), and therefore we ignore other steps in our amortized analysis. When, later on, we talk about steps taken by the processes, we mean one of these essential steps.

We classify the (essential) steps of each operation S into three categories: successful C&S's, necessary steps, and extra steps. The necessary steps are the (non-C&S) steps that S normally has to perform in order to complete (e.g. in order to complete a search for key k, S has to traverse all nodes with keys smaller than k). Intuitively, the necessary steps are the steps that an operation needs to perform even if it is executing on a sequential linked list. By contrast, the extra steps are the steps that S has to take because of interference
from other operations (e.g. when S fails a C&S because of
elements currently stored in the dictionary is the set of the
a change performed by a concurrent operation). The cost
elements contained in the regular nodes, and we show that
of the necessary steps of S is called the necessary cost of S,
all operations can be linearized so that their return values
and the cost of the extra steps of S is called the extra cost
are consistent with this definition. Specifically,
of S. In our analysis, we show that the necessary cost of S
• The searches are linearized at time T specified by post- is always O(n(S)) (n(S) and c(S) were defined in the intro-
condition (3) of the SearchFrom routine they invoke. duction), and we use a mapping to bill all of the extra cost
If the search is successful, the node it returns is a reg- of S to successful C&S’s that are part of operations con-
ular node at time T ; if the search is unsuccessful there current with S. We say that a C&S is part of operation S
are no regular nodes with key k in the list at T . if it is successful, and it logically belongs to that operation.
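To illustrate the billing intuition, here is a toy bookkeeping sketch of our own (the four-step trace and the operation names are invented, not from the paper): every essential step costs 1 and is either kept by its operation or billed to the operation owning the successful C&S that caused it. Summing the resulting amortized costs recovers the total actual cost exactly, since billing only redistributes cost.

```python
# Toy trace of essential steps: (owner operation, cost, billed_to).
# billed_to names the operation owning the successful C&S the step is
# billed to under the mapping described in the text (None = kept by owner).
steps = [
    ("search", 1, None),      # necessary step: an ordinary node traversal
    ("search", 1, "delete"),  # extra step: a failed C&S billed to a deletion
    ("insert", 1, None),      # successful insertion C&S (maps to itself)
    ("delete", 1, None),      # successful marking C&S (maps to itself)
]

def amortized_cost(op):
    """(actual cost of op) - (cost billed away) + (cost billed to op's C&S's)."""
    actual = sum(c for owner, c, _ in steps if owner == op)
    billed_away = sum(c for owner, c, to in steps if owner == op and to is not None)
    billed_in = sum(c for _, c, to in steps if to == op)
    return actual - billed_away + billed_in

ops = {"search", "insert", "delete"}
# Conservation: total amortized cost equals total actual cost.
assert sum(amortized_cost(op) for op in ops) == sum(c for _, c, _ in steps)
```

Because billing conserves total cost, bounding each operation's amortized cost by O(n(S) + c(S)) bounds the total work of any execution.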

Specifically, each successful C&S that inserts a new node is part of the corresponding successful insertion, and successful C&S's that flag, mark, and physically delete nodes are part of the corresponding successful deletions. A (successful) C&S that is part of a given operation is not necessarily performed by the process that is executing this operation, because processes help one another with deletions.

We define the amortized cost of a successful C&S C, denoted t̂(C), to be (actual cost of C) + (total cost billed to C). Note that the first term is 1. We define the amortized cost of S, denoted t̂(S), to be (actual cost of S) − (total cost billed from S to successful C&S's) + (total cost billed to successful C&S's that are part of S). The second term is the extra cost of S, so

t̂(S) = ((necessary cost of S) + (extra cost of S) + (cost of successful C&S's performed by S)) − (extra cost of S) + (total cost billed to successful C&S's that are part of S)
     = (necessary cost of S) + (cost of successful C&S's performed by S) + (total cost billed to successful C&S's that are part of S)
     = (necessary cost of S) + (amortized cost of successful C&S's that are part of S).

We prove that the first term is O(n(S)) and that, for any C&S C that is part of operation S, the total cost billed to C is O(c(S)). Since at most three C&S's can be part of any given operation, we conclude that the second term is O(c(S)). Therefore, t̂(S) = O(n(S) + c(S)). Note that here the O(n(S)) term comes purely from the cost of the steps that even a sequential algorithm needs to perform, while the overhead that comes from concurrency is limited by an additive term of O(c(S)). We now describe all of the steps outlined above in more detail.

To define our billing scheme formally, we introduce a mapping function β, given below. This mapping also formally defines the set of the extra steps and the set of the necessary steps for every operation. Function β will map successful C&S's to themselves. All other steps mapped to themselves are necessary steps. The remaining steps are extra steps. The logic behind the design of this mapping function is that each extra step is mapped to the successful C&S that performed the change that causes this extra step to be taken. For example, the step of traversing node n that was inserted after S was invoked is mapped to the C&S that inserted n. To make it easier to define β, we categorize C&S's performed by our algorithms into four types: (1) insertion C&S (line 11 in Insert), (2) flagging C&S (line 4 in TryFlag), (3) marking C&S (line 3 in TryMark), and (4) physical deletion C&S (line 2 in HelpMarked).

Def 4. Let Q be the set of essential steps in the entire execution E. Function β maps Q to itself. If some operation S performs step s ∈ Q, β maps this step either to itself, or to a successful C&S that is part of another operation as described below.

• C&S's: Suppose a C&S C on the successor field of node n was executed. If C is successful, then we map it to itself. If C fails, and it is not of the fourth type, we map it to the C&S that last modified n.succ. If C is of the fourth type and it fails, we map it to the C&S that physically deleted the node that C was trying to delete. (We show that such a C&S had to be performed.)

• Backlink traversals: A backlink pointer traversal from node n to node m is mapped to the C&S that marked node n.

• Next node pointer updates: Suppose the update changes next node from m to m′. If m is physically deleted before the update, we map the update to the C&S that physically deleted m. (Note that even though this C&S could be performed by HelpMarked called from this SearchFrom routine in line 5, it is part of another operation.) Otherwise we map the update to the C&S that inserted m′.

• Curr node pointer updates: Suppose the update sets curr node pointer to node n. If n was inserted into the list after operation S was invoked, then the update is mapped to the C&S that inserted n. Otherwise, the update is mapped to itself.

To prove our bound on the amortized cost of operations, we need to show that the amortized cost of each C&S that is part of an operation S is O(c(S)). This is the most important and the most technical part of our amortized analysis. Below we briefly describe this proof.

There are four types of steps that Def 4 bills to successful C&S's. For each of them we prove that if a step of that type performed by operation S′ is mapped by β to a (successful) C&S C, then (1) no other steps of the same type performed by S′ are mapped to C, and (2) C was performed during the execution of S′. It then follows that no more than c(S) steps of each type can be mapped to C, where S is the operation C is part of. Proving (1) and (2) for next node updates is fairly straightforward. For curr node pointer updates, we first show that no operation can set curr node pointer (in line 8 of a SearchFrom) to a given node more than once, and then (1) and (2) follow. For backlink traversals, we show that if operation S traverses a backlink from node n, then n got marked during S, and S never traversed a backlink from n before, which leads to (1) and (2). In this part of the proof we rely on the fact that chains of backlinks never grow towards the right (see Section 3.1). For unsuccessful C&S's, we prove two lemmas. The first one states that if C′ is an unsuccessful C&S of type four on the successor field of node n performed by operation S′, then there exists a time T during S′ when n.succ was such that C′ would have succeeded, and S′ performed no C&S's on n.succ between T and C′. The second lemma states a similar, but slightly weaker claim for the C&S's of the first three types. Using these two lemmas, we show that (1) and (2) hold for unsuccessful C&S's as well.

Since no more than c(S) steps of each type can be mapped by β to a successful C&S that is part of S, it follows that the amortized cost of a successful C&S is O(c(S)). Since at most three successful C&S's can be part of S, it follows that the amortized cost of successful C&S's that are part of S is O(c(S)). To prove that the amortized cost of S is O(n(S) + c(S)) we now only need to show that the total cost of the steps of S that are not mapped by β to the successful C&S's (i.e. the necessary cost of S) is O(n(S)).
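All four C&S types act on a node's successor field, which packs a right pointer together with the mark and flag bits. The following single-threaded Python sketch is our own illustration of that packed field (not the paper's pseudocode; a real implementation would use a hardware compare-and-swap on a packed word):

```python
class Node:
    """Sketch of a list node whose succ field packs (right, mark, flag)."""
    def __init__(self, key):
        self.key = key
        self.backlink = None       # set to the predecessor once it is flagged
        self.succ = (None, 0, 0)   # (right pointer, mark bit, flag bit)

    def cas_succ(self, expected, new):
        """Stand-in for an atomic compare-and-swap on the succ field."""
        right, mark, flag = new
        assert not (mark and flag), "Inv 5: never marked and flagged at once"
        if self.succ == expected:
            self.succ = new
            return True
        return False

# The three deletion steps on n, sitting between m and its successor.
# The numbers refer to the C&S types enumerated in the text.
m, n, after = Node(1), Node(2), Node(3)
m.succ, n.succ = (n, 0, 0), (after, 0, 0)
assert m.cas_succ((n, 0, 0), (n, 0, 1))          # (2) flag the predecessor
n.backlink = m                                   # backlink set once m is flagged
assert n.cas_succ((after, 0, 0), (after, 1, 0))  # (3) mark n
assert m.cas_succ((n, 0, 1), (after, 0, 0))      # (4) physically delete n
assert not m.cas_succ((n, 0, 1), (n, 0, 0))      # a stale C&S simply fails
```

The final failed C&S is the kind of extra step that β bills to the successful C&S that last modified the field.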

First, note that the only steps of S that are not mapped to the successful C&S's are the curr node pointer updates in line 8 of SearchFrom routines called by S. Furthermore, by the definition of β such an update is mapped to itself (and not to a successful C&S) only if the node n to which the curr node pointer is set by this update is inserted before the invocation of S. It is also not hard to show that n must be unmarked at some moment during the execution of S, which means that n is a regular node when S is invoked (since nodes never get unmarked). Also, as mentioned above, no operation can set the curr node pointer (in line 8 of a SearchFrom routine) to a given node more than once. Consequently, the total number of steps of S that are not mapped to the successful C&S's cannot be greater than the number of regular nodes when S is invoked, i.e. n(S). This concludes our amortized analysis, yielding t̂(S) = O(n(S)) + O(c(S)) (where the O(n(S)) term comes from the necessary cost of S, and the O(c(S)) term comes from the concurrency overhead).

4. SKIP LISTS

In this section we briefly discuss our lock-free implementation of a skip list data structure and give a sketch of the proof of correctness. The algorithms and the complete proof of correctness are available in [1].

A skip list [12] is a sequential dictionary data structure, in which searches, insertions, and deletions have an expected cost of O(log(n)) (and worst-case cost of O(n)), where n is the number of elements in the dictionary. The expectation is taken over the random numbers generated inside the algorithms. Our lock-free skip list architecture has some differences from Pugh's original design to make it easier to reuse our linked list algorithms. As shown in Figure 6, we represent each key by a tower of nodes. A tower that has H nodes in it is said to have height H. The height of each tower is chosen randomly by coin flips. The bottom node of a tower is called the root node, and it acts as a representative of the whole tower. The head tower and the tail tower store dummy keys −∞ and +∞ respectively. Horizontally, the nodes of the skip list are arranged in levels: the root nodes are on level one, the nodes immediately above them are on level two, and so on. Nodes of the same level form a singly-linked list, sorted according to their keys.

In the original skip list design [12], Pugh uses a single node with an array of H forward pointers to represent a tower of height H. The difference between our architecture and Pugh's architecture is not very significant, but it makes it easier to explain our algorithms in terms of the linked list algorithms already described. For convenience, we use the same terminology when we compare our skip list implementation with others [2, 15], even though they use Pugh's architecture.

Other recent lock-free skip list designs [2, 15] implement individual levels using linked list algorithms that can exhibit bad worst-case behaviour, as described in Section 3.1. (Furthermore, although Sundell and Tsigas incorporate backlinks in their implementation, a backlink is not guaranteed to be set when it is needed, and their backlink is useful on a given level only if the tower it is pointing to is sufficiently high.) Because of the randomization used by the algorithm, it is unclear whether an adversary could exploit the worst-case behaviour on individual levels to force the skip list as a whole to experience bad worst-case behaviour. Our design was driven by an effort to ensure that individual levels of the skip list have good worst-case complexity by using our new linked list algorithms, so that a tight analysis of the average expected complexity of the skip list operations would be feasible. However, new difficulties arise when attempting to do this, as explained in more detail below. Thus, the problem of proving a good upper bound on the complexity of a lock-free skip list implementation remains open.

In our data structure, a node Q that is not a node of the head or tail tower has the following fields: key, backlink, succ, down, and tower root. The first three fields are the same as in our lock-free linked lists, down is a pointer to the node one level lower than Q (or null if Q is a root node), and tower root is a pointer to the root node of Q's tower. If Q is a root node, it also has an element field. Nodes of the head tower do not have elements, backlinks or tower root pointers, but each of them has an up pointer, pointing to the node above. The top node of the head tower has its up pointer set to itself. Nodes of the tail tower contain only the key +∞. A pointer to the bottom node of the head tower is referred to by a shared variable head.

We now give a high-level overview of our algorithms. An insertion builds the tower from bottom to top, i.e. first it inserts the root node, then, if necessary, the node at level two, and so on. An insertion is linearized when the root node is inserted, since after that moment, all the searches are able to find the key. A deletion first deletes the root node of the tower, and then deletes the rest of the nodes of the tower from top to bottom. A deletion is linearized when the root node gets marked. A tower whose root node is marked is called superfluous; all the nodes of such a tower are called superfluous as well.

Regardless of whether deletions delete the towers from top to bottom, or from bottom to top, superfluous nodes can still exist, because while a process P is constructing a tower Q, Q's root node can get marked by another process, and P can add a new node to Q before it notices the marking. It is possible to solve this problem by marking uninserted nodes if Harris's design is used to implement individual levels of the skip list [2], but with our design this is not feasible because of flags.

The searches in our skip list help deletions by physically deleting superfluous nodes they encounter in order to avoid traversing superfluous towers. Our decision to implement searches this way was motivated by the observation that if searches traverse superfluous towers without physically deleting or marking their nodes, it is possible to construct an execution E where the average cost of operations would be Ω(mE) by forcing operations to repeatedly traverse a chain of backlinks of length Ω(mE) on the lowest level of the skip list (mE was defined in the Introduction). Sundell and Tsigas [15] use a different method to deal with this problem: their searches can enter superfluous towers via unmarked nodes, but if a search detects a marked node in a tower it is traversing, it marks all the nodes of this tower. Subsequent searches physically delete these marked nodes if they encounter them (assuming the main Delete operation has not already done so), thus making numerous traversals of the same chain of backlinks impossible.
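The node layout and bottom-up tower construction described in this section can be sketched as follows. This is our own single-threaded illustration, not the paper's pseudocode; the cap on the coin-flip height is an assumption, and marks, flags and concurrency are omitted:

```python
import random

class SLNode:
    """Sketch of a skip-list node with the fields listed in the text."""
    def __init__(self, key, down=None, tower_root=None):
        self.key = key
        self.backlink = None
        self.succ = (None, 0, 0)            # (right, mark, flag), as in the lists
        self.down = down                    # node one level lower (None for a root)
        self.tower_root = tower_root or self  # root node of this node's tower

def tower_height(max_height=32, rng=random):
    """Flip a fair coin until tails: height h is chosen with probability 2**-h."""
    h = 1
    while h < max_height and rng.random() < 0.5:
        h += 1
    return h

def build_tower(key, height):
    """Insertions build a tower bottom-up: the root node first, then one node
    per level; every node keeps a pointer to its tower's root."""
    node = root = SLNode(key)
    for _ in range(height - 1):
        node = SLNode(key, down=node, tower_root=root)
    return node  # topmost node of the tower

top = build_tower(7, 3)
assert top.tower_root.down is None      # the root node has no down pointer
assert top.down.down is top.tower_root  # levels are chained by down pointers
```

In the real algorithm each level-by-level step is an insertion into a lock-free linked list, and construction stops early if the root node gets marked by a concurrent deletion.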

Figure 6: Lock-free skip list design. (The figure shows a head tower and a tail tower storing the dummy keys, five key towers A–E of heights between one and three, and four levels, each of which is a singly-linked list; the shared variable head points to the bottom node of the head tower.)
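The layered arrangement of Figure 6 can be mimicked with a small self-contained sketch (our simplification: dict-based nodes, integer keys standing in for towers A–E, and no marks, flags or concurrency):

```python
NEG_INF, POS_INF = float("-inf"), float("inf")

def make_level(keys, below):
    """Build one level as a sorted singly-linked list between head and tail
    sentinels; each node's 'down' pointer goes to the same key one level lower."""
    nodes = [{"key": k, "right": None, "down": below.get(k) if below else None}
             for k in [NEG_INF] + sorted(keys) + [POS_INF]]
    for a, b in zip(nodes, nodes[1:]):
        a["right"] = b
    return {n["key"]: n for n in nodes}

level1 = make_level([1, 2, 3, 4, 5], None)  # root nodes of the five towers
level2 = make_level([2, 5], level1)         # towers of height >= 2
level3 = make_level([2], level2)            # the single tower of height 3

# Vertical structure: nodes with the same key on different levels form a list.
assert level3[2]["down"] is level2[2] and level2[2]["down"] is level1[2]
# Horizontal structure: every level is sorted and ends at the tail sentinel.
assert level2[NEG_INF]["right"]["key"] == 2 and level2[5]["right"]["key"] == POS_INF
```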

Even though searches in our implementation delete superfluous nodes whenever they encounter them, and therefore they cannot be forced to traverse the same chain of backlinks repeatedly, there exist scenarios when an operation can be forced to traverse backlinks of the nodes that were deleted before the operation started (something that never happens in our linked list implementation). These scenarios can only be constructed by a very careful scheduling of processes tailored for a given distribution of the heights of the towers. Their existence, however, makes our correctness proofs quite complicated, but more importantly, it is not clear what effect they may have on the worst-case performance of our implementation.

The pseudocode for the skip list algorithms is available in [1], and here we describe them only briefly. Each level of the skip list can be viewed as a linked list. Therefore, the routines that we use to operate on the individual levels are similar to our linked list routines. The three major routines that implement the dictionary operations are Search SL, Insert SL, and Delete SL. The Search SL routine calls SearchToLevel SL to determine if there is a root node (and hence, a tower) with key k in the list. SearchToLevel SL(k, v) is used to locate the nodes on level v with keys closest to k. It traverses levels starting from the top one, and each time it reaches a key larger than k, it goes down one level. To traverse individual levels, it uses the SearchRight routine, which is similar to the SearchFrom in our linked list algorithms. The only difference is that SearchRight deletes the superfluous nodes along its way, performing all three deletion steps if necessary, whereas SearchFrom physically deletes only those nodes that are already logically deleted.

The Insert SL routine determines the height of the tower it needs to insert by flipping a coin, and enters a loop where it inserts the nodes of the tower one by one from bottom to top. If a concurrent process inserts a root node with the same key, Insert SL reports failure and returns. Each complete iteration of the loop increases the height of the new tower by one. Insert SL exits from that loop if it finishes the construction of the new tower, or if the construction of a new tower gets interrupted by a deletion: if Insert SL notices that the root node got marked, it exits reporting success. The Delete SL routine first deletes the root node of the tower with the supplied key k, making the rest of the nodes in the tower superfluous. It then calls SearchToLevel SL for k, which deletes these superfluous nodes (from top to bottom).

We now briefly sketch the proof of correctness. The first part of the proof is similar to the correctness proof for the linked lists. We show that Inv 1–5 (see Section 3.3) hold for each level of our skip list, and that nodes never change levels. We also show that deletions of individual nodes are performed in three steps (flag, mark, physical deletion), and that the same postconditions that hold for SearchFrom hold for SearchRight as well. These postconditions guarantee that if SearchRight starts from a node n that is not superfluous at time T′, then the node m it ends in is not marked at T′. However, m may be superfluous at T′, but we show that this can happen only if SearchRight enters m by traversing backlinks, and in this case m.key < n.key. The fact that SearchRight may traverse superfluous nodes leads to the fact that SearchToLevel SL may enter marked nodes when it descends from one level to the next (although scenarios where this happens are fairly contrived). This is why an operation can traverse backlinks of the nodes that were deleted before the operation started. As mentioned earlier, this is an obstacle to applying the same kind of performance analysis to skip lists, as we used for linked lists. After proving some weaker postconditions for SearchToLevel SL and Insert SL, we then show that our skip list has the correct vertical structure within each tower, i.e. the nodes on different levels that contain the same key form a linked list. Then we prove the stronger SearchToLevel SL(k, v) postconditions: we show that the node n it ends in is unmarked, and, if n.key = k, n is also not superfluous (at some time during the search).

Finally, we say that the set of elements currently stored in the dictionary is the set of the elements of the regular root nodes, and we show that all operations can be linearized consistently with this definition.
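The top-down traversal pattern of SearchToLevel SL described above (move right while the next key is at most k, and go down one level when it exceeds k) can be sketched single-threaded as follows; this is our simplification, with no helping and no physical deletion of superfluous nodes:

```python
NEG_INF, POS_INF = float("-inf"), float("inf")

def node(key, right=None, down=None):
    return {"key": key, "right": right, "down": down}

def search_to_level(head, k):
    """From the top of the head tower, move right while the next key is <= k,
    and descend one level each time the next key exceeds k."""
    curr = head
    while True:
        while curr["right"]["key"] <= k:
            curr = curr["right"]
        if curr["down"] is None:    # reached level one
            return curr             # rightmost level-one node with key <= k
        curr = curr["down"]

# Two levels: level 2 holds {-inf, 5}, level 1 holds {-inf, 2, 5, 7}.
t1, t2 = node(POS_INF), node(POS_INF)
n7 = node(7, t1); n5 = node(5, n7); n2 = node(2, n5); h1 = node(NEG_INF, n2)
m5 = node(5, t2, down=n5); h2 = node(NEG_INF, m5, down=h1)
assert search_to_level(h2, 6)["key"] == 5
assert search_to_level(h2, 8)["key"] == 7
```

In the real algorithm each rightward move is a SearchRight call that may also flag, mark and physically delete superfluous nodes along the way.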

We prove that our implementation is lock-free by showing that the only way a process's operation can be delayed indefinitely is if other processes continually perform successful C&S's.

We also investigate the distribution of the heights of the towers in our skip list. We call a tower full if its insertion has finished without an interruption; otherwise we say that a tower is incomplete. A non-deleted tower can be incomplete only if its insertion or its deletion is in progress, so the number of incomplete towers at any time is bounded by the point contention. The distribution of the heights of the full towers may be a little different from the heights distribution in a sequential skip list, because higher towers are more likely to be incomplete. However, we believe this would not affect the expected running time significantly.

5. CONCLUSION

We have presented new algorithms implementing lock-free linked lists. We proved that the average cost of operations on our linked lists is linear in the length of the list plus the contention, for any possible sequence of operations and any possible scheduling. To perform our analysis we used a billing technique that might be applicable to other distributed data structures. We showed that our linked list algorithms can be used in a fairly modular way as the basis for a lock-free implementation of skip lists.

We have not explicitly incorporated a memory management technique, but a possible approach is to use Valois's reference counting method [10, 17], which is applicable to both our linked lists and our skip lists, because there are no cycles among the physically deleted nodes.

There are a number of directions for future work in this area. It remains an open problem to get a good bound on the average expected complexity of lock-free implementations of a skip list (or, more generally, a dictionary data structure). We think the implementation given here and the amortized analysis technique may be useful in doing this. However some difficulties remain. For example, an adversary might choose to delete all of the tall towers that are used to traverse the skip list quickly. Although an oblivious adversary (who cannot see the outcomes of coin flips) cannot directly know the heights of the towers, in a distributed application it might indirectly get some information about them by seeing how many steps are required to do searches. It might be more realistic to separate the two roles of the adversary: choosing the operations and choosing the schedule.

On a more general note, it would be interesting to develop a usable and practical alternative to the worst-case amortized analysis, which can be overly pessimistic, in the context of lock-free data structures. A feasible way of doing an amortized analysis that bounds the average complexity over possible schedules would be of great interest.

Acknowledgements

This research was funded by the Natural Sciences and Engineering Research Council of Canada and by an Ontario Graduate Scholarship. We thank Håkan Sundell, Philippas Tsigas and the anonymous referees for pointing out related work and providing helpful comments.

6. REFERENCES

[1] M. Fomitchev. Lock-free linked lists and skip lists. Master's thesis, York University, October 2003. http://www.cs.yorku.ca/~mikhail.
[2] K. A. Fraser. Practical lock-freedom. PhD thesis, University of Cambridge, December 2003. Technical Report UCAM-CL-TR-579.
[3] T. L. Harris. A pragmatic implementation of non-blocking linked-lists. In Proceedings of the 15th International Symposium on Distributed Computing, pages 300–314, 2001.
[4] M. Herlihy. Wait-free synchronization. ACM Transactions on Programming Languages and Systems, 13(1):124–149, 1991.
[5] M. Herlihy. A methodology for implementing highly concurrent data objects. ACM Transactions on Programming Languages and Systems, 15(5):745–770, 1993.
[6] M. Herlihy and J. Wing. Linearizability: a correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems, 12(3):463–492, 1990.
[7] IBM System/370 extended architecture, principles of operation, 1983. IBM Publication No. SA22-7085.
[8] M. M. Michael. High performance dynamic lock-free hash tables and list-based sets. In Proceedings of the 14th Annual ACM Symposium on Parallel Algorithms and Architectures, pages 73–82, 2002.
[9] M. M. Michael. Safe memory reclamation for dynamic lock-free objects using atomic reads and writes. In Proceedings of the 21st Annual Symposium on Principles of Distributed Computing, 2002.
[10] M. M. Michael and M. L. Scott. Correction of a memory management method for lock-free data structures. Technical Report TR599, Computer Science Department, University of Rochester, 1995.
[11] W. Pugh. Concurrent maintenance of skip lists. Technical Report CS-TR-2222, Computer Science Department, University of Maryland, 1990.
[12] W. Pugh. Skip lists: a probabilistic alternative to balanced trees. Communications of the ACM, 33(6):668–676, 1990.
[13] N. Shavit and I. Lotan. Skiplist-based concurrent priority queues. In Proceedings of the 14th IEEE/ACM International Parallel and Distributed Processing Symposium, pages 263–268, 2000.
[14] H. Sundell and P. Tsigas. Fast and lock-free concurrent priority queues for multi-thread systems. In Proceedings of the 17th IEEE/ACM International Parallel and Distributed Processing Symposium, pages 84–94, April 2003.
[15] H. Sundell and P. Tsigas. Scalable and lock-free concurrent dictionaries. In Proceedings of the 19th ACM Symposium on Applied Computing, pages 1438–1445, March 2004.
[16] R. K. Treiber. Systems programming: Coping with parallelism. Research Report RJ 5118, IBM Almaden Research Center, 1986.
[17] J. D. Valois. Lock-free linked lists using compare-and-swap. In Proceedings of the 14th ACM Symposium on Principles of Distributed Computing, pages 214–222, 1995.

