
Concurrency Visualized — Part 3: Pitfalls and Conclusion

Besher Al Maleh · 8 min read · Jan 28, 2020


Thanks to Pablo Stanley for these amazing illustrations!

This is part 3 of my concurrency series. Check out Part 1 and Part 2 if you missed
them.

In my earlier discussion of sync, async, serial, and concurrent, I alluded to some pitfalls that you might encounter while working with concurrency. That’s our main topic for this article. Afterwards, I will wrap up this series with a summary and some general advice.

Pitfalls

Priority Inversion and Quality of Service


Priority inversion happens when a high priority task is prevented from
running by a lower priority task, effectively inverting their relative priorities.

This situation often occurs when a high QoS queue shares a resource with a low QoS queue, and the low QoS queue gets a lock on that resource.
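
Here is a rough sketch of that lock scenario (this example is my own, not from the article; the queue labels and the two-second sleep are made up for illustration):

import Foundation

let lock = NSLock()
let lowQoSQueue = DispatchQueue(label: "com.example.low", qos: .background)
let highQoSQueue = DispatchQueue(label: "com.example.high", qos: .userInteractive)

lowQoSQueue.async {
    lock.lock()
    // Long-running, low priority work while holding the shared lock...
    Thread.sleep(forTimeInterval: 2)
    lock.unlock()
}

highQoSQueue.async {
    // The high QoS task now sits behind background-priority work:
    // priority inversion.
    lock.lock()
    lock.unlock()
}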

But I wish to cover a different scenario that is more relevant to our discussion: it’s when you submit tasks to a low QoS serial queue, then submit a high QoS task to that same queue. This scenario also results in priority inversion, because the high QoS task has to wait on the lower QoS tasks to finish.

GCD resolves priority inversion by temporarily raising the QoS of the entire
queue that contains the low priority tasks which are ‘ahead’ of, or blocking,
your high priority task. It’s kind of like having cars stuck in front of an
ambulance. Suddenly they’re allowed to cross the red light just so that the
ambulance can move (in reality the cars move to the side, but imagine a
narrow (serial) street or something, you get the point :-P)

To illustrate the inversion problem, let’s start with this code:

enum Color: String {
    case blue = "🔵"
    case white = "⚪️"
}

func output(color: Color, times: Int) {
    for _ in 1...times {
        print(color.rawValue)
    }
}

let starterQueue = DispatchQueue(label: "com.besher.starter", qos: .userInteractive)
let utilityQueue = DispatchQueue(label: "com.besher.utility", qos: .utility)
let backgroundQueue = DispatchQueue(label: "com.besher.background", qos: .background)
let count = 10

starterQueue.async {

    backgroundQueue.async {
        output(color: .white, times: count)
    }

    backgroundQueue.async {
        output(color: .white, times: count)
    }

    utilityQueue.async {
        output(color: .blue, times: count)
    }

    utilityQueue.async {
        output(color: .blue, times: count)
    }

    // next statement goes here
}

concurrency2.swift

We create a starter queue (where we submit the tasks from), as well as two
queues with different QoS, then we dispatch tasks to each of these two
queues, each task printing out an equal number of circles of a specific colour
(utility queue is blue, background is white.)

Because these tasks are submitted asynchronously, every time you run the
app, you’re going to see slightly different results. However, as you would
expect, the queue with the lower QoS (background) almost always finishes
last. In fact, the last 10–15 circles are usually all white.

No surprises there

But watch what happens when we submit a sync task to the background
queue after the last async statement. You don’t even need to print anything
inside the sync statement, just adding this line is enough:

// add this after the last async statement,
// still inside starterQueue.async
backgroundQueue.sync {}

concurrency3.swift

Priority inversion

The results in the console have flipped! Now, the higher priority queue
(utility) always finishes last, and the last 10–15 circles are blue.

To understand why that happens, we need to revisit the fact that synchronous work is executed on the caller thread (unless you’re submitting to the main queue.) In our example above, the caller (starterQueue) has the top QoS (userInteractive.) Therefore, that seemingly innocuous sync task is not only blocking the starter queue, but it’s also running on the starter’s high QoS thread. The task therefore runs with high QoS, but there are two other tasks ahead of it on the same background queue that have background QoS. Priority inversion detected!

As expected, GCD resolves this inversion by raising the QoS of the entire
queue to temporarily match the high QoS task; consequently, all the tasks on
the background queue end up running at user interactive QoS, which is higher
than the utility QoS. And that’s why the utility tasks finish last!

Side-note: If you remove the starter queue from that example and submit
from the main queue instead, you will get similar results, as the main queue
also has user interactive QoS.

To avoid priority inversion in this example, we need to avoid blocking the starter queue with the sync statement. Using async would solve that problem.
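
A minimal sketch of that fix (assuming the sync call was only there to know when the background work had finished; the completion print is my own addition):

// Instead of backgroundQueue.sync {} (which blocks the high QoS starter
// thread), submit asynchronously and signal completion from inside the block.
backgroundQueue.async {
    DispatchQueue.main.async {
        print("background queue drained")
    }
}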

Although it’s not always ideal, you can minimize priority inversions by
sticking to the default QoS when creating private queues or dispatching to
the global concurrent queue.

Thread explosion
When you use a concurrent queue, you run the risk of thread explosion if
you’re not careful. This can happen when you try to submit tasks to a
concurrent queue that is currently blocked (e.g. with a semaphore, sync, or
some other way.) Your tasks will run, but the system will likely end up
spinning up new threads to accommodate these new tasks, and threads
aren’t cheap.
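
Here is a contrived sketch of how that can happen (my own example, not from the article): every task blocks on a semaphore that is never signalled, so GCD keeps creating new threads for the new submissions.

import Foundation

let semaphore = DispatchSemaphore(value: 0)
let concurrentQueue = DispatchQueue(label: "com.example.explosion", attributes: .concurrent)

for i in 0..<100 {
    concurrentQueue.async {
        print("task \(i) started on \(Thread.current)")
        semaphore.wait() // blocks this thread indefinitely
    }
}
// Each blocked task holds its thread hostage, so the pool keeps growing
// instead of reusing a small number of threads.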

This is likely why Apple suggests starting with a serial queue per subsystem
in your app, as each serial queue can only use one thread at a time.
Remember that serial queues are concurrent in relation to other queues, so
you still get a performance benefit when you offload your work to a queue,
even if it isn’t concurrent.
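
A sketch of that suggestion (the subsystem names here are my own examples):

// Queues are serial by default (no .concurrent attribute).
let networkQueue = DispatchQueue(label: "com.myapp.network")
let databaseQueue = DispatchQueue(label: "com.myapp.database")
let imageQueue = DispatchQueue(label: "com.myapp.images")

// Each subsystem uses at most one thread at a time, yet the three queues
// still run concurrently with respect to one another.
networkQueue.async { /* fetch data */ }
databaseQueue.async { /* persist results */ }
imageQueue.async { /* decode thumbnails */ }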

Race conditions
Swift Arrays, Dictionaries, Structs, and other value types are not thread-safe
by default. For example, when you have multiple threads trying to access
and modify the same array, you will start running into trouble.

There are different solutions to the readers-writers problem, such as using locks or semaphores, but the relevant solution I wish to discuss here is the use of an isolation queue.

Let’s say we have an array of integers, and we want to submit asynchronous work that references this array. As long as our work only reads the array and does not modify it, we are safe. But as soon as we try to modify the array in one of our asynchronous tasks, we will introduce instability in our app.

It’s a tricky problem because your app can run 10 times without issues, and
then it crashes on the 11th time. One very handy tool for this situation is the
Thread Sanitizer in Xcode. Enabling this option will help you identify
potential race conditions in your app.

(The Thread Sanitizer option is only available on the simulator.)

To demonstrate the problem, let’s take this (admittedly contrived) example:

class ViewController: UIViewController {

    let concurrent = DispatchQueue(label: "com.besher.concurrent", attributes: .concurrent)
    var array = [1,2,3,4,5]

    override func viewDidLoad() {
        for _ in 0...1 {
            race()
        }
    }

    func race() {

        concurrent.async {
            for i in self.array { // read access
                print(i)
            }
        }

        concurrent.async {
            for i in 0..<10 {
                self.array.append(i) // write access
            }
        }
    }
}

concurrency4.swift

One of the async tasks is modifying the array by appending values. If you try running this on your simulator, you might not crash. But run it enough times (or increase the number of race() calls in viewDidLoad), and you will eventually crash. If you enable the Thread Sanitizer, you will get a warning every time you run the app.

To deal with this race condition, we are going to add an isolation queue that
uses the barrier flag. This flag allows any outstanding tasks on the queue to
finish, but blocks any further tasks from executing until the barrier task is
completed.

Think of the barrier like a janitor cleaning a public restroom (a shared resource.) There are multiple (concurrent) stalls inside the restroom that people can use. Upon arrival, the janitor places a cleaning sign (barrier) blocking any newcomers from entering until the cleaning is done, but the janitor does not start cleaning until all the people inside have finished their business. Once they all leave, the janitor proceeds to clean the public restroom in isolation. When finally done, the janitor removes the sign (barrier) so that the people who are queued up outside can finally enter.

Here’s what that looks like in code:

class ViewController: UIViewController {

    let concurrent = DispatchQueue(label: "com.besher.concurrent", attributes: .concurrent)
    let isolation = DispatchQueue(label: "com.besher.isolation", attributes: .concurrent)
    private var _array = [1,2,3,4,5]

    var threadSafeArray: [Int] {
        get {
            return isolation.sync {
                _array
            }
        }
        set {
            isolation.async(flags: .barrier) {
                self._array = newValue
            }
        }
    }

    override func viewDidLoad() {
        for _ in 0...15 {
            race()
        }
    }

    func race() {
        concurrent.async {
            for i in self.threadSafeArray {
                print(i)
            }
        }

        concurrent.async {
            for i in 0..<10 {
                self.threadSafeArray.append(i)
            }
        }
    }
}

concurrency5.swift

We have added a new isolation queue, and restricted access to the private
array using a getter and setter that will place a barrier when modifying the
array.

The getter needs to be sync in order to directly return a value. The setter can
be async , as we don’t need to block the caller while the write is taking place.

We could have used a serial queue without a barrier to solve the race condition, but then we would lose the advantage of having concurrent read access to the array. Perhaps that trade-off makes sense in your case; you get to decide.
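
For reference, a minimal sketch of that serial alternative (my own variant, not from the article):

import Foundation

final class SafeStore {
    // Serial by default, so reads and writes simply queue up one after
    // another and no barrier flag is needed.
    private let isolation = DispatchQueue(label: "com.example.isolation.serial")
    private var _array = [1, 2, 3, 4, 5]

    var threadSafeArray: [Int] {
        get { isolation.sync { _array } }
        set { isolation.async { self._array = newValue } }
    }
}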

Conclusion
Thank you so much for reading this series! I hope you learned something new along the way. I will leave you with a summary and some general advice.

Summary
Queues always start their tasks in FIFO order

Queues are always concurrent relative to other queues

Sync vs Async concerns the source

Serial vs Concurrent concerns the destination

Sync is synonymous with ‘blocking’

Async immediately returns control to caller

Serial uses a single thread, and guarantees order of execution

Concurrent uses multiple threads, and risks thread explosion

Think about concurrency early in your design cycle

Synchronous code is easier to reason about and debug

Avoid relying on global concurrent queues if possible

Consider starting with a serial queue per subsystem

Switch to a concurrent queue only if you see a measurable performance benefit

I like the metaphor from the Swift Concurrency Manifesto of having an ‘island of serialization in a sea of concurrency’. This sentiment was also shared in this tweet by Matt Diephouse:

Matt Diephouse (@mdiep), Dec 17, 2019:

“The secret to writing concurrent code is to make most of it serial. Restrict concurrency to a small, outer layer. (Serial core, concurrent shell.)

e.g. instead of using a lock to manage 5 properties, create a new type that wraps them and use a single property inside the lock.”

When you apply concurrency with that philosophy in mind, I think it will
help you achieve concurrent code that can be reasoned about without getting
lost in a mess of callbacks.
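
To make the tweet’s second point concrete, here is a small sketch (the type and its properties are hypothetical, my own example): instead of guarding several related properties with separate locks, wrap them in one value type and protect that single value.

import Foundation

final class PlayerState {
    struct State {
        var isPlaying = false
        var currentTime: TimeInterval = 0
        var volume: Float = 1.0
    }

    private let lock = NSLock()
    private var state = State()

    // Readers always see a consistent snapshot of all the fields at once.
    func read() -> State {
        lock.lock(); defer { lock.unlock() }
        return state
    }

    // Writers mutate every field under the same single lock.
    func update(_ changes: (inout State) -> Void) {
        lock.lock(); defer { lock.unlock() }
        changes(&state)
    }
}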

If you have any questions or comments, feel free to reach out to me on Twitter.


Thanks for reading. If you enjoyed this article, feel free to hit that clap button to help others find it. If you *really* enjoyed it, you can clap up to 50 times.

almaleh/Dispatcher
This is the companion app to article on concurrency in iOS. The app
is optimized for iPhone X screen or larger, so I…
github.com

Check out some of my other articles:

Fireworks — A visual particles editor for Swift (medium.com)

You don’t (always) need [weak self] (medium.com)

Further reading:

Introduction (developer.apple.com): Concurrency is the notion of multiple things happening at the same time. With the proliferation of multicore CPUs and…

Concurrent Programming: APIs and Challenges (www.objc.io): Concurrency describes the concept of running several tasks at the same time. This can either happen in a time-shared…

Low-Level Concurrency APIs (www.objc.io): In this article we’ll talk about some low-level APIs available on both iOS and OS X. Except for dispatch_once, we…

Khanlou (khanlou.com): Grand Central Dispatch, or GCD, is an extremely powerful tool. It gives you low level constructs, like queues and…

Concurrent vs serial queues in GCD (stackoverflow.com): Thanks for contributing an answer to Stack Overflow! Please be sure to answer the question. Provide details and share…

WWDC Videos:

Modernizing Grand Central Dispatch Usage — WWDC 2017 (developer.apple.com): macOS 10.13 and iOS 11 have reinvented how Grand Central Dispatch and the Darwin kernel collaborate, enabling your…

Building Responsive and Efficient Apps with GCD — WWDC 2015 (developer.apple.com): watchOS and iOS Multitasking place increased demands on your application’s efficiency and responsiveness. With expert…
