Tag Archives: gcd

Not blocking the UI with FMDB Queue

Xcode 7.3 concept demo

So notifications are coming in rapidly. Each one arrives on its own thread, and each thread’s data gets inserted into your FMDB database. FMDB ensures thread safety because FMDatabaseQueue uses dispatch_sync internally:

FMDatabaseQueue.m

It gets the block of database code you pass in, and puts it on a SERIAL QUEUE.
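In simplified form (paraphrasing FMDB’s source; error handling and cleanup omitted), FMDatabaseQueue looks something like this:

```objc
@implementation FMDatabaseQueue {
    dispatch_queue_t _queue;
    FMDatabase      *_db;
}

- (instancetype)initWithPath:(NSString *)aPath {
    self = [super init];
    if (self) {
        // One private SERIAL queue per FMDatabaseQueue instance.
        _queue = dispatch_queue_create("fmdb.queue", DISPATCH_QUEUE_SERIAL);
        _db = [FMDatabase databaseWithPath:aPath];
        [_db open];
    }
    return self;
}

- (void)inDatabase:(void (^)(FMDatabase *db))block {
    // dispatch_sync: the CALLER waits here until the block
    // has finished running on the serial queue.
    dispatch_sync(_queue, ^{
        block(self->_db);
    });
}
@end
```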

The dispatch_sync used here is not a tool for getting concurrent execution, it is a tool for temporarily limiting it for safety.

The SERIAL QUEUE ensures safety by having each block line up one after another, and start their execution ONLY AFTER the previous task has finished executing. This ensures that you are thread safe because they are not writing over each other at the same time.

However, there is a problem. Let’s say your main thread is processing a for loop that calls the inDatabase method. The main thread places each block on FMDB Queue’s serial queue. This means the dispatch_sync that FMDB Queue uses will block your main thread while each task is processed. By definition, dispatch_sync DOES NOT RETURN until it has finished executing its task.

Proving FMDB does block

We need to prove that FMDB does indeed block our UI, so we first put a UISlider in the ViewController for testing purposes. If we are concurrently processing all these incoming notifications in the background, then this UISlider should stay responsive.

You put a slider on your UI like so:
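A minimal sketch (the frame values are arbitrary):

```objc
// in ViewController.m, viewDidLoad
UISlider *slider =
    [[UISlider alloc] initWithFrame:CGRectMake(20, 80, 280, 30)];
slider.minimumValue = 0;
slider.maximumValue = 100;
[self.view addSubview:slider];
// if the main thread is blocked, this slider stops tracking touches
```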

When you run a simple for loop over a method such as executeUpdateOnDBwithSQLParams:, you are essentially adding a dispatch_sync on your main thread for every iteration. Each one blocks, and your UI will NOT be responsive.
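For example, a hypothetical executeUpdateOnDBwithSQLParams: driven by a loop on the main thread (the SQL and the notifications collection are illustrative):

```objc
// ViewController.m (sketch)
- (void)executeUpdateOnDBwithSQLParams:(NSArray *)params {
    // inDatabase: is a dispatch_sync under the hood, so this call
    // does not return until the update has run on FMDB's serial queue.
    [self.databaseQueue inDatabase:^(FMDatabase *db) {
        [db executeUpdate:@"INSERT INTO notifications (payload) VALUES (?)"
     withArgumentsInArray:params];
    }];
}

- (void)processNotifications:(NSArray *)notifications {
    // Running on the main thread: each iteration blocks it,
    // so the UISlider freezes until the whole loop completes.
    for (NSArray *params in notifications) {
        [self executeUpdateOnDBwithSQLParams:params];
    }
}
```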

In order to solve this, we do 2 things:

  1. Use a concurrent queue and have main thread work on it to ensure concurrency and that the UI is not blocked
  2. Inside of that concurrent queue, we queue up db jobs to FMDB’s thread safe serial queue

Solution

dispatch_sync does not return until its task is finished. Thus, while the task is executing, the main queue can’t do anything because the dispatch_sync has not returned. That’s the gist of the issue.

What we did to solve this issue is to

dispatch_async FMDB tasks on a concurrent queue.

This is the basic setup that enables FMDB to be non-blocking.

1) We set up a block on a concurrent queue first. This ensures that whatever runs inside of that concurrent block will be able to run
concurrently with the main thread.

2) The block starts off by executing its log, then the PRE-TASK. Then it dispatch_syncs its own “DB Task” block. This sync blocks whatever is trying to run alongside it on conQueue. Hence, that’s why POST-TASK will run after the DB task.

3) Finally, after PRE-TASK, then DB task, finish running, POST-TASK runs.
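A sketch of that setup. The queue labels and the runTaskNamed:count: logging helper are assumptions; the structure is what produces the log below.

```objc
dispatch_queue_t conQueue =
    dispatch_queue_create("com.example.conQueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_queue_t queue =
    dispatch_queue_create("com.example.serialQueue", DISPATCH_QUEUE_SERIAL);

dispatch_async(conQueue, ^{
    NSLog(@"--- start block (concurrent queue) ---");

    [self runTaskNamed:@"PRE-TASK on concurrent queue" count:3];

    // dispatch_sync simulates FMDB's serial queue. Within this
    // concurrent block it blocks, so POST-TASK cannot start
    // until the DB task has finished.
    dispatch_sync(queue, ^{
        NSLog(@"--- start block (sync queue) ---");
        [self runTaskNamed:@"DB task" count:10];
        NSLog(@"--- end block (sync queue) ---");
    });

    [self runTaskNamed:@"POST-TASK on concurrent queue" count:3];

    NSLog(@"--- end block (concurrent queue) ---");
});
```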


— start block (concurrent queue) —
—  Task PRE-TASK on concurrent queue start  —
PRE-TASK on concurrent queue – 0
PRE-TASK on concurrent queue – 1
PRE-TASK on concurrent queue – 2
^^^ Task PRE-TASK on concurrent queue END ^^^
— start block (sync queue) —
—  Task DB task start  —
DB task – 0
DB task – 1
DB task – 2
DB task – 3
DB task – 4
DB task – 5
DB task – 6
DB task – 7
DB task – 8
DB task – 9
^^^ Task DB task END ^^^
— end block (sync queue) —
—  Task POST-TASK on concurrent queue start  —
POST-TASK on concurrent queue – 0
POST-TASK on concurrent queue – 1
POST-TASK on concurrent queue – 2
^^^ Task POST-TASK on concurrent queue END ^^^
— end block (concurrent queue) —

The dispatch_sync onto the serial queue is what simulates FMDB.

So both of these happen at once:

  • dispatch_sync runs its db tasks (FMDB) on the concurrent queue, which was entered via dispatch_async
  • the main queue stays free, so you can move your UI

Thus, that’s how you get FMDB to be non-blocking.

Tidbit: changing dispatch_async to dispatch_sync on the concurrent queue

If you were to change from dispatch_async to dispatch_sync on the concurrent queue “conQueue”, it will block the main queue
when it first starts up because by definition, dispatch_sync means it does not return right away. It will return later
when it runs to the end, but for now, it blocks and you’re not able to move your UI.

Thus, it runs to PRE-TASK, and executes that.

Then it moves down, and runs the “DB task” block via dispatch_async on serial queue “queue”.

The dispatch_async returns immediately, starts executing “DB task” on serial queue “queue”, and then it executes
POST-TASK. Thus, DB task and POST-TASK will be executing together.

After POST-TASK finishes, our concurrent block has run to the end, and the dispatch_sync returns.
At this point, you will be able to move the UI. “DB task” continues its execution because it’s part of the task that’s still sitting on concurrent queue “conQueue”.
Since it’s a concurrent queue, it will keep processing that task while you move around the UI, because nothing is blocking anymore.

Other details when you have time

where concurrencyQueue is created and initialized like so:
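A plausible initialization (the label string is an assumption):

```objc
// Created once, e.g. in viewDidLoad, and kept in a property/ivar.
dispatch_queue_t concurrencyQueue =
    dispatch_queue_create("com.example.concurrencyQueue",
                          DISPATCH_QUEUE_CONCURRENT);
```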

But what about the database writing that is dispatch_sync on the serial queue? Wouldn’t that block?

No. The dispatch_sync on the serial queue only blocks threads that touch FMDB Queue’s serial queue. In this case, that means FMDB Queue’s serial queue’s own thread and the concurrent queue’s thread.


max of 64 threads running concurrently

ref – http://stackoverflow.com/questions/34849078/main-thread-does-dispatch-async-on-a-concurrent-queue-in-viewdidload-or-within

Note that when you are using custom concurrent queues, GCD will only spin up about 64 worker threads at once. Hence, when your main thread keeps queueing tasks onto the concurrent queue, the system starts blocking your UI after 64 tasks are in flight.

The workaround is to put the task of placing db tasks onto the concurrent queue onto your main queue like so:

Then simply call the utility method run_async_with_UI and place your database calls in there.
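The method name run_async_with_UI comes from the article; its body here is an assumption about how the workaround could look:

```objc
// Enqueue the "put the db task on the concurrent queue" step onto the
// main queue. The main queue drains these one at a time, so the main
// thread never stacks up 64+ pending blocks on concurrencyQueue.
- (void)run_async_with_UI:(dispatch_block_t)dbWork {
    dispatch_async(dispatch_get_main_queue(), ^{
        dispatch_async(self.concurrencyQueue, dbWork);
    });
}
```

Usage (the SQL is illustrative):

```objc
[self run_async_with_UI:^{
    [self.databaseQueue inDatabase:^(FMDatabase *db) {
        [db executeUpdate:@"UPDATE signs SET tenderness = 1"];
    }];
}];
```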

Proof of concept

The dispatch_sync(serialQueue,….) is essentially the FMDB Queue.
We just added dispatch_async(concurrencyQueue…). Now, you can see that we are manipulating the database in a thread-safe manner, in the background, without clogging up the UI.
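A sketch of that proof of concept (the loop count, labels, and the sleep that simulates a write are assumptions; concurrencyQueue and serialQueue are assumed to be created as above):

```objc
for (int i = 0; i < 10; i++) {
    dispatch_async(concurrencyQueue, ^{
        NSLog(@"concurrentQueue inserts task %d", i);

        // stands in for FMDB Queue's internal dispatch_sync
        dispatch_sync(serialQueue, ^{
            NSLog(@"serialQueue - START %d---------", i);
            usleep(20000); // simulate a database write
            NSLog(@"serialQueue - FINISHED %d--------", i);
        });

        NSLog(@"concurrentQueue END task %d", i);
    });
}
```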

result:

So, dispatch_async throws all the tasks onto the concurrent queue, returning immediately each time (i.e., never blocking). That’s why all the task blocks log “concurrentQueue inserts task n”.

The block thrown onto the serialQueue via dispatch_sync starts executing immediately. dispatch_sync, by definition, won’t return until its block has finished executing. Hence, the “concurrentQueue END task n” message won’t be logged until after the block on serialQueue has executed.


Notice how serialQueue FINISHED 1, then concurrentQueue logs END task 1.
serialQueue FINISHED 0, then concurrentQueue END task 0…

It’s because dispatch_sync does not return until it has finished executing.
Once it returns, execution continues on to the “concurrentQueue END task n” log message.

In other words, due to dispatch_sync, lines 10-16 must run before line 20 runs.

Another important note: notice that serialQueue has started to execute. But by definition, dispatch_sync blocks and does not return until the currently executing task is finished… so how does concurrentQueue keep inserting?

The reason is that the blocks on serialQueue run in the background. The dispatch_sync that’s not returning happens on a background thread, and thus does not affect the UI. The enqueueing of the “db simulate write” onto the serialQueue is done on the background queue concurrentQueue.

Say we switch it
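Sketched, the switched version would be (same assumed queues and loop as before):

```objc
for (int i = 0; i < 10; i++) {
    // sync onto the CONCURRENT queue: does not return until
    // this whole block finishes.
    dispatch_sync(concurrencyQueue, ^{
        NSLog(@"concurrentQueue inserts task %d", i);

        // async onto the SERIAL queue: returns right away.
        dispatch_async(serialQueue, ^{
            NSLog(@"serialQueue - START %d---------", i);
            usleep(20000); // simulate a database write
            NSLog(@"serialQueue - FINISHED %d--------", i);
        });

        NSLog(@"concurrentQueue END task %d", i);
    });
}
```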

So now we dispatch_sync each block onto the concurrent queue, and it will not return until that block finishes. The key point here is that because dispatch_async throws the DB task onto the serialQueue and returns immediately, each concurrent block finishes quickly, so the enqueueing will be:

lots of fast enqueueing of blocks onto the concurrent queue, and thus the logging of lines 5 and 20.

example:

1. block task 1 goes onto concurrent queue via dispatch_sync, WILL NOT RETURN UNTIL WHOLE TASK BLOCK IS FINISHED
2. “simulate DB write” task block goes onto serial Queue via dispatch_async, RETURNS RIGHT AWAY.
3. block task 1 finished, thus RETURNS control to concurrent queue.

4. block task 2 goes onto concurrent queue via dispatch_sync, WILL NOT RETURN UNTIL WHOLE TASK BLOCK IS FINISHED
5. “simulate DB write” task block goes onto serial Queue via dispatch_async, RETURNS RIGHT AWAY.
6. block task 2 finished, thus RETURNS control to concurrent queue.



etc.

This continues until the serialQueue, being a background queue, starts processing its first block. Hence it will display:

serialQueue – START 0———
serialQueue – FINISHED 0——–

Hence, the situation is that all the tasks that put “simulate write” tasks onto the serial queue are enqueued onto the concurrent queue quickly.

Then, when the serial queue executes its first task, that’s when it does its first “DB write simulate”. This simulated DB write does not block the UI because it’s being done in the background.

result:

Then, after all the tasks have been enqueued onto the concurrent queue, the serialQueue processes them one by one.

dispatch_once, singleton class

ref – http://www.galloway.me.uk/tutorials/singleton-classes/

Grand Central Dispatch, a.k.a libdispatch and usually referred to as GCD, is a low-level API known for performing asynchronous background work. dispatch_async is its poster child: “Throw this block on a background thread to do some work, and inside of that block toss another block on the main thread to update the UI.”

Not all of GCD is asynchronous, though. There’s dispatch_sync to do some work synchronously. There’s also dispatch_once that’s used to guarantee that something happens exactly once, no matter how violent the program’s threading becomes. It’s actually a very simple idiom:

You first declare a static or global variable of type dispatch_once_t. This is an opaque type that stores the “done did run” state of something. It’s important that your dispatch_once_t be a global or a static. If you forget the static, you may have weird behavior at run time.

Then you pass that dispatch_once_t token to dispatch_once, along with a block. GCD will guarantee that the block will run no more than one time, no matter how many threads you have contending for this one spot.

The usual example you see for dispatch_once is creating shared instances, such as the object returned from calls like -[NSFileManager defaultManager]. Wrap the allocation and initialization of your shared instance in a dispatch_once, and return it. Done.
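The classic shape (the class name is a placeholder):

```objc
+ (instancetype)sharedManager {
    static MyManager *shared = nil;
    static dispatch_once_t onceToken;  // static: holds the "done did run" state
    dispatch_once(&onceToken, ^{
        shared = [[MyManager alloc] init];
    });
    return shared;
}
```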

Recently, though, I had an opportunity to use dispatch_once outside of a sharedBlah situation. Another Rancher and I were working on some sample code for a class. It populated a scrolling view with Lots And Lots Of Stuff. Rather than manually coming up with labels for everything, we used the list of words at /usr/share/dict/words to construct random names. Just a couple of words and string them together. The results were often nonsensical, but sometimes we’d get something delightfully random. Here’s the function:

Pretty straightforward. A static local variable that points to an NSArray of words. Make a check for nilness, then load the file and remove the long words. And it worked great.
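The original function isn’t reproduced here, so this is a reconstruction from the description (the 8-character cutoff for “long words” is a guess):

```objc
static NSString *RandomName(void) {
    static NSArray *words = nil;

    if (words == nil) {   // unprotected check: NOT thread-safe
        NSString *contents =
            [NSString stringWithContentsOfFile:@"/usr/share/dict/words"
                                      encoding:NSUTF8StringEncoding
                                         error:NULL];
        NSArray *allWords = [contents componentsSeparatedByString:@"\n"];
        NSPredicate *shortOnes =
            [NSPredicate predicateWithFormat:@"SELF.length <= 8"];
        words = [allWords filteredArrayUsingPredicate:shortOnes];
    }

    NSString *first  = words[arc4random_uniform((uint32_t)words.count)];
    NSString *second = words[arc4random_uniform((uint32_t)words.count)];
    return [NSString stringWithFormat:@"%@ %@", first, second];
}
```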

Then we decided to emulate network latency by using dispatch_async and coded delays to act like words were dribbling in over a network connection. Performance took an insane nose-dive, as in “there is no way I am checking this in and keeping my job”. A quick check with Instruments showed RandomName being the bottleneck. Every thread was running it. Whoa.

In retrospect, it’s an obvious mistake: accessing global state unprotected in a threaded environment. Here’s the scenario:

Thread A starts doing stuff. It goes to get a RandomName. It sees that words is nil, so it starts loading the words. When GCD sees a thread start blocking (say, by going into the kernel to read a largish file), it realizes it can start another thread running to keep those CPUs busy. So Thread B goes to get a RandomName. Thread A isn’t done loading the words, so words is still nil. Therefore Thread B starts reading the words file. It blocks, and goes to sleep, and Thread C starts up. Eventually all of the reads complete, and they all start processing this 235,886-line file. That’s a crazy amount of work.

It’s pretty easy to fix. You can slap an @synchronized around it. Or use NSLock, pthread_mutex, etc. I didn’t like those options because you pay a locking price on each access. Granted, it’s a toy app purely for demonstration purposes, but I still think about that stuff. You can also put stuff like that into +initialize (with the proper class check), knowing the limited circumstances under which +initialize gets called. That didn’t excite me either. It was nice having RandomName be entirely self-contained and not dependent on some other entity initializing the set of words.

Taking a step back and evaluating the problem: words needs to be loaded and initialized exactly once, and then used forever more. What’s an existing library call that lets you do something exactly once? dispatch_once.

With the loading code wrapped in dispatch_once, it gets processed exactly once.
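The fix, with the same reconstructed load-and-filter body moved inside a dispatch_once:

```objc
static NSString *RandomName(void) {
    static NSArray *words = nil;
    static dispatch_once_t onceToken;

    dispatch_once(&onceToken, ^{
        // This load now runs exactly once, no matter how many threads
        // call RandomName concurrently; later callers wait until the
        // first one finishes.
        NSString *contents =
            [NSString stringWithContentsOfFile:@"/usr/share/dict/words"
                                      encoding:NSUTF8StringEncoding
                                         error:NULL];
        NSArray *allWords = [contents componentsSeparatedByString:@"\n"];
        NSPredicate *shortOnes =
            [NSPredicate predicateWithFormat:@"SELF.length <= 8"];
        words = [allWords filteredArrayUsingPredicate:shortOnes];
    });

    NSString *first  = words[arc4random_uniform((uint32_t)words.count)];
    NSString *second = words[arc4random_uniform((uint32_t)words.count)];
    return [NSString stringWithFormat:@"%@ %@", first, second];
}
```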

We didn’t even have to modify the code in the block. Performance was back to reasonable levels, and we could get back to demonstrating our concept.

So what’s the point of all of this? Mainly that GCD is not just for running things concurrently – it’s a small pile of useful concurrency tools. dispatch_once is one of those tools, and has applicability outside of making shared class instances. It’s very low overhead, with dispatch_once_t being four or eight bytes, and not requiring a heavyweight lock every time it’s run.

dispatch_get_global_queue vs dispatch_get_main_queue

dispatch your tasks on dispatch_get_main_queue() for UI changes.

The main queue is a special serial queue. Unlike other serial queues, which are uncommitted, in that they are “dating” many threads but only one at a time, the main queue is “married” to the main thread and all tasks are performed on it. Jobs on the main queue need to behave nicely with the runloop so that small operations don’t block the UI and other important bits. Like all serial queues, tasks are completed in FIFO order. You get it with dispatch_get_main_queue.

dispatch your tasks on dispatch_get_global_queue (a background queue) for background tasks that run asynchronously (i.e. won’t block your user interface). If you end up submitting multiple blocks to the global queues, these jobs can operate concurrently.

NOTE THAT if you have multiple blocks of code to submit to a background queue that must run sequentially in the background, you can create your own serial background queue

and dispatch to that.

Hence dispatch_get_global_queue is concurrent in nature.

Serial vs Concurrent Queue

Concurrent vs. serial determines how submitted tasks are to be run. A concurrent queue allows the tasks to run concurrently with one another. A serial queue only allows one of its tasks to run at a time.

Concurrent Queue

Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in the order in which they were added to the queue. The currently executing tasks run on distinct threads that are managed by the dispatch queue. The exact number of tasks executing at any given point is variable and depends on system conditions.

To create a concurrent queue:

In iOS 5 and later, you can create concurrent dispatch queues yourself by specifying DISPATCH_QUEUE_CONCURRENT as the queue type. In addition, there are four predefined global concurrent queues for your application to use. For more information on how to get the global concurrent queues
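Both options in sketch form (the label strings are assumptions):

```objc
// custom concurrent queue (iOS 5+)
dispatch_queue_t concurrentQueue =
    dispatch_queue_create("com.example.myConcurrentQueue",
                          DISPATCH_QUEUE_CONCURRENT);

// or grab one of the four predefined global concurrent queues
dispatch_queue_t globalQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
```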

Tasks are executed in Parallel

“Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in the order in which they were added to the queue.”

Here we have an example of running concurrent with async dispatching.
A concurrent queue means that blocks are executed in parallel. Hence, while block A is processing, blocks B, C, etc. may be executing at the same time. In other words, the currently executing block can’t assume that it’s the only block running on that queue.

Also, because it’s a concurrent queue, it lets the async dispatching execute blocks whenever they are ready to. Hence that’s why the queue may dispatch its blocks out of sequence.

dispatch_async means control returns immediately. In other words, “DON’T wait for me to finish my task, just go on with the next task…”. It DOES NOT BLOCK, which means the main thread (UI thread) keeps running and is responsive to user touches. All other threads likewise keep going about their work, because we are not blocking.

dispatch_sync means the call blocks until its block finishes processing. In other words, “WAIT for me to finish my task, then you can take over”. It blocks the calling thread: if you use dispatch_sync on the main thread, the UI waits for the block to finish and does not respond to user touches in the meantime.

Concurrent Async example

In the code, we’re simply simulating spawning threads concurrently to do certain tasks in certain amount of time units.
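A sketch of the kind of code that produces the log below (task names are from the log; the work loop and sleeps are assumptions):

```objc
dispatch_queue_t queue =
    dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(queue, ^{
    NSLog(@"^^^^^^^^^^^^^^ TASK A started ^^^^^^^^^^^^^^^");
    for (int i = 0; i < 12; i++) {
        NSLog(@"Task A UPDATE SIGN TENDERNESS");
        usleep(500); // pretend each unit of work takes time
    }
    NSLog(@"----------> Task A is done <--------------");
});

// Tasks B and C are dispatched the same way with their own log text;
// all three blocks run in parallel on the concurrent queue.
```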


^^^^^^^^^^^^^^ TASK C started ^^^^^^^^^^^^^^^
2015-08-01 00:36:28.321 YonoApp[4189:180141] ^^^^^^^^^^^^^^ TASK A started ^^^^^^^^^^^^^^^
2015-08-01 00:36:28.321 YonoApp[4189:180144] ^^^^^^^^^^^^^^ TASK B started ^^^^^^^^^^^^^^^
2015-08-01 00:36:28.321 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.321 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.321 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.321 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.321 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.321 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.323 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.323 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.323 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
CalendarViewController.m -initWithTabBar 1
2015-08-01 00:36:28.323 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.323 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.324 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.329 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.329 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.329 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.329 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.330 YonoApp[4189:180142] ———-> Task C is done with UPDATE BBT TENDERNESS <--------------
2015-08-01 00:36:28.330 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
….A gets done, then B is done

So as you can see, even though we ran it concurrently via dispatch_async, Task C started first because task A and task B were not ready. After C started, then A, and then B.

While A is working, B is working, etc., and they keep mingling. This happens across all the threads.

In other words, your code will not wait for execution to complete. Both blocks will dispatch (and be enqueued) to the queue and the rest of your code will continue executing on that thread. Then at some point in the future, (depending on what else has been dispatched to your queue), Task A will execute and then Task B will execute.

Concurrent Sync example

In the dispatch_sync example, however, you won’t dispatch TASK n+1 until after TASK n has been dispatched and executed. This is called “blocking”. Your code waits (or “blocks”) until the task executes. If we were to change dispatch_async to dispatch_sync, then the result would be like this:

2015-08-01 00:40:09.076 YonoApp[4231:181577] ^^^^^^^^^^^^^^ TASK A started ^^^^^^^^^^^^^^^
2015-08-01 00:40:09.076 YonoApp[4231:181577] Task A UPDATE SIGN TENDERNESS
….
2015-08-01 00:40:09.084 YonoApp[4231:181577] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:40:09.084 YonoApp[4231:181577] ———-> Task A is done

2015-08-01 00:40:09.084 YonoApp[4231:181577] ^^^^^^^^^^^^^^ TASK B started ^^^^^^^^^^^^^^^
2015-08-01 00:40:09.084 YonoApp[4231:181577] Task B UPDATE SIGN TENDERNESS
………
2015-08-01 00:40:09.090 YonoApp[4231:181577] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:40:09.091 YonoApp[4231:181577] ———-> Task B is done

2015-08-01 00:40:09.091 YonoApp[4231:181577] ^^^^^^^^^^^^^^ TASK C started ^^^^^^^^^^^^^^^
2015-08-01 00:40:09.091 YonoApp[4231:181577] Task C UPDATE BBT TENDERNESS
…………
2015-08-01 00:40:09.092 YonoApp[4231:181577] Task C UPDATE BBT TENDERNESS
2015-08-01 00:40:09.092 YonoApp[4231:181577] ———-> Task C is done

Serial Queues

Serial queues are monogamous, but uncommitted. If you give a bunch of tasks to each serial queue, it will run them one at a time, using only one thread at a time. The uncommitted aspect is that serial queues may switch to a different thread between tasks.

Serial queues always wait for a task to finish before going to the next one.

Thus tasks are completed in FIFO order. You can make as many serial queues as you need with dispatch_queue_create.

By definition, a serial queue runs only one block at a time, and blocks are executed in order.

So if we add in blocks A, B, C, D… then they are started and ended in order. Also notice that since we use dispatch_async, control returns to the main thread immediately, so other work can keep going.

To create a serial queue:
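In sketch form (the label strings are assumptions):

```objc
// a custom serial queue
dispatch_queue_t serialQueue =
    dispatch_queue_create("com.example.mySerialQueue", DISPATCH_QUEUE_SERIAL);

// passing NULL for the attribute also yields a serial queue
dispatch_queue_t serialQueue2 =
    dispatch_queue_create("com.example.mySerialQueue2", NULL);
```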

If we dispatched async for tasks A, B, and C

The result would be:

Task A started
Task A ended
Task B started
Task B ended
Task C started
Task C ended

Because by definition, serial only allows one task to be running at a time. Async means it does not block, so nothing else is held up while we run; hence another serial queue may be running its own task at the same time.

If we dispatched sync for tasks A, B, and C, we’d get the same ordering, because by definition sync blocks the submitting thread while each task works. Once a task is done, it unblocks and lets the next task go.

Multiple Serial Queues

However, if you create four serial queues, each queue executes only one task at a time, but up to four tasks could still execute concurrently, one from each queue.

Let’s see what happens when we get 2 serial queues together.

2 Serial queues, using Async

serialQueue1 – Task A
serialQueue2 – Task B C

2015-08-01 01:23:47.782 YonoApp[4451:196545] ^^^^^^^^^^^^^^ TASK A1 started ^^^^^^^^^^^^^^^
2015-08-01 01:23:47.782 YonoApp[4451:196544] ^^^^^^^^^^^^^^ TASK B2 started ^^^^^^^^^^^^^^^
2015-08-01 01:23:47.786 YonoApp[4451:196545] Task A1 UPDATE SIGN TENDERNESS
2015-08-01 01:23:47.786 YonoApp[4451:196544] Task B2 UPDATE SIGN TENDERNESS

RIGHT HERE: task A1 and task B2 are executing at the same time. With respect to their own queues, each is running one task at a time, but from a multiple-queue standpoint, they are running their one task at a time simultaneously with each other.

2015-08-01 01:23:47.794 YonoApp[4451:196544] Task B2 UPDATE SIGN TENDERNESS
2015-08-01 01:23:47.795 YonoApp[4451:196544] Task B2 UPDATE SIGN TENDERNESS
2015-08-01 01:23:47.795 YonoApp[4451:196544] ———-> Task B2 is done with UPDATE SIGN TENDERNESS <-------------- 2015-08-01 01:23:47.795 YonoApp[4451:196544] ^^^^^^^^^^^^^^ TASK C2 started ^^^^^^^^^^^^^^^ 2015-08-01 01:23:47.797 YonoApp[4451:196544] Task C2 UPDATE BBT TENDERNESS 2015-08-01 01:23:47.797 YonoApp[4451:196544] Task C2 UPDATE BBT TENDERNESS 2015-08-01 01:23:47.797 YonoApp[4451:196544] ----------> Task C2 is done with UPDATE BBT TENDERNESS <-------------- Hence, having multiple serial queues simply means each queue's block work individually and in order, but the serial queues themselves are parallel.

Multiple Serial Queues with Sync

When we have multiple serial queues, and we want them to be in order, we can use dispatch_sync

dispatch_sync means that while a block is executing, the thread that submitted it is on hold until it finishes. Before, when we had multiple serial queues doing their reads and writes, you saw read/write overlaps: even though each serial queue runs one block at a time, the other serial queue(s) run their one block at a time as well, resulting in the “one and only working blocks” from multiple serial queues doing their work simultaneously.

In order to solve this, we use dispatch_sync, which holds the submitting thread until this block finishes. When this block is finished, the next block gets submitted and starts.

When you apply dispatch_sync to the code, you’ll see that all tasks are done in order and without overlapping:

^^^^^^^^^^^^^^ TASK A1 started ^^^^^^^^^^^^^^^
2015-08-01 01:27:26.061 YonoApp[4489:197886] Task A1 UPDATE SIGN TENDERNESS
2015-08-01 01:27:26.136 YonoApp[4489:197886] Task A1 UPDATE SIGN TENDERNESS
2015-08-01 01:27:26.136 YonoApp[4489:197886] ———-> Task A1 is done with UPDATE SIGN TENDERNESS

2015-08-01 01:27:26.137 YonoApp[4489:197886] ^^^^^^^^^^^^^^ TASK B2 started ^^^^^^^^^^^^^^^
2015-08-01 01:27:26.137 YonoApp[4489:197886] Task B2 UPDATE SIGN TENDERNESS
2015-08-01 01:27:26.217 YonoApp[4489:197886] Task B2 UPDATE SIGN TENDERNESS
2015-08-01 01:27:26.218 YonoApp[4489:197886] Task B2 UPDATE SIGN TENDERNESS
2015-08-01 01:27:26.218 YonoApp[4489:197886] ———-> Task B2 is done with UPDATE SIGN TENDERNESS

2015-08-01 01:27:26.218 YonoApp[4489:197886] ^^^^^^^^^^^^^^ TASK C2 started ^^^^^^^^^^^^^^^
2015-08-01 01:27:26.218 YonoApp[4489:197886] Task C2 UPDATE BBT TENDERNESS
2015-08-01 01:27:26.219 YonoApp[4489:197886] Task C2 UPDATE BBT TENDERNESS
2015-08-01 01:27:26.221 YonoApp[4489:197886] Task C2 UPDATE BBT TENDERNESS
2015-08-01 01:27:26.221 YonoApp[4489:197886] ———-> Task C2 is done with UPDATE BBT TENDERNESS

Other Notes

Serial queues (also known as private dispatch queues) execute one task at a time in the order in which they are added to the queue. The currently executing task runs on a distinct thread (which can vary from task to task) that is managed by the dispatch queue. Serial queues are often used to synchronize access to a specific resource.

You can create as many serial queues as you need, and each queue operates concurrently with respect to all other queues. In other words, if you create four serial queues, each queue executes only one task at a time but up to four tasks could still execute concurrently, one from each queue.

Serial means the tasks are executed in order. This means that the block of the queue that is executing can assume IT IS THE ONLY BLOCK RUNNING ON THAT QUEUE. However, blocks from other queues may be running concurrently with this queue. That’s why you need to use dispatch_sync to make sure ONLY ONE BLOCK is running from a multiple queue standpoint.

Concurrent means the tasks are executed in parallel. This means that the block of the queue that is executing CAN’T assume that it’s the only block running on that queue.

dispatch_async vs dispatch_sync

ref: http://stackoverflow.com/questions/4360591/help-with-multi-threading-on-ios

Xcode 7.3 sample code

The main reason why you want to use concurrent or serial queues over the main queue is to run tasks in the background.

Sync on a Serial Queue

dispatch_sync –

1) dispatch_sync means that the block is enqueued and will NOT continue enqueueing further tasks UNTIL the current task has been executed.
2) dispatch_sync is a blocking operation. It DOES NOT RETURN until its current task has been executed.

1) prints START
2) puts the block of code onto serialQueue2, then blocks (i.e., does not return)
3) the block of code executes
4) When the block of code finishes, dispatch_sync returns, and we move on to the next instruction, which prints END
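A sketch matching the steps above (serialQueue2 is assumed to be a serial queue created with dispatch_queue_create; the task logging is inlined):

```objc
NSLog(@"--- START ---");

dispatch_sync(serialQueue2, ^{
    NSLog(@"--- dispatch start ---");
    NSLog(@"---  Task TASK B start  ---");
    for (int i = 0; i < 10; i++) {
        NSLog(@"TASK B - %d", i);
    }
    NSLog(@"^^^ Task TASK B END ^^^");
    NSLog(@"--- dispatch end ---");
});

// dispatch_sync has returned, so only now do we get here:
NSLog(@"--- END ---");
```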

output:

— START —
— dispatch start —
—  Task TASK B start  —
TASK B – 0
TASK B – 1


TASK B – 8
TASK B – 9
^^^ Task TASK B END ^^^
— dispatch end —
— END —

Because dispatch_sync does not return immediately, it blocks the main queue too. Try playing around with your UISlider: it is not responsive.

dispatch_async means that the block is enqueued and the call RETURNS IMMEDIATELY, letting the next commands execute and the main thread keep processing.

ASYNC on a Serial Queue

Let’s start with a very simple example:

This means:

1) prints START
2) we dispatch a block onto serialQueue2, and control returns immediately.
3) Because dispatch_async returns immediately, we continue down to the next instruction, which prints END
4) the dispatched block starts executing
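The same sketch with the sync swapped for async (serialQueue2 as before):

```objc
NSLog(@"--- START ---");

dispatch_async(serialQueue2, ^{
    NSLog(@"--- dispatch start ---");
    NSLog(@"---  Task TASK B start  ---");
    for (int i = 0; i < 10; i++) {
        NSLog(@"TASK B - %d", i);
    }
    NSLog(@"^^^ Task TASK B END ^^^");
    NSLog(@"--- dispatch end ---");
});

// dispatch_async returned immediately, so END typically logs
// before the block above even starts:
NSLog(@"--- END ---");
```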

— START —
— END —
— dispatch start —
—  Task TASK B start  —
TASK B – 0


TASK B – 9
^^^ Task TASK B END ^^^
— dispatch end —

If you look at your UISlider, it is still responsive.

Slider code

sync-ed

dispatch_sync means that the block is enqueued and will NOT continue enqueueing further tasks UNTIL the current task has been executed.

Now let’s dispatch the first task (printing numerics) in a sync-ed fashion. This means that we put the task on the queue. Then, while the queue is processing that task, we WILL NOT queue further tasks until the current task is finished.

The sleep call ties up the queue. Only when we finish printing out all the numerics does the queue move on to execute the next task, which prints out the alphabet.

Let’s throw a button in there, and have it display numbers. We’ll do one dispatch_sync first.

You’ll notice that the slider is unresponsive. That’s because, by definition, dispatch_sync blocks the calling thread (here, the main queue) until its block has been executed. It does not return until it has finished its own task.

Async, Sync, on Concurrent Queue

Now let’s dispatch_async first. Then we’ll dispatch_sync. What happens here is that:

1) prints — START —
2) we dispatch the TASK A block onto the concurrent queue; control returns immediately. The block prints === A start ===

3) Then we dispatch_sync TASK B on the same queue. It does not return, and blocks, because this task must complete before we relinquish control.
Task B starts and prints === B start ===

The main UI is now blocked by Task B’s dispatch_sync.

4) Since Task A was dispatched before Task B, it runs alongside B. Both run at the same time because our queue is concurrent.

5) both tasks finish and print === A end === and === B end ===

6) control returns ONLY WHEN Task B finishes; then Task B’s dispatch_sync returns control, and we move on to the next instruction, which logs — END —.
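A sketch of those steps (concurrentQueue and the runTaskNamed:count: logging helper are assumptions):

```objc
NSLog(@"--- START ---");

// async: returns immediately; TASK A starts in the background
dispatch_async(concurrentQueue, ^{
    NSLog(@"=== A start ===");
    [self runTaskNamed:@"TASK A" count:8];
    NSLog(@"=== A end ===");
});

// sync: blocks the calling thread until TASK B finishes;
// A and B still run concurrently with each other
dispatch_sync(concurrentQueue, ^{
    NSLog(@"=== B start ===");
    [self runTaskNamed:@"TASK B" count:8];
    NSLog(@"=== B end ===");
});

NSLog(@"--- END ---");
```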

output:

— START —
2016-08-25 14:26:13.638 sync_async[29186:3817447] === A start ===
2016-08-25 14:26:13.638 sync_async[29186:3817414] === B start ===
2016-08-25 14:26:13.638 sync_async[29186:3817447] —  Task TASK A start  —
2016-08-25 14:26:13.638 sync_async[29186:3817414] —  Task TASK B start  —
2016-08-25 14:26:14.640 sync_async[29186:3817414] TASK B – 0
2016-08-25 14:26:14.640 sync_async[29186:3817447] TASK A – 0


2016-08-25 14:26:21.668 sync_async[29186:3817447] TASK A – 7
2016-08-25 14:26:21.668 sync_async[29186:3817414] TASK B – 7
2016-08-25 14:26:21.668 sync_async[29186:3817447] ^^^ Task TASK A END ^^^
2016-08-25 14:26:21.668 sync_async[29186:3817414] ^^^ Task TASK B END ^^^
2016-08-25 14:26:21.668 sync_async[29186:3817447] === A end ===
2016-08-25 14:26:21.668 sync_async[29186:3817414] === B end ===
2016-08-25 14:26:21.668 sync_async[29186:3817414] — END —
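The async-then-sync sequence can be modelled the same way in Python: a 4-worker ThreadPoolExecutor plays the concurrent queue, a bare submit plays dispatch_async, and submit(...).result() plays dispatch_sync (task names and timings are invented):

```python
from concurrent.futures import ThreadPoolExecutor
import time

queue = ThreadPoolExecutor(max_workers=4)  # several workers = a concurrent queue
log = []

def task(name):
    log.append(f"=== {name} start ===")
    for i in range(3):
        time.sleep(0.01)
        log.append(f"TASK {name} - {i}")
    log.append(f"=== {name} end ===")

log.append("--- START ---")
queue.submit(task, "A")            # dispatch_async: control returns immediately
queue.submit(task, "B").result()   # dispatch_sync: the caller blocks until B is done
log.append("--- END ---")          # only reachable once B has finished
queue.shutdown(wait=True)          # join A as well, so the demo log is complete
```

A and B interleave freely because the queue is concurrent, but — END — can never appear before === B end ===, since the caller is parked inside the sync wait.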

Sync, Async, on Concurrent Queue

If we were to run it sync, then async:

1) prints START
2) dispatch_sync Task A onto the concurrent queue. The sync causes us to block, i.e. NOT RETURN until the task finishes. Thus, at this point the UI is unresponsive.
3) prints === A start ===
4) Task A is executing.

5) prints === A end ===
6) Now, we dispatch another task via dispatch_async onto the same concurrent queue. Control returns immediately, and we move on to the next instruction. At this point the UI is responsive again.

7) Due to control returning immediately at 6)’s dispatch_async, we print — END —
8) the task starts and prints === B start ===, and task B executes.
9) task B finishes, and we print === B end ===

output:

— START —
=== A start ===
—  Task TASK A start  —
TASK A – 0

TASK A – 7
^^^ Task TASK A END ^^^
=== A end ===
— END —
=== B start ===
—  Task TASK B start  —
TASK B – 0
TASK B – 1

TASK B – 6
TASK B – 7
^^^ Task TASK B END ^^^
=== B end ===
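A Python sketch of sync-then-async on a concurrent pool, under the same modelling assumptions (ThreadPoolExecutor as the queue, .result() as the sync wait, invented task names):

```python
from concurrent.futures import ThreadPoolExecutor
import time

queue = ThreadPoolExecutor(max_workers=4)  # plays the concurrent queue
log = []

def task(name):
    log.append(f"=== {name} start ===")
    for i in range(3):
        time.sleep(0.01)
        log.append(f"TASK {name} - {i}")
    log.append(f"=== {name} end ===")

log.append("--- START ---")
queue.submit(task, "A").result()   # dispatch_sync: caller frozen while A runs
queue.submit(task, "B")            # dispatch_async: returns at once
log.append("--- END ---")          # may interleave with B's lines
queue.shutdown(wait=True)          # join B so the demo log is complete
```

B can only be submitted after A has fully finished, so every A line precedes every B line, while — END — races with B.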

Async on Serial Queues

However, if we were to use a serial queue, each task would finish executing, before going on to the next one.
Hence Task A would have to finish, then Task B can start.

1) prints — START —
2) dispatch task block onto serial queue via dispatch_async. Returns immediately. UI responsive.
3) prints — END — due to execution continuing
4) prints == START == as this block starts to execute
5) Task A executes
6) prints == END == task block finishes

output:

— START —
— END —
== START ==
—  Task TASK A start  —
TASK A – 0
TASK A – 1
TASK A – 2
TASK A – 3
TASK A – 4
TASK A – 5
TASK A – 6
TASK A – 7
TASK A – 8
TASK A – 9
^^^ Task TASK A END ^^^
== END ==
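Modelled in Python with a single-worker pool as the serial queue, the async dispatch looks like this (names and timings invented):

```python
from concurrent.futures import ThreadPoolExecutor
import time

queue = ThreadPoolExecutor(max_workers=1)  # single worker = serial queue
log = []

def task(name):
    log.append(f"=== {name} start ===")
    for i in range(3):
        time.sleep(0.01)
        log.append(f"TASK {name} - {i}")
    log.append(f"=== {name} end ===")

log.append("--- START ---")
queue.submit(task, "A")    # dispatch_async: returns immediately, caller stays free
log.append("--- END ---")  # can appear before A's lines, as in the output above
queue.shutdown(wait=True)  # join A so the demo log is complete
```

The caller is never blocked; it just hands the work over and keeps going.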

Sync on Serial Queue

1) log — START —
2) puts the task block onto the serial queue via dispatch_sync. Does not return control until the task finishes. Thus, the calling (main) thread, and with it the UI, is blocked.
3) log == START == as the task block starts
4) Task A executes
5) log == END == as the task block ends
6) Task block is finished, so dispatch_sync relinquishes control, thus UI is responsive again. log — END –.

— START —
== START ==
—  Task TASK A start  —
TASK A – 0
TASK A – 1
TASK A – 2

TASK A – 9
^^^ Task TASK A END ^^^
== END ==
— END —
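The fully deterministic sync-on-serial ordering, sketched in Python under the same modelling assumptions (single-worker pool as the serial queue, invented task name):

```python
from concurrent.futures import ThreadPoolExecutor
import time

queue = ThreadPoolExecutor(max_workers=1)  # single worker = serial queue
log = []

def task(name):
    log.append(f"=== {name} start ===")
    for i in range(3):
        time.sleep(0.01)
        log.append(f"TASK {name} - {i}")
    log.append(f"=== {name} end ===")

log.append("--- START ---")
queue.submit(task, "A").result()   # dispatch_sync: caller blocked for A's entire run
log.append("--- END ---")          # guaranteed to be the very last line
```

There is no race anywhere here: the caller sleeps through the whole of Task A, so the log order is fixed.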

Serial Queue – Async, then Sync

The actual situation is that the serial queue executes Task A and Task B one by one. The dispatch_async and dispatch_sync effects are immediate:

1) dispatch_async Task A – Task A gets queued. The serial queue starts working on Task A. Execution control continues because dispatch_async returns right away.

2) dispatch_sync Task B – Task B gets queued. The serial queue is still working on Task A, and thus, by definition of a serial queue, Task B must wait for Task A to finish before it runs. However, dispatch_sync takes effect instantly: it blocks the calling thread (here the main thread) from the moment it is called until Task B completes.

Hence, from 1) and 2): Task A is being executed, Task B is waiting for Task A to finish, and the dispatch_sync is blocking the main thread the whole time. That is why your UISlider is not responsive.

output:

— START —
== START ==
—  Task TASK A start  —
TASK A – 0
TASK A – 1
TASK A – 2
TASK A – 3
TASK A – 4

TASK A – 9
^^^ Task TASK A END ^^^
== END ==
== START ==
—  Task TASK A start  —
TASK A – 0

TASK A – 9
^^^ Task TASK A END ^^^
== END ==
— END —
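A Python model of async-then-sync on one serial worker; because the queue is serial, the ordering below is fully deterministic (task names invented):

```python
from concurrent.futures import ThreadPoolExecutor
import time

queue = ThreadPoolExecutor(max_workers=1)  # single worker = serial queue
log = []

def task(name):
    log.append(f"=== {name} start ===")
    for i in range(3):
        time.sleep(0.01)
        log.append(f"TASK {name} - {i}")
    log.append(f"=== {name} end ===")

log.append("--- START ---")
queue.submit(task, "A")            # dispatch_async: queued, control moves on
queue.submit(task, "B").result()   # dispatch_sync: B waits behind A; caller blocks for both
log.append("--- END ---")          # the caller only gets here after A AND B
```

Even though A was dispatched asynchronously, the sync on B transitively makes the caller wait out both tasks, which is exactly why the slider freezes.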

Serial Queue – Sync, then Async

The first sync blocks the calling (main) thread, as well as any blocks queued behind Task A. Hence the UI is unresponsive.
Block A runs. When it finishes, dispatch_sync relinquishes control. Task B is then dispatched via dispatch_async, which returns immediately.
Thus, the UI is NOT responsive while Task A is running. When Task A finishes, the serial queue lets Task B run, and since Task B was started via dispatch_async, the UI is responsive again.
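Sketched in Python (single-worker pool as the serial queue; the helper names are invented):

```python
from concurrent.futures import ThreadPoolExecutor
import time

queue = ThreadPoolExecutor(max_workers=1)  # single worker = serial queue
log = []

def task(name):
    log.append(f"=== {name} start ===")
    for i in range(3):
        time.sleep(0.01)
        log.append(f"TASK {name} - {i}")
    log.append(f"=== {name} end ===")

log.append("--- START ---")
queue.submit(task, "A").result()   # dispatch_sync: "UI" frozen while A runs
queue.submit(task, "B")            # dispatch_async: "UI" responsive again while B runs
log.append("--- END ---")
queue.shutdown(wait=True)          # join B so the demo log is complete
```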

Nested dispatches

Async nest Async on a Serial Queue

1) prints — START —
2) dispatch async a block task onto the serial queue. It returns right away, does not block UI. Execution continues.
3) Execution continues, prints — END —

4) the block task starts to execute. prints — OUTER BLOCK START —
5) Task A executes and prints its stuff

6) dispatch async another block onto the same serial queue. It returns right away, does not block the UI. Execution continues.
7) Execution continues, prints — OUTER BLOCK END —.

8) The inner block starts processing on the serial queue. prints — INNER BLOCK START —
9) Task B executes and prints stuff
10) prints — INNER BLOCK END —

Result:

— START —
— OUTER BLOCK START —
— END —
—  Task TASK A start  —
TASK A – 0

TASK A – 9
^^^ Task TASK A END ^^^
— OUTER BLOCK END —
— INNER BLOCK START —
—  Task TASK B start  —
TASK B – 0

TASK B – 9
^^^ Task TASK B END ^^^
— INNER BLOCK END —

Async nest Sync on a Serial Queue – DEADLOCK!

deadlock

Notice that we’re on a serial queue. Which means the queue must finish the current task, before moving on to the next one.
The key idea here is that the task block queued at // 2 must complete before any other task on the queue can start.

At // 6, we put another task block onto the queue, but due to dispatch_sync, we don’t return. We only return once the block at // 6 finishes executing.

But how can the 1st task block at // 2 finish, if it is stuck waiting on the 2nd task block at // 6, which in turn cannot start until the 1st block finishes?

This is what leads to the deadlock.
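The deadlock can be reproduced with a toy serial queue in Python. This hand-rolled dispatch_sync/dispatch_async pair is a model, not GCD (real dispatch_sync may even run blocks inline), but syncing onto the serial queue you are already running on jams it the same way:

```python
import queue as jobqueue
import threading
import time

jobs = jobqueue.Queue()

def worker():                        # the serial queue's one and only thread
    while True:
        jobs.get()()                 # take the next block and run it

threading.Thread(target=worker, daemon=True).start()

def dispatch_async(block):
    jobs.put(block)                  # enqueue and return immediately

def dispatch_sync(block):
    finished = threading.Event()
    def wrapper():
        block()
        finished.set()
    jobs.put(wrapper)
    finished.wait()                  # the caller blocks until the worker runs it

ran = []

def outer():                         # the task queued at // 2
    # the dispatch_sync at // 6: same serial queue, from inside its own task
    dispatch_sync(lambda: ran.append("inner"))
    ran.append("outer")

dispatch_async(outer)
time.sleep(0.3)                      # give it plenty of time...
deadlocked = (ran == [])             # ...yet neither block ever completed
print("deadlocked:", deadlocked)     # prints "deadlocked: True"
```

The single worker is stuck inside outer(), waiting for the inner block; the inner block needs that same worker; neither ever runs to completion.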

Sync nest Async on a Serial Queue

1) log — START —
2) sync task block onto the queue, blocks UI
3) log — OUTER BLOCK START —
4) Task A processes and finishes
5) dispatch_async another task block onto the queue. The UI is still blocked from 2)’s sync; however, execution moves forward within the outer block because dispatch_async returns immediately.
6) execution moves forward and log — OUTER BLOCK END —
7) outer block finishes execution, dispatch_sync returns. UI has control again. logs — END —
8) log –INNER BLOCK START —
9) Task B executes
10) log — INNER BLOCK END —
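A Python model of sync-nesting-async on a serial worker (the outer/inner helpers are invented for the demo):

```python
from concurrent.futures import ThreadPoolExecutor
import time

queue = ThreadPoolExecutor(max_workers=1)  # single worker = serial queue
log = []

def task(name):
    log.append(f"=== {name} start ===")
    for i in range(3):
        time.sleep(0.01)
        log.append(f"TASK {name} - {i}")
    log.append(f"=== {name} end ===")

def inner():
    log.append("--- INNER BLOCK START ---")
    task("B")
    log.append("--- INNER BLOCK END ---")

def outer():
    log.append("--- OUTER BLOCK START ---")
    task("A")
    queue.submit(inner)                     # dispatch_async: inner just gets queued
    log.append("--- OUTER BLOCK END ---")   # outer finishes first; inner runs after

log.append("--- START ---")
queue.submit(outer).result()   # dispatch_sync: caller blocked for OUTER only
log.append("--- END ---")      # can interleave with the inner block's lines
queue.shutdown(wait=True)      # join inner so the demo log is complete
```

No deadlock here: the nested dispatch is async, so outer never waits on inner; the serial worker simply runs inner after outer returns.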

Async nest Async on Concurrent Queue

1) log –START–
2) dispatch_async puts block task onto the concurrent queue. Does not block, returns immediately.
3) execution continues, and we log — END —
4) queue starts processing the task block from //2. prints — OUTER BLOCK START —
5) Task A executes
6) dispatch_async puts another block task onto the concurrent queue. Now there are 2 blocks. Does not block, returns immediately.
7) prints — OUTER BLOCK END –, task block #1 is done and de-queued.
8) prints — INNER BLOCK START —
9) Task B executes
10) prints — INNER BLOCK END —

Async nest Sync on Concurrent Queue

1) prints –START–
2) puts block task on concurrent queue. returns immediately so UI and other queues can process
3) since execution immediately returns, we print — END —

4) prints — OUTER BLOCK START —
5) Task A executes

6) puts another task block onto the concurrent queue via dispatch_sync. It does not return until this inner block is finished.
Note that it only refuses to return within the execution context of the OUTER block, i.e. it blocks the thread running the outer block.
The main thread is untouched, and that’s why the UI is still responsive.

7) dispatch_sync has not returned yet, so next we print — INNER BLOCK START —
8) Task B executes
9) prints — INNER BLOCK END —
10) prints — OUTER BLOCK END —
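A Python model of async-nesting-sync on a concurrent pool; note that where the inner .result() blocks, it only stalls the worker thread running the outer block (helpers invented):

```python
from concurrent.futures import ThreadPoolExecutor
import time

queue = ThreadPoolExecutor(max_workers=4)  # plays the concurrent queue
log = []

def task(name):
    log.append(f"=== {name} start ===")
    for i in range(3):
        time.sleep(0.01)
        log.append(f"TASK {name} - {i}")
    log.append(f"=== {name} end ===")

def inner():
    log.append("--- INNER BLOCK START ---")
    task("B")
    log.append("--- INNER BLOCK END ---")

def outer():
    log.append("--- OUTER BLOCK START ---")
    task("A")
    queue.submit(inner).result()            # sync: blocks THIS worker thread only
    log.append("--- OUTER BLOCK END ---")   # so inner must end before outer ends

log.append("--- START ---")
pending = queue.submit(outer)   # dispatch_async: the caller returns immediately
log.append("--- END ---")       # the "UI" thread was never blocked
pending.result()                # join only so the demo can inspect the log
```

No deadlock on a concurrent queue: a second worker is free to run inner while the first worker waits inside outer.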

Sync nest Async on Concurrent Queue

1) logs — START —
2) dispatch_sync a block task onto the concurrent queue, we do not return until this whole thing is done. UI not responsive
3) prints — OUTER BLOCK START —
4) Task A executes
5) dispatch_async a 2nd block onto the concurrent queue. The async returns immediately.
6) prints — OUTER BLOCK END –.
7) The 1st task block finishes, and dispatch_sync returns.
8) prints — END —
9) prints — INNER BLOCK START —
10) Task B executes
11) prints — INNER BLOCK END —

2 serial queues

Say it takes 10 seconds to complete a DB operation.

Say I have a 1st serial queue. I use dispatch_async to quickly throw tasks on there without waiting.
Then I have a 2nd serial queue. I do the same.

When they execute, the 2 serial queues will be running at the same time. In a situation where you have a DB resource, having ONE serial queue makes it thread safe, as all tasks line up on that one queue.

But what if someone else spawns a SECOND serial queue? Those 2 serial queues will be accessing the DB resource at the same time!

—  Task TASK A start  —
—  Task TASK B start  —
TASK B – 0
TASK A – 0
TASK B – 1
TASK A – 1
TASK B – 2

As you can see both operations are writing to the DB at the same time.

If you were to use dispatch_sync instead:

dispatch_sync will not return until the current task block is finished. The good thing about it is that the DB operation on serial queue ONE can finish without the DB operation on serial queue TWO starting.

That’s because dispatch_sync on serial queue ONE blocks the main thread, and the main thread is the one that would enqueue the next task onto serial queue TWO.

TASK A – 6
TASK A – 7
TASK A – 8
TASK A – 9
^^^ Task TASK A END ^^^
—  Task TASK B start  —
TASK B – 0
TASK B – 1
TASK B – 2

However, we are also blocking the main thread, because we’re calling dispatch_sync from the main queue! -.-
In order not to block the main thread, we want the waiting to happen on another queue that runs concurrently with the main queue.
Thus, we just throw everything inside of a concurrent queue.

Our concurrent queue runs alongside the main queue, thus the UI is responsive.
The blocking on our DB tasks happens within the context of our concurrent queue’s threads. It will hold up processing there,
but won’t touch the main queue. Thus, it won’t block the main thread.
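Putting the whole fix together as a Python model: one single-worker pool plays FMDatabaseQueue’s serial queue, a second multi-worker pool plays the background concurrent queue, and in_database/handle_notification are invented stand-ins for the real FMDB calls:

```python
from concurrent.futures import ThreadPoolExecutor

db_queue = ThreadPoolExecutor(max_workers=1)    # plays FMDatabaseQueue's serial queue
background = ThreadPoolExecutor(max_workers=4)  # plays the concurrent queue we add

rows = []   # stands in for the database table

def in_database(block):
    # like FMDB's inDatabase: dispatch_sync onto the serial queue,
    # so writes are serialized and never overlap
    db_queue.submit(block).result()

def handle_notification(n):
    # runs on a background worker, so the blocking wait inside
    # in_database never touches the main thread
    in_database(lambda: rows.append(n))

# main thread: fire-and-forget each notification; the "UI" stays responsive
for n in range(10):
    background.submit(handle_notification, n)

background.shutdown(wait=True)   # join for the demo only
print(sorted(rows))              # all 10 writes landed, none lost or doubled
```

The background workers race to reach the DB, but the single db_queue worker forces every write through one at a time, which is the thread safety FMDB’s serial queue provides, without the main thread ever waiting on it.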