Category Archives: C/C++/Objective-C

dispatch_async vs dispatch_sync

ref: http://stackoverflow.com/questions/4360591/help-with-multi-threading-on-ios

Xcode 7.3 sample code

The main reason why you want to use concurrent or serial queues over the main queue is to run tasks in the background.

Sync on a Serial Queue

dispatch_sync –

1) dispatch_sync means that the block is enqueued and will NOT continue enqueueing further tasks UNTIL the current task has been executed.
2) dispatch_sync is a blocking operation. It DOES NOT RETURN until its current task has been executed.
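Here is a minimal sketch of the kind of code being described. The queue name serialQueue2 comes from the walkthrough below; the runTask helper is an assumption that reconstructs the log lines shown in the output, and the whole thing is assumed to run on the main thread (e.g. from viewDidLoad or a button tap).

#import <Foundation/Foundation.h>

// Assumed helper: prints the "Task … start" line, the numbers 0 through 9 (pausing 1 s between each),
// and the "Task … END" line seen in the outputs throughout this post.
static void runTask(NSString *name) {
    NSLog(@"--- Task %@ start ---", name);
    for (int i = 0; i < 10; i++) {
        [NSThread sleepForTimeInterval:1.0];
        NSLog(@"%@ - %d", name, i);
    }
    NSLog(@"^^^ Task %@ END ^^^", name);
}

// On the main thread:
dispatch_queue_t serialQueue2 =
    dispatch_queue_create("com.example.serialQueue2", DISPATCH_QUEUE_SERIAL);

NSLog(@"--- START ---");
dispatch_sync(serialQueue2, ^{      // blocks the calling (main) thread until this block finishes
    NSLog(@"--- dispatch start ---");
    runTask(@"TASK B");
    NSLog(@"--- dispatch end ---");
});
NSLog(@"--- END ---");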

1) prints START
2) puts the block of code onto serialQueue2, then blocks. aka does not return
3) the block of code executes
4) When the block of code finishes, the dispatch_sync then returns, and we move on to the next instruction, which is prints END

output:

— START —
— dispatch start —
—  Task TASK B start  —
TASK B – 0
TASK B – 1


TASK B – 8
TASK B – 9
^^^ Task TASK B END ^^^
— dispatch end —
— END —

Because dispatch_sync does not return immediately, it also blocks the main thread. Thus, try playing around with your UISlider: it is not responsive.

dispatch_async means that the block is enqueued and the call RETURNS IMMEDIATELY, letting the next commands execute and the main thread keep processing.

ASYNC on a Serial Queue

Let’s start with a very simple example:
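A sketch of that example, reusing the serialQueue2 and runTask assumptions from the first sketch:

NSLog(@"--- START ---");
dispatch_async(serialQueue2, ^{     // returns immediately; the block runs later on the serial queue
    NSLog(@"--- dispatch start ---");
    runTask(@"TASK B");
    NSLog(@"--- dispatch end ---");
});
NSLog(@"--- END ---");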

This means:

1) prints START
2) we dispatch a block onto serialQueue2, then return control immediately.
3) Because dispatch_async returns immediately, we continue down to the next instruction, which prints END
4) the dispatched block starts executing

— START —
— END —
— dispatch start —
—  Task TASK B start  —
TASK B – 0


TASK B – 9
^^^ Task TASK B END ^^^
— dispatch end —

If you look at your UISlider, it is still responsive.

Slider code
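The slider itself does nothing special. A sketch of the kind of hookup used to test responsiveness (the action name is an assumption; wire the slider's Value Changed event to it):

// If the main thread is free, dragging the slider logs continuously;
// if the main thread is blocked, the thumb freezes and nothing logs.
- (IBAction)sliderMoved:(UISlider *)sender {
    NSLog(@"slider value: %f", sender.value);
}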

sync-ed

dispatch_sync means that the block is enqueued and will NOT continue enqueueing further tasks UNTIL the current task has been executed.

Now let’s dispatch the first task (printing numerics) in a sync-ed fashion. This means that we put the task on the queue. Then while the queue is processing that task, it WILL NOT queue further tasks until this current task is finished.

The sleep calls just make the task take a while. Only when we finish printing out all the numerics does the queue move on to execute the next task, which prints out the alphabets.

Let’s throw a button in there, and have it display numbers. We’ll do one dispatch_sync first.
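A sketch of that button handler (names assumed; serialQueue2 is assumed to be stored in an ivar or property, and the number/letter loops stand in for the post's two tasks):

- (IBAction)buttonPressed:(id)sender {
    // dispatch_sync from the main thread: this handler (and the main thread) is stuck here
    // until the numerics block has finished.
    dispatch_sync(serialQueue2, ^{
        for (int i = 0; i < 10; i++) {
            [NSThread sleepForTimeInterval:1.0];
            NSLog(@"number %d", i);
        }
    });
    // The alphabet task goes on the same serial queue, so it only runs after the numerics task.
    // (Whether the original used async or sync for this second dispatch is an assumption.)
    dispatch_async(serialQueue2, ^{
        for (char c = 'a'; c <= 'j'; c++) {
            [NSThread sleepForTimeInterval:1.0];
            NSLog(@"letter %c", c);
        }
    });
}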

You’ll notice that the slider is unresponsive. That’s because, by definition, dispatch_sync blocks the calling thread (here, the main thread) until its block has been executed. It does not return until it has finished its own task.

Async, Sync, on Concurrent Queue

Now let’s dispatch_async first. Then we’ll dispatch_sync. What happens here is that:

1) prints — START —
2) we dispatch the block TASK A onto the concurrent queue, control returns immediately. prints === A start ===

3) Then we dispatch_sync TASK B on the same queue, it does not return and blocks because this task needs to complete before we relinquish control.
task B starts, prints === B start ===

The main UI is now blocked by Task B’s dispatch_sync.

4) Since Task A was executing before Task B, it will run along with B. Both will run at the same time because our queue is concurrent.

5) both tasks finish and print === A end === and === B end ===

6) control returns ONLY WHEN Task B finishes, then Task B’s dispatch_sync returns control, and we can move on to the next instruction, which is log — END –.
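A sketch of the code these steps walk through, run on the main thread (the queue name and the runTask helper are assumed, as before):

dispatch_queue_t concurrentQueue =
    dispatch_queue_create("com.example.concurrentQueue", DISPATCH_QUEUE_CONCURRENT);

NSLog(@"--- START ---");
dispatch_async(concurrentQueue, ^{   // returns right away
    NSLog(@"=== A start ===");
    runTask(@"TASK A");
    NSLog(@"=== A end ===");
});
dispatch_sync(concurrentQueue, ^{    // blocks the main thread until Task B's block finishes
    NSLog(@"=== B start ===");
    runTask(@"TASK B");
    NSLog(@"=== B end ===");
});
NSLog(@"--- END ---");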

output:

— START —
2016-08-25 14:26:13.638 sync_async[29186:3817447] === A start ===
2016-08-25 14:26:13.638 sync_async[29186:3817414] === B start ===
2016-08-25 14:26:13.638 sync_async[29186:3817447] —  Task TASK A start  —
2016-08-25 14:26:13.638 sync_async[29186:3817414] —  Task TASK B start  —
2016-08-25 14:26:14.640 sync_async[29186:3817414] TASK B – 0
2016-08-25 14:26:14.640 sync_async[29186:3817447] TASK A – 0


2016-08-25 14:26:21.668 sync_async[29186:3817447] TASK A – 7
2016-08-25 14:26:21.668 sync_async[29186:3817414] TASK B – 7
2016-08-25 14:26:21.668 sync_async[29186:3817447] ^^^ Task TASK A END ^^^
2016-08-25 14:26:21.668 sync_async[29186:3817414] ^^^ Task TASK B END ^^^
2016-08-25 14:26:21.668 sync_async[29186:3817447] === A end ===
2016-08-25 14:26:21.668 sync_async[29186:3817414] === B end ===
2016-08-25 14:26:21.668 sync_async[29186:3817414] — END —

Sync, Async, on Concurrent Queue

If we were to run it sync, then async:

1) prints START
2) dispatch_sync Task A onto the concurrent queue. The sync causes us to block, i.e. NOT RETURN, until the task finishes. Thus, at this point the UI is unresponsive.
3) prints === A start ===
4) Task A is executing.

5) prints === A end ===
6) Now, we dispatch another task via dispatch_async onto the concurrent queue. Control returns immediately, and we move on to the next instruction, which prints — END —. At this point the UI is responsive again.

7) Due to control returning immediately at 6)’s dispatch_async, we print — END —
8) the task starts and prints === B start ===, and Task B executes.
9) Task B finishes, and we print === B end ===
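The same sketch with the two dispatches swapped (sync first, then async) to match the steps above:

NSLog(@"--- START ---");
dispatch_sync(concurrentQueue, ^{    // main thread blocked until Task A finishes
    NSLog(@"=== A start ===");
    runTask(@"TASK A");
    NSLog(@"=== A end ===");
});
dispatch_async(concurrentQueue, ^{   // returns immediately
    NSLog(@"=== B start ===");
    runTask(@"TASK B");
    NSLog(@"=== B end ===");
});
NSLog(@"--- END ---");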

output:

— START —
=== A start ===
—  Task TASK A start  —
TASK A – 0

TASK A – 7
^^^ Task TASK A END ^^^
=== A end ===
— END —
=== B start ===
—  Task TASK B start  —
TASK B – 0
TASK B – 1

TASK B – 6
TASK B – 7
^^^ Task TASK B END ^^^
=== B end ===

Async on Serial Queues

However, if we were to use a serial queue, each task would finish executing, before going on to the next one.
Hence Task A would have to finish, then Task B can start.

1) prints — START —
2) dispatch task block onto serial queue via dispatch_async. Returns immediately. UI responsive.
3) prints — END — due to execution continuing
4) prints == START == as this block starts to execute
5) Task A executes
6) prints == END == task block finishes

output:

— START —
— END —
== START ==
—  Task TASK A start  —
TASK A – 0
TASK A – 1
TASK A – 2
TASK A – 3
TASK A – 4
TASK A – 5
TASK A – 6
TASK A – 7
TASK A – 8
TASK A – 9
^^^ Task TASK A END ^^^
== END ==

Sync on Serial Queue

1) log — START —
2) puts the task block onto the serial queue via dispatch_sync. Does not return control until the task finishes. Thus, the UI (the main thread) is blocked.
3) log == START == as the task block starts
4) Task A executes
5) log == END == as the task block ends
6) Task block is finished, so dispatch_sync relinquishes control, thus UI is responsive again. log — END –.

— START —
== START ==
—  Task TASK A start  —
TASK A – 0
TASK A – 1
TASK A – 2

TASK A – 9
^^^ Task TASK A END ^^^
== END ==
— END —

Serial Queue – Async, then Sync

The correct way to think about it is that the serial queue schedules Task A and Task B to execute one by one. The effects of dispatch_async and dispatch_sync are immediate:

1) dispatch_async Task A – Task A gets queued. The serial queue starts scheduling work for Task A. Execution control continues because dispatch_async returns right away.

2) dispatch_sync Task B – Task B gets queued. The serial queue is working on Task A, and thus, by definition of a serial queue, Task B must wait for Task A to finish before it can run. However, dispatch_sync takes effect immediately: it blocks the calling (main) thread, and with it anything behind Task B that would have been queued.

Hence, in the situation created by 1) and 2), Task A is executing, Task B is waiting for Task A to finish, and dispatch_sync is blocking the main thread. That is why your UISlider is not responsive.
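A sketch of this async-then-sync case on the serial queue; both blocks run the same assumed TASK A helper, which is why the output below shows TASK A twice:

NSLog(@"--- START ---");
dispatch_async(serialQueue2, ^{      // queued; returns immediately
    NSLog(@"== START ==");
    runTask(@"TASK A");
    NSLog(@"== END ==");
});
dispatch_sync(serialQueue2, ^{       // must wait behind the async block above, and blocks the main thread while waiting
    NSLog(@"== START ==");
    runTask(@"TASK A");
    NSLog(@"== END ==");
});
NSLog(@"--- END ---");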

output:

— START —
== START ==
—  Task TASK A start  —
TASK A – 0
TASK A – 1
TASK A – 2
TASK A – 3
TASK A – 4

TASK A – 9
^^^ Task TASK A END ^^^
== END ==
== START ==
—  Task TASK A start  —
TASK A – 0

TASK A – 9
^^^ Task TASK A END ^^^
== END ==
— END —

Serial Queue – Sync, then Async

The first dispatch_sync blocks the calling (main) thread and any blocks queued behind it. Hence the UI is unresponsive.
Block A runs. When it finishes, dispatch_sync returns. Task B then starts via dispatch_async, which returns immediately.
Thus, the UI is NOT responsive while Task A is running. When Task A finishes, by definition of the serial queue, Task B gets to run. Because Task B was submitted via dispatch_async, the UI is responsive again.

Nested dispatches

Async nest Async on a Serial Queue

1) prints — START —
2) dispatch async a block task onto the serial queue. It returns right away, does not block UI. Execution continues.
3) Execution continues, prints — END —

4) the block task starts to execute. prints — OUTER BLOCK START —
5) Task A executes and prints its stuff

6) dispatch async another block onto the same serial queue. It returns execution right away, does not block UI. Execution continues.
7) Execution continues, prints — OUTER BLOCK END —.

8) The inner block starts processing on the serial queue. prints — INNER BLOCK START —
9) Task B executes and prints stuff
10) prints — INNER BLOCK END —
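A sketch of this nesting (same assumed serial queue and helper):

NSLog(@"--- START ---");
dispatch_async(serialQueue2, ^{              // outer block; returns immediately
    NSLog(@"--- OUTER BLOCK START ---");
    runTask(@"TASK A");
    dispatch_async(serialQueue2, ^{          // inner block; queued behind the outer one, returns immediately
        NSLog(@"--- INNER BLOCK START ---");
        runTask(@"TASK B");
        NSLog(@"--- INNER BLOCK END ---");
    });
    NSLog(@"--- OUTER BLOCK END ---");
});
NSLog(@"--- END ---");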

Result:

— START —
— OUTER BLOCK START —
— END —
—  Task TASK A start  —
TASK A – 0

TASK A – 9
^^^ Task TASK A END ^^^
— OUTER BLOCK END —
— INNER BLOCK START —
—  Task TASK B start  —
TASK B – 0

TASK B – 9
^^^ Task TASK B END ^^^
— INNER BLOCK END —

Async nest Sync on a Serial Queue – DEADLOCK!

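A sketch of the pattern being described; the numbered comments are chosen so that // 2 and // 6 line up with the references below (queue name and helper assumed, as before):

// 1
dispatch_queue_t serialQueue2 =
    dispatch_queue_create("com.example.serialQueue2", DISPATCH_QUEUE_SERIAL);
// 2 : the outer task block is queued onto the serial queue
dispatch_async(serialQueue2, ^{
    // 3
    NSLog(@"--- OUTER BLOCK START ---");
    // 4
    runTask(@"TASK A");
    // 5
    NSLog(@"--- OUTER BLOCK END ---");
    // 6 : a second block is queued onto the SAME serial queue, synchronously
    dispatch_sync(serialQueue2, ^{
        NSLog(@"--- INNER BLOCK START ---");
        runTask(@"TASK B");
        NSLog(@"--- INNER BLOCK END ---");
    });
    // dispatch_sync waits for the inner block, but the serial queue cannot start it
    // until the outer block (this one) finishes -> deadlock
});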

Notice that we’re on a serial queue. Which means the queue must finish the current task, before moving on to the next one.
The key idea here is that the task block queued at // 2 must complete before any other task on the queue can start.

At // 6, we put another task block onto the same queue, but because of dispatch_sync, we don’t return. We only return when the block at // 6 finishes executing.

But how can the 1st task block at // 2 finish, if it is blocked waiting on the 2nd task block at // 6?

This is what leads to the deadlock.

Sync nest Async on a Serial Queue

1) log — START —
2) sync task block onto the queue, blocks UI
3) log — OUTER BLOCK START —
4) Task A processes and finishes
5) dispatch_async another task block onto the queue; the UI is still blocked from 2)’s sync. However, execution moves forward within the outer block because dispatch_async returns immediately.
6) execution moves forward and log — OUTER BLOCK END —
7) outer block finishes execution, dispatch_sync returns. UI has control again. log — END —
8) log — INNER BLOCK START —
9) Task B executes
10) log — INNER BLOCK END —
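A sketch of this case (sync outer, async inner, same serial queue):

NSLog(@"--- START ---");
dispatch_sync(serialQueue2, ^{               // blocks the main thread until the outer block is done
    NSLog(@"--- OUTER BLOCK START ---");
    runTask(@"TASK A");
    dispatch_async(serialQueue2, ^{          // queued; returns immediately
        NSLog(@"--- INNER BLOCK START ---");
        runTask(@"TASK B");
        NSLog(@"--- INNER BLOCK END ---");
    });
    NSLog(@"--- OUTER BLOCK END ---");
});
NSLog(@"--- END ---");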

Async nest Async on Concurrent Queue

1) log –START–
2) dispatch_async puts block task onto the concurrent queue. Does not block, returns immediately.
3) execution continues, and we log — END —
4) queue starts processing the task block from //2. prints — OUTER BLOCK START —
5) Task A executes
6) dispatch_async puts another block task onto the concurrent queue. Now there are 2 blocks on the queue. Does not block, returns immediately.
7) prints — OUTER BLOCK END –, task block #1 is done and de-queued.
8) prints — INNER BLOCK START —
9) Task B executes
10) prints — INNER BLOCK END —

Async nest Sync on Concurrent Queue

1) prints –START–
2) puts block task on concurrent queue. returns immediately so UI and other queues can process
3) since execution immediately returns, we print — END —

4) prints — OUTER BLOCK START —
5) Task A executes

6) puts another task block onto the concurrent queue via dispatch_sync. It returns ONLY when this inner block has finished.
Note that it blocks only the current execution context (the outer block’s thread), NOT the outer scope.
The main queue can still process, which is why the UI is still responsive.

7) the inner block runs synchronously before dispatch_sync returns, so we print — INNER BLOCK START —
8) Task B executes
9) prints — INNER BLOCK END —
10) prints — OUTER BLOCK END —
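A sketch of this case (async outer, sync inner, on the concurrent queue):

NSLog(@"--- START ---");
dispatch_async(concurrentQueue, ^{           // returns immediately; the main thread keeps running
    NSLog(@"--- OUTER BLOCK START ---");
    runTask(@"TASK A");
    dispatch_sync(concurrentQueue, ^{        // blocks only the outer block's worker thread, not the main thread
        NSLog(@"--- INNER BLOCK START ---");
        runTask(@"TASK B");
        NSLog(@"--- INNER BLOCK END ---");
    });
    NSLog(@"--- OUTER BLOCK END ---");
});
NSLog(@"--- END ---");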

Sync nest Async on Concurrent Queue

1) logs — START —
2) dispatch_sync a block task onto the concurrent queue, we do not return until this whole thing is done. UI not responsive
3) prints — OUTER BLOCK START —
4) Task A executes
5) dispatch_async a 2nd block onto the concurrent queue. The async returns immediately.
6) prints — OUTER BLOCK END –.
7) The 1st task block finishes, and dispatch_sync returns.
8) prints — END —
9) prints — INNER BLOCK START —
10) Task B executes
11) prints — INNER BLOCK END —

2 serial queues

Say it takes 10 seconds to complete a DB operation.

Say I have 1st serial queue. I use dispatch_async to quickly throw tasks on there without waiting.
Then I have a 2nd serial queue. I do the same.

When they execute, the 2 serial queues will be running at the same time. In a situation where you have
a DB resource, having ONE serial queue makes access thread safe, since every access is funneled through that single queue.

But what if someone else spawns a SECOND serial queue? Those 2 serial queues will be accessing the DB resource
at the same time!
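A sketch of the two-serial-queue situation (queue names and the runTask helper are assumed; pretend each task is the 10-second DB write):

dispatch_queue_t dbQueue1 = dispatch_queue_create("com.example.db1", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t dbQueue2 = dispatch_queue_create("com.example.db2", DISPATCH_QUEUE_SERIAL);

dispatch_async(dbQueue1, ^{ runTask(@"TASK A"); });   // "DB write" on serial queue ONE
dispatch_async(dbQueue2, ^{ runTask(@"TASK B"); });   // "DB write" on serial queue TWO, runs at the same time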

—  Task TASK A start  —
—  Task TASK B start  —
TASK B – 0
TASK A – 0
TASK B – 1
TASK A – 1
TASK B – 2

As you can see both operations are writing to the DB at the same time.

If you were to use dispatch_sync instead:
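The same two queues, but submitted with dispatch_sync from the calling thread:

dispatch_sync(dbQueue1, ^{ runTask(@"TASK A"); });    // the calling thread waits here until A is done
dispatch_sync(dbQueue2, ^{ runTask(@"TASK B"); });    // B is not even submitted until the line above returns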

dispatch_sync will not return until the current task block is finished. The good thing about it is that the DB operation on serial queue ONE can finish without the DB operation on serial queue TWO starting.

Because dispatch_sync on serial queue ONE blocks the calling thread, the block for serial queue TWO is not even submitted until queue ONE’s block is done.

TASK A – 6
TASK A – 7
TASK A – 8
TASK A – 9
^^^ Task TASK A END ^^^
—  Task TASK B start  —
TASK B – 0
TASK B – 1
TASK B – 2

However, we are also blocking the main thread, because we are calling dispatch_sync from the main thread!
In order not to block the main thread, we want to do this work on another queue that runs concurrently with the main queue.
Thus, we just throw everything inside a concurrent queue.
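A sketch of wrapping the whole thing in the concurrent queue from the earlier sketches, so the main thread stays free:

dispatch_async(concurrentQueue, ^{                    // everything below now happens off the main thread
    dispatch_sync(dbQueue1, ^{ runTask(@"TASK A"); });
    dispatch_sync(dbQueue2, ^{ runTask(@"TASK B"); });
});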

Our concurrent queue runs alongside the main queue, so the UI stays responsive.
The blocking of our DB tasks happens within the context of our concurrent queue. It blocks processing there,
but never touches the main queue, and thus never blocks the main thread.

Dismiss keyboard on touch anywhere outside UITextField

http://stackoverflow.com/questions/5306240/iphone-dismiss-keyboard-when-touching-outside-of-uitextfield

In viewDidLoad
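A sketch of the gesture hookup (the selector name is an assumption):

- (void)viewDidLoad {
    [super viewDidLoad];
    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(dismissKeyboard)];
    [self.view addGestureRecognizer:tap];
}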

In dismissKeyboard:
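And the handler; ending editing on the whole view resigns whichever text field is currently first responder:

- (void)dismissKeyboard {
    [self.view endEditing:YES];   // or [self.nameTextField resignFirstResponder]; (outlet name assumed)
}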

property declaration without synthesize

http://www.techrepublic.com/blog/software-engineer/what-you-need-to-know-about-automatic-property-synthesis/

.m

  • By prefixing playerName with self. you will access the property
  • without it you will directly reference the instance variable

Hence self.playerName = @"hehe" will go through the (automatically generated or custom) setter method,

whereas playerName = @"hehe" will literally use the instance variable.

No Synthesize

Without @synthesize probability in the .m file,

_probability is the iVar
self.probability means to access the automatically generated get/set methods

try it
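A tiny sketch of the auto-synthesis case (class and method names assumed):

// Player.h
@interface Player : NSObject
@property (nonatomic) float probability;   // no @synthesize anywhere in the .m
@end

// Player.m
@implementation Player
- (void)tryIt {
    _probability = 0.25f;         // direct access to the auto-generated backing ivar
    self.probability = 0.5f;      // goes through the auto-generated setter
    NSLog(@"%f", _probability);   // prints 0.500000
}
@end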

Custom UITableViewCell

JSCustomCell.h
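A sketch of what the custom cell header might have looked like (the label property is an assumption):

// JSCustomCell.h
#import <UIKit/UIKit.h>

@interface JSCustomCell : UITableViewCell
@property (nonatomic, strong) UILabel *titleLabel;
@end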

JSCustomCell.m
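And a matching implementation sketch:

// JSCustomCell.m
#import "JSCustomCell.h"

@implementation JSCustomCell
- (instancetype)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier {
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (self) {
        // add the custom subview to the cell's content view
        _titleLabel = [[UILabel alloc] initWithFrame:CGRectMake(15, 10, 280, 25)];
        [self.contentView addSubview:_titleLabel];
    }
    return self;
}
@end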

result:

ViewController.h
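A sketch of the view controller header (the outlet is assumed to be wired up in the storyboard):

// ViewController.h
#import <UIKit/UIKit.h>

@interface ViewController : UIViewController <UITableViewDataSource, UITableViewDelegate>
@property (nonatomic, weak) IBOutlet UITableView *tableView;
@end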

ViewController.m
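And a sketch of how the custom cell gets dequeued and configured (the row count and text are placeholders):

// ViewController.m
#import "ViewController.h"
#import "JSCustomCell.h"

@implementation ViewController

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return 10;
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *cellId = @"JSCustomCell";
    JSCustomCell *cell = [tableView dequeueReusableCellWithIdentifier:cellId];
    if (!cell) {
        cell = [[JSCustomCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:cellId];
    }
    cell.titleLabel.text = [NSString stringWithFormat:@"Row %ld", (long)indexPath.row];
    return cell;
}

@end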

Protocol

1) Protocol definition

Given protocol in DataModel.h, we have:
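A sketch of the protocol (the exact method list is an assumption; showMessageBox: is the one used later in this post):

// DataModel.h
#import <Foundation/Foundation.h>

@protocol UpdateViewDelegate <NSObject>
@optional
- (void)showMessageBox:(NSString *)message;
@end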

2) Define the protocol delegate in delegator (source) class

Our DataModel class is the delegator. It is the “source” of all of our delegations.

The delegate “delegates” the messages sent by DataModel (the source) to whatever view class/controller (the destination) the delegate points to. Hence, that view class/controller will have to implement the delegate method(s).
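A sketch of the delegate property on the delegator:

// DataModel.h (continued)
@interface DataModel : NSObject
@property (nonatomic, weak) id<UpdateViewDelegate> delegate;   // whoever sets this receives the messages
@end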

3) Implement protocol method in delegator(source) class

In DataModel.m, we have:
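A sketch of the call, guarded by respondsToSelector: since the protocol methods are optional:

// DataModel.m, somewhere after a piece of work finishes
if ([self.delegate respondsToSelector:@selector(showMessageBox:)]) {
    [self.delegate showMessageBox:@"data updated"];   // message text is a placeholder
}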

..this means, whatever object responds (or has the implementation) to what our delegate has asked for in the protocol declaration, we make that object take care of it.

4) Conform to the protocol in the view/controller (delegatee) class

Hence, let’s say we have RegistrationViewController. We make it conform to our protocol UpdateViewDelegate like so:
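A sketch:

// RegistrationViewController.h
#import <UIKit/UIKit.h>
#import "DataModel.h"

@interface RegistrationViewController : UIViewController <UpdateViewDelegate>
@end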

Which means this class (RegistrationViewController) will have to implement the delegate methods here.

And since all of UpdateViewDelegate’s methods are optional, we can choose which ones to implement. In our example, let’s implement showMessageBox:
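A sketch of the implementation (the body is a placeholder; it could just as well present a UIAlertController):

// RegistrationViewController.m
- (void)showMessageBox:(NSString *)message {
    NSLog(@"delegate says: %@", message);
}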

However, those implemented UpdateViewDelegate methods won’t be of any use if no delegate messages get passed here. Hence, we need a delegator class that exposes an UpdateViewDelegate delegate (our DataModel) and points it at us.

We do so by using the delegator class inside our RegistrationViewController (the delegatee): we keep a DataModel object (which declares UpdateViewDelegate and exposes a delegate property) in our class, then assign that delegate to self.
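A sketch of that hookup (the property name is an assumption):

// RegistrationViewController.m
@interface RegistrationViewController ()
@property (nonatomic, strong) DataModel *dataModel;   // hold on to the delegator
@end

@implementation RegistrationViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.dataModel = [[DataModel alloc] init];
    self.dataModel.delegate = self;   // DataModel's delegated messages now arrive in this controller
}

@end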

Now, whenever DataModel object has delegated messages to our RegistrationViewController, the RegistrationViewController’s protocol methods will be able to take care of it.

char arrays and null terminated strings

Character arrays are designated by the char type: char name[number_of_elements];

Hence, we make an array of characters called “myName” with 10 elements:
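For example:

char myName[10] = "Ricky";   // 'R','i','c','k','y','\0' plus four more '\0' fillers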

(diagram: memory layout of the myName char array)

Any leftover elements that are not taken up by the initialization will be filled with the null character '\0' (NULL).

Using pointer to display string

By itself, myName decays to a pointer to the first element of the array. When we use cout to display char arrays, it displays all the data from that pointer up until the terminating NULL. Since that pointer points at the beginning of the array (R), cout will display the full name Ricky.

Adding a byte (1) to the pointer will display starting one character further down the array. Note that a char is one byte.
Adding n bytes will display starting n characters further down the array.
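A sketch of the pointer arithmetic that produces the result below (assuming #include <iostream> and the myName array from above):

std::cout << "my name is: " << myName << std::endl;       // Ricky
std::cout << "my name is: " << myName + 1 << std::endl;   // icky
std::cout << "my name is: " << myName + 2 << std::endl;   // cky
std::cout << "my name is: " << myName + 3 << std::endl;   // ky
std::cout << "my name is: " << myName + 4 << std::endl;   // y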

Result:
my name is: Ricky
my name is: icky
my name is: cky
my name is: ky
my name is: y

Using address of element to display string

myName[0] gets you the first element ‘R’.

If you use &myName[0], it gets you the ADDRESS of the first element. cout will then display all data from that address up to a NULL, so &myName[0] also displays ‘Ricky’.
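A sketch matching the result below:

std::cout << "&myName[0] displays: " << &myName[0] << ", myName[0] is: " << myName[0] << std::endl;
std::cout << "&myName[1] displays: " << &myName[1] << ", myName[1] is: " << myName[1] << std::endl;
std::cout << "&myName[2] displays: " << &myName[2] << ", myName[2] is: " << myName[2] << std::endl;
std::cout << "&myName[3] displays: " << &myName[3] << ", myName[3] is: " << myName[3] << std::endl;
std::cout << "&myName[4] displays: " << &myName[4] << ", myName[4] is: " << myName[4] << std::endl;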

Result:
&myName[0] displays: Ricky, myName[0] is: R
&myName[1] displays: icky, myName[1] is: i
&myName[2] displays: cky, myName[2] is: c
&myName[3] displays: ky, myName[3] is: k
&myName[4] displays: y, myName[4] is: y

What if the element is NULL?

A better way to display an element is to check for NULL

In our case index 5 and on would give us a NULL.
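A sketch of such a check:

for (int i = 0; i < 10; i++) {
    if (myName[i] == '\0') {
        std::cout << "index " << i << " is NULL" << std::endl;
    } else {
        std::cout << "index " << i << " is: " << myName[i] << std::endl;
    }
}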

Getting the address

Use a void pointer to get the memory location.

std::cout << (void*)&myName[0] << std::endl;
std::cout << (void*)&myName[1] << std::endl;
std::cout << (void*)&myName[2] << std::endl;
std::cout << (void*)&myName[3] << std::endl;
std::cout << (void*)&myName[4] << std::endl;
std::cout << (void*)&myName[5] << std::endl;

Result:

0x7fff5fbff86e
0x7fff5fbff86f
0x7fff5fbff870
0x7fff5fbff871
0x7fff5fbff872
0x7fff5fbff873

Remember that hex runs 0 - 9, a - f, and each increment is one byte. Hence:

R is at address 0x7fff5fbff86e
i is at address 0x7fff5fbff86f
c is at address 0x7fff5fbff870
k is at address 0x7fff5fbff871

where each char is a byte. Also notice that the string-terminating NULL has an address of its own, at 0x7fff5fbff873.

void pointer

A void pointer, (void *) is a raw pointer to some memory location.
The type void * simply means “a pointer into memory; I don’t know what sort of data is there”.

note:
When you stream the address of a char to an ostream, it interprets that as being the address of the first character of an ASCIIZ “C-style” string, and tries to print the presumed string. You don’t have a NUL terminator, so the output will keep trying to read from memory until it happens to find one or the OS shuts it down for trying to read from an invalid address. All the garbage it scans over will be sent to your output.

When you take the address of a char variable, you get a char *.
operator<< interprets that as a C string, and tries to print a character sequence instead of its address.
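A tiny illustration of that note (assuming a local char variable and #include <iostream>):

char letter = 'A';
std::cout << &letter << std::endl;          // char*: treated as a C string, prints 'A' and then whatever bytes follow until a 0 byte turns up
std::cout << (void*)&letter << std::endl;   // void*: prints the actual address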

Arrays

An array is a series of elements of the same type placed in contiguous memory locations that can be individually referenced by adding an index to a unique identifier.

That means that, for example, five values of type int can be declared as an array without having to declare 5 different variables (each with its own identifier). Instead, using an array, the five int values are stored in contiguous memory locations, and all five can be accessed using the same identifier, with the proper index.

For example, an array containing 5 integer values of type int called foo can be pictured as five consecutive panels (cells), each holding one value of type int. These elements are numbered from 0 to 4, with 0 being the first and 4 the last. In C++, the first element in an array is always numbered zero (not one), no matter its length.

Like a regular variable, an array must be declared before it is used. A typical declaration for an array in C++ is:

type name [elements];

where type is a valid type (such as int, float…), name is a valid identifier and the elements field (which is always enclosed in square brackets []), specifies the length of the array in terms of the number of elements.

Therefore, the foo array, with five elements of type int, can be declared as:
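That is:

int foo [5];   // five ints, uninitialized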

Initializing Arrays

By default, regular arrays of local scope (for example, those declared within a function) are left uninitialized. This means that none of its elements are set to any particular value; their contents are undetermined at the point the array is declared.

But the elements in an array can be explicitly initialized to specific values when it is declared, by enclosing those initial values in braces {}. For example:

int foo [5] = { 16, 2, 77, 40, 12071 };

This statement declares an array that can be represented like this:

The number of values between braces {} shall not be greater than the number of elements in the array. For example, in the example above, foo was declared having 5 elements (as specified by the number enclosed in square brackets, []), and the braces {} contained exactly 5 values, one for each element. If declared with fewer values, the remaining elements are set to their default values (which, for fundamental types, means they are filled with zeroes). For example:

int bar [5] = { 10, 20, 30 };

Will create an array like this: