
XML

https://www.w3schools.com/xml/

XML stands for eXtensible Markup Language.

XML was designed to store and transport data.

XML was designed to be both human- and machine-readable.

XML stands for eXtensible Markup Language
XML is a markup language much like HTML
XML was designed to store and transport data
XML was designed to be self-descriptive
XML is a W3C Recommendation

XML Does Not DO Anything

XML was created to structure, store, and transport information, like this note from Jani to Tove (a reconstruction of the classic note.xml example from the w3schools tutorial):
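<note>
  <to>Tove</to>
  <from>Jani</from>
  <heading>Reminder</heading>
  <body>Don't forget me this weekend!</body>
</note>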

It's just data. And the data looks like markup, similar to HTML.

XML is just information wrapped in tags.

Someone must write a piece of software to send, receive, store, or display it.

XML and HTML were designed with different goals:

XML was designed to carry data – with focus on what data is
HTML was designed to display data – with focus on how data looks
XML tags are not predefined like HTML tags are. In HTML, the ‘b’ tag means to bold whatever text is in between; XML tags carry no such predefined meaning.

The XML language has no predefined tags.

The tags in the example above (like ‘to’ and ‘from’) are not defined in any XML standard. These tags are “invented” by the author of the XML document.

HTML works with predefined tags like ‘p’, ‘h1’, ‘table’, etc.

With XML, the author must define both the tags and the document structure.

XML is Extensible
Most XML applications will work as expected even if new data is added (or removed).

Imagine an application designed to display the original version of note.xml (‘to’, ‘from’, ‘heading’, ‘body’).

Then imagine a newer version of note.xml with added ‘date’ and ‘hour’ elements, and a removed ‘heading’.

The way XML is constructed, an older version of the application can still work:

It will simply ignore the extra data.

XML Simplifies Things

It simplifies data sharing
It simplifies data transport
It simplifies platform changes
It simplifies data availability
Many computer systems contain data in incompatible formats. Exchanging data between incompatible systems (or upgraded systems) is a time-consuming task for web developers. Large amounts of data must be converted, and incompatible data is often lost.

XML stores data in plain text format. This provides a software- and hardware-independent way of storing, transporting, and sharing data.

XML also makes it easier to expand or upgrade to new operating systems, new applications, or new browsers, without losing data.

With XML, data can be available to all kinds of “reading machines” like people, computers, voice machines, news feeds, etc.

XML Separates Data from Presentation

XML does not carry any information about how to be displayed.

The same XML data can be used in many different presentation scenarios.

Because of this, with XML, there is a full separation between data and presentation.

XML is Often a Complement to HTML

In many HTML applications, XML is used to store or transport data, while HTML is used to format and display the same data.

XML Separates Data from HTML

When displaying data in HTML, you should not have to edit the HTML file when the data changes.

With XML, the data can be stored in separate XML files.

With a few lines of JavaScript code, you can read an XML file and update the data content of any HTML page.

Self-Describing Syntax

XML uses a self-describing syntax.
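For example, here is a reconstruction of the bookstore document the w3schools tutorial walks through (the values are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<bookstore>
  <book category="cooking">
    <title lang="en">Everyday Italian</title>
    <author>Giada De Laurentiis</author>
    <year>2005</year>
    <price>30.00</price>
  </book>
</bookstore>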

A prolog defines the XML version and the character encoding:

The next line is the root element of the document:

The next line starts a ‘book’ element:

The ‘book’ element has 4 child elements: ‘title’, ‘author’, ‘year’, ‘price’.

The next line ends the book element:

XML Documents Must Have a Root Element

XML documents must contain one root element that is the parent of all other elements:

The XML Prolog

This line is called the XML prolog:
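<?xml version="1.0" encoding="UTF-8"?>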

The XML prolog is optional. If it exists, it must come first in the document.

XML Tags are Case Sensitive

XML tags are case sensitive. The tag <Letter> is different from the tag <letter>.

Opening and closing tags must be written with the same case:
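For example:

<message>This is correct</message>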

XML Attribute Values Must be Quoted

XML elements can have attributes in name/value pairs just like in HTML.

In XML, the attribute values must always be quoted.

INCORRECT:
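<note date=12/11/2007>

CORRECT:

<note date="12/11/2007">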

Entity References

Some characters have a special meaning in XML.

If you place a character like "<" inside an XML element, it will generate an error, because the parser interprets it as the start of a new element. For example, this will generate an XML error:
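<message>salary < 1000</message>

To avoid this error, replace the "<" character with an entity reference:

<message>salary &lt; 1000</message>

There are five predefined entity references in XML: &lt; (<), &gt; (>), &amp; (&), &apos; ('), and &quot; (").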

What is an XML Element?

An XML element is everything from (and including) the element’s start tag to (and including) the element’s end tag.

An element can contain:

text
attributes
other elements
or a mix of the above


XML Elements vs. Attributes

Take a look at these examples:
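(Reconstructed along the lines of the w3schools person example; the names are illustrative.)

<person gender="female">
  <firstname>Anna</firstname>
  <lastname>Smith</lastname>
</person>

<person>
  <gender>female</gender>
  <firstname>Anna</firstname>
  <lastname>Smith</lastname>
</person>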

In the first example gender is an attribute. In the last, gender is an element. Both examples provide the same information.

There are no rules about when to use attributes or when to use elements in XML.

Avoid XML Attributes?

Some things to consider when using attributes are:

attributes cannot contain multiple values (elements can)
attributes cannot contain tree structures (elements can)
attributes are not easily expandable (for future changes)

Don’t end up like this:
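(A reconstruction of the w3schools warning example, with every piece of data crammed into attributes:)

<note day="12" month="11" year="2007"
      to="Tove" from="Jani" heading="Reminder"
      body="Don't forget me this weekend!">
</note>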

Namespace Declaration

A Namespace is declared using reserved attributes. Such an attribute name must either be xmlns or begin with xmlns:, as shown below:

Syntax
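<element xmlns:name="URL">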
The Namespace starts with the keyword xmlns.
The word name is the Namespace prefix.
The URL is the Namespace identifier.

Example
A Namespace affects only a limited area in the document. An element containing the declaration and all of its descendants are in the scope of the Namespace. Following is a simple example of an XML Namespace:
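(Reconstructed from the tutorialspoint example:)

<?xml version="1.0" encoding="UTF-8"?>
<cont:contact xmlns:cont="www.tutorialspoint.com/profile">
  <cont:name>Tanmay Patil</cont:name>
  <cont:company>TutorialsPoint</cont:company>
  <cont:phone>(011) 123-4567</cont:phone>
</cont:contact>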

Here, the Namespace prefix is cont, and the Namespace identifier (URI) is www.tutorialspoint.com/profile. This means that the element names and attribute names with the cont prefix (including the contact element) all belong to the www.tutorialspoint.com/profile namespace.

calling function on optional, optional initializer

Having optionals is very helpful: if you call a method on an optional that is nil (via optional chaining), it will not crash; the call is simply ignored and evaluates to nil.

In our case, we created a class where if its dictionary has valid entries, others can use that class and query it.
However, if that dictionary does not have anything, then execution would continue without a crash.

Quick note about dictionaries

Chapter 11: Dictionaries

A dictionary is an unordered collection that stores multiple values of the same type.

Each value from the dictionary is associated with a unique key. All the keys have the same type.

The type of a dictionary is determined by the type of its keys and the type of its values. A dictionary of type [String: Int] has keys of type String and values of type Int.

Declare Dictionaries
To declare a dictionary you can use the square brackets syntax ([KeyType: ValueType]).

You can access specific elements from a dictionary using the subscript syntax. To do this pass the key of the value you want to retrieve within square brackets immediately after the name of the dictionary.

Because it’s possible that no value is associated with the provided key (i.e., nil), the subscript returns an optional of the value type.

Hence, to unwrap the value returned by the subscript you can do one of two things: use optional binding, or force-unwrap the value if you know for sure it exists.
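A minimal sketch of all of the above (the names and values are illustrative):

var ages: [String: Int] = ["Anna": 30, "Brian": 42] // declared as [KeyType: ValueType]

// Subscripting returns an optional, because the key may be absent.
let maybeAge: Int? = ages["Anna"]
print(maybeAge as Any) // Optional(30)

// Option 1: optional binding.
if let age = ages["Anna"] {
    print("Anna is \(age)")
}

// Option 2: force-unwrap, only when you know the key exists.
print(ages["Brian"]!) // 42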

Example 1

If we have a valid property with which to initialize the dictionary, we return a valid self object. This lets others create and query our OutOfBoundsDictionary.

If our property name does not exist, then we do not want others to be able to query. Thus, the initializer carries a ? (a failable initializer) to denote that what it returns is an optional self. If the name does not exist, we return nil; and when we return nil, any function calls on that nil are simply ignored.

Thus, if name is initialized, execution runs through normally and prints a valid last name.
If the property name was not initialized, the initializer returns nil, and calling any functions on the result will be ignored.
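Here is a sketch of what such a class might look like. The post's own code is not shown, so the property and entries below are my invention, but the failable init? and optional chaining are the actual mechanics being described:

class OutOfBoundsDictionary {
    var people: [String: String] = [:] // firstName -> lastName

    // The '?' makes this a failable initializer: callers get an optional.
    init?(name: String?) {
        guard let name = name else { return nil } // no name: return nil
        people[name] = "Doe" // seed the dictionary with one (made-up) entry
    }

    func findPerson(firstName: String) -> String? {
        return people[firstName]
    }
}

// Valid name: we get a real object and can query it.
if let dict = OutOfBoundsDictionary(name: "John") {
    print(dict.findPerson(firstName: "John") ?? "not found") // "Doe"
}

// Missing name: init returns nil; chained calls are ignored, no crash.
let broken = OutOfBoundsDictionary(name: nil)
print(broken?.findPerson(firstName: "John") as Any) // nil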

Another way to do it

…is to have a standard (non-failable) initializer that always returns a valid self object. However, because the dictionary has no entries, when other objects try to use findPerson with a firstName, it returns nil, and thus nothing is printed.
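A sketch of that variant (again, the names are my invention):

class AlwaysValidDictionary {
    var people: [String: String] = [:] // a valid object, but no entries

    func findPerson(firstName: String) -> String? {
        return people[firstName] // nil: the dictionary is empty
    }
}

let d = AlwaysValidDictionary()
if let lastName = d.findPerson(firstName: "John") {
    print(lastName) // never reached, so nothing is printed
}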

Hash Table, prime numbers, and hash functions

basic hash table to Strings (xcode 8.3.3)
Hash table with a stack or queue (xcode 8.3.3)
HashTable with choice for data structure

http://www.partow.net/programming/hashfunctions/
https://www.quora.com/Why-are-prime-numbers-used-for-constructing-hash-functions
http://algs4.cs.princeton.edu/34hash/
Why do hash functions use prime numbers?
https://cs.stackexchange.com/questions/11029/why-is-it-best-to-use-a-prime-number-as-a-mod-in-a-hashing-function

Why do hash functions use prime numbers for the number of buckets?

Consider a hash function (or a set of numeric data) that gives you multiples of 10.

If we use a bucket count of, say, 4, we get:

10 mod 4 = 2

20 mod 4 = 0

30 mod 4 = 2

40 mod 4 = 0

50 mod 4 = 2

So, given the set of keys {10, 20, 30, 40, 50}, if we were to hash them into our buckets, all of them would go either into bucket 0 or bucket 2: the odd multiples of 10 collide at bucket 2, and the even multiples collide at bucket 0. The distribution of data into buckets is not good.

Let’s say we used 7 buckets instead. We take the generated hash keys, and do the mod to see how they are distributed throughout the hash table:

10 mod 7 = 3

20 mod 7 = 6

30 mod 7 = 2

40 mod 7 = 4

50 mod 7 = 1

Much better. The numbers are distributed more evenly.

Let’s say we used 5 buckets.

10 mod 5 = 0

20 mod 5 = 0

30 mod 5 = 0

40 mod 5 = 0

50 mod 5 = 0

Even though 5 is a prime number, all of our keys are multiples of 5, and thus the mod will always be 0. This will distribute all of our keys into bucket 0.

Therefore, we have to choose a number of buckets that doesn’t share factors with our keys; choosing a large prime number is usually enough.

The reason prime numbers are used is to neutralize the effect of patterns in the keys on the distribution of collisions of a hash function.

In other words, say we have a function that generates a set (or just a simple data list) of data K

Function generateList generates array: {0, 2, 3, 5, 6, 7, 9, 11, 12, 13, 14, 15, 18, 19, 24, 27, 28, 29, 30, 36, 42, 48 ….etc}

We use a hash table where the number of buckets is m = 12 (non-prime)

Let’s call each number inside of data K, a hash key. So 0 is a hash key. 1 is a hash key. 2 is a hash key. 5 is a hash key, and so on.

We map a hash key onto the m buckets via a bitmask (e.g. AND 0xff) or via the modulo operator (% m). (In our example, we use % m.)

hash-to-bucket
0 % 12 = 0
12 % 12 = 0
24 % 12 = 0
36 % 12 = 0

Thus, every multiple of 12 in our list (0, 12, 24, 36, …) hashes to bucket 0.

The other integers will have their respective buckets
2 % 12 = 2
3 % 12 = 3


13 % 12 = 1
14 % 12 = 2
15 % 12 = 3
18 % 12 = 6
19 % 12 = 7

Let N be the number of buckets in a hash table; say N = 4.

If K is uniformly distributed, in other words, if every integer outcome is equally likely (1, 2, 3, 4, 5, 6, and so on), then the choice of bucket count m is not so critical.

However, an issue can arise if the keys being hashed are not uniformly distributed: specifically, when many keys are multiples of N (equivalently, when N is a factor of those keys).

i.e.

4 is a factor of 20, 40, 60, 80. Equivalently, 20 is a multiple of 4, 40 is a multiple of 4, and so on.

if K is 4, then K is a multiple of N (4): 4 % 4 = 0
if K is 20, then K is a multiple of N (4): 20 % 4 = 0

4 is a factor of 20: 20 % 4 = 0, so key 20 gets bucket 0
4 is a factor of 40: 40 % 4 = 0, so key 40 gets bucket 0
4 is a factor of 60: 60 % 4 = 0, so key 60 gets bucket 0
4 is a factor of 80: 80 % 4 = 0, so key 80 gets bucket 0

In this situation, buckets whose indices are not multiples of these shared factors will remain empty, causing the load factor of the other buckets to increase disproportionately.

This situation seems to be the only valid reason to use a prime number. Given that a prime number has only itself and 1 as factors, using a prime means that even if the keys are not uniformly distributed and instead possess some kind of structure (specifically, multiples of a value), the likelihood that those keys share a common factor with N (the prime number of buckets) is vanishingly small.

But what happens if K is not uniformly distributed? Imagine that the keys most likely to occur are the multiples of 10 (like our example above): 10, 20, 30, 40, 50, and so on, appearing over and over.

In this case, all of the buckets that are NOT multiples of 10 will be empty with high probability (which is really bad in terms of hash table performance).

In general:

Every key in K (every integer in our array) that shares a common factor with the (number of buckets) m will be hashed to a bucket that is a multiple of this factor.

Take a key in K, say 14, which has factors 2 and 7.
The number of buckets, m = 12, has factors 2, 3, 4, and 6.

14 shares the common factor 2 with m (12 buckets).
Hence, any multiple of 14 will be hashed to a bucket that is a multiple of 2.

14 % 12 = 2
28 % 12 = 4
42 % 12 = 6
56 % 12 = 8
70 % 12 = 10
84 % 12 = 0
98 % 12 = 2


Hence, every key in K (14, 28, 42, 56, …etc)
that shares a common factor with m (12 buckets), here the common factor 2,
will be hashed to a bucket that is a multiple of that common factor (0, 2, 4, 6, 8, 10).

Therefore, to minimize collisions, it is important to reduce the number of common factors between m and the elements of K. How can this be achieved?

By choosing m to be a number that has very few factors: a prime number.
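A small Swift experiment makes this concrete: hash the multiples of 10 into m buckets and count how many buckets actually get used (the helper below is mine, not from the post):

func bucketsUsed(keys: [Int], m: Int) -> Int {
    var counts = Array(repeating: 0, count: m)
    for k in keys { counts[k % m] += 1 }
    return counts.filter { $0 > 0 }.count
}

let keys = Array(stride(from: 10, through: 500, by: 10)) // 10, 20, ..., 500

print(bucketsUsed(keys: keys, m: 4))  // 2: only buckets 0 and 2 are hit
print(bucketsUsed(keys: keys, m: 12)) // 6: the keys share the factor 2 with 12
print(bucketsUsed(keys: keys, m: 13)) // 13: prime, every bucket gets used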

extending String

Getting a character at an integer index from a String initially looks like this:
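(A sketch of that call, using the Swift 3-era String API:)

let str = "hash"
let i = 2
let ch = str[str.index(str.startIndex, offsetBy: i)] // "s"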

We call String’s index(_:offsetBy:) method, passing its startIndex property (a String.Index) to indicate that we start at the beginning of the String, then offset by i. That is where we’ll find the character to return.

By wrapping this in a subscript, we can also use the returned Character to initialize a String, and return that String.

Finally, we extend Character and create a computed var ascii. In it, we first create a String from the Character itself. We access unicodeScalars, which is the collection of the character’s Unicode scalar values; for ASCII characters, the first scalar’s value is the ASCII code.
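Since the original snippets were images, here is a sketch of the two extensions as described (the property name ascii is my guess):

extension String {
    // Return the character at integer position i.
    subscript(i: Int) -> Character {
        return self[index(startIndex, offsetBy: i)]
    }
}

extension Character {
    // The first Unicode scalar's value; for ASCII characters,
    // this is the ASCII code.
    var ascii: UInt32 {
        return String(self).unicodeScalars.first!.value
    }
}

let word = "hash"
print(word[2])       // "s"
print(word[2].ascii) // 115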

Hash table – Separate Chaining, Open Hashing

ref – https://en.wikipedia.org/wiki/Associative_array
https://github.com/redmacdev1988/LinkedListHashTable

Open hashing – in this strategy, none of the objects are actually stored in the hash table’s array; instead once an object is hashed, it is stored in a list which is separate from the hash table’s internal array. “open” refers to the freedom we get by leaving the hash table, and using a separate list. By the way, “separate list” hints at why open hashing is also known as “separate chaining”.

The most frequently used general purpose implementation of an associative array is with a hash table: an array combined with a hash function that separates each key into a separate “bucket” of the array. The basic idea behind a hash table is that accessing an element of an array via its index is a simple, constant-time operation. Therefore, the average overhead of an operation for a hash table is only the computation of the key’s hash, combined with accessing the corresponding bucket within the array. As such, hash tables usually perform in O(1) time, and outperform alternatives in most situations.

Hash tables need to be able to handle collisions: when the hash function maps two different keys to the same bucket of the array. The two most widespread approaches to this problem are separate chaining and open addressing.

https://en.wikipedia.org/wiki/Hash_table#Separate_chaining

Average | Worst Case

Space O(n) | O(n)
Search O(1) | O(n)
Insert O(1) | O(n)
Delete O(1) | O(n)

The idea of hashing is to distribute the entries (key/value pairs) across an array of buckets. Given a key, the algorithm computes an index that suggests where the entry can be found:

index = f(key, array_size)

hash = hashfunc(key) // where hash is some number
index = hash % array_size // in order to fit the hash into the array size, it is reduced to an index using modulo operator

In this method, the hash is independent of the array size, and it is then reduced to an index (a number between 0 and array_size − 1) using the modulo operator (%).

Choosing a hash function

http://eternallyconfuzzled.com/tuts/datastructures/jsw_tut_hashtable.aspx

Table size and range finding

The hash functions introduced in The Art of Hashing were designed to return a value in the full unsigned range of an integer. For a 32-bit integer, this means that the hash functions will return a value in the range [0..4,294,967,296). Because it is extremely likely that your table will be smaller than this, it is possible that the hash value may exceed the boundaries of the array.

The solution to this problem is to force the range down so that it fits the table size.

For example, if the table size is 888 and we get the hash value 8,403,958, we can reduce it with the remainder operator: 8,403,958 % 888 = 814, a valid index.

A table size should not be chosen randomly because most of the collision resolution methods require that certain conditions be met for the table size or they will not work correctly. Most of the time, this required size is either a power of two, or a prime number.

Why a power of two? Because we can use bitwise operations to get performance benefits. A power-of-two table size may be desirable on implementations where bitwise operations are fast: forcing a value into the range of a power of two can be performed quickly with a masking operation.

For example, to force the range of any value into eight bits, you simply use the bitwise AND operation on a mask of 0xff (hexadecimal for 255):

0x8a AND 0xff = 0x8a

from hex to decimal:

Note that 8 bits = 1 byte, and 8 bits give 2^8 = 256 possible values.

0x8a -> 1000(8) 1010(a) -> 1000 1010 binary

binary to decimal is 1 * 128 + 0 * 64 + 0 * 32 + 0 * 16 + 1 * 8 + 0 * 4 + 1 * 2 + 0 * 1 = 128 + 8 + 2 = 138.

from decimal to hex:

Thus, if we get the value 138 and the table size is 256 (8 bits), we force the value into range with an AND against 0xff.

In code, you get the parameter 138, convert it to binary 1000 1010, which is 0x8a in hex. Then you apply the AND bit op, which gives 0x8a AND 0xff = 0x8a; the value 138 already fits the range.

So 138 fits within 256. But what if you get a larger number, like 888?
888 in hex is 0x378 (11 0111 1000 in binary).

0x378 AND 0x0ff = 0x78

We prepend a 0 to 0xff because we’re dealing with three hex places. Applying the AND keeps only the low 8 bits, giving 0x78, which is 120 in decimal. Thus, a hash value of 888 would give you index 120.

table[hash(key) & 0xff]
This is a fast operation, but it only works with powers of two. If the table size is not a power of two, the remainder of division can be used to force the value into a desired range with the remainder operator. Note that this is slightly different than masking because while the mask was the upper value that you will allow, the divisor must be one larger than the upper value to include it in the range. This operation is also slower in theory than masking (in practice, most compilers will optimize both into the same machine code):

table[hash(key) % 256]
When it comes to hash tables, the most recommended table size is any prime number.

This recommendation is made because hashing in general is misunderstood, and poor hash functions require an extra mixing step of division by a prime to resemble a uniform distribution. (https://cs.stackexchange.com/questions/11029/why-is-it-best-to-use-a-prime-number-as-a-mod-in-a-hashing-function)

Another reason that a prime table size is recommended is because several of the collision resolution methods require it to work. In reality, this is a generalization and is actually false (a power of two with odd step sizes will typically work just as well for most collision resolution strategies), but not many people consider the alternatives and in the world of hash tables, prime rules.

Advantages in using Hash Table

The main advantage of hash tables over other table data structures is speed. This advantage is more apparent when the number of entries is large. Hash tables are particularly efficient when the maximum number of entries can be predicted in advance, so that the bucket array can be allocated once with the optimum size and never resized.

If the set of key-value pairs is fixed and known ahead of time (so insertions and deletions are not allowed), one may reduce the average lookup cost by:
1) a careful choice of the hash function,
2) bucket table size,
3) internal data structures.

In particular, one may be able to devise a hash function that is collision-free, or even perfect. In this case the keys need not be stored in the table.

Open Addressing

Another popular technique is open addressing (a minimal sketch in Swift follows the list):

  1. at each index of our list we store one and only one key-value pair
  2. when trying to store a pair at index x, if there’s already a key-value pair, try to store our new pair at x + 1
    if x + 1 is taken, try x + 2 and so on…
  3. When retrieving an element, hash the key and see if the element at that position (x) matches our key. If not, try to access the element at position x + 1. Rinse and repeat until you get to the end of the list, or when you find an empty index — that means our element is not in the hash table
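A minimal Swift sketch of linear probing under those three rules (fixed capacity, String keys, Int values; it assumes the table never fills up):

struct LinearProbingTable {
    private var keys: [String?]
    private var values: [Int?]

    init(capacity: Int) {
        keys = Array(repeating: nil, count: capacity)
        values = Array(repeating: nil, count: capacity)
    }

    private func startIndex(for key: String) -> Int {
        return abs(key.hashValue) % keys.count
    }

    // Rule 2: if slot x is taken by another key, try x + 1, x + 2, ...
    mutating func insert(key: String, value: Int) {
        var i = startIndex(for: key)
        while let k = keys[i], k != key {
            i = (i + 1) % keys.count
        }
        keys[i] = key
        values[i] = value
    }

    // Rule 3: probe until we find the key or an empty slot.
    func lookup(key: String) -> Int? {
        var i = startIndex(for: key)
        while let k = keys[i] {
            if k == key { return values[i] }
            i = (i + 1) % keys.count
        }
        return nil // empty slot: the key is not in the table
    }
}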

Separate chaining with linked lists

Chained hash tables with linked lists are popular because they require only basic data structures with simple algorithms, and can use simple hash functions that are unsuitable for other methods.

The cost of a table operation is that of scanning the entries of the selected bucket for the desired key. If the distribution of keys is sufficiently uniform, the average cost of a lookup depends only on the average number of keys per bucket—that is, it is roughly proportional to the load factor.

For this reason, chained hash tables remain effective even when the number of table entries n is much higher than the number of slots. For example, a chained hash table with 1000 slots and 10,000 stored keys (load factor 10) is five to ten times slower than a 10,000-slot table (load factor 1); but still 1000 times faster than a plain sequential list.

For separate-chaining, the worst-case scenario is when all entries are inserted into the same bucket, in which case the hash table is ineffective and the cost is that of searching the bucket data structure. If the latter is a linear list, the lookup procedure may have to scan all its entries, so the worst-case cost is proportional to the number n of entries in the table.

The bucket chains are often searched sequentially using the order the entries were added to the bucket. If the load factor is large and some keys are more likely to come up than others, then rearranging the chain with a move-to-front heuristic may be effective. More sophisticated data structures, such as balanced search trees, are worth considering only if the load factor is large (about 10 or more), or if the hash distribution is likely to be very non-uniform, or if one must guarantee good performance even in a worst-case scenario. However, using a larger table and/or a better hash function may be even more effective in those cases.
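A minimal separate-chaining sketch in Swift; for brevity each bucket's chain is an array rather than a hand-rolled linked list (see the linked GitHub repo above for a linked-list version):

struct ChainedHashTable {
    private var buckets: [[(key: String, value: Int)]]

    init(slots: Int) {
        buckets = Array(repeating: [], count: slots)
    }

    private func bucketIndex(for key: String) -> Int {
        return abs(key.hashValue) % buckets.count
    }

    mutating func insert(key: String, value: Int) {
        let i = bucketIndex(for: key)
        if let j = buckets[i].firstIndex(where: { $0.key == key }) {
            buckets[i][j].value = value // key exists: replace its value
        } else {
            buckets[i].append((key, value)) // chain grows, no resize needed
        }
    }

    // Cost is proportional to the chain length, i.e. the load factor.
    func lookup(key: String) -> Int? {
        let i = bucketIndex(for: key)
        return buckets[i].first(where: { $0.key == key })?.value
    }
}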

What is the equivalent of an Objective-C id in Swift?

https://stackoverflow.com/questions/24005678/what-is-the-equivalent-of-an-objective-c-id-in-swift

Swift 3

Any, if you know the sender is never nil.

@IBAction func buttonClicked(sender: Any) {
    print("Button was clicked", sender)
}
Any?, if the sender could be nil.

@IBAction func buttonClicked(sender: Any?) {
    print("Button was clicked", sender as Any)
}

Reader Writer #2 using Semaphore

semaphore_Reader_Writer_2

https://en.wikipedia.org/wiki/Readers%E2%80%93writers_problem

Readers won’t starve Writers anymore

Due to readTry, if additional future reads come in, they wait FCFS together with future writes on readTry. Thus, readTry gives future writes a fair chance to grab it and do their writing.

Another very important thing: once a future write grabs readTry, no future reads will be able to grab it. Existing reads eventually finish reading, decreasing readCount back to 0. Once this future write finishes, the next waiting read can start again.

Writers may starve Readers

The very first writer takes hold of readTry so that no other “additional” readers can come in.
When we’re done writing, we decrement writeCount. IF WE’RE THE LAST WRITER, make sure to let go of readTry, so readers can come in again.

However, this starves readers, because a reader’s first action is to try to grab readTry. Since it is already held by the first writer, the reader will fail.

Furthermore, future writers that come in do not need to grab readTry again. It is already held by the first writer, and thus future writers simply wait for the resource, do their writing, then decrement writeCount.

ONLY the last writer lets go of readTry. And this is where it starves readers.
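The post's code is linked above as a demo; here is a sketch of the same writers-preference scheme, following the Wikipedia pseudocode, with the names used in this post (readTry, readCount, writeCount):

import Dispatch

let resource  = DispatchSemaphore(value: 1) // the shared data
let readTry   = DispatchSemaphore(value: 1) // writers hold this to lock out new readers
let rMutex    = DispatchSemaphore(value: 1) // protects readCount
let wMutex    = DispatchSemaphore(value: 1) // protects writeCount
var readCount = 0
var writeCount = 0

func reader(_ id: Int) {
    readTry.wait() // blocks here whenever any writer is active or waiting
    rMutex.wait()
    readCount += 1
    if readCount == 1 { resource.wait() } // first reader locks the resource
    rMutex.signal()
    readTry.signal()

    print("reader \(id) reading")

    rMutex.wait()
    readCount -= 1
    if readCount == 0 { resource.signal() } // last reader releases it
    rMutex.signal()
}

func writer(_ id: Int) {
    wMutex.wait()
    writeCount += 1
    if writeCount == 1 { readTry.wait() } // first writer locks out new readers
    wMutex.signal()

    resource.wait()
    print("writer \(id) writing")
    resource.signal()

    wMutex.wait()
    writeCount -= 1
    if writeCount == 0 { readTry.signal() } // ONLY the last writer releases readTry
    wMutex.signal()
}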

Let’s see what happens when a reader X and a writer C get executed; see the linked demo above for the output.

Reader Writer #1 using Semaphore

Semaphore Reader Writer #1 demo

https://en.wikipedia.org/wiki/Readers%E2%80%93writers_problem

Starting off

We have a semaphore that admits 1 process at a time; we name the reference to this object semaphoreResource.
The whole point of this semaphore is for Readers and Writers to fight over it. If a Reader is holding it, a Writer cannot write.
If a Writer is holding it, Readers cannot read.

Writers fight with Readers for semaphoreResource but will never ever touch semaphoreReaderMutex.

That’s because semaphoreReaderMutex is used between Readers in order to update/change a variable called readCount.

readCount determines whether the first reader will hold the semaphoreResource and also whether the last reader will let go of semaphoreResource.
That’s the whole purpose of semaphoreReaderMutex.

We define two callbacks to simulate reading and writing. Reading takes 3 seconds and writing takes 5 seconds.

Writer

In the case of Writers, it’s very straightforward. A writer grabs the resource semaphoreResource; if it succeeds, it does the writing by simply calling the runWriteCodeBlock callback. After the writing, it lets go of the semaphore.

Readers

The idea is that we grab the reader semaphore (semaphoreReaderMutex) in order to make changes to the resource semaphore (semaphoreResource).
The reader semaphore allows the 1st reader to lock the resource semaphore, and naturally, the last reader to unlock the semaphore.

Any readers after the 1st one do not need to lock the resource semaphore anymore; they just go ahead and do their reading.
However, they DO NEED to grab the reader mutex, because they change the readCount variable to update the number of readers.
Only when the last reader finishes reading (bringing readCount back to 0) does it unlock the resource semaphore so that the writers can write.
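A sketch of that mechanism, using the names from this post (the read/write callbacks stand in for the 3- and 5-second simulations):

import Dispatch

let semaphoreResource    = DispatchSemaphore(value: 1)
let semaphoreReaderMutex = DispatchSemaphore(value: 1)
var readCount = 0

func read(runReadCodeBlock: () -> Void) {
    semaphoreReaderMutex.wait()
    readCount += 1
    if readCount == 1 { semaphoreResource.wait() } // 1st reader locks the resource
    semaphoreReaderMutex.signal()

    runReadCodeBlock() // many readers can be here at once

    semaphoreReaderMutex.wait()
    readCount -= 1
    if readCount == 0 { semaphoreResource.signal() } // last reader unlocks it
    semaphoreReaderMutex.signal()
}

func write(runWriteCodeBlock: () -> Void) {
    semaphoreResource.wait()   // writers fight readers for the resource...
    runWriteCodeBlock()
    semaphoreResource.signal() // ...and never touch semaphoreReaderMutex
}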

However, it may be that other readers will read also. In this solution, every writer must claim the resource individually. This means that a stream of readers can subsequently lock all potential writers out and starve them. As long as future readers keep coming in, the next waiting writer will NEVER be able to write.

This is so, because after the first reader locks the resource, no writer can lock it, before it gets released. All future writers MUST WAIT FOR ALL READERS TO FINISH (readCount back to 0) in order to grab hold of the resource semaphore in order to do its writing.

In other words, a few readers come along, increase the readCount, then leave. Then more readers come. And more, further into the future, such that readCount is always > 0. This is what starves the writer.

Therefore, this solution does not satisfy fairness.

full source

DispatchSemaphore

https://priteshrnandgaonkar.github.io/concurrency-with-swift-3/

Dispatch Groups

Dispatch group demo xCode 8.3.3

A DispatchGroup uses enter() and leave() to group chunks of code into work items and track them as a group. The work itself can be submitted async or sync.

Note that using a DispatchGroup with sync makes little sense. The whole purpose of a DispatchGroup is to have multiple tasks run and to be notified when they are all done. Since sync tasks run one by one, in order, there is nothing to notify: each task must begin and end in order anyway.

Source Code
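The original listing was an image; here is a sketch that reproduces the behavior described below (four chunks, each sleeping i * 0.01 seconds per loop iteration):

import Foundation

let group = DispatchGroup()
let queue = DispatchQueue.global()

for i in 0..<4 {
    group.enter() // one enter per code chunk
    queue.async {
        print("----- code execution entered dispatchGroup for index \(i) ------")
        print("code execution \(i) start")
        for j in 0...100 {
            Thread.sleep(forTimeInterval: Double(i) * 0.01) // chunk 0 never sleeps
            print("loop index \(j)---- code chunk \(i)---")
        }
        print("code execution \(i) finish")
        group.leave()
        print("------ code execution left dispatchGroup for index \(i) --------")
    }
    // Registered inside the loop, so "Block 4" prints 4 times
    // once every chunk has left the group.
    group.notify(queue: queue) { print("Block 4") }
}

dispatchMain() // command-line only: keeps the process alive for the async work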

First, take note that we are looping, placing code chunks onto a global queue via the async operation.

Then these 4 code chunks get processed by the global queue asynchronously. Their initial lines get printed:

—– code execution entered dispatchGroup for index 0 ——
—– code execution entered dispatchGroup for index 2 ——
—– code execution entered dispatchGroup for index 3 ——
—– code execution entered dispatchGroup for index 1 ——
code execution 0 start
code execution 2 start
code execution 3 start
code execution 1 start

This means that each code chunk has placed a section of code into the dispatchGroup. In our case, the section of code prints 0 to 100. The dispatchGroup now has 4 sections of code to process.

The dispatchGroup will now execute the 4 sections of code asynchronously.

Because code chunk 0 does NOT sleep, it simply does its printing first, and it does it fast. Code chunk 1 can’t keep up, because each iteration of its loop sleeps for 0.01 seconds first. Thus, since code chunk 0’s loop never sleeps, it prints everything first.

Then code chunk 1 starts, sleeping 0.01 seconds between each print. Code chunk 2 will also start, sleeping 0.02 seconds between each print….

Due to

index 1’s loop having to sleep 0.01 s,
index 2’s loop having to sleep 0.02 s,
index 3’s loop having to sleep 0.03 s,

for every iteration, you can see that index 1 prints its loop just a tad faster than index 2’s and index 3’s loops.

index 1’s code chunk finishes and thus, it leaves the group.

Eventually index 2’s code also finishes and leaves the group. Then index 3 finally leaves the group after printing its loop.

Then, when the group is finally empty, it notifies us and runs the code block that prints “Block 4”. Since this notify is registered within the for loop, it prints 4 times.

Of course, you can also remove the sleep line; in that case, all of the code chunks on the global queue are processed asynchronously and all finish at about the same time. It still holds that once ALL of them are finished, the group notifies, and the notify code blocks run.

output

—– code execution entered dispatchGroup for index 0 ——
—– code execution entered dispatchGroup for index 2 ——
—– code execution entered dispatchGroup for index 3 ——
—– code execution entered dispatchGroup for index 1 ——
code execution 0 start
code execution 2 start
code execution 3 start
code execution 1 start
loop index 0—- code chunk 0—
loop index 1—- code chunk 0—
….
loop index 100—- code chunk 0—
code execution 0 finish

—— code execution left dispatchGroup for index 0 ——–
loop index 0—- code chunk 1—
loop index 0—- code chunk 2—
loop index 1—- code chunk 1—
loop index 0—- code chunk 3—
….
loop index 100—- code chunk 1—
code execution 1 finish

—— code execution left dispatchGroup for index 1 ——–
loop index 51—- code chunk 2—
loop index 35—- code chunk 3—
loop index 52—- code chunk 2—

loop index 68—- code chunk 3—
loop index 100—- code chunk 2—
code execution 2 finish

—— code execution left dispatchGroup for index 2 ——–
loop index 69—- code chunk 3—
loop index 70—- code chunk 3—
loop index 71—- code chunk 3—

loop index 98—- code chunk 3—
loop index 99—- code chunk 3—
loop index 100—- code chunk 3—
code execution 3 finish

—— code execution left dispatchGroup for index 3 ——–

Block 4
Block 4
Block 4
Block 4

delegate vs callbacks

https://medium.cobeisfresh.com/why-you-shouldn-t-use-delegates-in-swift-7ef808a7f16b

The difference between delegates and callbacks is that

with delegates, the NetworkService tells the delegate: “Something changed.”

We declare a protocol that says, whatever object conforms to this, must implement func didCompleteRequest(result: String):
This is so that we can pass the result String to that object.

Hence, we have a NetworkService object with a delegate: some object A that conforms to NetworkServiceDelegate. This means that object A implements
func didCompleteRequest(result: String).

That way, whenever something is fetched from a URL, we can call the delegate (the reference to object A) and pass it the result String via the protocol method didCompleteRequest.

Hence, the delegate (the object that conforms to the protocol) is notified of the change.

With callbacks, there is no protocol; the observing object watches the NetworkService through a closure instead.

It calls networkService.fetchDataFromUrl(url: “http://www.google.com”) somewhere, and the data fetched by fetchDataFromUrl is passed to the onComplete closure defined in viewDidLoad.
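A compact sketch of both styles, loosely following the article’s NetworkService example (the fake response string and the plain ViewController class are stand-ins):

protocol NetworkServiceDelegate: AnyObject {
    func didCompleteRequest(result: String)
}

class NetworkService {
    weak var delegate: NetworkServiceDelegate?
    var onComplete: ((String) -> Void)? // the callback alternative

    func fetchDataFromUrl(url: String) {
        let result = "response from \(url)" // stand-in for real networking
        delegate?.didCompleteRequest(result: result) // delegate style
        onComplete?(result)                          // callback style
    }
}

class ViewController: NetworkServiceDelegate {
    let networkService = NetworkService()

    func viewDidLoad() {
        networkService.delegate = self
        networkService.onComplete = { result in
            print("callback got: \(result)")
        }
        networkService.fetchDataFromUrl(url: "http://www.google.com")
    }

    func didCompleteRequest(result: String) {
        print("delegate got: \(result)")
    }
}

ViewController().viewDidLoad()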