All posts by admin

AutoLayout, constraints, with xib/storyboards

A Beginner’s Guide to Auto Layout with Xcode 8


auto-constraints

Why Auto Layout?

Say we put a label with “hello world” in the middle of our xib. If we run it on different iPhone devices, the label will appear in different positions. This is because each device has a different screen size.

If we position something at the “middle” of an iPhone SE, it will look much different on a much larger iPhone 7S. Interface Builder positions views with absolute points. So “middle” on an iPhone SE may be (120, 260). When you run it on the iPhone 7S, the label will still be drawn at (120, 260), which is obviously not the middle of the iPhone 7S device. It will appear in the upper-left region of the 7S screen.

The same concept applies when the device switches from portrait to landscape. Even though the device rotates, the label stays at coordinate (120, 260).

Hence Auto Layout, a constraint-based layout system. It allows developers to create an adaptive UI that responds appropriately to changes in screen size and device orientation.

Using Auto Layout to Center our label

Each button in Interface Builder’s layout bar has its own function:

Align – Create alignment constraints, such as aligning the left edges of two views.
Pin – Create spacing constraints, such as defining the width of a UI control.
Issues – Resolve layout issues.
Stack – Embed views into a stack view. Stack view is a new feature since Xcode 7. We will discuss it further in the next chapter.

To center the label, we need two constraints: center horizontally and center vertically. Both constraints are with respect to the view.

To create the constraints, we will use the Align function. First, select the label in Interface Builder and then click the Align icon in the layout bar. In the pop-over menu, check both the “Horizontally in container” and “Vertically in container” options. Then click the “Add 2 Constraints” button.
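If you prefer to see what those two constraints mean in code, here is a programmatic sketch using layout anchors (my addition; the original tutorial does everything in Interface Builder):

```swift
import UIKit

// Programmatic equivalent of the two Align constraints
// ("Horizontally in container" and "Vertically in container").
let view = UIView(frame: CGRect(x: 0, y: 0, width: 320, height: 568))

let label = UILabel()
label.text = "Hello World"
label.translatesAutoresizingMaskIntoConstraints = false  // required for manual constraints
view.addSubview(label)

NSLayoutConstraint.activate([
    label.centerXAnchor.constraint(equalTo: view.centerXAnchor),  // center horizontally
    label.centerYAnchor.constraint(equalTo: view.centerYAnchor)   // center vertically
])
```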

add-vertical-horizontal-constraint

Now run the app in different device sizes, and you’ll see that the label should be centered in all of them.

Resolving Layout Constraint Issues

The layout constraints that we have just set are perfect, but that is not always the case. Fortunately, Xcode is intelligent enough to detect any constraint issues.

Try dragging the Hello World label to the lower-left part of the screen. Xcode immediately detects some layout issues, and the corresponding constraint lines turn orange to indicate a misplaced item.

constraint_mishap

When there is a layout issue, the Document Outline view displays a red or orange disclosure arrow. Click the disclosure arrow to see a list of the issues. Interface Builder is smart enough to resolve the layout issues for us: click the indicator icon next to the issue and a pop-over shows you a number of solutions. In this case, select the “Update Frame” option and click the “Fix Misplacement” button. The label will then be moved back to the center of the view.

xib-disclosure-arrow

Then simply choose to update the frame, and Xcode resolves the problem for you.

list-issues

Alternative way to view Storyboard

You can also add more devices to preview:

add_more_devices

Add a label to the bottom-right corner.

If you open the preview assistant again, you should see the UI change immediately. Note that without defining any layout constraints for the label, you will not be able to display the label correctly on all iPhone devices.

adding-label-to-alternative-xib-view

The label is located 0 points away from the right margin of the view and 20 points away from the bottom of the view.

This is much better. When you describe the position of an item precisely, you can easily come up with the layout constraints. Here, the constraints of the label are:

The label is 0 points away from the right margin of the view.
The label is 20 points away from the bottom of the view.

In auto layout, we refer to this kind of constraint as a spacing constraint. To create these spacing constraints, you can use the Pin button in the layout bar.

create-pin-constraints

Once you have added the two constraints, all constraint lines should be solid blue. When you preview the UI or run the app in the simulator, the label should display properly on all screen sizes, even in landscape mode.
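For reference, a programmatic sketch of these two spacing constraints (my addition; the 0-point and 20-point values come from the description above):

```swift
import UIKit

// Programmatic equivalent of the two Pin (spacing) constraints.
let view = UIView(frame: CGRect(x: 0, y: 0, width: 320, height: 568))

let label = UILabel()
label.text = "Welcome"
label.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(label)

NSLayoutConstraint.activate([
    // 0 points from the right margin of the view
    label.trailingAnchor.constraint(equalTo: view.layoutMarginsGuide.trailingAnchor),
    // 20 points from the bottom of the view
    label.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -20)
])
```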

constraints-landscape

Loading and showing images for Tables and Collections in Swift

https://github.com/DigitalLeaves/FlawlessTablesAndCollectionViews

Flawless UICollectionViews and UITableViews


https://medium.com/capital-one-developers/smooth-scrolling-in-uitableview-and-uicollectionview-a012045d77f

Image flashes demo (the problem)
No flashes demo (the solution)

First, some background

tableView:cellForRowAtIndexPath: and collectionView:cellForItemAtIndexPath: are called whenever a new cell has to be displayed.

Each cell has an NSIndexPath [section, row] to identify its position.
To get the NSIndexPath of a cell, use the table view’s indexPath(for:) method.

There is an array that stores the data to be displayed for the cells.

data_table

Cells, unlike the data in the array, are re-used for efficiency. Thus, they do not stay in place. When a cell with data is about to be displayed, a cell is dequeued (or allocated when there are no spare cells). The table will assign the current IndexPath to it, then set the data onto it.

data_cell_table

In detail, the cells are kept in a pool from which they are dequeued and served as needed.
When you ask for a cell with dequeueReusableCellWithIdentifier: (or dequeueReusableCellWithReuseIdentifier: for collection views), a new one is created if and only if there is no previously created cell that can be served.

Objective C version

In Objective-C, as you can see, we first ask the pool to return us a cell to use.
If it is nil, which means the pool has no spare ones, we need to allocate and create our own.
Once created, we can start setting its properties and then return it to the class to be displayed.

Swift version

In Swift, the re-use and the creation of a new cell are combined into the single method call dequeueReusableCell.
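A minimal sketch of the Swift pattern, assuming a table view with a prototype cell registered under the identifier "Cell" (the class and data names here are my own):

```swift
import UIKit

class NumbersTableViewController: UITableViewController {
    let items = ["one", "two", "three"]

    override func tableView(_ tableView: UITableView,
                            numberOfRowsInSection section: Int) -> Int {
        return items.count
    }

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // Re-use a pooled cell, or create a fresh one, in a single call.
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = items[indexPath.row]
        return cell
    }
}
```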

Example

So, let us go ahead and see how it all starts out. When the table or collection view first starts, it sees that the visible rows need to be displayed.
First, it looks at the first row at index (section 0, row 0) and sees that it needs to display that cell.

It goes into the delegate method cellForRow and tries to dequeue a cell. Because we are just starting out, our cell pool is empty, so we get a freshly allocated cell to use. We assign its display properties (namely text, color, etc). In our case we simply assign the text property to a string, say “one”.

display_cell_1-4

It then goes to the second row at index (section 0, row 1) and does the same thing. It sees that the cell pool is empty, and thus creates a new cell. We assign its display properties and give it a string “two”.

This applies to the rest of the cells that need to be drawn on the table. If 8 cells are showing, the table will usually allocate a few more, say 10. Take note that even though cells 9 and 10 are allocated, their indexPath will be nil because they are not shown by the table yet. Once they are shown, their indexPath will be assigned.

Scrolling up, reusing those cells

At this point, we have successfully created table view cells, set their properties, and have displayed the data in the table.

The cell pool is still empty because we are currently using all of the cells. In other words, they are on display.

Now, the user uses their finger and swipes up. The whole table scrolls up one page.

swipe_up_recollect_cells

At this point, the first row at index (section 0, row 0) disappears off the screen. The cell object representing that row gets queued into the cell pool.
Then the second row at index (section 0, row 1) disappears off the screen. It also gets queued into the cell pool.
As each on-display cell disappears off screen, it gets “re-collected” into the cell pool.

But as each row disappears, new rows appear from the bottom, right!?

We need to make sure they are drawn. So at this point, say, (section 0, row 4) starts to appear and it needs display.

It runs through delegate method cellForRow for (section 0, row 4) and tries to dequeue a cell.

It gets the cell object that (section 0, row 1) was previously using.
(section 0, row 1) has disappeared off screen and is not using its cell anymore. It has returned its cell to the cell pool.

Take note that when that cell is dequeued for (section 0, row 4), the tableView changes the cell’s indexPath to (0, 4). This signifies that the cell now represents the tableView’s section 0, row 4.

Hence the cell variable we get back is a valid object with our designated IndexPath of (0, 4)

dequeue_cells

Even though the cell’s IndexPath is now (0, 4), its data has not been “cleaned” or “zeroed”, so it has the same configuration it had before. In other words, that cell’s text property still holds the previous string. Thus, as we dequeue that cell object for row 4, we overwrite the text property with whatever row 4’s string is.

Then we properly return the cell object.

Note that the disappearing and appearing of cells is managed by the TableView or CollectionView class. It may enqueue a batch of disappearing cells into the cell pool first and then let appearing cells dequeue them, or it may simply do it one by one.

The Problem

problem_1

1) When the first cell is loaded, it uses dequeueReusableCellWithIdentifier and gets a fresh cell object with address 0x…ffaabb.

2) It then uses the singleton ImageManager and starts doing a async download operation for image 1.

3) The user then swipes up. This makes the cell go out of display, so the cell object gets put into the cell pool with its indexPath set to nil.

4) As the first row disappears, the 4th row appears. It uses dequeueReusableCellWithIdentifier and gets the cell object 0x…ffaabb from the cell pool. This cell was JUST used by row 1.

5) At this point, image 1 download progresses to 50%.

6) Due to 4) with its cell visible, it starts another async image download operation in singleton ImageManager. Image 4’s download progresses to 10%.

problem_2

7) With row 4 fully visible, it now owns the cell object and is downloading image 4.

8) Image 1 finishes downloading.

9) Our closure in the cellForRow method points to the cell 0x…ffaabb. It then assigns cell 0x…ffaabb’s imageView.image to image1.

10) Now, for a split second, the image on row 4 is of image1.

11) Then a second later, image 4 finishes downloading, and in the same manner as 9), the closure code from cellForRow assigns 0x…ffaabb’s imageView.image to image 4.

12) Even though row 4 now correctly depicts image 4 as intended, steps 9) to 11) create a flash of image 1 switching to image 4. The user can see it, depending on how slow the download speed is, and that is the problem we are trying to solve.

Async Operations and when they complete

So, instead of doing instantaneous data assignments, we need to do async operations that may take a few seconds. When the operation finishes, it comes back and updates our UI.

1) cellForRow hits dequeue cell and gets cell 0x…9aa00

2) Each row of the table matches an index into the URL array, which gives us a string URL for downloading an image. cellForRow’s indexPath provides the index, and we use that index to get the url from the data array.

We then use this imageURL with the Downloader singleton to download that image.

3) The Downloader singleton uses the url and literally downloads the image. When it is done, it calls a closure to update the table UI.

4) This is the most important part. Once the download is done, it calls a closure. The closure references
the cell (that was dequeued for this table row) and the table IndexPath.

async_operation_tableview

It references the cell so we can check which indexPath the cell is currently representing.

It references the table IndexPath to know which row index was assigned to this operation.

Code is below:
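The original snippet is not reproduced here, so the following is a sketch of the idea. ImageDownloader and imageURLs are hypothetical names of my own; the essential part is the indexPath comparison inside the completion closure:

```swift
import UIKit

// Hypothetical singleton that downloads and caches images by URL.
class ImageDownloader {
    static let shared = ImageDownloader()
    private var cache = [URL: UIImage]()

    func downloadImage(from url: URL, completion: @escaping (UIImage?) -> Void) {
        if let cached = cache[url] {               // instantaneous once cached
            completion(cached)
            return
        }
        URLSession.shared.dataTask(with: url) { data, _, _ in
            let image = data.flatMap(UIImage.init)
            DispatchQueue.main.async {             // update UI on the main queue
                if let image = image { self.cache[url] = image }
                completion(image)
            }
        }.resume()
    }
}

class PhotoTableViewController: UITableViewController {
    var imageURLs: [URL] = []                      // one URL per row

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.imageView?.image = nil                // clear the recycled cell's stale image

        let url = imageURLs[indexPath.row]
        ImageDownloader.shared.downloadImage(from: url) { [weak tableView] image in
            // The cell may have been recycled for another row while the download
            // ran. Compare its CURRENT indexPath (nil if off screen) with the
            // row this closure was created for; assign only when they match.
            if tableView?.indexPath(for: cell) == indexPath {
                cell.imageView?.image = image
            }
        }
        return cell
    }
}
```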

Now, in normal circumstances, the cell dequeued for, say, table row 11 has IndexPath [0, 11]. The table view’s IndexPath is [0, 11].

The Downloader finishes downloading the image, puts it in cache, and then calls our closure for completion.
It sees that the IndexPath of the cell we are referencing is valid and is [0, 11]. This means that, as far as the cell is concerned, it is on display for row 11.
(If the IndexPath is nil, it means even though the cell is alive, it is not used by any table rows and not on display yet)

Furthermore, the indexPath of the table is [0, 11]. This means we’re currently processing for that row. Hence, due to:

1) cell’s IndexPath is representing and on display for row 11
2) cellForRow delegate method is called for table row 11

we can safely assign the downloaded image onto this cell.

Start download, cell scrolls off screen, download finishes

Let’s say we’re on row 11 and it starts to download an image.

It gets a URL from data[11]
and uses that URL to start downloading image 11.

Then all of a sudden, the user scrolls row 11 out of view.

cell_scroll_off_screen

At this point 2 things happen:

1) cell for row 11’s indexPath gets set to nil because it is no longer on display
2) row 15 appears and dequeues a cell for use

1)

the download for image 11 completes! It runs the closure. The closure references 2 important things:

– the cell that just before represented row 11. Its IndexPath is now nil because it is no longer on display.
– the index of the cellForRow that created this closure (11)

We do a comparison and see that nil != 11, so we don’t assign image 11 to the cell’s imageView.

2)

On the other hand, row 15 appeared and dequeued a cell. It starts downloading the image; the image finishes and the closure runs. The closure references 2 things:

– the cell with IndexPath [0, 15], because it is visible
– the index of the cellForRow that created this closure (15)

We do a comparison and see that 15 == 15. Thus, we assign the JUST downloaded image to the cell with IndexPath [0, 15].

Not visible offscreen cell gets taken by a row that is now visible

two_rows_use_one_cell

There is another situation. When we scroll off screen, the cell for index 11 (0x…ff1000) now has an indexPath of nil.

The download for image 11 is still in progress.

Row 15 appears on screen. It dequeues the cell (0x…ff1000) that was previously used by row 11, because row 11 disappeared and is not using the cell anymore. Then, cellForRow at index 15 starts downloading image 15.

Hence cell 0x…ff1000 now has an IndexPath of [0, 15] because it represents visible row 15.

The downloader for image 11 finishes and runs its closure. It references 0x…ff1000, but wait, the IndexPath for that cell is now [0, 15]!
The index of the cellForRow that created this closure is 11. Thus, 15 != 11, and we do not assign image 11 to this cell.

Image 15 finishes downloading and runs its closure. It references 0x…ff1000, and the IndexPath for it is [0, 15].
The index of the cellForRow that created this closure is 15. Thus 15 == 15 holds, and it goes ahead and assigns image 15 to the cell’s imageView.image.

After everything has been downloaded

After everything is downloaded, all images are retrieved instantaneously from the dictionary cache (url: Image). Once it gets the image, it uses the main queue to update our table.

In your cellForRowAt, the cellIndex and tableIndex check should succeed much more often now because there is no more delay. The image retrieval is instantaneous and the closure is called right away.

Unicode

ref –

  • stackoverflow.com/questions/2241348/what-is-unicode-utf-8-utf-16
  • www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/
  • https://blog.hubspot.com/website/what-is-utf-8

Text is one of many assets that computers store and process. Text is made up of individual characters, each of which is represented in computers by a string of bits.

The problem is: which string of bits should match up to which letter? ASCII came up with a table to solve this.

So, the sentence “The quick brown fox jumps over the lazy dog.” represented in ASCII binary would be:

01010100 01101000 01100101 00100000 01110001
01110101 01101001 01100011 01101011 00100000
01100010 01110010 01101111 01110111 01101110
00100000 01100110 01101111 01111000 00100000
01101010 01110101 01101101 01110000 01110011
00100000 01101111 01110110 01100101 01110010
00100000 01110100 01101000 01100101 00100000
01101100 01100001 01111010 01111001 00100000
01100100 01101111 01100111 00101110

ASCII is a 7-bit code, so it defines 2^7 (128) characters, each stored in an 8-bit byte. When ASCII was introduced in the 1960s, this was fine, since 128 code points were enough for all the English characters and symbols developers needed.

Unicode is a character set

Unicode assigns every character in existence a unique number called a code point.

In other words, a letter maps to something called a code point

As Joel would explain it, every platonic letter in every alphabet is assigned a magic number by the Unicode consortium which is written like this: U+0639

This magic number is called a code point.

The U+ means “Unicode”
The numbers are hexadecimal.

The English letter A would be U+0041.

Why is it U+0041?

0041 is in hex.
We convert it to decimal like so: 0 * (16^3) + 0 * (16^2) + 4 * (16^1) + 1 * (16^0) = 0 + 0 + 64 + 1 = 65.
65 is represented as A.
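A quick Swift check of this hex-to-decimal step (my addition, not from the referenced articles):

```swift
// 0x41 in hex is 65 in decimal, and code point 65 is "A".
let decimal = Int("0041", radix: 16)!                    // 65
let letter = Character(UnicodeScalar(UInt32(decimal))!)  // "A"
```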

What is Unicode?

Unicode was a brave effort to create a single character set that included every reasonable writing system on the planet and some make-believe ones like Klingon, too.

Unicode comprises 1,114,112 code points in the range 0 to 10FFFF (hex). The Unicode code space is divided into seventeen planes (the basic multilingual plane and 16 supplementary planes), each with 65,536 (= 2^16) code points. Thus the total size of the Unicode code space is 17 × 65,536 = 1,114,112.

10FFFF (hex)

1 0 15 15 15 15 (convert each hex digit to its value: 0–9, A(10) to F(15))

0001 0000 1111 1111 1111 1111 (convert each digit to 4-bit binary)

That binary is 1,114,111 in decimal.

We also have decimal 0, so the total is 1,114,112 representations of characters.

In fact, Unicode has a different way of thinking about characters, and you have to understand the Unicode way of thinking of things or nothing will make sense.

Until now, we’ve assumed that a letter maps to some bits which you can store on disk or in memory:

A –> 0100 0001

In Unicode, a letter maps to something called a code point which is still just a theoretical concept. How that code point is represented in memory or on disk is a whole nuther story.

In Unicode, the letter A is a platonic ideal. It’s just floating in heaven:

A

This platonic A is different than B, and different from a, but the same as A and A and A. The idea that A in a Times New Roman font is the same character as the A in a Helvetica font, but different from “a” in lower case, does not seem very controversial, but in some languages just figuring out what a letter is can cause controversy. Is the German letter ß a real letter or just a fancy way of writing ss? If a letter’s shape changes at the end of the word, is that a different letter? Hebrew says yes, Arabic says no. Anyway, the smart people at the Unicode consortium have been figuring this out for the last decade or so, accompanied by a great deal of highly political debate, and you don’t have to worry about it. They’ve figured it all out already.

Every platonic letter in every alphabet is assigned a magic number by the Unicode consortium which is written like this: U+0639. This magic number is called a code point. The U+ means “Unicode” and the numbers are hexadecimal. U+0639 is the Arabic letter Ain. The English letter A would be U+0041. You can find them all using the charmap utility on Windows 2000/XP or visiting the Unicode web site.

There is no real limit on the number of letters that Unicode can define and in fact they have gone beyond 65,536 so not every unicode letter can really be squeezed into two bytes, but that was a myth anyway.

OK, so say we have a string:

Hello

which, in Unicode, corresponds to these five code points:

U+0048 U+0065 U+006C U+006C U+006F.

Just a bunch of code points. Numbers, really. We haven’t yet said anything about how to store this in memory or represent it in an email message.

That’s where encodings come in.

Step 1 – Number Code to decimal

Let’s look at an example. In Unicode the character A is given code point U+0041. The U+ denotes that it is Unicode; the 0041 is in hex.

hex 0041 to decimal is converted like so:

0 * (16^3) + 0 * (16^2) + 4 * (16^1) + 1 * (16^0) =
0 + 0 + 64 + 1 = 65.

Thus, decimal 65 is represented as A.

Step 2 – Convert decimal to Binary?

Bi-nary means two.

Binary is represented by bits.
1 bit of binary represents 0 or 1: 2^1 = 2 combinations.
2 bits of binary represent 00, 01, 10, 11: 2^2 = 4 combinations.

Code points represent text in computers, telecommunications equipment, and other devices.
ASCII maps the character “A” to the number 65.

How do we convert this decimal to binary?

Division Way

(the proper way is to divide by 2 and use the remainders as bits)
http://www.electronics-tutorials.ws/binary/bin_2.html

65 / 2 = 32 R1 binary bits: [1]
32 / 2 = 16 R0 binary bits: [0 1]
16 / 2 = 8 R0 binary bits: [0 0 1]
8 / 2 = 4 R0 binary bits: [0 0 0 1]
4 / 2 = 2 R0 binary bits: [0 0 0 0 1]
2 / 2 = 1 R0 binary bits: [0 0 0 0 0 1]
1 / 2 = 0 R1 binary bits: [1 0 0 0 0 0 1]

So as you can see, the bits are 1 0 0 0 0 0 1.
In 8-bit format we have 0 1 0 0 0 0 0 1.

That is the binary conversion of decimal 65.
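The division method above can be written as a small Swift helper (my addition): divide by 2 repeatedly and prepend each remainder as the next bit.

```swift
// Convert a non-negative integer to its binary string using
// the repeated-division-by-2 method.
func toBinary(_ number: Int) -> String {
    var n = number
    var bits = ""
    while n > 0 {
        bits = String(n % 2) + bits  // remainder becomes the next, more significant bit
        n /= 2
    }
    return bits.isEmpty ? "0" : bits
}

let bitsFor65 = toBinary(65)  // "1000001"
```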

Layout visual way
First, we must lay out the binary and show what decimal it represents

The 0th bit is represented by 2 ^ 0 = 1
The 1st bit is represented by 2 ^ 1 = 2
The 2nd bit is represented by 2 ^ 2 = 4
The 3rd bit is represented by 2 ^ 3 = 8
The 4th bit is represented by 2 ^ 4 = 16
The 5th bit is represented by 2 ^ 5 = 32
The 6th bit is represented by 2 ^ 6 = 64
…so on.

8 bits
0 0 0 0 0 0 0 0

We lay them out and try to see which of these values add up to 65. The trick is to take the largest value that is less than or equal to 65. In our case, that is the 6th bit (64). Thus, we mark the 6th bit with a 1.

0 1 0 0 0 0 0 0

65 – 64 = 1.

Then we try to find a bit whose value is less than or equal to 1.

Obviously, it would be perfect at 0th bit.

0 1 0 0 0 0 0 1

64 + 1 = 65, which matches the ASCII code 65.

65_to_binary_1

We see that the 6th bit, 2^6, is 64. Then all we need is 1, which is 2^0.
Hence, in the binary, we mark a 1 for the 6th bit and the 0th bit to represent the decimal value we need:

0 * (2^7) + 1 * (2^6) + 0 * (2^5) + 0 * (2^4) + 0 * (2^3) + 0 * (2^2) + 0 * (2^1) + 1 * (2^0) = 65

Make sure you use only 0 or 1 because we are dealing with binary.

65_to_binary_2

Finally, at the binary level, simply write the binary needed to represent the number:

01000001

65_to_binary_3

Step 3 – Binary to Hex

First, hex has 16 code points: 0–15,

where 0–9 take on the digits 0–9

and 10–15 take on A, B, C, D, E, F.

Thus a hex digit has 16 possible values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.

Each hex digit is represented by 4 binary bits because 2^4 = 16.

Hence we need to break our binary into groups of 4 bits, each of which converts to one hex digit.

Thus 01000001 gets divided into 4-bit groups: 0100 0001.

Let’s convert it to hex:

0100 = 0 * (2^3) + 1 * (2^2) + 0 * (2^1) + 0 * (2^0) = 4 hex
0001 = 0 * (2^3) + 0 * (2^2) + 0 * (2^1) + 1 * (2^0) = 1 hex

(if you get 1111, that is 15 –> F)

Thus, the ASCII character ‘A’ is code point 65, binary 01000001, or hex 0x41.

And if we check the Unicode table, we see that indeed A maps to decimal 65, with a hex of 0x41 in UTF-8

and a hex of 0x0041 in UTF-16.

https://unicode-table.com/en/

unicode-a

Thus, you have now successfully converted a Unicode code point to its binary representation.

UTF 8

UTF-8 is an encoding system for Unicode.

It can translate any Unicode character to a matching unique binary string

and can also translate the binary string back to a Unicode character. This is the meaning of “UTF”, or “Unicode Transformation Format.”

There are other encoding systems for Unicode besides UTF-8, but UTF-8 is unique because it represents characters in one-byte units. Remember that one byte consists of eight bits, hence the “-8” in its name.

A character in UTF-8 can be from 1 to 4 bytes long. UTF-8 can represent any character in the Unicode standard and is backwards compatible with ASCII. UTF-8 is the preferred encoding for e-mail and web pages.

The first 128 characters of Unicode (which correspond one-to-one with ASCII) are encoded using a single octet with the same binary value as ASCII, making valid ASCII text valid UTF-8-encoded Unicode as well.

Unicode is a character set. UTF-8 is an encoding.

UTF-8 is defined to encode code points (any of the numerical values that make up the code space that contains a symbol) in one to four bytes.

UTF-8 uses one byte to represent code points from 0-127. These first 128 Unicode code points correspond one-to-one with ASCII character mappings, so ASCII characters are also valid UTF-8 characters.

The first UTF-8 byte signals how many bytes will follow it. Then the code point bits are “distributed” over the following bytes.

For example:
character: é
Unicode: U+00E9

Calculate Decimal:
0 * 16^3 + 0 * 16^2 + E(14) * 16^1 + 9 * 16^0 =
0 + 0 + 224 + 9 = 233
So the decimal for é is 233

Decimal to Binary:

233/2 = 116 R1 [1]
116/2 = 58 R0 [0 1]
58/2 = 29 R0 [0 0 1]
29/2 = 14 R1 [1 0 0 1]
14/2 = 7 R0 [0 1 0 0 1]
7/2 = 3 R1 [1 0 1 0 0 1]
3/2 = 1 R1 [1 1 0 1 0 0 1]
1/2 = 0 R1 [1 1 1 0 1 0 0 1]

So the binary for character é is [1 1 1 0 1 0 0 1]

Decimal 233 is not part of the ASCII character set (it is above 127). One byte could hold the eight payload bits, but not the UTF-8 header bits as well, so UTF-8 represents this number using two bytes.

The first byte begins with 110: 110XXXXX.
The two leading 1s indicate that this is a two-byte sequence.
The 0 indicates that the code point bits will follow.

The second byte begins with 10 to signal that it is a continuation in a UTF-8 sequence: 10XXXXXX.

Hence we have 110XXXXX 10XXXXXX in order to represent [1 1 1 0 1 0 0 1]

This leaves 11 slots for the bits. Hence we replace the X with our bits, starting from the right.

110XXX[11] 10[101001]

We have 3 X’s left. UTF-8 pads the leading bits with 0s to fill out the remaining spaces.

UTF-8 representation is: 11000011 10101001
Then we split the result into 4-bit groups, get each group’s value, and convert each value into a hex digit.

1100 = 1 * 2^3 + 1 * 2^2 + 0 * 2^1 + 0 * 2^0 = 12 0xC
0011 = 0 * 2^3 + 0 * 2^2 + 1 * 2^1 + 1 * 2^0 = 3 0x3
1010 = 1 * 2^3 + 0 * 2^2 + 1 * 2^1 + 0 * 2^0 = 10 0xA
1001 = 1 * 2^3 + 0 * 2^2 + 0 * 2^1 + 1 * 2^0 = 9 0x9

Hence, the UTF-8 code units are C3 A9. This is the UTF-8 encoding of U+00E9, the character é.
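We can verify this hand calculation with Swift’s utf8 view (my addition; \u{00E9} is the precomposed é code point from the example above):

```swift
// The UTF-8 bytes of U+00E9 should be C3 A9, as derived above.
let eAcute = "\u{00E9}"
let utf8Bytes = Array(eAcute.utf8)  // [0xC3, 0xA9]
```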

Let’s convert Unicode character 八 into different representations

The code point is given as: U+516B

code point to decimal
5 * 16^3 + 1 * 16^2 + 6 * 16^1 + B(11) * 16^0 =
5 * 4096 + 256 + 96 + 11 = 20843

The decimal representation is 20843

decimal to binary

20843/2 = 10421 R1 [1]
10421/2 = 5210 R1 [1 1]
5210/2 = 2605 R0 [0 1 1]
2605/2 = 1302 R1 [1 0 1 1]
1302/2 = 651 R0 [0 1 0 1 1]
651/2 = 325 R1 [1 0 1 0 1 1]
325/2 = 162 R1 [1 1 0 1 0 1 1]
162/2 = 81 R0 [0 1 1 0 1 0 1 1]
81/2 = 40 R1 [1 0 1 1 0 1 0 1 1]
40/2 = 20 R0 [0 1 0 1 1 0 1 0 1 1]
20/2 = 10 R0 [0 0 1 0 1 1 0 1 0 1 1]
10/2 = 5 R0 [0 0 0 1 0 1 1 0 1 0 1 1]
5/2 = 2 R1 [1 0 0 0 1 0 1 1 0 1 0 1 1]
2/2 = 1 R0 [0 1 0 0 0 1 0 1 1 0 1 0 1 1]
1/2 = 0 R1 [1 0 1 0 0 0 1 0 1 1 0 1 0 1 1]

Hence the binary representation is 1 0 1 0 0 0 1 0 1 1 0 1 0 1 1.
These are the code point bits that UTF-8 will distribute across its bytes.

We know it is 15 bits, so the payload alone would fit in 2 bytes. Once we include the UTF-8 header bits, we need a total of 3 bytes.

The first byte begins with 1110.
The three leading 1s indicate that it is a three-byte sequence.
The 0 indicates that the code point bits will follow.

The second byte begins with 10 to signal that it is a continuation in a UTF-8 sequence.

The third byte begins with 10 to signal that it is a continuation in a UTF-8 sequence.

Therefore, the 3-byte UTF-8 pattern looks like this:
1110XXXX 10XXXXXX 10XXXXXX

We then fill in our binary representation starting from the right
1 0 1 0 0 0 1 0 1 1 0 1 0 1 1
[1110]X101 [10]000101 [10]101011

Padding bits: We now have one spot left over, so we’ll just fill it with a 0.
11100101 10000101 10101011

Split this into 4 bit groups:
1110 0101 1000 0101 1010 1011

Each group represents one hexadecimal digit.

Calculate the decimal value of each binary group:
1110: 14 0xE
0101: 5 0x5
1000: 8 0x8
0101: 5 0x5
1010: 10 0xA
1011: 11 0xB

The UTF-8 representation is: E5 85 AB
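Again, Swift can confirm the three-byte result (my addition; \u{516B} is the 八 code point from the example):

```swift
// U+516B (八) is 20843 in decimal, and its UTF-8 bytes are E5 85 AB.
let codePoint = 0x516B          // 20843
let han = "\u{516B}"            // 八
let hanUTF8 = Array(han.utf8)   // [0xE5, 0x85, 0xAB]
```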

?? in Swift

https://stackoverflow.com/questions/30837085/whats-the-purpose-of-double-question-mark-in-swift
https://stackoverflow.com/questions/30772063/operator-in-swift
http://kayley.name/2016/02/swifts-nil-coalescing-operator-aka-the-operator-or-double-question-mark-operator.html
http://laurencaponong.com/double-question-marks-in-swift/

It is called the nil coalescing operator. If highs is not nil, then it is unwrapped and its value is returned. If it is nil, then “” is returned. It is a way to give a default value when an optional is nil.

Technically, it can be understood as:
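The original expansion is not shown here, so below is a sketch of the equivalence, using the highs variable mentioned above (the temperature example is my own):

```swift
// The nil-coalescing operator, and the longhand ternary it is shorthand for.
let highs: String? = nil
let short = highs ?? ""                   // "" because highs is nil
let long = (highs != nil) ? highs! : ""   // equivalent ternary form

let temperature: Int? = 72
let shown = temperature ?? 0              // 72: the unwrapped value wins
```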

More Examples

Setters and getters in Swift

https://nilliu.github.io/2016/08/16/swift3-getter-n-setter/

Testing it
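A minimal sketch of a computed property with a getter and setter (Circle is a hypothetical example type of my own, not from the linked post):

```swift
// A computed property: `diameter` stores nothing itself; it derives its
// value from `radius` and writes back through it.
struct Circle {
    var radius: Double = 1.0

    var diameter: Double {
        get { return radius * 2 }       // getter: derived from the stored property
        set { radius = newValue / 2 }   // setter: `newValue` is the implicit parameter
    }
}

var circle = Circle()
circle.diameter = 10                    // the setter runs, so radius becomes 5.0
```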
