Category Archives: Node JS

Chat Room using Node

ref – https://socket.io/get-started/chat

setup

Assuming you have node, npm, nodemon/gulp installed.

First, make a directory called DesktopApp. Then start the process of creating a node project.


npm init

Enter your information for the package.json file.

Install Express:

npm install --save express@4.15.2

The server can push messages to clients. Whenever a client writes a chat message, the idea is that the server gets it and pushes it to all other connected clients.
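
The linked tutorial builds this with socket.io. As a minimal, hedged sketch of that broadcast idea (assuming a socket.io 2.x-style setup on top of Express):

const app = require('express')();
const http = require('http').createServer(app);
const io = require('socket.io')(http);

io.on('connection', (socket) => {
  // when any client sends a chat message...
  socket.on('chat message', (msg) => {
    // ...push it back out to every connected client
    io.emit('chat message', msg);
  });
});

http.listen(3000, () => console.log('listening on *:3000'));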

implementing the feedTheKraken

Get the project running

Download the source code onto your computer
download v1.0

cd into the directory, and install all packages
npm install

Then start the server:
npm start
You should see the server starting up.

In the directory, there is an index.html file. Double click it.

You’ll see the web page.

Go ahead and start using it. Click on the browse button and select the images you want to kraken.

Then click the “Feed it” button. This sends all your images to the server.
The server will then save these images into an ‘unprocessed’ folder that is unique to your browser.

Once the images are in that folder, it sends the images to Kraken.io to be processed. You will see the images being processed in your terminal.

Once processed, Kraken.io will return URLs that contain the finished images. The server takes those URLs and downloads the finished images into a ‘processed’ folder.

Then it zips the processed folder and logs the finishing steps.

Starting the project from scratch

set up a basic project with gulp:

ref – http://chineseruleof8.com/code/index.php/2015/06/30/starting-gulp/

You should now have a functioning server going, with nodemon as the workflow.

install body parser

npm install express body-parser --save

edit your app.js

We implement the server to be a bit more detailed and standard.
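
The original app.js listing isn't reproduced here; a minimal sketch of what it covers (body-parser wired in, a port of 3000 assumed) might look like this:

const express = require('express');
const bodyParser = require('body-parser');

const app = express();

// parse JSON and url-encoded request bodies
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

// simple sanity-check route
app.get('/', (req, res) => {
  res.json({ result: 'good' });
});

app.listen(3000, () => console.log('Server listening on port 3000'));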

Creating the index.html

in your project directory, touch index.html

First, we have a file input control that takes in multiple files.
Second, we have a button right underneath it. This button executes a JS function, which passes the file information to a URL that hits our server.

First, let's see the skeleton. Notice we have included Bootstrap CSS, so that we have some ready-made CSS to use.
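
A hedged sketch of such a skeleton (the element ids, the function name, and the Bootstrap CDN link are assumptions):

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <!-- ready-made Bootstrap CSS (CDN link is an assumption) -->
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
</head>
<body>
  <div class="container">
    <!-- file input that accepts multiple images -->
    <input type="file" id="fileControl" multiple>
    <!-- button that kicks off the upload function -->
    <button class="btn btn-primary" onclick="feedTheKraken()">Feed it</button>
  </div>
  <script>
    function feedTheKraken() {
      // upload logic goes here (see the axios section below)
    }
  </script>
</body>
</html>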

Fingerprint2

Make sure you include this in your script area

Multiple browsers will be hitting our server. Hence, each browser's unprocessed and processed images are kept in folders dedicated to that browser only. There will be many folders, each matched to a browser via a unique string id. As the browsers make their requests, their images are kept in their own folders; that is how we know which images belong to which browser. Fingerprint2 basically distinguishes browsers by returning a unique id for each browser.
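
As a hedged sketch of getting that id in the page (assuming the older fingerprintjs2 1.x API, with the library already included via a script tag):

let browserId = null;

// compute a unique-ish id for this browser
new Fingerprint2().get(function (result, components) {
  // 'result' is a hash string that identifies this browser
  browserId = result;
  console.log('browser id:', browserId);
});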

Using axios to send files to node server

Make sure you include this in your script area

Keep in mind that, in order to distinguish ourselves from the other browsers, we include the browser id as a URL parameter. That way, when the server saves our images into a folder, it keeps track of them via our browser id.

1) First, we get the array of images from the “file” control.
2) Then we create a FormData object and append the files to it.
3) We include the browser id as a URL parameter, and pass the FormData in the request body.
4) We receive the response from the server (sketched below).
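
Here is a minimal sketch of those four steps (the /upload path, the fileControl id, the photos field name, and the browserId variable are assumptions):

function feedTheKraken() {
  // 1) grab the selected files from the file input
  const files = document.getElementById('fileControl').files;

  // 2) collect them into a FormData object
  const formData = new FormData();
  for (let i = 0; i < files.length; i++) {
    formData.append('photos', files[i]);
  }

  // 3) include the browser id as a URL parameter and POST the form data
  axios.post(`/upload?id=${browserId}`, formData, {
    headers: { 'Content-Type': 'multipart/form-data' }
  })
  // 4) handle the server's response
  .then((response) => console.log(response.data))   // e.g. { result: 'good' }
  .catch((err) => console.error(err));
}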

Don't forget to implement upload for POST requests on the server. The point is that we have different browsers uploading images. We keep track of each browser's images by creating an “unprocessed-${browser id}” folder, which holds all uploaded images from that browser that have not yet been processed by Kraken.

You should then be able to see the response back to index.html with result: “good”.

Installing Multer

In your directory, install multer:

npm i multer

Create a function called processImagesFromClientPromise and implement it like so.

make sure you implement createFolderForBrowser because as the images come in, you’ll need a place to store them.
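
The original implementation isn't shown here; a hedged sketch of processImagesFromClientPromise with multer's diskStorage (the folder naming and the 'photos' field name are assumptions) could be:

const multer = require('multer');
const fs = require('fs');
const path = require('path');

// create the browser-specific folder if it does not exist yet
function createFolderForBrowser(browserId) {
  const dir = path.join(__dirname, `unprocessed-${browserId}`);
  if (!fs.existsSync(dir)) {
    fs.mkdirSync(dir);
  }
  return dir;
}

function processImagesFromClientPromise(req, res) {
  return new Promise((resolve, reject) => {
    const dir = createFolderForBrowser(req.query.id);

    // store incoming files on disk, keeping their original names
    const storage = multer.diskStorage({
      destination: (req, file, cb) => cb(null, dir),
      filename: (req, file, cb) => cb(null, file.originalname)
    });

    const upload = multer({ storage }).array('photos');

    upload(req, res, (err) => {
      if (err) { return reject(err); }
      resolve(dir); // hand the folder path to the next step
    });
  });
}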

Zipping a folder

After krakening all the images, we place them in the “processed” folder.
In the future, we may want to send all these images via email, or as a link so the user can download them.
It's best if we can zip all these images together. We use archiver to do this.

First, we install archiver:


npm install archiver --save

This is how we implement it. Later, when we refactor, we will want to wrap it inside a Promise.
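
The snippet itself isn't reproduced here; a hedged sketch of zipping the processed folder with archiver (the paths and zip name are assumptions) looks like this:

const fs = require('fs');
const archiver = require('archiver');

function saveAsZip(processedDir, zipPath) {
  // stream the archive straight into a .zip file on disk
  const output = fs.createWriteStream(zipPath);
  const archive = archiver('zip', { zlib: { level: 9 } });

  output.on('close', () => {
    console.log(`${archive.pointer()} total bytes written to ${zipPath}`);
  });
  archive.on('error', (err) => { throw err; });

  archive.pipe(output);
  // add everything in the processed folder to the zip
  archive.directory(processedDir, false);
  archive.finalize();
}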

Downloading the Krakened image from provided url

something like this:

https://dl.kraken.io/api/83/b0/37/75905b74440a18e6fe315cae5e/IMG_1261.jpg

We provide the link string via uri.
We give it a filename such as “toDownload”.
Then we provide a callback for when it is done.

used like so:
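
Since the helper isn't reproduced here, this is a minimal sketch using Node's built-in https and fs modules (the function name is an assumption):

const fs = require('fs');
const https = require('https');

// download the file at 'uri' into 'filename', then invoke the callback
function downloadImage(uri, filename, callback) {
  const file = fs.createWriteStream(filename);
  https.get(uri, (response) => {
    response.pipe(file);
    file.on('finish', () => file.close(callback));
  }).on('error', (err) => {
    fs.unlink(filename, () => callback(err));
  });
}

// used like so:
downloadImage(
  'https://dl.kraken.io/api/83/b0/37/75905b74440a18e6fe315cae5e/IMG_1261.jpg',
  'toDownload.jpg',
  () => console.log('done downloading')
);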

Function setup

However, the problem is that all of this deteriorates into the Pyramid of Doom. Each task does something asynchronous, and we wait until it is done. When it completes, inside of the callback, we call the next task.

Hence our tasks list is something like this:

  1. processImagesFromClient
  2. readyKrakenPromises
  3. runAllKrakenPromises
  4. saveAsZip

Some of the functionality runs inside a callback, and some runs at the end of a function, so we end up with a complicated chain that no one wants to follow.
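
Purely as an illustration (the stub functions below only simulate the real asynchronous work), the callback version ends up shaped like this:

// stubs standing in for the real tasks
function processImagesFromClient(cb) { setTimeout(() => cb('unprocessed-folder'), 10); }
function readyKrakenPromises(folder, cb) { setTimeout(() => cb(['p1', 'p2']), 10); }
function runAllKrakenPromises(promises, cb) { setTimeout(() => cb('processed-folder'), 10); }
function saveAsZip(folder, cb) { setTimeout(() => cb('images.zip'), 10); }

// each task finishes inside its callback, so the next task must be called there too
processImagesFromClient((folder) => {
  readyKrakenPromises(folder, (promises) => {
    runAllKrakenPromises(promises, (processedFolder) => {
      saveAsZip(processedFolder, (zipPath) => {
        console.log('all done:', zipPath);
      });
    });
  });
});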

Hence, let’s use Promises to fix it.

Promises version

ref – http://chineseruleof8.com/code/index.php/2017/10/03/promise-js/

…with promises, it looks much prettier:

full source promises version

Basically, we group code inside a new Promise and return that promise. Whatever value we want to hand off gets passed into resolve, and calling resolve indicates that we move on to the next Promise.

Also, whatever parameter gets passed into resolve will appear as the parameter of the following .then. You may then pass that parameter on to the next function.
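
In other words, the four tasks chain together roughly like this (stub bodies again, just to show the shape):

// each task now returns a Promise instead of taking a callback
const processImagesFromClient = () => Promise.resolve('unprocessed-folder');
const readyKrakenPromises = (folder) => Promise.resolve(['p1', 'p2']);
const runAllKrakenPromises = (promises) => Promise.resolve('processed-folder');
const saveAsZip = (folder) => Promise.resolve('images.zip');

// whatever is passed to resolve() shows up as the parameter of the next .then
processImagesFromClient()
  .then((folder) => readyKrakenPromises(folder))
  .then((promises) => runAllKrakenPromises(promises))
  .then((processedFolder) => saveAsZip(processedFolder))
  .then((zipPath) => console.log('all done:', zipPath))
  .catch((err) => console.error(err));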

Encapsulation

However, make sure we encapsulate the functionality. We don't want outside code to be able to call functions such as readyKrakenPromises, runAllKrakenPromises, and saveAsZip.

So we make these functions private, then create a public function that runs the Promise chain, like so:

used like so:
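
The original isn't reproduced here; a hedged sketch of the module and its usage (the file name, the exported function name, and the stub bodies are assumptions):

// krakenTasks.js - only the public entry point is exported
// (stub bodies; the real versions do the multer/kraken/archiver work)
const processImagesFromClient = (req, res) => Promise.resolve('unprocessed-folder');
const readyKrakenPromises = (folder) => Promise.resolve(['p1', 'p2']);
const runAllKrakenPromises = (promises) => Promise.resolve('processed-folder');
const saveAsZip = (folder) => Promise.resolve('images.zip');

// the one public function that runs the whole Promise chain
function feedTheKraken(req, res) {
  return processImagesFromClient(req, res)
    .then(readyKrakenPromises)
    .then(runAllKrakenPromises)
    .then(saveAsZip);
}

module.exports = { feedTheKraken };

// used like so, e.g. in app.js:
//   const { feedTheKraken } = require('./krakenTasks');
//   app.post('/upload', (req, res) => {
//     feedTheKraken(req, res).then(() => res.json({ result: 'good' }));
//   });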

app.js full source

index.html full source

localhost Mongo set up and usage for Slideshow-Backend

https://github.com/redmacdev1988/SlideShow-Backend

Running your backend locally

1) open terminal and type “mongod” in order to start your mongo db.
2) pull down the project SlideShow-Backend onto your desktop
3) open a new terminal and cd into the SlideShow-Backend directory
4) then type “npm start”
5) you will see the server starting up:

[nodemon] 1.14.11
[nodemon] to restart at any time, enter rs
[nodemon] watching: *.*
[nodemon] starting node index.js
Serving up files in ‘views’ folder
Serving up files in ‘Images’ folder
index.js –  created model loading here
index.js –  mongoose instance connection url connection
index.js –  routes speicfied
 pictorial list RESTful API server on: 8080

6) Open a browser and go to: http://localhost:8080/

You should see the list of image strings appear as an array in the browser. These image names correspond to images included in the project directory under SlideShow-Backend/Images/your-image.jpg

for example:

SlideShow-Backend/Images/railay-ocean.jpg
SlideShow-Backend/Images/tiantian-buddha.jpg

In your local environment, set up mongo. Then use a terminal and set up your mongo db with database “PictorialDB”.

> show dbs
BadmintonSubscriberDatabase 0.000GB
EventsDB 0.000GB
PictorialDB 0.000GB
admin 0.000GB
local 0.000GB

Start up your mongo database

EPCNSZXW0324:~ $ mongod

2018-03-30T16:25:09.506+0800 I CONTROL [initandlisten] MongoDB starting : pid=17616 port=27017 dbpath=/data/db 64-bit host=EPCNSZXW0324.princeton.epam.com
2018-03-30T16:25:09.507+0800 I CONTROL [initandlisten] db version v3.4.10
2018-03-30T16:25:09.507+0800 I CONTROL [initandlisten] git version: 078f28920cb24de0dd479b5ea6c66c644f6326e9

sign into your mongo db

Last login: Fri Mar 30 16:23:35 on ttys002
EPCNSZXW0324:~ rickytsao$ mongo
MongoDB shell version v3.4.10
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.10

create pictorial db

> use PictorialDB
switched to db PictorialDB

Inserting data through code

First, create a Schema for our pictorial. Then add it to module.exports for global usage.

PictorialModel.js
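
The original file isn't reproduced here; a hedged sketch, based on the fields visible in the db.pictorials.find() output further down, could be:

// PictorialModel.js - sketch of the pictorial schema
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const PictorialSchema = new Schema({
  name: { type: String, required: true },
  fileName: { type: String, required: true },
  description: { type: String },
  Created_date: { type: Date, default: Date.now }
});

// added to module.exports for global usage;
// mongoose maps the 'Pictorials' model to the 'pictorials' collection
module.exports = mongoose.model('Pictorials', PictorialSchema);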

Once we make this model, whenever we insert into the database, we'll see the document data there.

Then in your SlideShow-Backend app, implement create_a_pictorial where:
1) you get the data from the request body
2) you insert the data into a new Pictorial
3) finally, you call “save” on the model instance

pictorialController.js
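
The controller isn't reproduced here; a hedged sketch of create_a_pictorial (it assumes the model was registered as 'Pictorials' in PictorialModel.js) might be:

// pictorialController.js - sketch of create_a_pictorial
const mongoose = require('mongoose');
const Pictorial = mongoose.model('Pictorials');

exports.create_a_pictorial = function (req, res) {
  // 1) the data comes in on the request body
  // 2) insert it into a new Pictorial
  const newPictorial = new Pictorial(req.body);

  // 3) call save on the model instance
  newPictorial.save(function (err, pictorial) {
    if (err) { return res.send(err); }
    res.json(pictorial);
  });
};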

Then, check in your mongo db to verify:


> db.pictorials.find()

{ "_id" : ObjectId("59fc081ff6cf8a0970256c38"), "name" : "railay-ocean", "fileName" : "railay-ocean.jpg", "description" : "I long to back here to swim", "Created_date" : ISODate("2017-11-03T06:09:35.515Z"), "__v" : 0 }
{ "_id" : ObjectId("59fc0ca4f6cf8a0970256c39"), "name" : "tiantian-buddha", "fileName" : "tiantian-buddha.jpg", "description" : "It was a gray cloudy day, but was fun!", "Created_date" : ISODate("2017-11-03T06:28:52.385Z"), "__v" : 0 }

Check Images

1) Make sure you copy images into the Images folder at “SlideShow-Backend/Images”, e.g. railay-ocean.jpg, tiantian-buddha.jpg, etc.

2) Now you can access images at http://localhost:8080/ImageName. This is where the app will go in order to retrieve images.

Using Mocha and Chai for unit tests in Node JS projects

ref – https://ubuverse.com/introduction-to-node-js-api-unit-testing-with-mocha-and-chai/
https://scotch.io/tutorials/test-a-node-restful-api-with-mocha-and-chai

https://gist.github.com/yoavniran/1e3b0162e1545055429e#mocha
https://gist.github.com/yoavniran/1e3b0162e1545055429e#chai

Creating the Directory

mkdir mocha-chai-demo
cd mocha-chai-demo

Setting up the package.json

in directory /mocha-chai-demo:

npm init

use the defaults for all the options.
You will then have a package.json file created, which looks something like this:

package name: (mocha-chai-demo) tdd-for-node
version: (1.0.0)
description: test driven dev for node
entry point: (index.js)
test command:
git repository:
keywords: tdd chai mocha node js
author: ricky
license: (ISC)
About to write to /Users/xxxxxxxxx/Desktop/mocha-chai-demo/package.json:

{
  "name": "tdd-for-node",
  "version": "1.0.0",
  "description": "test driven dev for node",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "tdd",
    "chai",
    "mocha",
    "node",
    "js"
  ],
  "author": "ricky",
  "license": "ISC"
}

Is this ok? (yes) yes
EPCNSZXW0324:mocha-chai-demo rickytsao$ ls
package.json

Open it with text editor by typing:

open -a TextEdit package.json

and you’ll see your typical specs laid out for your node project.

Whenever you run the project, people usually type “node index.js”. But that can get annoying, so we can shortcut it with a start script. Next time, all you have to do to start the server is type: npm start

You should also edit the test script for mocha.

Under scripts, the value for the key "test" should be: "mocha --recursive"
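
The scripts section of package.json would then look something like this (using plain node for the start script is an assumption; you could also point it at nodemon):

"scripts": {
  "start": "node index.js",
  "test": "mocha --recursive"
}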

Implementing the server

in directory /mocha-chai-demo:
touch index.js

Then copy the code below into the index.js
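
The listing isn't reproduced here, but based on the behaviour described below (port 8080 and a /colors route returning a results array), a minimal sketch using Node's built-in http module would be:

// index.js - minimal sketch of the demo server
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/colors') {
    // the /colors endpoint returns a fixed list of colors
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ results: ['RED', 'GREEN', 'BLUE'] }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8080, () => {
  console.log('Demo server listening on port 8080');
});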

That file will run as our server.

Then run node index.js to see the server in action

If you were to run the app:

node index.js

You’ll see the confirmation messages.

Open up a browser, type: http://localhost:8080/colors
you'll get {"results":["RED","GREEN","BLUE"]}

Creating the tests

The recommended way to organize your tests within your project is to put all of them in their own /test directory.
By default, Mocha checks for unit tests using the globs ./test/*.js and ./test/*.coffee. From there, it will load and execute any file that calls the describe() method.

In mocha-chai-demo:

mkdir test
cd test

you should now be in /mocha-chai-demo/test
touch test.js

then copy the below test code
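
The test code isn't reproduced here; a hedged sketch of four simple tests against the /colors endpoint (it assumes mocha and chai are installed as dev dependencies and that the server from index.js is running on port 8080) could be:

// test/test.js
const http = require('http');
const expect = require('chai').expect;

// helper that fetches /colors and hands back the status code and parsed body
function getColors(callback) {
  http.get('http://localhost:8080/colors', (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => callback(res.statusCode, JSON.parse(body)));
  });
}

describe('GET /colors', function () {
  it('responds with status 200', function (done) {
    getColors((status) => { expect(status).to.equal(200); done(); });
  });

  it('returns a results array', function (done) {
    getColors((status, data) => { expect(data.results).to.be.an('array'); done(); });
  });

  it('returns exactly three colors', function (done) {
    getColors((status, data) => { expect(data.results).to.have.lengthOf(3); done(); });
  });

  it('includes RED in the results', function (done) {
    getColors((status, data) => { expect(data.results).to.include('RED'); done(); });
  });
});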

Save the file.

Then in the test directory, just type:

npm test

You’ll see that we have 4 passed tests.

Details

ref – http://chaijs.com/api/bdd/

Expect

The BDD style is exposed through expect or should interfaces. In both scenarios, you chain together natural language assertions.

The should style allows for the same chainable assertions as the expect interface, however it extends each object with a should property to start your chain. This style has some issues when used with Internet Explorer, so be aware of browser compatibility.

Differences between expect and should

First of all, notice that the expect require is just a reference to the expect function, whereas with the should require, the function is being executed.

The expect interface provides a function as a starting point for chaining your language assertions. It works on node.js and in all browsers.

The should interface extends Object.prototype to provide a single getter as the starting point for your language assertions. It works on node.js and in all modern browsers except Internet Explorer.
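
For reference, the two requires look like this:

// expect: the require is just a reference to a function you call in each assertion
const expect = require('chai').expect;
expect([1, 2, 3]).to.have.lengthOf(3);

// should: the require is executed, which extends Object.prototype with a getter
const should = require('chai').should();
[1, 2, 3].should.have.lengthOf(3);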

about Json Web Tokens (JWT)

todo –
https://www.freecodecamp.org/news/securing-node-js-restful-apis-with-json-web-tokens-9f811a92bb52/

https://medium.com/swlh/a-practical-guide-for-jwt-authentication-using-nodejs-and-express-d48369e7e6d4

https://scotch.io/tutorials/the-anatomy-of-a-json-web-token

The reason JWTs are used is to prove that the sent data was actually created by an authentic source.

JSON Web Tokens work across different programming languages: JWTs work in .NET, Python, Node.js, Java, PHP, Ruby, Go, JavaScript, and Haskell. So you can see that these can be used in many different scenarios.

JWTs are self-contained: they carry all the necessary information within themselves. This means that a JWT can transmit basic information about itself, a payload (usually user information), and a signature.

JWTs can be passed around easily: since JWTs are self-contained, they are well suited to being sent in an HTTP header when authenticating an API. You can also pass them through the URL.

What is a JWT

A JWT has 3 parts separated by a dot (.), and each section is created differently. The 3 parts are:

  • header
  • payload
  • signature

header

The header carries 2 parts:

  • declaring the type, which is JWT
  • the hashing algorithm to use (HMAC SHA256 in this case)

Now once this is base64-encoded, we have the first part of our JSON Web Token!

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9

{"alg":"HS256","typ":"JWT"} is treated as a string, which encodes to eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9

If you were to chop off part of the string, for example {"alg":"HS256", (keeping the trailing comma), it would encode to eyJhbGciOiJIUzI1NiIs.
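
You can verify both encodings with Node's built-in Buffer (this is plain base64 encoding, not encryption):

const header = JSON.stringify({ alg: 'HS256', typ: 'JWT' });
console.log(Buffer.from(header).toString('base64'));
// -> eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9

console.log(Buffer.from('{"alg":"HS256",').toString('base64'));
// -> eyJhbGciOiJIUzI1NiIs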

What is base64encode?

http://stackoverflow.com/questions/201479/what-is-base-64-encoding-used-for

Encoding transforms data into another format using a scheme that is publicly available so that it can easily be reversed. Hence, it is for maintaining data usability and uses schemes that are publicly available.

When you have some binary data that you want to ship across a network, you generally don’t do it by just streaming the bits and bytes over the wire in a raw format.

Why?

– because some media are made for streaming text.
– some protocols may interpret your binary data as control characters (like a modem)
– your binary data could be screwed up because the underlying protocol might think that you’ve entered a special character combination

So to get around this, people encode the binary data into characters. Base64 is one of these types of encodings. Why 64? Because you can generally rely on the same 64 characters being present in many character sets, and you can be reasonably confident that your data’s going to end up on the other side of the wire uncorrupted.

How does base64 work?

https://en.wikipedia.org/wiki/Base64

Another example:

A quote from Thomas Hobbes’ Leviathan (be aware of spaces between lines):

Man is distinguished, not only by his reason, but by this singular passion from
other animals, which is a lust of the mind, that by a perseverance of delight
in the continued and indefatigable generation of knowledge, exceeds the short
vehemence of any carnal pleasure.

is represented as a byte sequence of 8-bit-padded ASCII characters encoded in MIME’s Base64 scheme as follows:

TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ1dCBieSB0aGlz
IHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3aGljaCBpcyBhIGx1c3Qgb2Yg
dGhlIG1pbmQsIHRoYXQgYnkgYSBwZXJzZXZlcmFuY2Ugb2YgZGVsaWdodCBpbiB0aGUgY29udGlu
dWVkIGFuZCBpbmRlZmF0aWdhYmxlIGdlbmVyYXRpb24gb2Yga25vd2xlZGdlLCBleGNlZWRzIHRo
ZSBzaG9ydCB2ZWhlbWVuY2Ugb2YgYW55IGNhcm5hbCBwbGVhc3VyZS4=

In the above quote, let’s take the example of the word ‘Man’.

How to encode ‘Man’ into ‘TWFu’

General idea:

Encoded in ASCII, the characters M, a, and n are stored as the bytes 77, 97, and 110, respectively.
They are also the 8-bit binary values 01001101, 01100001, and 01101110.

These three values are joined together into a 24-bit string, producing 010011010110000101101110.

Groups of 6 bits (6 bits can represent at most 2^6 = 64 different values) are converted into individual numbers from left to right (in this case, the 24-bit string yields four numbers), which are then converted into their corresponding Base64 character values.

In detail:

Say our text content is “Man”

Originally, the 3 values in bits are:

01001101 01100001 01101110

We split this into four 6-bit groups, each of which is an index integer:

010011 010110 000101 101110
where
010011 –> 19
010110 –> 22
000101 –> 5
101110 –> 46

Putting these integers through the Base64 index table gives us:
TWFu
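
The same result falls out of Node's Buffer:

// the ASCII bytes 77, 97, 110 ('Man') base64-encode to 'TWFu'
console.log(Buffer.from('Man').toString('base64'));     // -> TWFu
console.log(Buffer.from('TWFu', 'base64').toString());  // -> Man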

Payload

The payload will carry the bulk of our JWT, also called the JWT Claims. This is where we will put the information that we want to transmit and other information about our token.

There are multiple claims that we can provide. This includes registered claim names, public claim names, and private claim names.

REGISTERED CLAIMS

Claims that are not mandatory, but whose names are reserved for us.

These include:

  • iss: The issuer of the token
  • sub: The subject of the token
  • aud: The audience of the token
  • exp: This will probably be the registered claim most often used. This will define the expiration in NumericDate value. The expiration MUST be after the current date/time.
  • nbf: Defines the time before which the JWT MUST NOT be accepted for processing
  • iat: The time the JWT was issued. Can be used to determine the age of the JWT
  • jti: Unique identifier for the JWT. Can be used to prevent the JWT from being replayed. This is helpful for a one time use token.

PUBLIC CLAIMS

These are the claims that we create ourselves like user name, information, and other important information.

PRIVATE CLAIMS

A producer and consumer may agree to use claim names that are private. These are subject to collision, so use them with caution.

For example, a payload like this:

{"iss":"scotch.io","exp":1300819380,"name":"Chris Sevilleja","admin":true}

This will base64-encode to:

eyJpc3MiOiJzY290Y2guaW8iLCJleHAiOjEzMDA4MTkzODAsIm5hbWUiOiJDaHJpcyBTZXZpbGxlamEiLCJhZG1pbiI6dHJ1ZX0

Signature

The third and final part of our JSON Web Token is going to be the signature. This signature is made up of a hash of the following components:

  • the header
  • the payload
  • secret

This is how we get the third part of the JWT:
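
Conceptually it is HMACSHA256(base64(header) + "." + base64(payload), secret). A hedged sketch with Node's crypto module (the secret value here is an assumption, so the output will differ from the hash shown below):

const crypto = require('crypto');

const encodedHeader = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9';
const encodedPayload = 'eyJpc3MiOiJzY290Y2guaW8iLCJleHAiOjEzMDA4MTkzODAsIm5hbWUiOiJDaHJpcyBTZXZpbGxlamEiLCJhZG1pbiI6dHJ1ZX0';

const signature = crypto
  .createHmac('sha256', 'some-secret-held-by-the-server')
  .update(encodedHeader + '.' + encodedPayload)
  .digest('hex');

console.log(signature);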

The secret is held by the server. This is how our server will be able to verify existing tokens and sign new ones.

This gives us the final part of our JWT:
03f329983b86f7d9a9f5fef85305880101d5e302afafa20154d094b229f75773

Thus, our JWT is now:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzY290Y2guaW8iLCJleHAiOjEzMDA4MTkzODAsIm5hbWUiOiJDaHJpcyBTZXZpbGxlamEiLCJhZG1pbiI6dHJ1ZX0.03f329983b86f7d9a9f5fef85305880101d5e302afafa20154d094b229f75773

The JSON Web Token standard can be used across multiple languages and is quickly and easily interchangeable.

You can use the token in a URL, POST parameter, or an HTTP header. The versatility of the JSON Web Token lets us authenticate an API quickly and easily by passing information through the token.