CA23301 - Full Stack Web Development, Unit III



Paavai Engineering College Department of MCA

CA23301 FULL STACK WEB DEVELOPMENT

UNIT-III

ADVANCED NODE JS AND DATABASE



Introduction to NoSQL databases – MongoDB system overview - Basic querying with MongoDB shell –
Request body parsing in Express – NodeJS MongoDB connection – Adding and retrieving data to
MongoDB from NodeJS – Handling SQL databases from NodeJS – Handling Cookies in NodeJS –
Handling User Authentication with NodeJS


Introduction to NOSQL databases

What is NoSQL?

A NoSQL database is a non-relational data management system that does not require
a fixed schema, avoids joins, and is easy to scale. The major purpose of using a
NoSQL database is distributed data stores with very large data storage needs.
NoSQL is used for big data and real-time web applications. For example, companies like
Twitter, Facebook and Google collect terabytes of user data every single day.

NoSQL stands for "Not Only SQL" or "Not SQL." Though a better term
would be "NoREL," NoSQL caught on. Carlo Strozzi introduced the NoSQL concept in
1998.

A traditional RDBMS uses SQL syntax to store and retrieve data for further insights.
A NoSQL database system, by contrast, encompasses a wide range of database
technologies that can store structured, semi-structured, unstructured and polymorphic
data.

Why NoSQL?

The concept of NoSQL databases became popular with Internet giants such as Google,
Facebook and Amazon, which deal with huge volumes of data. System response
time becomes slow when you use an RDBMS for massive volumes of data.

To resolve this problem, we could "scale up" our systems by upgrading the existing
hardware, but this process is expensive.

The alternative is to distribute the database load over multiple hosts whenever
the load increases. This method is known as "scaling out."


A NoSQL database is non-relational, so it scales out better than relational databases, as it is designed with web applications in mind.

Features of NoSQL

Non-relational

 NoSQL databases never follow the relational model
 They never provide tables with flat fixed-column records
 They work with self-contained aggregates or BLOBs
 They don't require object-relational mapping or data normalization
 They omit complex features such as query languages, query planners, referential-integrity joins and ACID

Schema-free

 NoSQL databases are either schema-free or have relaxed schemas
 They do not require any definition of the schema of the data
 They allow heterogeneous structures of data in the same domain

NoSQL is Schema-Free

Simple API

 Offers easy-to-use interfaces for storing and querying the data provided
 APIs allow low-level data manipulation and selection methods
 Text-based protocols, mostly HTTP REST with JSON
 Mostly no standards-based NoSQL query language
 Web-enabled databases running as internet-facing services

Distributed

 Multiple NoSQL databases can be executed in a distributed fashion
 Offer auto-scaling and fail-over capabilities
 The ACID guarantees can often be sacrificed for scalability and throughput


 Mostly no synchronous replication between distributed nodes; instead asynchronous multi-master replication, peer-to-peer, or HDFS-style replication
 Only providing eventual consistency
 Shared-nothing architecture, which enables less coordination and higher distribution

NoSQL is Shared Nothing.

Types of NoSQL Databases

NoSQL databases are mainly categorized into four types: key-value pair,
column-oriented, graph-based and document-oriented. Every category has its unique
attributes and limitations; none of them is the best solution for all
problems. Users should select the database based on their product needs.

Types of NoSQL Databases:

 Key-value pair based
 Column-oriented
 Graph-based
 Document-oriented

Key Value Pair Based

Data is stored in key/value pairs. This model is designed to handle large amounts of data
and heavy load.


Key-value pair storage databases store data as a hash table where each key is unique,
and the value can be a JSON, BLOB(Binary Large Objects), string, etc.

For example, a key-value pair may contain a key like “Website” associated with a
value like “Guru99”.

It is one of the most basic NoSQL database examples. This kind of NoSQL database is
used for collections, dictionaries, associative arrays, etc. Key-value stores help the
developer to store schema-less data. They work best for content such as shopping carts.

Redis, Dynamo and Riak are some examples of key-value store databases. They
are all based on Amazon's Dynamo paper.
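The access pattern of a key-value store can be sketched with a plain JavaScript Map. This is only an illustration of the get/put/delete interface; it is not the API of Redis, Dynamo or Riak, and the keys and values here ("Website", the cart entry) are purely illustrative:

```javascript
// A toy key-value store illustrating the access pattern:
// every read and write goes through a single unique key.
const store = new Map();

// put: associate an opaque value (string, JSON, blob, ...) with a key
store.set("Website", "Guru99");
store.set("cart:user42", JSON.stringify([{ sku: "A1", qty: 2 }]));

// get: retrieval is by exact key only; there is no query language
const site = store.get("Website");
const cart = JSON.parse(store.get("cart:user42"));

// delete: remove the key and its value
store.delete("Website");
```

The point of the sketch is that the store never interprets the value; querying "all carts with qty > 1" would require scanning every key, which is why key-value stores fit lookup-heavy workloads like shopping carts.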

Column-based

Column-oriented databases work on columns and are based on Google's BigTable
paper. Every column is treated separately, and the values of a single column are
stored contiguously.

Column based NoSQL database


They deliver high performance on aggregation queries like SUM, COUNT, AVG,
MIN etc. as the data is readily available in a column.

Column-based NoSQL databases are widely used for data warehouses,
business intelligence, CRM and library card catalogs.

HBase, Cassandra and Hypertable are examples of column-based databases.

Document-Oriented:

A document-oriented NoSQL DB stores and retrieves data as a key-value pair, but the
value part is stored as a document, in JSON or XML format. The value is understood
by the DB and can be queried.

Relational Vs. Document

A relational database stores data in rows and columns, while a document database
stores data with a structure similar to JSON. For the relational database you have to
know in advance which columns you have; for a document database you store data as
JSON-like objects without having to define the structure first, which makes it flexible.

The document type is mostly used for CMS systems, blogging platforms, real-time
analytics and e-commerce applications. It should not be used for complex transactions
that require multiple operations, or for queries against varying aggregate structures.

Amazon SimpleDB, CouchDB, MongoDB, Riak and Lotus Notes are popular
document-oriented DBMS systems.

Graph-Based

A graph-type database stores entities as well as the relations among those entities. The
entity is stored as a node and the relationship as an edge. An edge gives a relationship
between nodes, and every node and edge has a unique identifier.


Compared to a relational database, where tables are loosely connected, a graph
database is multi-relational in nature. Traversing relationships is fast because they are
already captured in the DB, and there is no need to calculate them.

Graph databases are mostly used for social networks, logistics and spatial data.

Neo4j, Infinite Graph, OrientDB and FlockDB are some popular graph-based databases.
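The node/edge model can be illustrated with a hand-rolled adjacency structure in JavaScript. This is a toy sketch of the concept, not the API of Neo4j or any real graph database, and the people and FOLLOWS relationship are invented for the example:

```javascript
// Toy graph: nodes keyed by id, edges stored as (from, label, to) triples.
const nodes = new Map();   // id -> properties
const edges = [];          // { from, label, to }

nodes.set("alice", { name: "Alice" });
nodes.set("bob",   { name: "Bob" });
nodes.set("carol", { name: "Carol" });
edges.push({ from: "alice", label: "FOLLOWS", to: "bob" });
edges.push({ from: "bob",   label: "FOLLOWS", to: "carol" });

// Traversal follows stored links; nothing resembling a join is computed.
function followees(id) {
  return edges
    .filter(e => e.from === id && e.label === "FOLLOWS")
    .map(e => nodes.get(e.to).name);
}
```

Because the relationships are stored explicitly, a query like "whom does Alice follow?" is a direct lookup over edges rather than a join across tables.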

What is the CAP Theorem?

The CAP theorem is also called Brewer's theorem. It states that it is impossible for a
distributed data store to offer more than two of the following three guarantees at once:

1. Consistency
2. Availability
3. Partition Tolerance

Consistency:

The data should remain consistent even after the execution of an operation. This
means once data is written, any future read request should contain that data. For
example, after updating the order status, all the clients should be able to see the same
data.

Availability:

The database should always be available and responsive. It should not have any
downtime.


Partition Tolerance:

Partition tolerance means that the system should continue to function even if the
communication among the servers is not stable. For example, the servers can be
partitioned into multiple groups which may not communicate with each other. If
part of the database is then unavailable, the other parts remain unaffected.

Eventual Consistency

The term "eventual consistency" means keeping copies of data on multiple machines
to get high availability and scalability. Changes made to any data item on one
machine then have to be propagated to the other replicas.

Data replication may not be instantaneous: some copies are updated immediately,
while others are updated in due course of time. The copies may be mutually
inconsistent for a while, but in due course they converge; hence the name eventual
consistency.

BASE: Basically Available, Soft state, Eventual consistency

 Basically available means the DB is available all the time, as per the CAP theorem
 Soft state means that even without input, the system state may change
 Eventual consistency means that the system will become consistent over time

Advantages of NoSQL

 Can be used as a primary or analytic data source
 Big data capability
 No single point of failure
 Easy replication
 No need for a separate caching layer
 Fast performance and horizontal scalability
 Can handle structured, semi-structured and unstructured data with equal effect


 Object-oriented programming, which is easy to use and flexible
 NoSQL databases don't need a dedicated high-performance server
 Support for key developer languages and platforms
 Simpler to implement than an RDBMS
 Can serve as the primary data source for online applications
 Handles big data, managing data velocity, variety, volume and complexity
 Excels at distributed-database and multi-data-center operations
 Eliminates the need for a specific caching layer to store data
 Offers a flexible schema design that can easily be altered without downtime or service disruption

Disadvantages of NoSQL

 No standardization rules
 Limited query capabilities
 RDBMS databases and tools are comparatively mature
 Does not offer traditional database capabilities, such as consistency when multiple transactions are performed simultaneously
 When the volume of data increases, it becomes difficult to maintain unique keys
 Doesn't work as well with relational data
 The learning curve is steep for new developers
 Open-source options are not yet as popular with enterprises

MongoDB system overview

What is MongoDB?

MongoDB is a document-oriented NoSQL database used for high-volume data
storage. Instead of using tables and rows as in traditional relational databases,
MongoDB makes use of collections and documents. Documents consist of key-value
pairs, which are the basic unit of data in MongoDB. Collections contain sets of
documents and function as the equivalent of relational database tables.
MongoDB came into light around the mid-2000s.

MongoDB Features

1. Each database contains collections, which in turn contain documents. Each
   document can be different, with a varying number of fields; the size and
   content of each document can differ as well.
2. The document structure is more in line with how developers construct their
   classes and objects in their respective programming languages. Developers
   will often say that their classes are not rows and columns but have a clear
   structure with key-value pairs.
3. The rows (or documents, as they are called in MongoDB) don't need to have a
   schema defined beforehand. Instead, the fields can be created on the fly.


4. The data model available within MongoDB allows you to represent
   hierarchical relationships, arrays and other more complex structures more
   easily.
5. Scalability – MongoDB environments are very scalable. Companies across
   the world have defined clusters, some of them running 100+ nodes with
   millions of documents in the database.

MongoDB Example

The example below shows how a document can be modeled in MongoDB.

1. The _id field is added by MongoDB to uniquely identify the document in the
   collection.
2. Note that the order data (OrderID, Product and Quantity), which in an
   RDBMS would normally be stored in a separate table, is in MongoDB stored
   as an embedded document in the collection itself. This is one of the key
   differences in how data is modeled in MongoDB.
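As a sketch of that difference, the customer and its order data can live in one self-contained document instead of two joined tables. The field names follow the example above; the specific Product and Quantity values are invented for illustration:

```javascript
// One self-contained document: the order data is embedded, not joined in.
// In an RDBMS, orders would sit in a separate table keyed by CustomerID.
const customer = {
  _id: "563479cc8a8a4246bd27d784",   // MongoDB would store an ObjectId here
  CustomerID: 11,
  CustomerName: "Guru99",
  orders: [
    { OrderID: 111, Product: "Book", Quantity: 2 }   // embedded document
  ]
};

// Reading the embedded data needs no join, only property access:
const firstProduct = customer.orders[0].Product;
```

Embedding keeps everything an application reads together in one place, which is exactly the modeling choice the paragraph above describes.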

Key Components of MongoDB Architecture

Below are a few of the common terms used in MongoDB

1. _id – This field is required in every MongoDB document and represents a
   unique value, like the document's primary key. If you create a new document
   without an _id field, MongoDB will create the field automatically. For
   example, for the customer table above, MongoDB adds a 24-hex-character
   (12-byte) unique identifier to each document in the collection.

_id                        CustomerID   CustomerName   OrderID
563479cc8a8a4246bd27d784   11           Guru99         111
563479cc7a8a4246bd47d784   22           Trevor Smith   222
563479cc9a8a4246bd57d784   33           Nicole         333

2. Collection – A grouping of MongoDB documents. A collection is the
   equivalent of a table created in any other RDBMS such as Oracle or MS SQL.
   A collection exists within a single database. As seen in the introduction,
   collections don't enforce any sort of structure.
3. Cursor – A pointer to the result set of a query. Clients can iterate through a
   cursor to retrieve results.
4. Database – A container for collections, just as an RDBMS is a container for
   tables. Each database gets its own set of files on the file system. A MongoDB
   server can store multiple databases.
5. Document – A record in a MongoDB collection is called a document. The
   document, in turn, consists of field names and values.
6. Field – A name-value pair in a document. A document has zero or more
   fields. Fields are analogous to columns in relational databases. For example,
   CustomerID and 11 form one of the key-value pairs defined in a document.
7. JSON – JavaScript Object Notation: a human-readable, plain-text format for
   expressing structured data. JSON is currently supported in many
   programming languages.

A quick note on the key difference between the _id field and a normal field: the _id
field uniquely identifies the documents in a collection and is added automatically by
MongoDB when a document is created without one.

Why Use MongoDB?

Below are a few of the reasons why one should start using MongoDB.

1. Document-oriented – Since MongoDB is a NoSQL-type database, instead of
   having data in a relational format, it stores the data in documents. This
   makes MongoDB very flexible and adaptable to real business-world
   situations and requirements.
2. Ad hoc queries – MongoDB supports searching by field, range queries, and
regular expression searches. Queries can be made to return specific fields
within documents.
3. Indexing – Indexes can be created to improve the performance of searches
within MongoDB. Any field in a MongoDB document can be indexed.
4. Replication – MongoDB can provide high availability with replica sets. A
   replica set consists of two or more MongoDB instances. Each replica-set
   member may act in the role of the primary or secondary replica at any time.
   The primary replica is the main server, which interacts with the client and
   performs all the read/write operations. The secondary replicas maintain a
   copy of the data of the primary using built-in replication. When a primary replica

   fails, the replica set automatically switches over to a secondary, which then
   becomes the primary server.
5. Load balancing – MongoDB uses the concept of sharding to scale horizontally
by splitting data across multiple MongoDB instances. MongoDB can run over
multiple servers, balancing the load and/or duplicating data to keep the system
up and running in case of hardware failure.

Data Modelling in MongoDB

As we have seen from the Introduction section, the data in MongoDB has a flexible
schema. Unlike in SQL databases, where you must have a table’s schema declared
before inserting data, MongoDB’s collections do not enforce document structure. This
sort of flexibility is what makes MongoDB so powerful.

When modeling data in Mongo, keep the following things in mind

1. What are the needs of the application? – Look at the business needs of the
   application and see what data, and what types of data, the application needs.
   Based on this, ensure that the structure of the document is decided
   accordingly.
2. What are data retrieval patterns – If you foresee a heavy query usage then
consider the use of indexes in your data model to improve the efficiency of
queries.
3. Are frequent inserts, updates and removals happening in the database?
Reconsider the use of indexes or incorporate sharding if required in your data
modeling design to improve the efficiency of your overall MongoDB
environment.

Difference between MongoDB & RDBMS

Below are some of the key term differences between MongoDB and RDBMS

RDBMS    MongoDB              Difference
Table    Collection           In an RDBMS, the table contains the columns and rows used to store the data; in MongoDB, this same structure is known as a collection. The collection contains documents, which in turn contain fields, which in turn are key-value pairs.
Row      Document             In an RDBMS, the row represents a single, implicitly structured data item in a table. In MongoDB, the data is stored in documents.
Column   Field                In an RDBMS, the column denotes a set of data values. These in MongoDB are known as fields.
Joins    Embedded documents   In an RDBMS, data is sometimes spread across various tables, and to show a complete view of all data, a join is sometimes formed across tables. In MongoDB, the data is normally stored in a single collection but separated by using embedded documents, so there is no concept of joins in MongoDB.


Apart from the terms differences, a few other differences are shown below

1. Relational databases are known for enforcing data integrity. This is not an
explicit requirement in MongoDB.
2. An RDBMS requires that data be normalized first to prevent orphan records
   and duplicates. Normalizing data then requires more tables, which results in
   more table joins, thus requiring more keys and indexes. As databases start to
   grow, performance can become an issue. Again, this is not an explicit
   requirement in MongoDB: MongoDB is flexible and does not need the data
   to be normalized first.

Basic querying with MongoDB shell

The MongoDB shell is a great tool for navigating, inspecting, and even manipulating
document data. If you're running MongoDB on your local machine, firing up the shell
is as simple as typing mongo and hitting enter, which connects to MongoDB at
localhost on the standard port (27017). If you're connecting to a MongoDB Atlas
cluster or another remote instance, add the connection string after the mongo
command.

{_id: ObjectId("5effaa5662679b5af2c58829"),
 email: "[email protected]",
 name: {given: "Jesse", family: "Xiao"},
 age: 31,
 addresses: [{label: "home",
              street: "101 Elm Street",
              city: "Springfield",
              state: "CA",
              zip: "90000",
              country: "US"},
             {label: "mom",
              street: "555 Main Street",
              city: "Jonestown",
              province: "Ontario",
              country: "CA"}]}

Here are a few quick shell examples:

List Databases

> show dbs;
admin        0.000GB
config       0.000GB
local        0.000GB
my_database  0.004GB
>


List Collections

> use my_database;
> show collections;
users
posts
>

Count Documents in a Collection

> use my_database;
> db.users.count()
20234
>

Find the First Document in a Collection

> db.users.findOne()
{
  "_id": ObjectId("5ce45d7606444f199acfba1e"),
  "name": {given: "Alex", family: "Smith"},
  "email": "[email protected]",
  "age": 27
}
>

Find a Document by ID

> db.users.findOne({_id: ObjectId("5ce45d7606444f199acfba1e")})


{
"_id": ObjectId("5ce45d7606444f199acfba1e"),
"name": {given: "Alex", family: "Smith"},
"email": "[email protected]",
"age": 27
}
>

Querying MongoDB Collections

The MongoDB Query Language (MQL) uses the same syntax as documents, making
it intuitive and easy to use for even advanced querying. Let’s look at a few MongoDB
query examples.

Find a Limited Number of Results

> db.users.find().limit(10)
…>

Find Users by Family name

> db.users.find({"name.family": "Smith"}).count()
1
>

Note that we enclose “name.family” in quotes, because it has a dot in the middle.


Query Documents by Numeric Ranges

// All posts having a "likes" field with numeric value greater than one:
> db.post.find({likes: {$gt: 1}})
// All posts having 0 likes:
> db.post.find({likes: 0})
// All posts that do NOT have exactly 1 like:
> db.post.find({likes: {$ne: 1}})

Sort Results by a Field

// order by age, in ascending order (smallest values first)
> db.user.find().sort({age: 1})
{
  "_id": ObjectId("5ce45d7606444f199acfba1e"),
  "name": {given: "Alex", family: "Smith"},
  "email": "[email protected]",
  "age": 27
}
{
  "_id": ObjectId("5effaa5662679b5af2c58829"),
  "email": "[email protected]",
  "name": {given: "Jesse", family: "Xiao"},
  "age": 31
}
>
// order by age, in descending order (largest values first)
> db.user.find().sort({age: -1})
{
  "_id": ObjectId("5effaa5662679b5af2c58829"),
  "email": "[email protected]",
  "name": {given: "Jesse", family: "Xiao"},
  "age": 31
}
{
  "_id": ObjectId("5ce45d7606444f199acfba1e"),
  "name": {given: "Alex", family: "Smith"},
  "email": "[email protected]",
  "age": 27
}
>

Managing Indexes

MongoDB allows you to create indexes, even on nested fields in subdocuments, to
keep queries performing well even as collections grow very large.

Create an Index

> db.user.createIndex({"name.family": 1})

Create a Unique Index

> db.user.createIndex({email: 1}, {unique: true})


Unique indexes allow you to ensure that there is at most one record in the collection
with a given value for that field – very useful with things like email addresses!

See Indexes on a Collection

> db.user.getIndexes()
[
{
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "my_database.user"
},
{
"v" : 2,
"key" : {
"name.family" : 1
},
"name" : "name.family_1",
"ns" : "my_database.user"
}]

Note that by default, collections always have an index on the _id field, for easy
document retrieval by primary key, so any additional indexes will be listed after that.

Drop an Index

> db.user.dropIndex("name.family_1")

Inventory collection

[
{ "item": "journal", "qty": 25, "size": { "h": 14, "w": 21, "uom": "cm" },
"status": "A" },
{ "item": "notebook", "qty": 50, "size": { "h": 8.5, "w": 11, "uom": "in" },
"status": "A" },
{ "item": "paper", "qty": 100, "size": { "h": 8.5, "w": 11, "uom": "in" },
"status": "D" },
{ "item": "planner", "qty": 75, "size": { "h": 22.85, "w": 30, "uom":
"cm" }, "status": "D" },
{ "item": "postcard", "qty": 45, "size": { "h": 10, "w": 15.25, "uom":
"cm" }, "status": "A" }
]


Select All Documents in a Collection

db.inventory.find()

Specify Equality Condition

db.inventory.find({ status: "D" })

Specify Conditions Using Query Operators

db.inventory.find({ status: { $in: [ "A", "D" ] } })

Specify AND Conditions

db.inventory.find({ status: "A", qty: { $lt: 30 } })

Specify OR Conditions

db.inventory.find({ $or: [ { status: "A" }, { qty: { $lt: 30 } } ] })

Specify AND as well as OR Conditions

db.inventory.find({ status: "A", $or: [ { qty: { $lt: 30 } }, { item: /^p/ } ] })
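These filters are themselves ordinary documents. To make the operator semantics concrete, here is a deliberately simplified matcher in JavaScript supporting only the operators used above ($in, $lt, $gt, $ne, $or, regex, implicit equality and implicit AND); the real server supports far more operators and handles types differently:

```javascript
// Minimal filter matcher for the operators used in the examples above.
function matches(doc, filter) {
  return Object.entries(filter).every(([key, cond]) => {
    if (key === "$or") return cond.some(f => matches(doc, f));  // any branch
    const value = doc[key];
    if (cond instanceof RegExp) return cond.test(value);
    if (cond !== null && typeof cond === "object") {
      return Object.entries(cond).every(([op, arg]) => {
        if (op === "$in") return arg.includes(value);
        if (op === "$lt") return value < arg;
        if (op === "$gt") return value > arg;
        if (op === "$ne") return value !== arg;
        return false;                       // unsupported operator
      });
    }
    return value === cond;                  // implicit equality
  });
}

const inventory = [
  { item: "journal",  qty: 25,  status: "A" },
  { item: "paper",    qty: 100, status: "D" },
  { item: "postcard", qty: 45,  status: "A" }
];

// status "A" AND (qty < 30 OR item starts with "p"):
const hits = inventory.filter(d =>
  matches(d, { status: "A", $or: [{ qty: { $lt: 30 } }, { item: /^p/ }] })
);
```

Running the combined AND/OR filter over this sample keeps journal (qty below 30) and postcard (item starts with "p"), mirroring the last shell query above.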

Request body parsing in Express

Parse incoming request bodies in a middleware before your handlers, available under
the req.body property.

Note: as req.body's shape is based on user-controlled input, all properties and values
in this object are untrusted and should be validated before use. For example,
req.body.foo.toString() may fail in multiple ways: the foo property may not be there,
it may not be a string, and toString may not be a function but instead a string or other
user input.
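A defensive handler therefore checks the shape of the body before using it. A minimal sketch, reusing the hypothetical foo field from the note above:

```javascript
// Validate req.body before trusting it: the body itself may be any type,
// and so may each property, because both come from the client.
function readFoo(body) {
  if (body === null || typeof body !== "object") {
    throw new TypeError("body must be a JSON object");
  }
  const { foo } = body;
  if (typeof foo !== "string") {
    throw new TypeError("body.foo must be a string");
  }
  return foo;                 // now safe to call string methods on
}
```

A route handler would call readFoo(req.body) inside a try/catch and respond with a 400 when the check throws, instead of letting an unexpected shape crash the handler.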

The body-parser module provides the following parsers:

 JSON body parser


 Raw body parser
 Text body parser
 URL-encoded form body parser

Installation

$ npm install body-parser

API

var bodyParser = require('body-parser')


The bodyParser object exposes various factories to create middleware. All
middleware will populate the req.body property with the parsed body when the
Content-Type request header matches the type option, or with an empty object ({}) if
there was no body to parse, the Content-Type was not matched, or an error occurred.

bodyParser.json([options])

Returns middleware that only parses json and only looks at requests where the
Content-Type header matches the type option. This parser accepts any Unicode
encoding of the body and supports automatic inflation of gzip and deflate encodings.

A new body object containing the parsed data is populated on the request object after
the middleware (i.e. req.body).

Options

The json function takes an optional options object that may contain any of the
following keys:

inflate

When set to true, then deflated (compressed) bodies will be inflated; when false,
deflated bodies are rejected. Defaults to true.

limit

Controls the maximum request body size. If this is a number, then the value specifies
the number of bytes; if it is a string, the value is passed to the bytes library for parsing.
Defaults to '100kb'.

reviver

The reviver option is passed directly to JSON.parse as the second argument. You can
find more information on this argument in the MDN documentation about
JSON.parse.

strict

When set to true, will only accept arrays and objects; when false will accept anything
JSON.parse accepts. Defaults to true.

type

The type option is used to determine what media type the middleware will parse. This
option can be a string, array of strings, or a function. If not a function, type option is
passed directly to the type-is library and this can be an extension name (like json), a
mime type (like application/json), or a mime type with a wildcard (like */* or */json).
If a function, the type option is called as fn(req) and the request is parsed if it returns a
truthy value. Defaults to application/json.


verify

The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is
a Buffer of the raw request body and encoding is the encoding of the request. The
parsing can be aborted by throwing an error.

bodyParser.raw([options])

Returns middleware that parses all bodies as a Buffer and only looks at requests
where the Content-Type header matches the type option. This parser supports
automatic inflation of gzip and deflate encodings.

A new body object containing the parsed data is populated on the request object after
the middleware (i.e. req.body). This will be a Buffer object of the body.

Options

The raw function takes an optional options object that may contain any of the
following keys:

inflate

When set to true, then deflated (compressed) bodies will be inflated; when false,
deflated bodies are rejected. Defaults to true.

limit

Controls the maximum request body size. If this is a number, then the value specifies
the number of bytes; if it is a string, the value is passed to the bytes library for parsing.
Defaults to '100kb'.

type

The type option is used to determine what media type the middleware will parse. This
option can be a string, array of strings, or a function. If not a function, type option is
passed directly to the type-is library and this can be an extension name (like bin), a
mime type (like application/octet-stream), or a mime type with a wildcard (like */* or
application/*). If a function, the type option is called as fn(req) and the request is
parsed if it returns a truthy value. Defaults to application/octet-stream.

verify

The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is
a Buffer of the raw request body and encoding is the encoding of the request. The
parsing can be aborted by throwing an error.


bodyParser.text([options])

Returns middleware that parses all bodies as a string and only looks at requests where
the Content-Type header matches the type option. This parser supports automatic
inflation of gzip and deflate encodings.

A new body string containing the parsed data is populated on the request object after
the middleware (i.e. req.body). This will be a string of the body.

Options

The text function takes an optional options object that may contain any of the
following keys:

defaultCharset

Specify the default character set for the text content if the charset is not specified in
the Content-Type header of the request. Defaults to utf-8.

inflate

When set to true, then deflated (compressed) bodies will be inflated; when false,
deflated bodies are rejected. Defaults to true.

limit

Controls the maximum request body size. If this is a number, then the value specifies
the number of bytes; if it is a string, the value is passed to the bytes library for parsing.
Defaults to '100kb'.

type

The type option is used to determine what media type the middleware will parse. This
option can be a string, array of strings, or a function. If not a function, type option is
passed directly to the type-is library and this can be an extension name (like txt), a
mime type (like text/plain), or a mime type with a wildcard (like */* or text/*). If a
function, the type option is called as fn(req) and the request is parsed if it returns a
truthy value. Defaults to text/plain.

verify

The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is
a Buffer of the raw request body and encoding is the encoding of the request. The
parsing can be aborted by throwing an error.

bodyParser.urlencoded([options])

Returns middleware that only parses urlencoded bodies and only looks at requests
where the Content-Type header matches the type option. This parser accepts only
UTF-8 encoding of the body and supports automatic inflation of gzip and deflate
encodings.

A new body object containing the parsed data is populated on the request object after
the middleware (i.e. req.body). This object will contain key-value pairs, where the
value can be a string or array (when extended is false), or any type (when extended is
true).

Options

The urlencoded function takes an optional options object that may contain any of the
following keys:

extended

The extended option allows you to choose between parsing the URL-encoded data
with the querystring library (when false) or the qs library (when true). The "extended"
syntax allows rich objects and arrays to be encoded into the URL-encoded format,
allowing for a JSON-like experience with URL-encoding. For more information,
please see the qs library.

Defaults to true, but using the default has been deprecated. Please research the
difference between qs and querystring and choose the appropriate setting.

inflate

When set to true, then deflated (compressed) bodies will be inflated; when false,
deflated bodies are rejected. Defaults to true.

limit

Controls the maximum request body size. If this is a number, then the value specifies
the number of bytes; if it is a string, the value is passed to the bytes library for parsing.
Defaults to '100kb'.

parameterLimit

The parameterLimit option controls the maximum number of parameters that are
allowed in the URL-encoded data. If a request contains more parameters than this
value, a 413 will be returned to the client. Defaults to 1000.

type

The type option is used to determine what media type the middleware will parse. This
option can be a string, array of strings, or a function. If not a function, type option is
passed directly to the type-is library and this can be an extension name (like
urlencoded), a mime type (like application/x-www-form-urlencoded), or a mime type
with a wildcard (like */x-www-form-urlencoded). If a function, the type option is
called as fn(req) and the request is parsed if it returns a truthy value. Defaults to
application/x-www-form-urlencoded.


verify

The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is
a Buffer of the raw request body and encoding is the encoding of the request. The
parsing can be aborted by throwing an error.

Errors

The middlewares provided by this module create errors using the http-errors module.
The errors will typically have a status/statusCode property that contains the suggested
HTTP response code, an expose property to determine if the message property should
be displayed to the client, a type property to determine the type of error without
matching against the message, and a body property containing the read body, if
available.

The following are the common errors created, though any error can come through for
various reasons.

content encoding unsupported

This error will occur when the request had a Content-Encoding header that contained
an encoding but the “inflation” option was set to false. The status property is set to
415, the type property is set to 'encoding.unsupported', and the charset property will
be set to the encoding that is unsupported.

entity parse failed

This error will occur when the request contained an entity that could not be parsed by
the middleware. The status property is set to 400, the type property is set to
'entity.parse.failed', and the body property is set to the entity value that failed parsing.

entity verify failed

This error will occur when the request contained an entity that failed verification
by the defined verify option. The status property is set to 403, the type
property is set to 'entity.verify.failed', and the body property is set to the entity value
that failed verification.

request aborted

This error will occur when the request is aborted by the client before reading the body
has finished. The received property will be set to the number of bytes received before
the request was aborted and the expected property is set to the number of expected
bytes. The status property is set to 400 and type property is set to 'request.aborted'.

request entity too large

This error will occur when the request body’s size is larger than the “limit” option.
The limit property will be set to the byte limit and the length property will be set to
the request body’s length. The status property is set to 413 and the type property is set
to 'entity.too.large'.

request size did not match content length

This error will occur when the request’s length did not match the length from the
Content-Length header. This typically occurs when the request is malformed, usually
because the Content-Length header was calculated based on characters instead
of bytes. The status property is set to 400 and the type property is set to
'request.size.invalid'.

stream encoding should not be set

This error will occur when something called the req.setEncoding method prior to this
middleware. This module operates directly on bytes only and you cannot call
req.setEncoding when using this module. The status property is set to 500 and the
type property is set to 'stream.encoding.set'.

stream is not readable

This error will occur when the request is no longer readable when this middleware
attempts to read it. This typically means something other than a middleware from this
module read the request body already and the middleware was also configured to read
the same request. The status property is set to 500 and the type property is set to
'stream.not.readable'.

too many parameters

This error will occur when the content of the request exceeds the configured
parameterLimit for the urlencoded parser. The status property is set to 413 and the
type property is set to 'parameters.too.many'.

unsupported charset “BOGUS”

This error will occur when the request had a charset parameter in the Content-Type
header, but the iconv-lite module does not support it OR the parser does not support it.
The charset is contained in the message as well as in the charset property. The status
property is set to 415, the type property is set to 'charset.unsupported', and the charset
property is set to the charset that is unsupported.

unsupported content encoding “bogus”

This error will occur when the request had a Content-Encoding header that contained
an unsupported encoding. The encoding is contained in the message as well as in the
encoding property. The status property is set to 415, the type property is set to
'encoding.unsupported', and the encoding property is set to the encoding that is
unsupported.


Examples

Express/Connect top-level generic

This example demonstrates adding a generic JSON and URL-encoded parser as a


top-level middleware, which will parse the bodies of all incoming requests. This is the
simplest setup.

var express = require('express')
var bodyParser = require('body-parser')

var app = express()

// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }))

// parse application/json
app.use(bodyParser.json())

app.use(function (req, res) {
  res.setHeader('Content-Type', 'text/plain')
  res.write('you posted:\n')
  res.end(JSON.stringify(req.body, null, 2))
})

Express route-specific

This example demonstrates adding body parsers specifically to the routes that need
them. In general, this is the most recommended way to use body-parser with Express.

var express = require('express')
var bodyParser = require('body-parser')

var app = express()

// create application/json parser
var jsonParser = bodyParser.json()

// create application/x-www-form-urlencoded parser
var urlencodedParser = bodyParser.urlencoded({ extended: false })

// POST /login gets urlencoded bodies
app.post('/login', urlencodedParser, function (req, res) {
  res.send('welcome, ' + req.body.username)
})

// POST /api/users gets JSON bodies
app.post('/api/users', jsonParser, function (req, res) {
  // create user in req.body
})

Change accepted type for parsers

All the parsers accept a type option which allows you to change the Content-Type that
the middleware will parse.

var express = require('express')
var bodyParser = require('body-parser')

var app = express()

// parse various different custom JSON types as JSON
app.use(bodyParser.json({ type: 'application/*+json' }))

// parse some custom thing into a Buffer
app.use(bodyParser.raw({ type: 'application/vnd.custom-type' }))

// parse an HTML body into a string
app.use(bodyParser.text({ type: 'text/html' }))


NodeJS MongoDB connection - Adding and retrieving data to MongoDB from


NodeJS

Almost all modern web applications have some sort of data storage system at the
backend. For example, if you take the case of a web shopping application, data such
as the price of an item would be stored in the database.

Node.js can work with both relational databases (such as Oracle
and MS SQL Server) and non-relational databases (such as MongoDB).

Node.js and NoSQL Databases

Over the years, NoSQL databases such as MongoDB have become quite popular for
storing data. The ability of these databases to store content of any type, and in
practically any format, is what makes them so popular.

Node.js has the ability to work with both MySQL and MongoDB as databases. In
order to use either of these databases, you need to download and use the required
modules using the Node package manager.

For MySQL, the required module is called “mysql”; for MongoDB, you can install
either the official “mongodb” driver (used in the examples below) or the
higher-level “mongoose” ODM.

With these modules, you can perform the following operations in Node.js

1. Manage the connection pooling – Here is where you can specify the number of
MySQL database connections that should be maintained and saved by Node.js.
2. Create and close a connection to a database. In either case, you can provide a
callback function which can be called whenever the “create” and “close”
connection methods are executed.
3. Queries can be executed to retrieve data from the respective databases.
4. Data manipulation, such as inserting data, deleting, and updating data can also
be achieved with these modules.

Using MongoDB and Node.js

Let’s assume that we have the below MongoDB data in place.

Database name: EmployeeDB

Collection name: Employee

Documents
{ "Employeeid" : 1, "EmployeeName" : "Guru99" }
{ "Employeeid" : 2, "EmployeeName" : "Joe" }
{ "Employeeid" : 3, "EmployeeName" : "Martin" }


1. Installing the NPM Modules

You need a driver to access Mongo from within a Node application. There are a
number of Mongo drivers available, but the official “mongodb” driver is among the
most popular. To install the MongoDB module, run the below command

npm install mongodb

2. Creating and closing a connection to a MongoDB database. The below code
snippet shows how to create and close a connection to a MongoDB database.
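The original snippet did not survive conversion; the following is a minimal sketch consistent with the explanation below, assuming the mongoose module (as step 1 describes) and a MongoDB server running on localhost — not the original code.

```javascript
// Sketch only: create and close a MongoDB connection.
// Assumes `npm install mongoose` and a MongoDB server on localhost.
var mongoose = require('mongoose')

// Connection string: mongodb://<host>/<database>
var url = 'mongodb://localhost/EmployeeDB'
var db = mongoose.connection

mongoose.connect(url, function (err) {
  if (err) throw err
  console.log('Connection established')
  db.close()
})
```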

Code Explanation:

1. The first step is to include the mongoose module, which is done through the
require function. Once this module is in place, we can use the necessary
functions available in this module to create connections to the database.
2. Next, we specify our connection string to the database. In the connect string,
there are 3 key values which are passed.

 The first is ‘mongodb’ which specifies that we are connecting to a mongoDB


database.
 The next is ‘localhost’ which means we are connecting to a database on the
local machine.
 The next is ‘EmployeeDB’ which is the name of the database defined in our
MongoDB database.

3. The next step is to actually connect to our database. The connect function
takes in our URL and has the facility to specify a callback function. It will be
called when the connection is opened to the database. This gives us the
opportunity to know if the database connection was successful or not.
4. In the function, we are writing the string “Connection established” to the
console to indicate that a successful connection was created.
5. Finally, we are closing the connection using the db.close statement.

If the above code is executed properly, the string “Connection established” will be
written to the console.
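The three parts of the connect string can also be inspected with Node's built-in URL class, which makes the protocol, host, and database name explicit:

```javascript
// Inspecting the parts of the MongoDB connect string with Node's built-in URL class.
var u = new URL('mongodb://localhost/EmployeeDB')

console.log(u.protocol) // 'mongodb:'    — the database protocol
console.log(u.hostname) // 'localhost'   — the machine the database runs on
console.log(u.pathname) // '/EmployeeDB' — the database name
```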


3. Querying for data in a MongoDB database – Using the MongoDB driver, we can
also fetch data from the MongoDB database. The below section will show how we
can use the driver to fetch all of the documents from our Employee collection in
our EmployeeDB database. This is the collection in our MongoDB database, which
contains all the employee-related documents. Each document has an object id, an
employee name, and an employee id to define the values of the document.

var MongoClient = require('mongodb').MongoClient;


var url = 'mongodb://localhost/EmployeeDB';

MongoClient.connect(url, function(err, db) {

var cursor = db.collection('Employee').find();

cursor.each(function(err, doc) {

console.log(doc);

});
});

Code Explanation:

1. In the first step, we create a cursor which points to the records fetched from
the MongoDB collection. (A cursor is a pointer to the records returned by a
query; it is then used to iterate through those records. Here, the variable
named cursor stores that pointer.) We also specify the collection ‘Employee’
from which to fetch the records. The find() function is used to specify that we
want to retrieve all of the documents from the MongoDB collection.
2. We are now iterating through our cursor and for each document in the cursor
we are going to execute a function.
3. Our function is simply going to print the contents of each document to the
console.

It is also possible to fetch a particular record from a database. This can be done by
specifying the search condition in the find() function. For example, suppose you
just wanted to fetch the record which has the employee name Guru99; that statement
can be written as follows

var cursor = db.collection('Employee').find({EmployeeName: "Guru99"})

If the above code is executed successfully, all of the documents in the collection
will be printed to your console.

From the output:

 You will be able to clearly see that all the documents from the collection are
retrieved. This is possible by using the find() method of the mongoDB
connection (db) and iterating through all of the documents using the cursor.

4. Inserting documents in a collection – Documents can be inserted into a
collection using the insertOne method provided by the MongoDB library. The
below code snippet shows how we can insert a document into a mongoDB
collection.


var MongoClient = require('mongodb').MongoClient;


var url = 'mongodb://localhost/EmployeeDB';

MongoClient.connect(url, function(err, db) {

db.collection('Employee').insertOne({
Employeeid: 4,
EmployeeName: "NewEmployee"
});
});

Code Explanation:

1. Here we are using the insertOne method from the MongoDB library to insert a
document into the Employee collection.
2. We are specifying the document details of what needs to be inserted into the
Employee collection.

If you now check the contents of your MongoDB database, you will find the record
with Employeeid of 4 and EmployeeName of “NewEmployee” inserted into the
Employee collection.

Note: The console will not show any output, because the snippet does not log
anything; the record is simply inserted into the database.

To check that the data has been properly inserted in the database, you need to execute
the following commands in MongoDB

1. use EmployeeDB
2. db.Employee.find({Employeeid : 4})

The first statement ensures that you are connected to the EmployeeDB database. The
second statement searches for the record which has the employee id of 4.


5. Updating documents in a collection – Documents can be updated in a
collection using the updateOne method provided by the MongoDB library.
The below code snippet shows how to update a document in a mongoDB
collection.

var MongoClient = require('mongodb').MongoClient;


var url = 'mongodb://localhost/EmployeeDB';

MongoClient.connect(url, function(err, db) {

db.collection('Employee').updateOne({
"EmployeeName": "NewEmployee"
}, {
$set: {
"EmployeeName": "Mohan"
}
});
});

Code Explanation:

1. Here we are using the “updateOne” method from the MongoDB library, which
is used to update a document in a mongoDB collection.
2. We are specifying the search criteria of which document needs to be updated.
In our case, we want to find the document which has the EmployeeName of
“NewEmployee.”
3. We then want to set the value of the EmployeeName of the document from
“NewEmployee” to “Mohan”.

If you now check the contents of your MongoDB database, you will find the record
with Employeeid of 4 and EmployeeName of “Mohan” updated in the Employee
collection.

To check that the data has been properly updated in the database, you need to execute
the following commands in MongoDB

1. use EmployeeDB
2. db.Employee.find({Employeeid : 4})

The first statement ensures that you are connected to the EmployeeDB database. The
second statement searches for the record which has the employee id of 4.

6. Deleting documents in a collection – Documents can be deleted from a
collection using the “deleteOne” method provided by the MongoDB library.
The below code snippet shows how to delete a document from a mongoDB
collection.

var MongoClient = require('mongodb').MongoClient;


var url = 'mongodb://localhost/EmployeeDB';

MongoClient.connect(url, function(err, db) {

db.collection('Employee').deleteOne(

{
"EmployeeName": "Mohan"
}

);
});

Code Explanation:

1. Here we are using the “deleteOne” method from the MongoDB library, which
is used to delete a document in a mongoDB collection.
2. We are specifying the search criteria of which document needs to be deleted.
In our case, we want to find the document which has the EmployeeName of
“Mohan” and delete this document.


If you now check the contents of your MongoDB database, you will find the record
with Employeeid of 4 and EmployeeName of “Mohan” deleted from the Employee
collection.

To check that the data has been properly deleted from the database, you need to
execute the following commands in MongoDB

1. use EmployeeDB
2. db.Employee.find()

The first statement ensures that you are connected to the EmployeeDB database. The
second statement searches for and displays all of the records in the Employee
collection, so you can see whether the record has been deleted or not.

Handling SQL databases from NodeJS

Node.js can be used in database applications.

One of the most popular databases is MySQL.

Install MySQL Driver

Once you have MySQL up and running on your computer, you can access it by using
Node.js.

To access a MySQL database with Node.js, you need a MySQL driver. This tutorial
will use the "mysql" module, downloaded from NPM.

To download and install the "mysql" module, open the Command Terminal and
execute the following:

C:\Users\Your Name>npm install mysql

Now you have downloaded and installed a mysql database driver.

Node.js can use this module to manipulate the MySQL database:

var mysql = require('mysql');

Create Connection

Start by creating a connection to the database.

Use the username and password from your MySQL database.

demo_db_connection.js


var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword"
});

con.connect(function(err) {
if (err) throw err;
console.log("Connected!");
});

Save the code above in a file called "demo_db_connection.js" and run the file:

Run "demo_db_connection.js"

C:\Users\Your Name>node demo_db_connection.js

Which will give you this result:

Connected!

Now you can start querying the database using SQL statements.

Query a Database

Use SQL statements to read from (or write to) a MySQL database. This is also called
"querying" the database.

The connection object created in the example above has a method for querying the
database:

con.connect(function(err) {
  if (err) throw err;
  console.log("Connected!");
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log("Result: " + result);
  });
});

The query method takes an SQL statement as a parameter and returns the result.
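The mysql module can also substitute ? placeholders with safely escaped values when you pass a values array to con.query(sql, values, callback), which avoids building SQL strings by hand. The function below is a simplified, self-contained illustration of that substitution — it is our own sketch, not part of the mysql API, and the real library handles many more cases:

```javascript
// Illustration only: a simplified stand-in for the mysql module's
// `?` placeholder substitution. Numbers pass through unchanged;
// strings are single-quoted with embedded quotes escaped.
function format (sql, values) {
  var i = 0
  return sql.replace(/\?/g, function () {
    var v = values[i++]
    return typeof v === 'number'
      ? String(v)
      : "'" + String(v).replace(/'/g, "\\'") + "'"
  })
}

var sql = format('SELECT * FROM customers WHERE name = ? AND id = ?', ["O'Brien", 7])
console.log(sql)
// SELECT * FROM customers WHERE name = 'O\'Brien' AND id = 7
```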

Creating a Database

To create a database in MySQL, use the "CREATE DATABASE" statement:


Example

Create a database named "mydb":

var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword"
});

con.connect(function(err) {
if (err) throw err;
console.log("Connected!");
con.query("CREATE DATABASE mydb", function (err, result) {
if (err) throw err;
console.log("Database created");
});
});

Save the code above in a file called "demo_create_db.js" and run the file:

Run "demo_create_db.js"

C:\Users\Your Name>node demo_create_db.js

Which will give you this result:

Connected!
Database created

Creating a Table

To create a table in MySQL, use the "CREATE TABLE" statement.

Make sure you define the name of the database when you create the connection:

Example

Create a table named "customers":

var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword",
database: "mydb"
});

con.connect(function(err) {
if (err) throw err;
console.log("Connected!");
var sql = "CREATE TABLE customers (name VARCHAR(255), address
VARCHAR(255))";
con.query(sql, function (err, result) {
if (err) throw err;
console.log("Table created");
});
});

Save the code above in a file called "demo_create_table.js" and run the file:

Run "demo_create_table.js"

C:\Users\Your Name>node demo_create_table.js

Which will give you this result:

Connected!
Table created

Insert Into Table

To fill a table in MySQL, use the "INSERT INTO" statement.

Example

Insert a record in the "customers" table:

var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword",
database: "mydb"
});

con.connect(function(err) {
if (err) throw err;
console.log("Connected!");
var sql = "INSERT INTO customers (name, address) VALUES ('Company Inc',
'Highway 37')";
con.query(sql, function (err, result) {
if (err) throw err;
console.log("1 record inserted");
  });
});

Save the code above in a file called "demo_db_insert.js", and run the file:

Run "demo_db_insert.js"

C:\Users\Your Name>node demo_db_insert.js

Which will give you this result:

Connected!
1 record inserted

Insert Multiple Records

To insert more than one record, make an array containing the values, and insert a
question mark in the SQL, which will be replaced by the value array:

INSERT INTO customers (name, address) VALUES ?

Example

Fill the "customers" table with data:

var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword",
database: "mydb"
});

con.connect(function(err) {
if (err) throw err;
console.log("Connected!");
var sql = "INSERT INTO customers (name, address) VALUES ?";
var values = [
['John', 'Highway 71'],
['Peter', 'Lowstreet 4'],
['Amy', 'Apple st 652'],
['Hannah', 'Mountain 21'],
['Michael', 'Valley 345'],
['Sandy', 'Ocean blvd 2'],
['Betty', 'Green Grass 1'],
['Richard', 'Sky st 331'],
['Susan', 'One way 98'],
['Vicky', 'Yellow Garden 2'],
['Ben', 'Park Lane 38'],
['William', 'Central st 954'],
['Chuck', 'Main Road 989'],


['Viola', 'Sideway 1633']
];
con.query(sql, [values], function (err, result) {
if (err) throw err;
console.log("Number of records inserted: " + result.affectedRows);
});
});

Save the code above in a file called "demo_db_insert_multiple.js", and run the file:

Run "demo_db_insert_multiple.js"

C:\Users\Your Name>node demo_db_insert_multiple.js

Which will give you this result:

Connected!
Number of records inserted: 14

Selecting From a Table

To select data from a table in MySQL, use the "SELECT" statement.

Example

Select all records from the "customers" table, and display the result object:

var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword",
database: "mydb"
});

con.connect(function(err) {
if (err) throw err;
con.query("SELECT * FROM customers", function (err, result, fields) {
if (err) throw err;
console.log(result);
});
});

SELECT * will return all columns

Save the code above in a file called "demo_db_select.js" and run the file:

Run "demo_db_select.js"


C:\Users\Your Name>node demo_db_select.js

Which will give you this result:

[
{ id: 1, name: 'John', address: 'Highway 71'},
{ id: 2, name: 'Peter', address: 'Lowstreet 4'},
{ id: 3, name: 'Amy', address: 'Apple st 652'},
{ id: 4, name: 'Hannah', address: 'Mountain 21'},
{ id: 5, name: 'Michael', address: 'Valley 345'},
{ id: 6, name: 'Sandy', address: 'Ocean blvd 2'},
{ id: 7, name: 'Betty', address: 'Green Grass 1'},
{ id: 8, name: 'Richard', address: 'Sky st 331'},
{ id: 9, name: 'Susan', address: 'One way 98'},
{ id: 10, name: 'Vicky', address: 'Yellow Garden 2'},
{ id: 11, name: 'Ben', address: 'Park Lane 38'},
{ id: 12, name: 'William', address: 'Central st 954'},
{ id: 13, name: 'Chuck', address: 'Main Road 989'},
{ id: 14, name: 'Viola', address: 'Sideway 1633'}
]

Select With a Filter

When selecting records from a table, you can filter the selection by using the
"WHERE" clause:

Example

Select record(s) with the address "Park Lane 38":

var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword",
database: "mydb"
});

con.connect(function(err) {
if (err) throw err;
con.query("SELECT * FROM customers WHERE address = 'Park Lane 38'",
function (err, result) {
if (err) throw err;
console.log(result);
});
});

Save the code above in a file called "demo_db_where.js" and run the file:


Run "demo_db_where.js"

C:\Users\Your Name>node demo_db_where.js

Which will give you this result:

[
{ id: 11, name: 'Ben', address: 'Park Lane 38'}
]

Wildcard Characters

You can also select the records that start with, include, or end with a given letter or
phrase.

Use the '%' wildcard to represent zero, one or multiple characters:

Example

Select records where the address starts with the letter 'S':

var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword",
database: "mydb"
});

con.connect(function(err) {
if (err) throw err;
con.query("SELECT * FROM customers WHERE address LIKE 'S%'", function
(err, result) {
if (err) throw err;
console.log(result);
});
});

Save the code above in a file called "demo_db_where_s.js" and run the file:

Run "demo_db_where_s.js"

C:\Users\Your Name>node demo_db_where_s.js

Which will give you this result:

[
{ id: 8, name: 'Richard', address: 'Sky st 331'},
{ id: 14, name: 'Viola', address: 'Sideway 1633'}
]
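For intuition, the 'S%' pattern behaves like a starts-with check. The plain JavaScript below replicates the filter over a small in-memory copy of the rows (illustration only — note that MySQL's LIKE is case-insensitive under the default collations, unlike this check):

```javascript
// What WHERE address LIKE 'S%' matches, replicated in plain JavaScript
// over an in-memory copy of a few rows (illustration only).
var rows = [
  { name: 'Richard', address: 'Sky st 331' },
  { name: 'Ben', address: 'Park Lane 38' },
  { name: 'Viola', address: 'Sideway 1633' }
]

var matches = rows.filter(function (r) {
  return r.address.indexOf('S') === 0 // LIKE 'S%'
})

console.log(matches.map(function (r) { return r.name })) // [ 'Richard', 'Viola' ]
```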

Sort the Result

Use the ORDER BY statement to sort the result in ascending or descending order.

The ORDER BY keyword sorts the result ascending by default. To sort the result in
descending order, use the DESC keyword.

Example

Sort the result alphabetically by name:

var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword",
database: "mydb"
});

con.connect(function(err) {
if (err) throw err;
con.query("SELECT * FROM customers ORDER BY name", function (err, result)
{
if (err) throw err;
console.log(result);
});
});

Save the code above in a file called "demo_db_orderby.js" and run the file:

Run "demo_db_orderby.js"

C:\Users\Your Name>node demo_db_orderby.js

Which will give you this result:

[
{ id: 3, name: 'Amy', address: 'Apple st 652'},
{ id: 11, name: 'Ben', address: 'Park Lane 38'},
{ id: 7, name: 'Betty', address: 'Green Grass 1'},
{ id: 13, name: 'Chuck', address: 'Main Road 989'},
{ id: 4, name: 'Hannah', address: 'Mountain 21'},
{ id: 1, name: 'John', address: 'Highway 71'},
{ id: 5, name: 'Michael', address: 'Valley 345'},
{ id: 2, name: 'Peter', address: 'Lowstreet 4'},
{ id: 8, name: 'Richard', address: 'Sky st 331'},
{ id: 6, name: 'Sandy', address: 'Ocean blvd 2'},
{ id: 9, name: 'Susan', address: 'One way 98'},
{ id: 10, name: 'Vicky', address: 'Yellow Garden 2'},
{ id: 14, name: 'Viola', address: 'Sideway 1633'},
{ id: 12, name: 'William', address: 'Central st 954'}
]
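The same ordering can be mimicked in plain JavaScript with Array.prototype.sort (illustration only — the database performs this server-side before returning rows):

```javascript
// Mimicking ORDER BY name in plain JavaScript (illustration only).
var rows = [
  { name: 'Peter' }, { name: 'Amy' }, { name: 'Ben' }
]

rows.sort(function (a, b) {
  return a.name < b.name ? -1 : a.name > b.name ? 1 : 0
})

console.log(rows.map(function (r) { return r.name })) // [ 'Amy', 'Ben', 'Peter' ]
```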

ORDER BY DESC

Use the DESC keyword to sort the result in a descending order.

Example

Sort the result reverse alphabetically by name:

var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword",
database: "mydb"
});

con.connect(function(err) {
if (err) throw err;
con.query("SELECT * FROM customers ORDER BY name DESC", function (err,
result) {
if (err) throw err;
console.log(result);
});
});

Save the code above in a file called "demo_db_orderby_desc.js" and run the file:

Run "demo_db_orderby_desc.js"

C:\Users\Your Name>node demo_db_orderby_desc.js

Which will give you this result:

[
{ id: 12, name: 'William', address: 'Central st 954'},
{ id: 14, name: 'Viola', address: 'Sideway 1633'},
{ id: 10, name: 'Vicky', address: 'Yellow Garden 2'},
{ id: 9, name: 'Susan', address: 'One way 98'},
{ id: 6, name: 'Sandy', address: 'Ocean blvd 2'},
{ id: 8, name: 'Richard', address: 'Sky st 331'},
{ id: 2, name: 'Peter', address: 'Lowstreet 4'},
{ id: 5, name: 'Michael', address: 'Valley 345'},
{ id: 1, name: 'John', address: 'Highway 71'},
{ id: 4, name: 'Hannah', address: 'Mountain 21'},
{ id: 13, name: 'Chuck', address: 'Main Road 989'},
{ id: 7, name: 'Betty', address: 'Green Grass 1'},
{ id: 11, name: 'Ben', address: 'Park Lane 38'},
{ id: 3, name: 'Amy', address: 'Apple st 652'}
]

Delete Record

You can delete records from an existing table by using the "DELETE FROM"
statement:

Example

Delete any record with the address "Mountain 21":

var mysql = require('mysql');

var con = mysql.createConnection({


host: "localhost",
user: "yourusername",
password: "yourpassword",
database: "mydb"
});

con.connect(function(err) {
if (err) throw err;
var sql = "DELETE FROM customers WHERE address = 'Mountain 21'";
con.query(sql, function (err, result) {
if (err) throw err;
console.log("Number of records deleted: " + result.affectedRows);
});
});

Notice the WHERE clause in the DELETE syntax: the WHERE clause specifies
which record or records should be deleted. If you omit the WHERE clause, all
records will be deleted!

Save the code above in a file called "demo_db_delete.js" and run the file:

Run "demo_db_delete.js"

C:\Users\Your Name>node demo_db_delete.js

Which will give you this result:

Number of records deleted: 1


Delete a Table

You can delete an existing table by using the "DROP TABLE" statement:

Example

Delete the table "customers":

var mysql = require('mysql');

var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "DROP TABLE customers";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log("Table deleted");
  });
});

Save the code above in a file called "demo_db_drop_table.js" and run the file:

Run "demo_db_drop_table.js"

C:\Users\Your Name>node demo_db_drop_table.js

Which will give you this result:

Table deleted

Update Table

You can update existing records in a table by using the "UPDATE" statement:

Example

Overwrite the address column from "Valley 345" to "Canyon 123":

var mysql = require('mysql');

var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "UPDATE customers SET address = 'Canyon 123' WHERE address = 'Valley 345'";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log(result.affectedRows + " record(s) updated");
  });
});

Notice the WHERE clause in the UPDATE syntax: The WHERE clause specifies
which record or records should be updated. If you omit the WHERE clause, all
records will be updated!

Save the code above in a file called "demo_db_update.js" and run the file:

Run "demo_db_update.js"

C:\Users\Your Name>node demo_db_update.js

Which will give you this result:

1 record(s) updated

Limit the Result

You can limit the number of records returned from the query, by using the "LIMIT"
statement:

Example

Select the first 5 records in the "customers" table:

var mysql = require('mysql');

var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "SELECT * FROM customers LIMIT 5";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log(result);
  });
});

Save the code above in a file called "demo_db_limit.js" and run the file:

Run "demo_db_limit.js"

C:\Users\Your Name>node demo_db_limit.js

Which will give you this result:

[
{ id: 1, name: 'John', address: 'Highway 71'},
{ id: 2, name: 'Peter', address: 'Lowstreet 4'},
{ id: 3, name: 'Amy', address: 'Apple st 652'},
{ id: 4, name: 'Hannah', address: 'Mountain 21'},
{ id: 5, name: 'Michael', address: 'Valley 345'}
]

Start From Another Position

If you want to return five records, starting from the third record, you can use the
"OFFSET" keyword:

Example

Start from position 3, and return the next 5 records:

var mysql = require('mysql');

var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "SELECT * FROM customers LIMIT 5 OFFSET 2";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log(result);
  });
});

Note: "OFFSET 2" means starting from the third position, not the second!

Save the code above in a file called "demo_db_offset.js" and run the file:


Run "demo_db_offset.js"

C:\Users\Your Name>node demo_db_offset.js

Which will give you this result:

[
{ id: 3, name: 'Amy', address: 'Apple st 652'},
{ id: 4, name: 'Hannah', address: 'Mountain 21'},
{ id: 5, name: 'Michael', address: 'Valley 345'},
{ id: 6, name: 'Sandy', address: 'Ocean blvd 2'},
{ id: 7, name: 'Betty', address: 'Green Grass 1'}
]

Shorter Syntax

You can also write your SQL statement like this "LIMIT 2, 5" which returns the same
as the offset example above:

Example

Start from position 3, and return the next 5 records:

var mysql = require('mysql');

var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "SELECT * FROM customers LIMIT 2, 5";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log(result);
  });
});

Note: The numbers are reversed: "LIMIT 2, 5" is the same as "LIMIT 5 OFFSET 2"
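LIMIT and OFFSET together are the usual building blocks for pagination. As a small sketch (the helper function is hypothetical; only the customers table comes from this unit's examples), the offset for a 1-based page number can be computed like this:

```javascript
// Hypothetical helper: build a paging query for a 1-based page number.
// Only pass trusted numeric values here -- interpolating user-supplied
// strings into SQL would open the door to SQL injection.
function pageQuery(page, pageSize) {
  const offset = (page - 1) * pageSize; // page 1 starts at offset 0
  return `SELECT * FROM customers LIMIT ${pageSize} OFFSET ${offset}`;
}

console.log(pageQuery(1, 5)); // SELECT * FROM customers LIMIT 5 OFFSET 0
console.log(pageQuery(3, 5)); // SELECT * FROM customers LIMIT 5 OFFSET 10
```

Page 3 with a page size of 5 skips the first 10 rows, which matches the "LIMIT 5 OFFSET 10" form shown above.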


Handling Cookies in NodeJS

What are cookies?

A cookie is usually a tiny text file stored in your web browser. A cookie was initially
used to store information about the websites that you visit. But with the advances in
technology, a cookie can track your web activities and retrieve your content
preferences.

This will help the website you have visited to know more about you and customize
your future experience.

For example:

Cookies save your language preferences. This way, when you visit that
website in the future, the language you used will be remembered.

You have most likely visited an e-commerce website. When you include items
into your shopping cart, a cookie will remember your choices. Your shopping
list item will still be there whenever you revisit the site. Basically, a cookie is
used to remember data from the user.

How cookies work

When a user visits a cookie-enabled website for the first time, the browser will
prompt the user that the web page uses cookies and request the user to accept cookies
to be saved on their computer. Typically, when a user makes a request, the server
responds by sending back a cookie (among many other things).

This cookie is going to be stored in the user’s browser. When a user visits the website
or sends another request, that request will be sent back together with the cookies. The
cookie will have certain information about the user that the server can use to make
decisions on any other subsequent requests.

A perfect example is accessing Facebook from a browser. When you want to access
your Facebook account, you have to log in with the correct credentials to be granted
the proper access. But in this case, it would be tiresome to continuously log in to
Facebook every time.

When you first make a login request and the server verifies your credentials, the
server will send your Facebook account content. It will also send cookies to your
browser. The cookies are then stored on your computer and submitted to the server
with every request you make to that website. A cookie will be saved with an identifier
that is unique to that user.

When you revisit Facebook, the saved cookie is sent along with your request, so the
server can keep track of your login session, remember who you are, and thus keep you
logged in.


The different types of cookies include:

Session cookies - store the user's information for a short period. When the current
session ends, the session cookie is deleted from the user's computer.

Persistent cookies - a persistent cookie lacks an expiration date. It is saved for as
long as the web server administrator specifies.

Secure cookies - are used by encrypted websites to offer protection from any
possible threats from a hacker.

Third-party cookies - are used by websites that show ads on their pages or
track website traffic. They grant access to external parties to decide the types
of ads to show depending on the user's previous preferences.

Setting up cookies with Node.js

Let’s dive in and see how we can implement cookies using Node.js. We will create
and save a cookie in the browser, update and delete a cookie.

Go ahead and create a project directory on your computer. Initialize Node.js using
npm init -y to generate a package.json file to manage Node.js project dependencies.

We will use the following NPM packages:

Express - this is a minimal, unopinionated server-side framework for Node.js that
helps you create and manage HTTP server REST endpoints.

cookie-parser - cookie-parser looks at the headers in between the client and the
server transactions, reads these headers, parses out the cookies being sent, and
saves them in a browser. In other words, cookie-parser will help us create and
manage cookies depending on the request a user makes to the server.
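To make the middleware's job concrete, here is a rough sketch (plain Node.js, no external packages, and the function name is made up) of the core thing cookie-parser does: turning the raw Cookie request header into an object. The real package additionally handles URL decoding edge cases and signed cookies.

```javascript
// Toy version of cookie-parser's core job: parse the raw Cookie
// header ("name=value; other=value") into a plain object.
function parseCookies(header) {
  const jar = {};
  if (!header) return jar; // no Cookie header on the request
  for (const pair of header.split(';')) {
    const idx = pair.indexOf('=');
    if (idx < 0) continue; // skip malformed fragments
    const name = pair.slice(0, idx).trim();
    const value = decodeURIComponent(pair.slice(idx + 1).trim());
    jar[name] = value;
  }
  return jar;
}

console.log(parseCookies('session=abc123; theme=dark'));
// { session: 'abc123', theme: 'dark' }
```

This is what lets later examples read req.cookies as an object instead of a raw string.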

Run the following command to install these NPM packages:

npm install express cookie-parser

We will create a simple example to demonstrate how cookies work.

Step 1 - Import the installed packages

To set up a server and save cookies, import the cookie parser and express modules to
your project. This will make the necessary functions and objects accessible.

const express = require('express')
const cookieParser = require('cookie-parser')

Step 2 - Get your application to use the packages

You need to use the above modules as middleware inside your application, as shown
below.


//setup express app
const app = express()

// lets you use the cookieParser in your application
app.use(cookieParser());

This will make your application use the cookie parser and Express modules.

Step 3 - Set a simple route to start the server

We use the following code to set up a route for the homepage:

//set a simple route for the homepage
app.get('/', (req, res) => {
  res.send('welcome to a simple HTTP cookie server');
});

Step 4 - Set a port number

This is the port number that the server should listen to when it is running. This will
help us access our server locally. In this example, the server will listen to port 8000,
as shown below.

//server listening to port 8000
app.listen(8000, () => console.log('The server is running on port 8000...'));

Now we have a simple server set. Run node app.js to test if it is working.

And if you access the localhost on port 8000 (http://localhost:8000/), you should get
an HTTP response sent by the server. Now we're ready to start implementing cookies.

Setting cookies

Let’s add routes and endpoints that will help us create, update and delete a cookie.

Step 1 - Set a cookie

We will set a route that will save a cookie in the browser. In this case, the cookies will
be coming from the server to the client browser. To do this, use the res object and
pass cookie as the method, i.e. res.cookie() as shown below.

//a get route for adding a cookie
app.get('/setcookie', (req, res) => {
  res.cookie(`Cookie token name`, `encrypted cookie string Value`);
  res.send('Cookie has been saved successfully');
});

When the above route is executed from a browser, the client sends a get request to the
server. But in this case, the server will respond with a cookie and save it in the
browser.


Go ahead and run node app.js to serve the above endpoint. Open
http://localhost:8000/setcookie in your browser to access the route.

To confirm that the cookie was saved, go to your browser's inspector tool.

Step 2 - Using the req.cookies method to check the saved cookies

If the server sent this cookie to the browser, we can inspect incoming requests
through req.cookies and check for the existence of a saved cookie. You can log
this cookie to the console or send the cookies back as a response to the browser.
Let's do that.

// get the cookie from the incoming request
app.get('/getcookie', (req, res) => {
  //show the saved cookies
  console.log(req.cookies)
  res.send(req.cookies);
});

Again run the server using node app.js to expose the above route
(http://localhost:8000/getcookie) and you can see the response on the browser.

As well as on your console logs.

Step 3 - Secure cookies

One precaution that you should always take when setting cookies is security. In the
above example, the cookie can be deemed insecure.


For example, you can access this cookie on a browser console using JavaScript
(document.cookie). This means that this cookie is exposed and can be exploited
through cross-site scripting.

You can see the cookie when you open the browser inspector tool and execute the
following in the console.

document.cookie

The saved cookie values can be seen through the browser console.

As a precaution, you should always try to make your cookies inaccessible on the
client-side using JavaScript.

We can add several attributes to make this cookie more secure.

 httpOnly ensures that a cookie is not accessible using JavaScript code.
This is the most crucial form of protection against cross-site scripting attacks.
 A secure attribute ensures that the browser will reject cookies unless the
connection happens over HTTPS.
 The sameSite attribute improves cookie security and avoids privacy leaks.

sameSite was initially set to None by default (sameSite = None), which allowed third
parties to track users across sites. Currently, it defaults to Lax (sameSite = Lax),
meaning a cookie is only sent when the domain in the URL of the browser matches the
domain of the cookie, thus eliminating third-party domains. sameSite can also be set
to Strict (sameSite = Strict). This will restrict cross-site sharing even between
different domains that the same publisher owns.

 You can also add the maximum time you want a cookie to be available in the
user's browser. When the set time elapses, the cookie will be automatically
deleted from the browser.
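Under the hood, res.cookie() serializes these options into a Set-Cookie response header. The simplified serializer below (an illustration, not Express's actual implementation) shows how the attributes map onto the wire format; note that Express's maxAge option is in milliseconds while the header's Max-Age attribute is in seconds.

```javascript
// Simplified sketch of how cookie options become a Set-Cookie header.
// Express's maxAge is milliseconds; the Max-Age attribute on the wire
// is seconds, hence the division by 1000.
function setCookieHeader(name, value, opts = {}) {
  const parts = [`${name}=${encodeURIComponent(value)}`];
  if (opts.maxAge != null) parts.push(`Max-Age=${Math.floor(opts.maxAge / 1000)}`);
  if (opts.httpOnly) parts.push('HttpOnly');
  if (opts.secure) parts.push('Secure');
  if (opts.sameSite) parts.push(`SameSite=${opts.sameSite}`);
  return parts.join('; ');
}

console.log(setCookieHeader('token', 'abc', { maxAge: 5000, httpOnly: true, sameSite: 'Lax' }));
// token=abc; Max-Age=5; HttpOnly; SameSite=Lax
```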

//a get route for adding a cookie
app.get('/setcookie', (req, res) => {
  res.cookie(`Cookie token name`, `encrypted cookie string Value`, {
    maxAge: 5000,
    expires: new Date('01 12 2021'), // expires works the same as maxAge
    secure: true,
    httpOnly: true,
    sameSite: 'lax'
  });
  res.send('Cookie has been saved successfully');
});


In this case, we are accessing the server on localhost, which uses a non-HTTPS secure
origin. For the sake of testing the server, you can set secure: false. However, always
use true value when you want cookies to be created on an HTTPS secure origin.

If you run the server again (node app.js) and navigate to
http://localhost:8000/setcookie on the browser, you can see that the values of the
cookie have been updated with security values.

Furthermore, you cannot access the cookie using JavaScript, i.e., document.cookie.

Step 4 - Deleting a cookie

Typically, cookies can be deleted from the browser depending on the request that a
user makes. For example, if cookies are used for login purposes, when a user decides
to log out, the request should be accompanied by a delete command.

Here is how we can delete the cookie we have set above in this example. Use
res.clearCookie(name) to clear a saved cookie by its name.

// delete the saved cookie
app.get('/deletecookie', (req, res) => {
  //clear the saved cookie by its name
  res.clearCookie(`Cookie token name`);
  res.send('Cookie has been deleted successfully');
});

Open http://localhost:8000/deletecookie, and you will see that the saved cookie has
been deleted.


Handling user authentication with NodeJS

What is authentication and authorization

Authentication and authorization are used in security, particularly when it comes to
getting access to a system. Yet, there is a significant distinction between gaining entry
into a house (authentication) and what you can do while inside (authorization).

Authentication

Authentication is the process of verifying a user's identification through the
acquisition of credentials and using those credentials to confirm the user's identity.
The authorization process begins if the credentials are legitimate. The authorization
process always follows the authentication procedure.

You were already aware of the authentication process because we all do it daily,
whether at work (logging into your computer) or at home (logging into a website).
Yet, the truth is that most “things” connected to the Internet require you to prove your
identity by providing credentials.

Authorization

Authorization is the process of allowing authenticated users access to resources by
determining whether they have system access permissions. By granting or denying
specific permissions to an authenticated user, authorization enables you to control
access privileges.

So, authorization occurs after the system authenticates your identity, granting you
complete access to resources such as information, files, databases, funds, places, and
anything else. That said, authorization affects your capacity to access the system and
the extent to which you can do so.

What is JWT

JSON Web Tokens (JWT) are an open industry standard (RFC 7519) for representing
claims between two parties. For example, you can use jwt.io to decode, verify, and
produce JWTs.

JWT specifies a compact and self-contained method for communicating information
as a JSON object between two parties. Because it is signed, this information can be
checked and trusted. JWTs can be signed using a secret (using the HMAC algorithm)
or an RSA or ECDSA public/private key combination. In a moment, we'll see some
examples of how to use them.

API development using JWT token for authentication in Node.js

To get started, we’ll need to set up our project.

Open Visual Studio Code by navigating to a directory of your choice on your machine
and opening it on the terminal.


Then execute:

code .

Step 1 - Create a directory and initialize npm

Create a directory and initialize npm by typing the following command:

 Windows power shell

mkdir jwt-project
cd jwt-project
npm init -y

 Linux

mkdir jwt-project
cd jwt-project
npm init -y

Step 2 - Create files and directories

In step 1, we initialized npm with the command npm init -y, which automatically
created a package.json.

We need to create the model, middleware and config directories and their files, for
example user.js, auth.js, database.js, using the commands below.

mkdir model middleware config
touch config/database.js middleware/auth.js model/user.js

We can now create the index.js and app.js files in the root directory of our project
with the command:

touch app.js index.js

As shown in the image below:


Step 3 - Install dependencies

We'll install several dependencies like mongoose, jsonwebtoken, express, dotenv,
bcryptjs, and a development dependency, nodemon, to automatically restart the
server as we make changes.

We will install mongoose because we will be using MongoDB in this tutorial.

We will validate user credentials against what we have in our database. So the whole
authentication process is not limited to the database we’ll be using in this article.

npm install mongoose express jsonwebtoken dotenv bcryptjs
npm install nodemon -D

Step 4 - Create a Node.js server and connect your database

Now, let's create our Node.js server and connect our database by adding the following
snippets to app.js, index.js, database.js, and .env, in that order.

In our database.js.

config/database.js:

const mongoose = require("mongoose");

const { MONGO_URI } = process.env;

exports.connect = () => {
  // Connecting to the database
  mongoose
    .connect(MONGO_URI, {
      useNewUrlParser: true,
      useUnifiedTopology: true,
      useCreateIndex: true,
      useFindAndModify: false,
    })
    .then(() => {
      console.log("Successfully connected to database");
    })
    .catch((error) => {
      console.log("database connection failed. exiting now...");
      console.error(error);
      process.exit(1);
    });
};

In our app.js:

jwt-project/app.js

require("dotenv").config();
require("./config/database").connect();
const express = require("express");

const app = express();

app.use(express.json());

// Logic goes here

module.exports = app;

In our index.js:

jwt-project/index.js

const http = require("http");
const app = require("./app");
const server = http.createServer(app);

const { API_PORT } = process.env;
const port = process.env.PORT || API_PORT;

// server listening
server.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

If you notice, our file needs some environment variables. You can create a new .env
file if you haven’t and add your variables before starting our application.


In our .env.

API_PORT=4001
MONGO_URI= //Your database URI here
TOKEN_KEY= //Your JWT signing secret (used later as process.env.TOKEN_KEY)

To start our server, edit the scripts object in our package.json to look like the one
shown below.

"scripts": {
  "start": "node index.js",
  "dev": "nodemon index.js",
  "test": "echo \"Error: no test specified\" && exit 1"
}

The snippets above have been inserted into app.js, index.js, and database.js.
First, we built our Node.js server in index.js and imported the app.js file with routes
configured.

Then, as indicated in database.js, we used mongoose to create a connection to our
database.

Execute the command npm run dev.

Both the server and the database should be up and running without crashing.

Step 5 - Create user model and route

We’ll define our schema for the user details when signing up for the first time and
validate them against the saved credentials when logging in.

Add the following snippet to user.js inside the model folder.

model/user.js

const mongoose = require("mongoose");

const userSchema = new mongoose.Schema({
  first_name: { type: String, default: null },
  last_name: { type: String, default: null },
  email: { type: String, unique: true },
  password: { type: String },
  token: { type: String },
});

module.exports = mongoose.model("user", userSchema);

Now let’s create the routes for register and login, respectively.

In app.js in the root directory, add the following snippet for the registration and login.

app.js

// importing user context
const User = require("./model/user");

// Register
app.post("/register", (req, res) => {
  // our register logic goes here...
});

// Login
app.post("/login", (req, res) => {
  // our login logic goes here
});


Step 6 - Implement register and login functionality

We'll be implementing these two routes in our application. We will be using JWT to
sign the credentials and bcrypt to hash the password before storing it in our
database.

From the /register route, we will:

 Get user input.
 Validate user input.
 Validate if the user already exists.
 Encrypt the user password.
 Create a user in our database.
 And finally, create a signed JWT token.

Modify the /register route structure we created earlier to look as shown below.

app.js

// ...
// jsonwebtoken and bcryptjs must be imported at the top of app.js:
const bcrypt = require("bcryptjs");
const jwt = require("jsonwebtoken");

app.post("/register", async (req, res) => {
  // Our register logic starts here
  try {
    // Get user input
    const { first_name, last_name, email, password } = req.body;

    // Validate user input
    if (!(email && password && first_name && last_name)) {
      return res.status(400).send("All input is required");
    }

    // Check if user already exists in our database
    const oldUser = await User.findOne({ email });

    if (oldUser) {
      return res.status(409).send("User Already Exist. Please Login");
    }

    // Encrypt the user password
    const encryptedPassword = await bcrypt.hash(password, 10);

    // Create user in our database
    const user = await User.create({
      first_name,
      last_name,
      email: email.toLowerCase(), // sanitize: convert email to lowercase
      password: encryptedPassword,
    });

    // Create token
    const token = jwt.sign(
      { user_id: user._id, email },
      process.env.TOKEN_KEY,
      { expiresIn: "2h" }
    );
    // save user token
    user.token = token;

    // return new user
    res.status(201).json(user);
  } catch (err) {
    console.log(err);
  }
  // Our register logic ends here
});
// ...

Using Postman to test the endpoint, we’ll get the response shown below after
successful registration.


For the /login route, we will:

 Get user input.
 Validate user input.
 Validate if the user exists.
 Verify the user password against the password we saved earlier in our database.
 And finally, create a signed JWT token.

Modify the /login route structure we created earlier to look like shown below.

// ...
app.post("/login", async (req, res) => {
  // Our login logic starts here
  try {
    // Get user input
    const { email, password } = req.body;

    // Validate user input
    if (!(email && password)) {
      return res.status(400).send("All input is required");
    }
    // Validate if user exists in our database
    const user = await User.findOne({ email });

    if (user && (await bcrypt.compare(password, user.password))) {
      // Create token
      const token = jwt.sign(
        { user_id: user._id, email },
        process.env.TOKEN_KEY,
        { expiresIn: "2h" }
      );

      // save user token
      user.token = token;

      // return the logged-in user
      return res.status(200).json(user);
    }
    return res.status(400).send("Invalid Credentials");
  } catch (err) {
    console.log(err);
  }
  // Our login logic ends here
});
// ...

Using Postman to test, we’ll get the response shown below after a successful login.

Step 7 - Create middleware for authentication

We can now successfully register and log in a user. Next, we'll create a route that
requires a user token in the header, which is the JWT token we generated earlier.


Add the following snippet inside auth.js.

middleware/auth.js

const jwt = require("jsonwebtoken");

const config = process.env;

const verifyToken = (req, res, next) => {
  const token =
    req.body.token || req.query.token || req.headers["x-access-token"];

  if (!token) {
    return res.status(403).send("A token is required for authentication");
  }
  try {
    const decoded = jwt.verify(token, config.TOKEN_KEY);
    req.user = decoded;
  } catch (err) {
    return res.status(401).send("Invalid Token");
  }
  return next();
};

module.exports = verifyToken;

Now let’s create the /welcome route and update app.js with the following snippet to
test the middleware.

app.js

const auth = require("./middleware/auth");

app.post("/welcome", auth, (req, res) => {
  res.status(200).send("Welcome");
});

See the result below when we try to access the /welcome route we just created without
passing a token in the header with the x-access-token key.

We can now add a token in the header with the key x-access-token and re-test.

See the image below for the response.

