MC4201 - Full Stack Web Development
INTRODUCTION TO WEB
"World Wide Web" or simple "Web" is the name given to all the resources of internet.
The special software or application program with which you can access web is called
"Web Browser".
The Web consists of billions of clients and servers connected through wired and wireless
networks. Web clients make requests to web servers.
The web server receives the request, finds the resources and returns the response to the
client. When a server answers a request, it usually sends some type of content to the
client.
The client uses a web browser to send a request to the server. The server often sends a
response to the browser with a set of instructions written in HTML (Hypertext Markup Language).
All browsers know how to display HTML pages to the client.
SERVER
The server is responsible for serving web pages depending on the client's/end user's
requirements. These pages can be either static or dynamic.
CLIENT
A client is a party that requests pages from the server and displays them to the end
user. In general, the client program is a web browser.
SERVERS
1.DEFINE SERVER (PART A)
EXPLAIN ABOUT SERVER (PART B/C)
A server is a computer or system that provides resources, data, services, or programs to other
computers, known as clients, over a network. In theory, whenever computers share resources
with client machines they are considered servers. There are many types of servers, including
web servers, mail servers, and virtual servers.
An individual system can provide resources and use them from another system at the same
time. This means that a device could be both a server and a client at the same time.
Traditionally, servers were often single, powerful computers connected over a network to a set of
less-powerful client computers. This network architecture is often referred to as the client-server
model, in which both the client computer and the server possess computing power, but certain
tasks are delegated to servers.
Hardware
Many people picture servers as rows of rack-mounted machines in a data center. Those more
familiar with such rooms will also know the chill resulting from the heavily air-conditioned
atmosphere and the droning whir of fans that typically accompany them. This, however, is neither
a requirement nor an accurate portrayal of a great many servers connected to the Internet. With
the addition of the right software (assuming you are consuming this text digitally), the device you
are using to read this could become an Internet-connected server.
Even though we have reached this point, it is difficult to forget the mental picture conjured by
thoughts of the data center. In the current "traditional" model, thin, physically compact servers
are stacked vertically. These are referred to as rack-mount hardware. Many rack-mount systems
today contain hardware similar to what we have in our desktops, despite the difference in
appearance.
A number of companies, including Google, Yahoo, and Facebook, are looking to reinvent
this concept. Google for instance has already used custom-built servers in parts of its
network in an effort to improve efficiency and reduce costs.
Software
A typical web server stack consists of four software components:
Operating system
Web (HTTP) server
A database
Scripting language.
There are many combinations of solutions that provide these components, resulting in a number of
variations of the acronym, such as WAMP for Windows, Apache, MySQL, PHP, or MAMP, which is
identical with the exception of Mac (or, more precisely, a Macintosh-developed operating system).
Among the plethora of combinations, the use of LAMP prevails as the catch-all reference to a
server with these types of services.
All that is ultimately required to convey static pages to an end user are the operating system
and HTTP server, the first half of the WAMP acronym. The balance adds the capability for
interactivity and for the information to change based on the result of user interactions.
The server side mostly requires programming related to data retrieval, security and
performance. Some of the tools used here include ASP, Lotus Notes, PHP, Java and
MySQL.
There are certain tools/platforms that aid in both client- and server-side programming.
Server-side Programming:
Server-side programs run on the server and deal with generating the content of web pages.
Typical tasks include:
Querying the database
Performing operations on databases
Accessing/writing files on the server
Interacting with other servers
Structuring web applications
Processing user input. For example, if the user input is text in a search box, the server runs a
search algorithm on the data stored on the server and sends back the results.
CLIENT
A client application can be written using Java, C, C++, Visual Basic, or any compatible
programming language.
A client application sends a request to an application server at a given URL. The server
receives the request, processes it, and returns a response.
These client programs execute remote procedures and functions in an application server
instance.
Techniques such as AJAX (covered later in this unit) are used on the client side to exchange data with these servers.
The most important protocols for data transmission across the Internet are TCP
(Transmission Control Protocol) and IP (Internet Protocol). Using these jointly (TCP/IP),
we can link devices that access the network; some other communication
protocols associated with the Internet are POP, SMTP and HTTP.
We use these practically every day, although most users don't know it, and don't
understand how they work. These protocols allow the transfer of data from our devices so
that we can browse websites, send emails, listen to music online, etc.
There are several types of network protocols.
A group of network protocols that work together at the top and bottom levels are
commonly referred to as a protocol family.
The OSI model (Open Systems Interconnection) conceptually organizes network protocol
families into specific network layers.
The Open Systems Interconnection model aims to establish a framework on which to base the
communication architectures between different systems.
HTTP
The Hypertext Transfer Protocol (HTTP) is an application-level protocol for
distributed, collaborative, hypermedia information systems. Basically, HTTP is a
TCP/IP based communication protocol, that is used to deliver data (HTML files,
image files, query results, etc.) on the World Wide Web.
The default port is TCP 80, but other ports can be used as well. It provides a
standardized way for computers to communicate with each other.
The HTTP specification defines how clients' requests are constructed and sent to the
server, and how servers respond to these requests.
What is HTTP?
Every website address begins with “http://” (or “https://”). This refers to the HTTP
protocol, which your web browser uses to request a website.
Basic Features
HTTP is connectionless: The HTTP client, i.e., a browser, initiates an HTTP request
and, after a request is made, the client waits for the response.
The server processes the request and sends a response back, after which the client
disconnects the connection.
So the client and server know about each other only during the current request and response.
HTTP is media independent: It means, any type of data can be sent by HTTP as long
as both the client and the server know how to handle the data content.
HTTP is stateless: As mentioned above, HTTP is connectionless, and as a direct
result HTTP is a stateless protocol. The server and client are aware of each
other only during a current request. Afterwards, both of them forget about each other.
Connectionless: The client establishes a connection to the server, sends a request, the
server responds, and then the connection is terminated. For the next request, the client
will have to re-establish the connection.
This is inconvenient because a website usually consists of several files and each of
them has to be retrieved using a separate request.
Stateless: The two parties (i.e. the client and server) “forget” about each other
immediately. The next time the client logs on to the server, the server will not
remember that a client previously sent a request.
Media-independent: Any type of file can be sent via HTTP as long as both parties
know how to handle the respective file type.
Purpose of HTTP
If you enter an internet address in your web browser and a website is displayed
shortly thereafter, your browser has communicated with the web server via HTTP.
HTTP is the language your web browser uses to speak with the web server in order to
inform it of a request.
How does HTTP work?
The user types example.com into the address bar of their internet browser.
The browser sends the respective request (i.e. the HTTP request) to the web server
that manages the domain example.com.
Usually, the request is, “Please send me the file.” Alternatively, the client can also
ask, “Do you have this file?”.
The web server receives the HTTP request, searches for the desired file (in this
example, the homepage example.com, meaning the file index.html), and begins by
sending back the header which informs the requesting client of the search result with
a status code.
If the file was found and the client wants it to be sent (and did not just wish to know
whether it existed), the server sends the message body after the header (i.e. the actual
content).
In our example, this is the file index.html.
The browser receives the file and displays it as a website.
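Conceptually, the exchange described above looks like the following simplified sketch of a request and response for example.com (headers abbreviated for illustration):
GET /index.html HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: text/html

<html> ... the content of index.html ... </html>
The first block is the client's request; the second is the server's response, whose header carries the status code (200 OK) and whose message body carries the actual content.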
HTTP Transactions
An HTTP transaction takes place between the client and the server. The client initiates
a transaction by sending a request message to the server. The server replies to the request
message by sending a response message.
Messages
HTTP messages are of two types: request and response. Both the message types follow the
same message format.
Request Message: The request message is sent by the client that consists of a request line,
headers, and sometimes a body.
Response Message: The response message is sent by the server to the client that consists of
a status line, headers, and sometimes a body.
These messages identify a resource by its URL (Uniform Resource Locator), which has four parts: method, host, port, and path.
o Method: The method is the protocol used to retrieve the document from a server. For
example, HTTP.
o Host: The host is the computer where the information is stored, and the computer is
given an alias name. Web pages are mainly stored in the computers and the computers
are given an alias name that begins with the characters "www". This field is not
mandatory.
o Port: The URL can also contain the port number of the server, but it's an optional
field. If the port number is included, then it must come between the host and path and
it should be separated from the host by a colon.
o Path: Path is the pathname of the file where the information is stored. The path itself
contains slashes that separate the directories from the subdirectories and files.
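For example, in the (hypothetical) URL below, the four parts can be identified as follows:
http://www.example.com:80/docs/index.html
Method (protocol): http
Host: www.example.com
Port: 80
Path: /docs/index.html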
HTML
HTML stands for Hypertext Markup Language, and it is the most widely used
language to write Web Pages.
o Hypertext refers to the way in which Web pages (HTML documents) are linked
together. Thus, the link available on a webpage is called Hypertext.
o As its name suggests, HTML is a Markup Language, which means you use
HTML to simply "mark up" a text document with tags that tell a Web browser
how to structure it for display.
Originally, HTML was developed with the intent of defining the structure of
documents like headings, paragraphs, lists, and so forth to facilitate the sharing of
scientific information between researchers.
Now, HTML is widely used to format web pages with the help of the different tags
available in the HTML language.
BASIC HTML DOCUMENT
In its simplest form, the following is an example of an HTML document:
Example:
<!DOCTYPE html>
<html>
<head>
<title>This is document title</title>
</head>
<body>
<h1>This is a heading</h1>
<p>Document content goes here……</p>
</body>
</html>
1. <!DOCTYPE>
2. <html>
3. <head>
4. <title>
5. <body>
The <!DOCTYPE> DECLARATION
The <!DOCTYPE> declaration tag is used by the web browser to understand the version of
the HTML used in the document. Current version of HTML is 5 and it makes use of the
following declaration:
<!DOCTYPE html>
There are many other declaration types which can be used in an HTML document depending on
what version of HTML is being used. We will see more details on this while discussing the
<!DOCTYPE> tag along with other HTML tags.
An HTML document has two main parts:
head. The head element contains title and meta data of a web document.
body. The body element contains the information that you want to display on a web
page.
To make your web pages compatible with HTML 4, you need to add a document type
declaration (DTD) before the HTML element. Many web authoring software add DTD and
basic tags automatically when you create a new web page.
TAG ATTRIBUTES
Tags can have attributes. Attributes can provide additional information about the HTML
elements on your page.
This tag defines the body element of your HTML page: <BODY> . With an added bgcolor
attribute, you can tell the browser that the background color of your page should be red, like
this:
<BODY BGCOLOR="RED">
This tag defines an HTML table: <TABLE>. With an added border attribute, you can tell
the browser that the table should have no borders: <TABLE BORDER="0">. Attributes
always come in name/value pairs like this: name="value". Attributes are always added to
the start tag of an HTML element.
TAG LIST
<!DOCTYPE html>: This tag is used to tell the HTML version. This currently tells
that the version is HTML 5.0
<html> </html>: <html> is the root element of HTML. It is the biggest and main element
in the complete HTML language; all the tags, elements and attributes are enclosed in it (or we
can say wrapped in it), and it is used to structure a web page. The <html> tag is the parent tag
of the <head> and <body> tags; other tags are enclosed within <head> and <body>. In the
<html> tag we use the "lang" attribute to define the language of the HTML page, such as
<html lang="en">, where en represents the English language. Some others are: es = Spanish,
zh-Hans = Chinese, fr = French and el = Greek, etc.
<head>: The head tag contains metadata, the title, page CSS, etc. Data stored in the <head>
tag is not displayed to the user; it is written for reference purposes and to describe the page
and its owner.
<title> = to store the website name or title to be displayed (in the browser's title bar or tab).
<link> = to add/link a CSS (Cascading Style Sheets) file.
<meta> = 1. to store data about the website, organisation, creator/owner
2. to make the website responsive via attributes (e.g. the viewport)
3. to tell the compatibility of the HTML with the browser
<script> = to add javascript file.
<body>: The body tag is used to enclose all the data which a web page has, from text
to links. All the content that you see rendered in the browser is contained within this
element. The following tags and elements are used in the body.
Headings are defined with the <H1> to <H6> tags. <H1> defines the largest heading.
<H6> defines the smallest heading.
HTML links: HTML uses hyperlinks to link to other documents on the Web. HTML
uses the <a> (anchor) tag to create a link to another document. An anchor can point to any
resource on the Web: an HTML page, an image, a sound file, a movie, etc.
HTML LISTS: HTML supports ordered and unordered list. They are,
Unordered Lists: An unordered list is a list of items. The list items are marked with bullets
(typically small black circles). An unordered list starts with the <ul> tag. Each list item starts
with the <li> tag.
Example:
<ul>
<li>coffee</li>
<li> tea</li>
</ul>
Ordered Lists: An ordered list is also a list of items. The list items are marked with
numbers. An ordered list starts with the <ol> tag. Each list item starts with the <li> tag.
Example:
<ol>
<li>coffee</li>
<li> tea</li>
</ol>
INTRODUCTION TO CSS
7.EXPAND CSS (PART A)
EXPLAIN THE USE OF CSS (PART A)
DESCRIBE CSS WITH EXAMPLE (PART B)
CSS stands for Cascading Style Sheets. CSS describes how HTML elements
are to be displayed on screen, paper, or in other media. CSS saves a lot of work. It
can control the layout of multiple web pages all at once. External style sheets are
stored in “.css” files. CSS is used to define styles for your web pages, including the
design, layout and variations in display for different devices and screen sizes.
Types of CSS:
There are three ways to apply CSS to an HTML document: inline CSS (using the style attribute on an element), internal CSS (using a <style> block inside the <head>), and external CSS (a separate .css file linked with the <link> tag).
Example
<html>
<head>
<link rel="stylesheet" type="text/css" href="external.css">
</head>
<body>
<p>This is External CSS.</p>
</body>
</html>
external.css:
p {
text-align: justify;
background-color: red;
}
When placing text over a background image, two HTML structures are common: the text
content can be a child element of the div with the background image, or the text and the
image container can be sibling elements.
1. Add text on image
You can add text as part of the image with a photo editor or a slide application like
PowerPoint, Google Slides, or Keynote. However, this method often does not work well, as
the content can get cut off on smaller screens.
2. Adjust opacity or clarity of image
You can adjust the opacity and clarity of the background image with either a photo editor or
in CSS. Let’s focus on the latter.
If your text is nested inside the element with the background image, your text gets faded
when you adjust the opacity of the image. Likewise, text will be blurred if you apply
the blur property on the image. That probably isn’t the effect you want.
3. Add background colour to text
You can add a white background to the text. However, some may not like this design as the
contrast between the text and the background is too stark. You can’t use opacity, either, as it
will make the text fade out.
Lastly, instead of keeping to black text and light background, you can darken the image
with linear-gradient and use white text.
Rounded Images
Example
Rounded Image:
img {
border-radius: 8px; }
Example
circle Image:
img {
border-radius: 50%;
}
CSS SELECTORS
A CSS selector is the first part of a CSS Rule. It is a pattern of elements and other terms
that tell the browser which HTML elements should be selected to have the CSS property
values inside the rule applied to them.
The cascade part of CSS means that more than one style sheet can be attached to a
document, and all of them can influence the presentation.
For example, a designer can have a global style sheet for the whole site, but a local one for
say, controlling the link color and background of a specific page. Or, a user can use their own
style sheet if they have problems seeing the page, or if they just prefer a certain look.
1. Selector: A selector is an HTML tag at which a style will be applied. This could be any tag
like <h1>, <p> or <table>, etc.
2. Property: A property is a type of attribute of an HTML tag. Put simply, all the HTML
attributes are converted into CSS properties. They could be color, border, background, etc.
3. Value: Values are assigned to properties. For example, the color property can have the value
red or #F1F1F1, etc.
For example, in the rule h1 { color: red; font-size: 15px; }, h1 is the selector, color and font-size
are properties, and red and 15px are the values of those properties.
A style sheet consists of one or more rules that describe how document elements should be
displayed. A rule in CSS has two parts: the selector and the declaration. The declaration also
has two parts, the property and the value.
Let's take a look at a rule for a heading 1 style:
h1 { font-family: verdana, "sans serif"; font-size: 1.3em }
This rule says that every h1 tag will be displayed in Verdana or another sans-serif font, and the
font size will be 1.3em.
The declaration contains the property and value for the selector. The property is the attribute
you wish to change and each property can take a value. The property and value are separated
by a colon and surrounded by curly braces:
body { background-color: black}
If the value of a property is more than one word, put quotes around that value:
body { font-family: "sans serif"; }
If you wish to specify more than one property, you must use a semicolon to separate the
properties. The following rule defines a paragraph that will have blue text that is centered:
p { text-align: center; color: blue }
You can group selectors. Separate each selector with a comma. The example below groups
headers 1, 2, and 3 and makes them all yellow:
h1, h2, h3 { color: yellow }
TYPES OF SELECTORS
Element Type Selectors
Descendant Selectors
Class selectors
Id Selectors
Child Selectors
Adjacent sibling selectors
Pseudo Selectors
Universal Selectors
Element Type Selectors
A CSS declaration always ends with a semicolon, and declaration groups are surrounded by
curly brackets:
Example
p {color:red;text-align:center;}
To make the CSS more readable, you can put one declaration on each line, like this:
p
{
color:red;
text-align:center;
}
Descendant Selectors
Match an element that is a descendant of another element. This uses two separate selectors,
separated by a space.
For example, if we wanted all emphasized text in our paragraphs to be green text, we
would use the following CSS rule:
EXAMPLE
p em { color: green; }
Class Selectors
Match an element that has the specified class.
To match a specific class attribute, we always start the selector with a period, to signify that
we are looking for a class value. The period is followed by the class attribute value we want
to match.
For example, if we wanted all elements with a class of "highlight" to have a different
background color, we would use the following CSS rule:
EXAMPLE 1
.highlight { background-color: #ffcccc; }
EXAMPLE 2
.center {
text-align: center;
color: red;
}
Id Selectors
The id selector is used to specify a style for a single, unique element. The id selector uses the
id attribute of the HTML element, and is defined with a "#".
The hash is followed by the id attribute value we want to match. Remember, we can only use
the same id attribute value once, so the id selector will always only match one element in our
document.
Example
Imagine within the body element of our html page, we have the following paragraph element
<p id="welcome">Welcome to the 1st CSS Document </p>
We can then create a CSS rule with the id selector:
#welcome
{
color:red;
text-align:center;
}
Child selectors
Match an element that is an immediate child of another element. For example, if we
wanted all emphasized text in our paragraphs to have green text, but not emphasized
text in other elements, we would use the following CSS rule:
Example: p > em { color: green; }
Adjacent sibling selectors
Match an element that is immediately after another element, but not a child of it.
For example, if we wanted all paragraphs that immediately followed an h4 to have green
text, but not other paragraphs, we would use the following CSS rule:
h4 + p {color: green;}
Attribute Selector
You can also apply styles to HTML elements with particular attributes. The style rule below
will match all the input elements having a type attribute with a value of text:
input[type="text"]
{
color: #000000;
}
The advantage to this method is that the <input type="submit" /> element is unaffected, and
the color is applied only to the desired text fields. The following rules can be applied to the
attribute selector.
p[lang] - Selects all paragraph elements with a lang attribute.
p[lang="fr"] - Selects all paragraph elements whose lang attribute has a value of exactly
"fr".
p[lang~="fr"] - Selects all paragraph elements whose lang attribute contains the word
"fr".
p[lang|="en"] - Selects all paragraph elements whose lang attribute contains values that
are exactly "en", or begin with "en-"
Pseudo Selectors
An Aside about Link States
Anchor elements are special. You can style the <a> element with an Element Type
Selector, but it might not do exactly what you expect.
This is because links have different states, that relate to how they are interacted with.
The four primary states of a link are: link, visited, hover, active.
Pseudo selectors come in different sizes and shapes. By far the most common pseudo
selectors are used to style our links. There are four different pseudo selectors to be
used in conjunction with links:
:link
A link that has not been previously visited (visited is defined by the browser history)
:visited
A link that has been visited
:hover
A link that the mouse cursor is "hovering" over
:active
A link that is currently being clicked
Universal Selector
Matches every element on the page. For example, if we wanted every element to have a
solid 1px wide border, we would use the following CSS rule:
Example:
* { border: 1px solid blue; }
The CSS Grouping Selector
The grouping selector selects all the HTML elements with the same style definitions. Look at
the following CSS code (the h1, h2, and p elements have the same style definitions):
h1 {
text-align: center;
color: red;
}
h2 {
text-align: center;
color: red;
}
p{
text-align: center;
color: red;
}
It will be better to group the selectors, to minimize the code. To group selectors, separate
each selector with a comma.
h1, h2, p {
text-align: center;
color: red;
}
Why Flexbox?
For a long time, the only reliable cross-browser compatible tools available for creating CSS
layouts were features like floats and positioning. These work, but in some ways they're also
limiting and frustrating.
To determine the position and dimensions of the boxes, in CSS, you can use one of the
available layout modes; Flexbox is one such layout mode.
Alignment: Using Flexbox, you can align the contents of the webpage with respect
to their container.
Resize: Using Flexbox, you can increase or decrease the size of the items in the
page to fit in available space.
Supporting browsers
Following are the browsers that support Flexbox.
Chrome 29+
Firefox 28+
Internet Explorer 11+
Opera 17+
Safari 6.1+
Android 4.4+
iOS 7.1+
THE FLEX MODEL
The main axis is the axis running in the direction the flex items are laid out in (for
example, as rows across the page, or columns down the page.) The start and end of
this axis are called the main start and main end.
The cross axis is the axis running perpendicular to the direction the flex items are laid
out in. The start and end of this axis are called the cross start and cross end.
The parent element that has display: flex set on it (the <section> in our example) is
called the flex container.
The items laid out as flexible boxes inside the flex container are called flex
items (the <article> elements in our example).
FLEX CONTAINERS
FLEX
To use Flexbox in your application, you need to define a flex container using the
display property.
Usage: display: flex | inline-flex
Flex containers are not block containers, and so some properties that were designed with the
assumption of block layout don’t apply in the context of flex layout. In particular:
float and clear do not create floating or clearance of flex items, and do not take them
out of flow.
vertical-align has no effect on a flex item.
the ::first-line and ::first-letter pseudo-elements do not apply to flex containers,
and flex containers do not contribute a first formatted line or first letter to their
ancestors.
EXAMPLE USING FLEX
<!doctype html>
<html lang="en">
<style>
.box1{background:green;}
.box2{background:blue;}
.box3{background:red;}
.box4{background:magenta;}
.box5{background:yellow;}
.box6{background:pink;}
.container{
display:flex;}
.box{
font-size:35px;
padding:15px;
}
</style>
<body>
<div class="container">
<div class="box box1">One</div>
<div class="box box2">two</div>
<div class="box box3">three</div>
<div class="box box4">four</div>
<div class="box box5">five</div>
<div class="box box6">six</div>
</div>
</body>
</html>
Since we have given the value flex to the display property, the flex container takes up the full
width of its parent (the browser window). You can observe this by adding a border to the
container as shown below.
.container {
display: flex;
border: 3px solid black;
}
INLINE FLEX
On passing this value to the display property, an inline-level flex container will be created.
It just takes up the space required for its content.
The following example demonstrates how to create an inline flex container. Here, we are
creating six boxes with different colors and we have used the inline-flex container to hold
them.
<!doctype html>
<html lang="en">
<style>
.box1{background:green;}
.box2{background:blue;}
.box3{background:red;}
.box4{background:magenta;}
.box5{background:yellow;}
.box6{background:pink;}
.container{ display:inline-flex;
border:3px solid black;
}
.box{
font-size:35px;
padding:15px;
}
</style>
<body>
<div class="container">
<div class="box box1">One</div>
<div class="box box2">two</div>
<div class="box box3">three</div>
<div class="box box4">four</div>
<div class="box box5">five</div>
<div class="box box6">six</div>
</div>
</body>
</html>
Flex Properties:
flex-direction
flex-wrap
flex-flow
justify-content
align-items
align-content
flex-direction: The flex-direction property is used to define the direction of the flex items. The
default axis is horizontal in flexbox, so the items flow into a row.
Syntax:
// Stacking flex items column wise
flex-direction: column;
flex-direction: row-reverse;
EXAMPLE
.gfg_flex {
display: flex;
flex-direction: row;
background-color: green;
text-align:center;
}
flex-wrap: The flex-wrap property is used to define the wrapping of flex items. If the flex-wrap
property is set to wrap, the items wrap within the browser window: if the browser window is
smaller than the elements, the elements go down to the next line.
Syntax:
// Wrap flex items when necessary
flex-wrap: wrap;
.gfg_flex {
display: flex;
flex-wrap: wrap;
text-align:center;
background-color: green;
}
justify-content: The justify-content property is used to align the flex items according to the
main axis within a flexbox container.
Syntax:
// Aligns the flex items at the center
justify-content: center;
.flex1 {
display: flex;
justify-content: center;
background-color: green;
}
align-items: This property is used to align flex items vertically according to the cross axis.
Syntax:
// Aligns the flex items in the middle of the container
align-items: center;
.flex1 {
display: flex;
height: 200px;
align-items: center;
background-color: green;
}
align-content: This property defines how each flex line is aligned within a flexbox, and it is
only applicable if flex-wrap: wrap is applied, i.e. if there are multiple lines of flexbox items
present.
Syntax :
// Displays the flex lines with equal space between them
align-content: space-between;
// Displays the flex lines at the start of the container
align-content: flex-start;
.main-container {
display: flex;
height: 400px;
flex-wrap: wrap;
align-content: space-between;
background-color: green;
}
JAVASCRIPT
12. EXPLAIN IN DETAIL JAVASCRIPT DATATYPES (PART B)
EXPLAIN IN DETAIL JAVASCRIPT VARIABLES (PART B)
JavaScript Data Types
JavaScript has dynamic types: the same variable can be used to hold different data types, as the
following example shows.
<!DOCTYPE html>
<html>
<body>
<p>JavaScript has dynamic types. This means that the same variable can be used to hold
different data types:</p>
<p id="demo"></p>
<script>
var x; // Now x is undefined
x = 5; // Now x is a Number
x = "John"; // Now x is a String
document.getElementById("demo").innerHTML = x;
</script>
</body>
</html>
JavaScript Strings
A string (or a text string) is a series of characters like "John Doe".
Strings are written with quotes. You can use single or double quotes:
<!DOCTYPE html>
<html>
<body>
<h2>JavaScript Strings</h2>
<p>Strings are written with quotes. You can use single or double quotes:</p>
<p id="demo"></p>
<script>
var carName1 = "Volvo XC60";
var carName2 = 'Volvo XC60';
document.getElementById("demo").innerHTML =carName1 + "<br>" + carName2;
</script>
</body>
</html>
JavaScript Numbers
JavaScript has only one type of number. Numbers can be written with, or without, decimals:
EXAMPLE
let x1 = 34.00; // Written with decimals
let x2 = 34; // Written without decimals
JavaScript Function
<!DOCTYPE html>
<html>
<head>
<script>
function JEEVA()
{
document.getElementById("demo").innerHTML = "Paragraph changed.";
}
</script>
</head>
<body>
<h2>JavaScript in Head</h2>
<p id="demo">A Paragraph.</p>
<button type="button" onclick="JEEVA()">Try it</button>
</body>
</html>
External JavaScript
Scripts can also be placed in external files:
External file: myScript.js
function myFunction() {
document.getElementById("demo").innerHTML = "Paragraph changed.";
}
External scripts are practical when the same code is used in many different web pages.
JavaScript files have the file extension .js.
To use an external script, put the name of the script file in the src (source) attribute of
a <script> tag:
Example
<script src="myScript.js"></script>
JavaScript Syntax
JavaScript syntax is the set of rules, how JavaScript programs are constructed:
var x, y, z; // Declare Variables
x = 5; y = 6; // Assign Values
z = x + y; // Compute Values
JavaScript Values
The JavaScript syntax defines two types of values:
Fixed values
Variable values
Fixed values are called Literals.
Variable values are called Variables.
JavaScript Literals
The two most important syntax rules for fixed values are:
1. Numbers are written with or without decimals:
10.50
1001
2. Strings are text, written within double or single quotes:
"John Doe"
'John Doe'
JavaScript Variables
In a programming language, variables are used to store data values.
JavaScript uses the var keyword to declare variables.
An equal sign is used to assign values to variables.
In this example, x is defined as a variable. Then, x is assigned (given) the value 6:
var x;
x = 6;
FUNCTIONS
13.EXPLAIN IN DETAIL ABOUT JAVASCRIPT FUNCTIONS (PART B)
Functions are a fundamental building block for JavaScript programs and a common
feature in almost all programming languages. We may already be familiar with the
concept of a function under a name such as subroutine or procedure.
A function is a block of JavaScript code that is defined once but may be executed, or
invoked, any number of times.
A JavaScript function is a block of code designed to perform a particular task.
A JavaScript function is executed when "something" invokes it (calls it).
JavaScript functions are parameterized: a function definition may include a list of
identifiers, known as parameters that work as local variables for the body of the
function. Function invocations provide values, or arguments, for the function’s
parameters. Functions often use their argument values to compute a return value that becomes
the value of the function invocation expression. In addition to the arguments, each invocation
has another value, the invocation context, that is the value of the this keyword.
Function Declarations
Function declarations consist of the function keyword, followed by these components:
1. An identifier that names the function. The name is a required part of function
declarations: it is used as the name of a variable, and the newly defined function object
is assigned to the variable.
2. A pair of parentheses around a comma-separated list of zero or more identifiers. These
identifiers are the parameter names for the function, and they behave like local
variables within the body of the function.
3. A pair of curly braces with zero or more JavaScript statements inside. These
statements are the body of the function: they are executed whenever the function is
invoked.
Example
function myFunction(p1, p2) {
return p1 * p2; // The function returns the product of p1 and p2
}
NAMING CONVENTION
A JavaScript function is defined with the function keyword, followed by a name,
followed by parentheses ().
Function names can contain letters, digits, underscores, and dollar signs (same rules as
variables).
The parentheses may include parameter names separated by commas:
(parameter1, parameter2, ...).
The code to be executed, by the function, is placed inside curly brackets: {}
We can define functions using a particularly compact syntax known as “arrow functions.”
This syntax is reminiscent of mathematical notation and uses an => “arrow” to separate the
function parameters from the function body. The function keyword is not used, and, since
arrow functions are expressions instead of statements, there is no need for a function name,
either.
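A short sketch comparing a regular function declaration with the equivalent arrow function (the names used here are only illustrative):
// Regular function declaration
function square(x) {
  return x * x;
}
// Equivalent arrow function: parameters => body
const squareArrow = x => x * x;
console.log(square(4));      // 16
console.log(squareArrow(4)); // 16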
Invoking Functions
The JavaScript code that makes up the body of a function is not executed when the function
is defined, but rather when it is invoked. JavaScript functions can be invoked in five ways (a
short sketch follows the list):
As functions
As methods
As constructors
Indirectly through their call() and apply() methods
Implicitly, via JavaScript language features that do not appear like normal function
invocations
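A brief sketch of the first four invocation styles (the names used here are only illustrative):
// 1. As a function
function add(a, b) { return a + b; }
add(2, 3); // 5
// 2. As a method (the object becomes the invocation context, i.e. "this")
let calculator = {
  value: 10,
  double: function () { return this.value * 2; }
};
calculator.double(); // 20
// 3. As a constructor (creates and returns a new object)
function Point(x, y) { this.x = x; this.y = y; }
let p = new Point(1, 2);
// 4. Indirectly, through call() and apply()
add.call(null, 2, 3);    // 5
add.apply(null, [2, 3]); // 5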
Function Invocation
The code inside the function will execute when "something" invokes (calls) the function:
When an event occurs (e.g. when a user clicks a button)
When it is invoked (called) from JavaScript code
Automatically (self-invoked)
An invocation expression consists of a function expression that evaluates to a function
object, followed by an open parenthesis, a comma-separated list of zero or more
argument expressions, and a close parenthesis.
Function Return
When JavaScript reaches a return statement, the function will stop executing.
If the function was invoked from a statement, JavaScript will "return" to execute the
code after the invoking statement.Functions often compute a return value. The return
value is "returned" back to the "caller":
Example
Calculate the product of two numbers, and return the result:
let x = myFunction(4, 3); // Function is called, return value will end up in x
function myFunction(a, b)
{
return a * b; // Function returns the product of a and b
}
The result in x will be: 12
EVENTS
14.EXPLAIN IN DETAIL ABOUT JAVASCRIPT EVENTS (PART B)
HTML events are "things" that happen to HTML elements.
When JavaScript is used in HTML pages, JavaScript can "react" on these events.
Client-side JavaScript programs use an asynchronous event-driven programming
model. In this style of programming, the web browser generates an event whenever
something interesting happens to the
document or browser or to some element or object associated with it.
For example, the web browser generates an event when it finishes loading a document,
when the user moves the mouse over a hyperlink, or when the user strikes a key on the
keyboard.
In client-side JavaScript, events can occur on any element within an HTML document,
and this fact makes the event model of web browsers significantly more complex than
Node’s event model.
EVENT CATEGORIES
1.Device-dependent input events
These events are directly tied to a specific input device, such as the mouse or keyboard.
They include event types such as mouse down, mouse move, mouse up, touch start,
touch move, touch end, key down, and key up.
2.Device-independent input events
These input events are not directly tied to a specific input device.
Example: The “click” event, for example, indicates that a link or button (or other document
element) has been activated. This is often done via a mouse click, but it could also be done
by keyboard or (on touch sensitive devices) with a tap.
3.User interface events
UI events are higher-level events, often on HTML form elements that define a user interface
for a web application. They include:
1. Focus event
2. Change event
3. Submit event
4.State-change events
Some events are not triggered directly by user activity, but by network or browser
activity, and indicate some kind of life-cycle or state-related change.
1. Load event
2. Network state changes (online or offline)
3. Popstate event (back button)
5.API-specific events
A number of web APIs defined by HTML and related specifications include their own
event types.
The HTML <video> and <audio> elements define a long list of associated event types
such as “waiting,” “playing,” “seeking,” “volume change,” and so on, and you can use
them to customize media playback.
REGISTERING EVENT HANDLERS
There are two basic ways to register event handlers. The first, from the early days of
the web, is to set a property on the object or document element that is the event target. The
second (newer and more general) technique is to pass the handler to the addEventListener()
method of the object or element.
SETTING EVENT HANDLER PROPERTIES
The simplest way to register an event handler is by setting a property of the event
target to the desired event handler function.
The names of these event handler properties consist of "on" followed by the event name:
onclick, onchange, onload, onmouseover, and so on.
SETTING EVENT HANDLER ATTRIBUTES
The event handler properties of document elements can also be defined directly in the
HTML file as attributes on the corresponding HTML tag.
When defining an event handler as an HTML attribute, the attribute value should be a
string of JavaScript code. That code should be the body of the event handler function,
not a complete function declaration.
HTML event handler code should not be surrounded by curly braces and prefixed with
the function keyword.
For example:
<button onclick="console.log('Thank you');">Please Click</button>
ADDEVENTLISTENER()
Any object that can be an event target—this includes the Window and Document
objects and all document Elements—defines a method named addEventListener() that you
can use to register an event handler for that target.
It takes three arguments.
1. The first is the event type for which the handler is being registered. The event type
(or name) is a string that does not include the “on” prefix used when setting event
handler properties.
2. The second argument to addEventListener() is the function that should be invoked
when the specified type of event occurs.
3. The third argument is optional
Example:
<button id="mybutton">Click me</button>
<script>
let b = document.querySelector("#mybutton");
b.onclick = function() { console.log("Thanks for clicking me!"); };
b.addEventListener("click", () => { console.log("Thanks again!"); });
</script>
addEventListener() is paired with a removeEventListener() method that expects the
same two arguments (plus an optional third) but removes an event handler function
from an object rather than adding it.
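A small sketch (continuing the button example above) showing why the handler should be kept as a named function reference, so that the very same function can be removed later:
let button = document.querySelector("#mybutton");
function greet() { console.log("Thanks again!"); }
button.addEventListener("click", greet);    // register the handler
button.removeEventListener("click", greet); // remove the same handler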
EVENT HANDLER ARGUMENT
Event handlers are invoked with an Event object as their single argument. The properties of
the Event object provide details about the event:
1. type
2. target
3. currentTarget
4. timeStamp
5. isTrusted
AJAX
15.WHAT IS AJAX? (PART A)
EXPLAIN IN DETAIL ABOUT AJAX (PART B)
AJAX is an acronym for Asynchronous JavaScript and XML. AJAX is a new
technique for creating better, faster and interactive web applications with the help of
JavaScript, DOM, XML, HTML, CSS etc.
AJAX allows you to send and receive data asynchronously without reloading the
entire web page. So it is fast. AJAX allows you to send only important information to
the server not the entire page. So only valuable data from the client side is routed to
the server side. It makes your application interactive and faster.
Ajax is the most viable Rich Internet Application (RIA) technique so far.
Where is it used?
There are many web applications running on the web that use AJAX technology.
Some are: 1. Gmail 2. Facebook 3. Twitter 4. Google Maps 5. YouTube, etc.
AJAX is Based on Internet Standards
AJAX is based on internet standards, and uses a combination of:
1. XMLHttpRequest object (to exchange data asynchronously with a server)
2. JavaScript/DOM (to display/interact with the information)
3. CSS (to style the data)
4. XML (often used as the format for transferring data)
AJAX Components
AJAX is not a technology but group of inter-related technologies. AJAX
Technologies includes:
1. HTML/XHTML and CSS
2. DOM
3. XML or JSON(JavaScript Object Notation)
4. XMLHttpRequest Object
5. JavaScript
Understanding XMLHttpRequest
It is the heart of the AJAX technique. An object of XMLHttpRequest is used for
asynchronous communication between client and server. It provides a set of useful methods
and properties that are used to send HTTP Request to and retrieve data from the web
server. It performs following operations:
1. Sends data from the client in the background
2. Receives the data from the server
3. Updates the webpage without reloading it.
Methods of the XMLHttpRequest object:
void open(method, URL) - opens the request, specifying the GET or POST method and the URL.
void open(method, URL, async) - same as above, but also specifies whether the request is asynchronous or not.
void open(method, URL, async, username, password) - same as above, but also specifies a username and password.
void send() - sends a GET request.
void send(string) - sends a POST request.
setRequestHeader(header, value) - adds a request header.
Syntax of the open() method:
xmlHttp.open("GET", "conn.php", true);
which takes three arguments:
1. An HTTP method such as GET, POST, or HEAD
2. The URL of the server resource
3. A boolean indicating whether the request is asynchronous
Event property of the XMLHttpRequest object:
onreadystatechange - called whenever the readyState attribute changes. It must not be used with synchronous requests.
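Putting these methods and the onreadystatechange property together, a minimal sketch of an AJAX request follows. The URL "conn.php" is taken from the syntax example above, and the element id "result" is only an assumed placeholder:
var xmlHttp = new XMLHttpRequest();
xmlHttp.onreadystatechange = function () {
  // readyState 4 means the response is complete; status 200 means OK
  if (xmlHttp.readyState === 4 && xmlHttp.status === 200) {
    // Update the web page without reloading it
    document.getElementById("result").innerHTML = xmlHttp.responseText;
  }
};
xmlHttp.open("GET", "conn.php", true); // true = asynchronous
xmlHttp.send();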
REFERENCE:
1. Professional JavaScript for Web Developers, 4th Edition – Matt Frisbie
2. JavaScript: The Definitive Guide – David Flanagan
***************************UNIT I COMPLETED**********************
POSSIBLE QUESTIONS
UNIT I
PART A
1. Define server
2. Expand css
3. Explain the use of css
4. Define selectors
5. Define css flexbox
6. What is ajax?
PART B
VASAVI VIDYA TRUST GROUP OF INSTITUTIONS, SALEM – 103
DEPARTMENT OF MCA
CLASS: I MCA UNIT: II
SUBJECT: FULL STACK WEB DEVELOPMENT SUBJECT CODE: MC4201
A web server is a computer where web content is stored. Basically, a web server is used to host
websites, but there also exist other servers such as gaming, storage, FTP and email servers.
A website is a collection of web pages, while a web server is software that responds to requests for
web resources.
A web server can be referred to as either the hardware (the computer) or the software (the computer
application) that helps to deliver content that can be accessed through the Internet.
A web server is what makes it possible to be able to access content like web pages or other data from
anywhere as long as it is connected to the internet. The hardware houses the content, while the software
makes the content accessible through the internet.
The most common use of web servers is to host websites but there are other uses like data storage or for
running enterprise applications. There are also different ways to request content from a web server. The
most common protocol is the Hypertext Transfer Protocol (HTTP), but there are also other protocols like the
Internet Message Access Protocol (IMAP) or the File Transfer Protocol (FTP).
Architecture
Web Server Architecture follows the following two approaches:
1. Concurrent Approach
2. Single-Process-Event-Driven Approach.
1.Concurrent Approach
Concurrent approach allows the web server to handle multiple client requests at the same time. It can be
achieved by following methods:
1. Multi-process
2. Multi-threaded
3. Hybrid method.
Multi-processing
In this approach, a single parent process initiates several single-threaded child processes and distributes
incoming requests to these child processes. Each of the child processes is responsible for handling a
single request.
It is the responsibility of the parent process to monitor the load and decide if processes should be killed
or forked.
Multi-threaded
Unlike the multi-process approach, it creates a single process with multiple threads, where each thread handles one request.
Hybrid
It is a combination of the above two approaches. In this approach, multiple processes are created and each
process initiates multiple threads. Each of the threads handles one connection. Using multiple threads
in a single process results in a lower load on system resources.
TYPES OF SERVER
1. Application server: a server dedicated to running certain software applications.
2. Catalog server: a central search point for information across a distributed network.
3. Communications server: a carrier-grade computing platform for communications networks.
4. Compute server: a server intended for intensive (especially scientific) computations.
5. Database server: provides database services to other computer programs or computers.
6. Fax server: provides fax services for clients.
7. File server: provides remote access to files.
8. Game server: a server that video game clients connect to in order to play online together.
9. Home server: a server for the home.
10. Mail server: handles transport of and access to email.
11. Mobile server (or "Server on the Go"): an Intel Xeon processor-based, server-class, laptop form factor computer.
12. Name server: a DNS server.
13. Print server: provides printer services.
14. Proxy server: acts as an intermediary for requests from clients seeking resources from other servers.
15. Sound server: provides multimedia broadcasting and streaming.
16. Stand-alone server: a server on a Windows network that neither belongs to nor governs a Windows domain.
17. Web server: a server that HTTP clients connect to in order to send commands and receive responses along with data contents.
JAVASCRIPT IN THE DESKTOP WITH NODE JS
What is Node.js?
Node.js is a server-side platform built on Google Chrome's JavaScript Engine (the V8 engine). Node.js was
developed by Ryan Dahl in 2009; its early stable releases were numbered in the 0.x series (e.g. v0.10.36).
Node.js is a platform built on Chrome's JavaScript runtime for easily building fast and scalable network
applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and
efficient, perfect for data-intensive real-time applications that run across distributed devices.
Node.js is an open source, cross-platform runtime environment for developing server-side and
networking applications. Node.js applications are written in JavaScript, and can be run within the
Node.js runtime on OS X, Microsoft Windows, and Linux.
Node.js = Runtime Environment + JavaScript Library
In the Node.js ecosystem, there are two major frameworks for creating desktop apps: NW.js and
Electron. Both have large communities around them, and both share similar approaches to building
desktop apps.
Node.js is an open-source server side runtime environment built on Chrome's V8 JavaScript engine. It
provides an event driven, non-blocking (asynchronous) I/O and cross-platform runtime environment
for building highly scalable server-side application using JavaScript.
Node.js can be used to build different types of applications such as command line application, web
application, real-time chat application, REST API server etc. However, it is mainly used to build
network programs like web servers, similar to PHP, Java, or ASP.NET.
Node.js eliminates the waiting, and simply continues with the next request.
Node.js runs single-threaded, non-blocking, asynchronous programming, which is very memory
efficient.
FEATURES OF NODE.JS
Following are some of the important features that make Node.js the first choice of software architects.
Asynchronous and Event Driven − All APIs of Node.js library are asynchronous, that is, non-
blocking. It essentially means a Node.js based server never waits for an API to return data. The server
moves to the next API after calling it and a notification mechanism of Events of Node.js helps the
server to get a response from the previous API call.
Very Fast − Being built on Google Chrome's V8 JavaScript Engine, Node.js library is very fast in
code execution.
Single Threaded but Highly Scalable − Node.js uses a single threaded model with event looping.
Event mechanism helps the server to respond in a non-blocking way and makes the server highly
scalable as opposed to traditional servers which create limited threads to handle requests. Node.js uses
a single threaded program and the same program can provide service to a much larger number of
requests than traditional servers like Apache HTTP Server.
No Buffering − Node.js applications never buffer any data. These applications simply output the data
in chunks.
License − Node.js is released under the MIT license.
ADVANTAGES OF NODE.JS
1. Node.js is an open-source framework under MIT license. (MIT license is a free software license
originating at the Massachusetts Institute of Technology (MIT).)
2. Uses JavaScript to build entire server side application.
3. Lightweight framework that includes bare minimum modules. Other modules can be included as per
the need of an application.
4. Asynchronous by default. So it performs faster than other frameworks.
5. Cross-platform framework that runs on Windows, MAC or Linux
In the traditional web server model, each request is handled by a dedicated thread from the thread pool. If
no thread is available in the thread pool at any point of time then the request waits till the next available
thread. Dedicated thread executes a particular request and does not return to thread pool until it
completes the execution and returns a response.
Node.js processes user requests differently when compared to a traditional web server model. Node.js
runs in a single process and the application code runs in a single thread and thereby needs less resources
than other platforms. All the user requests to your web application will be handled by a single thread and
all the I/O work or long running job is performed asynchronously for a particular request. So, this single
thread doesn't have to wait for the request to complete and is free to handle the next request. When
asynchronous I/O work completes then it processes the request further and sends the response.
An event loop constantly watches for the events raised by asynchronous jobs and executes the
callback function when a job completes. Internally, Node.js uses libuv for the event loop, which in turn
uses an internal thread pool to provide asynchronous I/O.
The following figure illustrates asynchronous web server model using Node.js.
Node.js process model increases the performance and scalability with a few caveats. Node.js is not fit for
an application which performs CPU-intensive operations like image processing or other heavy
computation work because it takes time to process a request and thereby blocks the single thread.
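A small sketch of this non-blocking behaviour using Node's built-in fs module (the file name data.txt is only illustrative):
var fs = require('fs');
// Asynchronous (non-blocking): the callback runs later, when the I/O completes
fs.readFile('data.txt', 'utf8', function (err, data) {
  if (err) {
    console.log('Error reading file:', err);
    return;
  }
  console.log('File contents:', data);
});
// This line runs immediately; the single thread is free to handle other work
console.log('Reading file in the background...');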
Install Node.js
Node.js development environment can be setup in Windows, Mac, Linux and Solaris. The following
tools/SDK are required for developing a Node.js application on any platform.
1. Node.js
2. Node Package Manager (NPM)
3. IDE (Integrated Development Environment) or text editor
NPM (Node Package Manager) is included in Node.js installation since Node version 0.6.0., so there is
no need to install it separately.
COMPONENTS OF NODE.JS
A Node.js application consists of the following three important components:
1. Import required modules: We use the require directive to load Node.js modules.
2. Create server: A server which will listen to client's requests similar to Apache HTTP Server.
3. Read request and return response: The server created in an earlier step will read the HTTP request made
by the client which can be a browser or a console and return the response.
NPM
4.WHAT IS NPM? (PART A)
EXPLAIN ABOUT NPM IN DETAIL(PART B)
What is NPM?
NPM – or "Node Package Manager" – is the default package manager for JavaScript's runtime Node.js.
NPM consists of two main parts:
a CLI (command-line interface) tool for publishing and downloading packages, and
an online repository that hosts JavaScript packages
Official website: https://www.npmjs.com
NPM is included with Node.js installation. After you install Node.js, verify NPM installation by
writing the following command in terminal or command prompt.
C:\> npm -v
2.11.3
If you have an older version of NPM then you can update it to the latest version using the following
command.
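C:\> npm install -g npm
(The -g flag performs the update globally, so it affects all Node.js applications on the computer.)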
To access NPM help, write npm help in the command prompt or terminal window.
NPM performs the operation in two modes: global and local. In the global mode, NPM performs
operations which affect all the Node.js applications on the computer whereas in the local mode,
NPM performs operations for the particular local directory which affects an application in that
directory only.
For example, the following command will install ExpressJS into MyNodeProj folder.
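C:\MyNodeProj> npm install express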
All the modules installed using NPM are installed under node_modules folder. The above
command will create ExpressJS folder under node_modules folder in the root folder of your project
and install Express.js there.
Node.js makes it easy to create a simple web server that processes incoming requests asynchronously.
The following example is a simple Node.js web server contained in server.js file
var http = require('http'); // 1 - Import Node.js core module
var server = http.createServer(function (req, res) { // 2 - creating server
//handle incoming requests here..
});
server.listen(5000); //3 - listen for any incoming requests
console.log('Node.js web server at port 5000 is running..')
Run the above web server by writing the node server.js command in a command prompt or terminal
window; it will display the message shown below.
C:\> node server.js
Node.js web server at port 5000 is running..
The http.createServer() method includes request and response parameters which is supplied by Node.js.
The request object can be used to get information about the current HTTP request e.g., url, request
header, and data. The response object can be used to send a response for a current HTTP request.
The following example demonstrates handling HTTP request and response in Node.js.
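A minimal sketch of such handling, extending the server.js example above (the response text is illustrative):
var http = require('http');
var server = http.createServer(function (req, res) {
  // req carries information about the current HTTP request (url, headers, data)
  console.log('Request received for: ' + req.url);
  // res is used to send the response back to the client
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from the Node.js web server.');
});
server.listen(5000); // listen for incoming requests
console.log('Node.js web server at port 5000 is running..');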
Point your browser to http://localhost:5000 and see the following result.
MIDDLEWARE
Middleware functions are functions that have access to the request object (req), the response
object (res), and the next middleware function in the application’s request-response cycle. The next
middleware function is commonly denoted by a variable named next.
Middleware functions can perform the following tasks:
Execute any code.
Make changes to the request and the response objects.
End the request-response cycle.
Call the next middleware function in the stack.
If the current middleware function does not end the request-response cycle, it must call next() to
pass control to the next middleware function. Otherwise, the request will be left hanging.
An Express application can use the following types of middleware:
1. Application-level middleware
2. Router-level middleware
3. Error-handling middleware
4. Built-in middleware
5. Third-party middleware
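A minimal sketch of application-level middleware (the route and messages are illustrative); together with the app.listen() call shown just below, it forms a complete Express app:
var express = require('express');
var app = express();

// Application-level middleware: runs for every request and must call next()
app.use(function (req, res, next) {
    console.log('Request received at:', Date.now());
    next(); // pass control to the next middleware/route handler
});

app.get('/', function (req, res) {
    res.send('Hello from Express');
});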
var server = app.listen(8000, function () {
var host = server.address().address;
var port = server.address().port;
console.log('Example app listening at http://%s:%s', host, port);
});
SERVER SIDE RENDERING WITH TEMPLATE ENGINE
Why do we need it?
With modern JavaScript frameworks and libraries that focus on creating interactive websites or Single Page Applications (SPAs), the way pages are delivered and displayed to a visitor has changed a lot.
There’s one more important thing you should consider when going with a client-side rendered
app: search engine and social network presence.
Solution
A — Consider having your key pages as static
When you are creating a platform that requires users to log in and does not provide content to users who are not signed in, you might decide to create your public-facing pages (like the index, "about us" and "contact us" pages) as static HTML, and not have them rendered by JS.
B — Generate parts of your application as HTML pages when running the build process
Libraries like react-snapshot can be added to your project, used to generate HTML copies of your
application pages and save them to a specified folder.
C — Create a server-side rendered application in JS
One of the big selling points of the current generation of JS applications is that they can be run on both the client (browser) and the server. This allows us to generate HTML for pages that are more dynamic, whose content is not known at build time.
Template Engine
Template engine helps us to create an HTML template with minimal code. Also, it can inject data
into HTML template at client side and produce the final HTML.
Templates also enable fast rendering of the server-side data that needs to be passed to the
application. For example, you might want to have components such as body, navigation, footer,
dashboard, etc.
The following figure illustrates how template engine works in Node.js.
As per the above figure, client-side browser loads HTML template, JSON/XML data and template engine
library from the server. Template engine produces the final HTML using template and data in client's
browser. However, some HTML templates process data and generate final HTML page at server side also.
There are many template engines available for Node.js. Each template engine uses a different language to
define HTML template and inject data into it.
The following is a list of important (but not limited) template engines for Node.js
1. Jade
2. Vash
3. EJS
4. Mustache
5. Dust.js
6. Nunjucks
7. Handlebars
8. atpl
9. haml
Advantages of Template engine in Node.js
1. Improves developer's productivity.
2. Improves readability and maintainability.
3. Faster performance.
4. Maximizes client side processing.
5. Single template for multiple pages.
6. Templates can be accessed from CDN (Content Delivery Network).
Using template engines with Express
A template engine enables you to use static template files in your application. To render template files, you have to set the following application setting properties:
Views: It specifies a directory where the template files are located.
For example: app.set('views', './views').
view engine: It specifies the template engine that you use. For example, to use the Pug template engine:
app.set('view engine', 'pug').
The pug template engine takes the input in a simple way and produces the output in HTML. See how it
renders HTML:
Simple input:
doctype html
html
  head
    title A simple pug example
  body
    h1 This page is produced by pug template engine
    p some paragraph here..
Express.js can be used with any template engine. Let's take an example to demonstrate how the pug template engine creates an HTML page dynamically.
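A minimal sketch of such an example, assuming the pug package is installed (npm install express pug) and an index.pug file similar to the template above is saved in the ./views folder; the data passed to res.render() can be injected into the template with pug interpolation (for example h1= message):
const express = require('express');
const app = express();

app.set('views', './views');     // directory that holds the template files
app.set('view engine', 'pug');   // use pug as the template engine

app.get('/', (req, res) => {
    // Renders views/index.pug and injects the given data into the template
    res.render('index', { message: 'This page is produced by pug template engine' });
});

app.listen(3000);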
STATIC FILES
9.HOW TO SERVE STATIC FILES USING NODE JS (PART B)
Static files are files that clients download as they are from the server.
Static files are files that don’t change when your application is running.
These files do a lot to improve your application, but they aren't dynamically generated by your web server the way a usual HTML response is.
In a typical web application, your most common static files will be the following types:
1. Cascading Style Sheets, CSS
2. JavaScript
3. Images
How static content works
Fetching a static asset from a server is one of the basic functions of the web. For example, typing the
following URL in a web browser (http://www.example.com/index.html) fetches the
file index.html from the server hosting example.com.
There are three steps to requesting static content from a server:
A user sends a request for a file to the web server.
The web server retrieves the file from disk.
The web server sends the file to the user.
Benefits of static content
Static content does not change. Once a static file is uploaded to a server, it does not change until
you replace it with another file. In the meantime, users who return to your website will see exactly
the same content.
Static content is easier to cache. Although there are tricks to caching dynamic content, it often can’t
be cached effectively because it’s hard to predict when it’s needed. Since static content is the same
for all users, it can be cached very easily.
Static content is less power-hungry. Dynamic websites contain layers of application logic that
run before the user receives a response. Static websites only need to pull files from the disk.
Additionally, techniques such as compression only need to be applied once to static content,
making it very resource-efficient.
Create a new directory, public. Express, by default does not allow you to serve static files. You need to
enable it using the following built-in middleware.
app.use(express.static('public'));
Serving static files in Express
To serve static files such as images, CSS files, and
JavaScript files, use the express.static built-in
middleware function in Express.
The function signature is:
express.static(root, [options])
The root argument specifies the root directory
from which to serve static assets. For more
information on the options argument,
see express.static.
For example, use the following code to serve
images, CSS files, and JavaScript files in a
directory named public:
app.use(express.static('public'))
Now, you can load the files that are in the public directory:
http://localhost:3000/images/kitten.jpg
http://localhost:3000/css/style.css
http://localhost:3000/js/app.js
http://localhost:3000/images/bg.png
http://localhost:3000/hello.html
Express looks up the files relative to the static directory, so the name of the static directory is not part of the
URL.
To use multiple static assets directories, call the express.static middleware function multiple times:
app.use(express.static('public'))
app.use(express.static('files'))
Express looks up the files in the order in which you set the static directories with
the express.static middleware function.
To create a virtual path prefix (where the path does not actually exist in the file system) for files that are
served by the express.static function, specify a mount path for the static directory, as shown below:
app.use('/static', express.static('public'))
Now, you can load the files that are in the public directory from the /static path prefix.
http://localhost:3000/static/images/kitten.jpg
http://localhost:3000/static/css/style.css
http://localhost:3000/static/js/app.js
http://localhost:3000/static/images/bg.png
http://localhost:3000/static/hello.html
However, the path that you provide to the express.static function is relative to the directory from where you
launch your node process. If you run the express app from another directory, it’s safer to use the absolute
path of the directory that you want to serve:
const path = require('path')
app.use('/static', express.static(path.join(__dirname, 'public')))
Example
Multiple Static Directories
We can also set multiple static assets directories using the following program −
var express = require('express');
var app = express();
app.use(express.static('public'));
app.use(express.static('images'));
app.listen(3000);
Virtual Path Prefix
We can also provide a path prefix for serving static files. For example, if you want to provide a path prefix
like '/static', you need to include the following code in your index.js file −
var express = require('express');
var app = express();
app.use('/static', express.static('public'));
app.listen(3000);
Now whenever you need to include a file, for example, a script file called main.js residing in your public
directory, use the following script tag −
<script src="/static/main.js"></script>
This technique can come in handy when providing multiple directories as static files. These prefixes can
help distinguish between multiple directories.
Absolute Path to Static Files Directory
// Set up Express
var express = require('express');
var app = express();
// Serve files from the absolute path of the directory
app.use(express.static(__dirname + '/public'));
// Start Express server
app.listen(3030);
Absolute Path to Directory & Virtual Path Prefix
// Set up Express
var express = require('express');
var app = express();
/* Serve from the absolute path of the directory that you want to serve
   with a virtual path prefix */
app.use('/static', express.static(__dirname + '/public'));
// Start Express server
app.listen(3030);
ASYNC/AWAIT
10.DEFINE ASYNC (PART A)
DEFINE AWAIT (PART A)
Async programming is critical to learn if you want to use JavaScript and Node.js to build web applications and servers, because JS code is asynchronous by default.
The Node.js Event Loop
Node.js is single-threaded. However, to be exact, only the event loop in Node.js, which interacts with
a pool of background C++ worker threads, is single-threaded.
The following are the important components of the Node.js processing model:
1. Event Queue: Tasks that are declared in a program, or returned from the processing thread pool via callbacks, wait here until the event loop picks them up.
2. Event Loop: The main Node.js thread that uses the event queue and the worker thread pool to carry out operations, both async and synchronous.
3. Background thread pool: These threads do the actual processing of tasks, which might be I/O blocking.
const https = require("https"); // required for https.get below

console.log("Hello");
https.get("https://httpstat.us/200", (res) => {
    console.log(`API returned status: ${res.statusCode}`);
});
console.log("from the other side");
If we execute the above piece of code, we would get this in our standard output:
Output
Hello
from the other side
API returned status: 200
Before Node version 7.6, callbacks were the only official way provided by Node to run one function after another. As the Node architecture is single-threaded and asynchronous, the community relied on callback functions, which fire (or run) after the function to which they are assigned has completed.
Example of a Callback:
app.get('/', function(){
    function1(arg1, function(){
        ...
    })
});
The problem with this style is that nesting callbacks inside callbacks quickly becomes messy and hard to follow when several functions are involved. This situation is commonly known as callback hell.
So, to find a way out, the idea of Promises and function chaining was introduced.
Example: Before async/await
function fun1(req, res){
    return request.get('http://localhost:3000')
        .catch((err) => {
            console.log('found error');
        }).then((res) => {
            console.log('get request returned.');
        });
}
Explanation:
The above code demonstrates a function implemented with function chaining instead of callbacks. It can be observed that the code is now easier to understand and more readable. The code basically says: GET
localhost:3000, catch the error if there is any; if there is no error then implement the following statement:
console.log(‘get request returned.’);
With Node v8, the async/await feature was officially rolled out by Node to deal with Promises and function chaining. The functions need not be chained one after another; simply await the function that returns the Promise. But the function must be declared async before awaiting a function that returns a Promise. The code now looks like below.
Example: After async/await
async function fun1(req, res){
    let response = await request.get('http://localhost:3000');
    if (response.err) { console.log('error'); }
    else { console.log('fetched response'); }
}
Explanation:
The code above basically asks the JavaScript engine running the code to wait for the request.get() function to complete before moving on to the next line to execute it.
The request.get() function returns a Promise which the user awaits. Before async/await, if you needed to make sure that the functions ran in the desired sequence, that is one after the other, you had to chain them one after the other or register callbacks.
Code writing and understanding becomes easy with async/await as can be observed from both the
examples.
Async: It simply allows us to write promise-based code as if it were synchronous, while ensuring that we do not break the execution thread. It operates asynchronously via the event loop. Async functions will always return a value: they make sure that a promise is returned, and if one is not returned explicitly then JavaScript automatically wraps the value in a resolved promise.
Example-1:
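A small sketch of what Example-1 might look like (the function name and value are illustrative), showing that an async function wraps its return value in a promise:
async function getNumber() {
    return 42; // equivalent to returning Promise.resolve(42)
}

getNumber().then((value) => console.log(value)); // prints 42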
Await: The await keyword is used to wait for a promise. It can be used within an async function only. It makes the code wait until the promise returns a result. It only makes the async function block wait.
Example-2:
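A small sketch of a getData() function that the calls below could refer to (the data value is illustrative). With the calls that follow, the output would be 1, then 2, and finally Hello World, because the code after await runs only once the promise resolves:
const getData = async () => {
    // await pauses only this async function until the promise settles
    let data = await Promise.resolve('Hello World');
    console.log(data);
};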
console.log(1);
getData();
console.log(2);
READING AND WRITING JSON FILES
Method 1: Using the require method: The simplest way to read a JSON file is to require it in a Node.js file using the require() method.
Syntax:
const data = require('path/to/file/filename');
Example: Create a users.json file in the same directory where the index.js file is present. Add the following data to the JSON file.
users.json file:
[ {
"name": "John",
"age": 21,
"language": ["JavaScript", "PHP", "Python"]
},
{
"name": "Smith",
"age": 25,
"language": ["PHP", "Go", "JavaScript"]
}]
// Requiring users file
const users = require("./users");
console.log(users);
node index.js
Output:
Method 2: Using the fs module: We can also use node.js fs module to read a file. The fs module
returns a file content in string format so we need to convert it into JSON format by
using JSON.parse() in-built method.
Add the following code into your index.js file:
index.js file:
const fs = require("fs");
// Read the file; its content is returned as a string
const data = fs.readFileSync("./users.json", "utf8");
// Converting to JSON (a JavaScript object)
const users = JSON.parse(data);
console.log(users);
Now run the file again and you’ll see an output like this:
Output:
Writing to a JSON file: We can write data into a JSON file by using the node.js fs module. We can
use writeFile method to write data into a file.
Syntax:
fs.writeFile("filename", data, callback);
Example: We will add a new user to the existing JSON file that we created in the previous example.
This task will be completed in three steps:
Read the file using one of the above methods.
Add the data using .push() method.
Write the new data to the file using JSON.stringify() method to convert data into string.
const fs = require("fs");
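Continuing from the require above, a sketch of the three steps (the new user's details are sample values):
// 1. Read the existing file
const data = fs.readFileSync("./users.json", "utf8");
const users = JSON.parse(data);

// 2. Add the new user using .push()
users.push({ name: "Bob", age: 30, language: ["Go"] });

// 3. Convert the data back to a string and write it to the file
fs.writeFile("./users.json", JSON.stringify(users, null, 2), (err) => {
    if (err) throw err;
    console.log("New user added to users.json");
});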
Run the file again and you will see a message in the console:
Now check your users.json file; it will look something like below:
REFERENCE:
PART B
1. Explain in detail about web servers
2. Explain Javascript in node JS
3. Explain node JS process model
4. Explain about NPM in detail
5. How to serve a file using http module
6. Discuss in detail about express framework.
7. Explain server side rendering with template engine
8. How to serve static files using node JS
9. How we fetch JSON from express
PART C
1. Discuss about how to create a node JS server that serves the following
a. Static files
b. JSON file
2. Discuss about how to create a server USING EXPRESS
************************
VASAVI VIDYA TRUST GROUP OF INSTITUTIONS, SALEM – 103
DEPARTMENT OF MCA
CLASS: I MCA UNIT: III
SUBJECT: FULL STACK WEB DEVELOPMENT SUBJECT CODE: MC4201
Why NoSQL?
The concept of NoSQL databases became popular with Internet giants like Google,
Facebook, Amazon, etc. who deal with huge volumes of data. The system response time
becomes slow when you use RDBMS for massive volumes of data.
To resolve this problem, we could “scale up” our systems by upgrading our existing
hardware. This process is expensive.
The alternative for this issue is to distribute database load on multiple hosts whenever
the load increases. This method is known as “scaling out.”
NoSQL database is non-relational, so it scales out better than relational databases as they are
designed with web applications in mind.
2.EXPLAIN HISTORY OF NOSQL DATABASE (PART B)
EXPLAIN FEATURES OF NOSQL DATABASE (PART B)
Brief History of NoSQL Databases
1998- Carlo Strozzi used the term NoSQL for his lightweight, open-source relational database
2000- Graph database Neo4j is launched
2004- Google BigTable is launched
2005- CouchDB is launched
2007- The research paper on Amazon Dynamo is released
2008- Facebook open-sources the Cassandra project
2009- The term NoSQL was reintroduced
Features of NoSQL
Non-relational
NoSQL databases never follow the relational model
Never provide tables with flat fixed-column records
Work with self-contained aggregates or BLOBs
Doesn’t require object-relational mapping and data normalization
No complex features like query languages, query planners, referential integrity joins, ACID
Schema-free
NoSQL databases are either schema-free or have relaxed schemas
Do not require any sort of definition of the schema of the data
Offers heterogeneous structures of data in the same domain
NoSQL is Schema-Free
Simple API
Offers easy-to-use interfaces for storing and querying the data provided
APIs allow low-level data manipulation & selection methods
Text-based protocols are mostly used, with HTTP REST and JSON
Mostly no standards-based NoSQL query language is used
Web-enabled databases run as internet-facing services
Distributed
TYPES OF NOSQL DATABASES
Key Value Pair Based
Data is stored in key/value pairs. It is designed in such a way to
handle lots of data and heavy load.
Key-value pair storage databases store data as a hash table where each
key is unique, and the value can be a JSON, BLOB(Binary Large
Objects), string, etc.
It is one of the most basic types of NoSQL databases. This kind of NoSQL database is used for collections, dictionaries, associative arrays, etc. Key-value stores help the developer to store schema-less data. They work best for shopping cart contents.
Redis, Dynamo and Riak are some NoSQL examples of key-value store databases. They are all based on Amazon's Dynamo paper.
Column-based
Column-oriented databases work on columns and are based on
BigTable paper by Google. Every column is treated separately.
Values of single column databases are stored contiguously.
They deliver high performance on aggregation queries like
SUM, COUNT, AVG, MIN etc. as the data is readily available
in a column.
Column-based NoSQL databases are widely used to manage data warehouses, business intelligence, CRM, library card catalogs, etc.
Cassandra, HBase and Hypertable are examples of column-based databases.
Document-Oriented:
Document-Oriented NoSQL DB stores and retrieves data as a key value pair but the value
part is stored as a document. The document is stored in JSON or XML formats. The value is
understood by the DB and can be queried.
The document type is mostly used for CMS systems, blogging platforms, real-time analytics & e-commerce applications. It should not be used for complex transactions which require multiple operations or queries against varying aggregate structures.
Amazon SimpleDB, CouchDB, MongoDB, Riak and Lotus Notes are popular document-oriented DBMS systems.
Graph-Based
A graph-type database stores entities as well as the relations amongst those entities. The entity is stored as a node with the relationships as edges. An edge gives a relationship between nodes. Every node and edge has a unique identifier.
Compared to a relational database where tables are loosely connected, a graph database is multi-relational in nature. Traversing relationships is fast as they are already captured in the DB, and there is no need to calculate them.
Graph-based databases are mostly used for social networks, logistics and spatial data.
Neo4J, Infinite Graph, OrientDB, FlockDB are some popular graph-based databases.
CAP THEOREM
Availability:
The database should always be available and responsive. It should not have any
downtime.
Availability means that each read or write request for a data item will either be
processed successfully or will receive a message that the operation cannot be
completed.
Partition Tolerance:
Partition Tolerance means that the system should continue to function even if the
communication among the servers is not stable. For example, the servers can be
partitioned into multiple groups which may not communicate with each other. Here, if
part of the database is unavailable, other parts are always unaffected.
The use of the word consistency in CAP and its use in ACID do not refer to the same
identical concept.
In CAP, the term consistency refers to the consistency of the values in different copies
of the same data item in a replicated distributed system. In ACID, it refers to the fact
that a transaction will not violate the integrity constraints specified on the database
schema.
CA (Consistency and Availability) -
The system guarantees both consistency and availability but cannot tolerate network partitions, so in practice it fits single-site or single-node deployments.
Example databases: traditional relational databases such as MySQL and PostgreSQL.
AP (Availability and Partition Tolerance) -
The system prioritizes availability over consistency and can respond with possibly stale data.
The system can be distributed across multiple nodes and is designed to operate reliably even in the face of network partitions.
Example databases: Cassandra, CouchDB, Riak, Voldemort, Amazon DynamoDB.
CP (Consistency and Partition Tolerance) -
The system prioritizes consistency over availability and responds with the latest updated data.
The system can be distributed across multiple nodes and is designed to operate reliably even in the face of network partitions.
Example databases: Apache HBase, MongoDB, Redis, Google Cloud Spanner.
Eventual Consistency
The term "eventual consistency" means keeping copies of data on multiple machines to get high availability and scalability. Thus, changes made to any data item on one machine have to be propagated to the other replicas.
Data replication may not be instantaneous, as some copies are updated immediately while others are updated in due course of time. These copies may be mutually inconsistent for a while, but in due course of time they become consistent. Hence the name eventual consistency.
DEFINE BASE (PART A)
BASE: Basically Available, Soft state, Eventual consistency
Basically available means the DB is available all the time as per the CAP theorem
Soft state means that even without an input, the system state may change
Eventual consistency means that the system will become consistent over time
MONGODB SYSTEM OVERVIEW
6.DEFINE MONGODB (PART A)
EXPLAIN ABOUT MONGODB SYSTEM (PART B)
MongoDB is an open-source NoSQL database management program. NoSQL (Not only SQL) databases are used as an alternative to traditional relational databases.
Like other NoSQL databases, MongoDB stores data as documents made up of field-and-value (key-value) pairs.
Its working is based on the concept of document and collection.
Collections, the equivalent of SQL tables, contain document sets. MongoDB offers
support for many programming languages, such as C, C++, C#, Go, Java, Python, Ruby
and Swift.
It is also an open-source, document-oriented, cross-platform database system written in C++.
It also provides high availability, high performance, along with automatic scaling.
This open-source product was developed by the company 10gen (now MongoDB Inc.) in October 2007, and the company also maintains it.
MongoDB is available as a free database management tool under the Server Side Public License (SSPL), having earlier been distributed under the GNU AGPL, and it is also available under a commercial license from the manufacturer.
MongoDB was also intended to function with commodity servers. Companies of
different sizes all over the world across all industries are using MongoDB as their
database.
Here are some key terminologies that you must know to get into the in-depth of
MongoDB:
What is a Database?
In MongoDB, a database can be defined as a physical container for collections of data.
Every database has its own set of files on the file system. Usually, a MongoDB server contains numerous databases.
What are Collections?
Collections can be defined as a cluster of MongoDB documents that exist within a
single database. You can relate this to that of a table in a relational database
management system.
Collection is a group of MongoDB documents. It is the equivalent of an RDBMS table.
A collection exists within a single database. Collections do not enforce a schema.
Documents within a collection can have different fields. Typically, all documents in a
collection are of similar or related purpose.
What are documents?
A document is a set of key-value pairs. Documents have dynamic schema.
Dynamic schema means that documents in the same collection do not need to have the
same set of fields or structure, and common fields in a collection's documents may hold
different types of data.
Here is a table showing the relation between the terminologies used in RDBMS and
MongoDB:
RDBMS               MongoDB
Database            Database
Table               Collection
Tuple or Row        Document
Column              Field
Table Join          Embedded Documents
Primary Key         Primary Key (default key _id provided by MongoDB)
mysqld / Oracle     mongod
Some well-known companies that use MongoDB include:
Adobe
McAfee
LinkedIn
FourSquare
MetLife
eBay
SAP
8.WHERE IS MONGODB USED? (PART A)
Where Is MongoDB Used?
Beginners need to know the purpose of MongoDB and why it is needed in contrast to SQL and other database systems. In simple words, it can be said that every modern-day application involves big data, analysis of different forms of data, fast feature improvements in handling data, and deployment flexibility, which old database systems are not competent enough to handle. Hence, MongoDB is the next choice.
Why Use MongoDB?
Some basic requirements are supported by this NoSQL database, which is lacking in other
database systems. These collective reasons make MongoDB popular among other database
systems:
Storage. MongoDB can store large structured and unstructured data volumes and is
scalable vertically and horizontally. Indexes are used to improve search performance.
Searches are also done by field, range and expression queries.
Data integration. This integrates data for applications, including for hybrid and multi-
cloud applications.
Complex data structures descriptions. Document databases enable the embedding of
documents to describe nested structures (a structure within a structure) and can tolerate
variations in data.
Load balancing. MongoDB can be used to run over multiple servers.
9.EXPLAIN FEATURES OF MONGODB (PART B)
Features of MongoDB:
Document Oriented: MongoDB stores the main subject in the minimal number of
documents and not by breaking it up into multiple relational structures like RDBMS.
For example, it stores all the information of a computer in a single document called
Computer and not in distinct relational structures like CPU, RAM, Hard disk, etc.
Indexing: Without indexing, a database would have to scan every document of a
collection to select those that match the query which would be inefficient. So, for
efficient searching Indexing is a must and MongoDB uses it to process huge volumes of
data in very little time.
Scalability: MongoDB scales horizontally using sharding (partitioning data across
various servers). Data is partitioned into data chunks using the shard key, and these data
chunks are evenly distributed across shards that reside across many physical servers.
Also, new machines can be added to a running database.
Replication and High Availability: MongoDB increases the data availability with
multiple copies of data on different servers. By providing redundancy, it protects the
database from hardware failures. If one server goes down, the data can be retrieved
easily from other active servers which also had the data stored on them.
Aggregation: Aggregation operations process data records and return the computed results. It is similar to the GROUP BY clause in SQL. A few aggregation expressions are sum, avg, min, max, etc. (a small sketch is given after this list).
Third-party support. MongoDB supports several storage engines and provides pluggable
storage engine APIs that let third parties develop their own storage engines for
MongoDB.
Aggregation. The DBMS also has built-in aggregation capabilities, which lets users
run MapReduce code directly on the database rather than running MapReduce
on Hadoop. MongoDB also includes its own file system called GridFS, akin to
the Hadoop Distributed File System. The use of the file system is primarily for storing
files larger than BSON's size limit of 16 MB per document. These similarities let
MongoDB be used instead of Hadoop, though the database software does integrate with
Hadoop, Spark and other data processing frameworks.
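As noted in the aggregation points above, such computations can be run directly in the mongo shell. A small sketch, assuming an orders collection with PaymentMode and OrderTotal fields (like the one used later in this unit):
db.orders.aggregate([
    // Group the orders by payment mode and sum their totals
    { $group: { _id: "$PaymentMode", total: { $sum: "$OrderTotal" } } }
])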
Disadvantages of MongoDB
Continuity. With its automatic failover strategy, a user sets up just one master node in a
MongoDB cluster. If the master fails, another node will automatically convert to the
new master. This switch promises continuity, but it isn't instantaneous -- it can take up
to a minute. By comparison, the Cassandra NoSQL database supports multiple master
nodes. If one master goes down, another is standing by, creating a highly available
database infrastructure.
Write limits. MongoDB's single master node also limits how fast data can be written to
the database. Data writes must be recorded on the master, and writing new information
to the database is limited by the capacity of that master node.
Data consistency. MongoDB doesn't provide full referential integrity through the use of
foreign-key constraints.
Security. In addition, user authentication isn't enabled by default in MongoDB databases
BASIC QUERYING WITH MONGODB SHELL
Step 1 — Connecting to the MongoDB Server
To open up the MongoDB shell, run the mongo command from your server prompt. By default, the mongo command opens a shell connected to a locally-installed MongoDB instance running on port 27017.
$ mongo
This will print output similar to the following:
Output
connecting to:
mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Most MongoDB commands are executed on a database or on a collection in the currently selected database. The currently-selected database is represented by the db object
accessible through the shell. You can check which database is currently selected by
typing db into the shell:
db
On a freshly-connected shell instance, the selected database is always called test:
Output
test
You can safely use this database to experiment with MongoDB and the MongoDB shell.
To switch to another database, you can run the use command followed by the new
database name. Try switching to a database called fruits:
use fruits
The shell will inform you that you’re now using the new database:
Output
switched to db fruits
Type the following line into the MongoDB shell and press ENTER. Notice the
highlighted collection name (apples):
db.apples.insert(
Pressing ENTER after an open parenthesis will start a multi-line command prompt,
allowing you to enter longer commands in more than one line. The insert command
won’t register as complete until you enter a closing parenthesis. Until you do, the
prompt will change from the greater-than sign to an ellipsis (...).
You don’t need to break up MongoDB commands into multiple lines like this, but doing
so can make long commands easier to read and understand.
On the next line, enter the object within a pair of curly brackets ({ and }). This example
document has only one field and value pair:
{name: 'Red Delicious'}
you can end your input and run the operation by entering a closing parenthesis and
pressing ENTER:
)
This time, the Mongo shell will register the ending of the insert command and execute
the whole statement.
Output
WriteResult({ "nInserted" : 1 })
Create a Sample Database
Before we start, we will create a sample DB with some sample data on which to perform all the operations.
We will create a database with name myDB and will create a collection with
name orders. For this, the statement would be as follows.
> use myDB
>db.createCollection("orders")
>
MongoDB doesn't use the rows and columns. It stores the data in a document format. A
collection is a group of documents.
You can check all collections in a database by using the following statement.
> use myDB
>show collections
orders
Let's insert some documents by using the following statement.
>db.orders.insert([
{
Customer: "abc",
Address:{"City":"Jaipur","Country":"India"},
PaymentMode":"Card",
Email:"[email protected]",
OrderTotal: 1000.00,
OrderItems:[
{"ItemName":"notebook","Price":"150.00","Qty":10},
{"ItemName":"paper","Price":"10.00","Qty":5},
{"ItemName":"journal","Price":"200.00","Qty":2},
{"ItemName":"postcard","Price":"10.00","Qty":500}
]
},
{
Customer: "xyz",
Address:{"City":"Delhi","Country":"India"},
PaymentMode":"Cash",
OrderTotal: 800.00,
OrderItems:[
{"ItemName":"notebook","Price":"150.00","Qty":5},
{"ItemName":"paper","Price":"10.00","Qty":5},
{"ItemName":"postcard","Price":"10.00","Qty":500}
]
},
{
Customer: "ron",
Address:{"City":"NewYork","Country":"USA"},
PaymentMode":"Card",
Email:"[email protected]",
OrderTotal: 800.00,
OrderItems:[
{"ItemName":"notebook","Price":"150.00","Qty":5},
{"ItemName":"postcard","Price":"10.00","Qty":00}
]
}
])
Query Documents
find() method
We need to use the find() method to query documents from MongoDB collections. The
following statement will retrieve all documents from the collection.
>db.orders.find()
To find documents that match specific equality conditions, pass a query filter document to the find() method. For example, the following statement retrieves the documents from a movies collection whose title is "Titanic":
db.movies.find( { "title": "Titanic" } )
This operation corresponds to the following SQL statement:
SELECT * FROM movies WHERE title = "Titanic"
Filter the Documents by Specifying a Condition
Now we will learn how we can fetch the documents that match a specified condition.
MongoDB provides many comparison operators for this.
1. $eq Operator
The $eq operator checks the equality of the field value with the specified value. To fetch
the order where PaymentMode is 'Card' you can use the following statement
>db.orders.find( { PaymentMode: { $eq: "Card" } } )
This query can be written also like below
>db.orders.find( { PaymentMode: "Card" } )
$eq Operator with embedded document
You may have noticed that we inserted an embedded document Address in
the Orders collection. If you want to fetch the order where Country is 'India' you can use
a dot notation like the following statement.
>db.Orders.find( { "Address.Country": { $eq: "India" } } )
This query can be written also like below
>db.Orders.find( { "Address.Country":"India" } )
$gt Operator
You can use the $gt operator to retrieve the documents where a field’s value is greater
than the specified value. The following statement will fetch the documents
where OrderTotal is greater than 800.
>db.orders.find( { OrderTotal: { $gt: 800.00 } } )
$gte Operator
You can use the $gte operator to retrieve the documents where a field’s value is greater
than or equal to the specified value. The following statement will fetch the documents
where OrderTotal is greater than or equal to 800.
>db.orders.find( { OrderTotal: { $gte: 800.00 } } )
$lt Operator
You can use the $lt operator to retrieve the documents where a field’s value is less than
the specified value. The following statement will fetch the documents
where OrderTotal is less than 800.
>db.orders.find( { OrderTotal: { $lt: 800.00 } } )
$lte Operator
You can use the $lte operator to retrieve the documents where a field’s value is less than
or equal to the specified value. Following statement will fetch the documents
where OrderTotal is less than or equal to 800.
>db.orders.find( { OrderTotal: { $lte: 800.00 } } )
$ne Operator
You can use the $ne operator to retrieve the documents where a field’s value is not
equal to the specified value.
>db.orders.find( { PaymentMode: { $ne: "Card" } } )
$in Operator
You can use the $in operator to retrieve the documents where a field’s value is equal to
any value in the specified array.
>db.orders.find( { "OrderItems.ItemName": { $in: ["journal","paper"] } } )
$nin Operator
You can use the $nin operator to retrieve the documents where a field’s value is not
equal to any value in the specified array. It will also select the documents where the
field does not exist.
>db.orders.find( { "OrderItems.ItemName": { $nin: ["journal","paper"] } } )
Indexing
We know that indexing is very important if we are performing the queries on a large
database. Without indexing execution of a query can be expensive. We can add a simple
ascending index on a single field by using the following statement.
>db.Orders.createIndex({"Customer":1})
Select All Documents in a Collection
To select all documents in the collection, pass an empty document as the query filter
parameter to the find method. The query filter parameter determines the select criteria:
db.inventory.find( {} )
This operation corresponds to the following SQL statement:
SELECT * FROM inventory
MongoDB Shell
12.WHAT IS THE MONGODB MONGO SHELL? (PART A)
What is the MongoDB Mongo shell?
MongoDB Mongo shell is an interactive JavaScript interface that allows you to interact
with MongoDB instances through the command line. The shell can be used for:
Data manipulation
Administrative operations such as maintenance of database instances
MongoDB Mongo shell features
MongoDB Mongo shell is the default client for the MongoDB database server. It’s a
command-line interface (CLI), where the input and output are all console-based. The
Mongo shell is a good tool to manipulate small sets of data.
Here are the top features that Mongo shell offers:
1. Run all MongoDB queries from the Mongo shell.
2. Manipulate data and perform administration operations.
3. Mongo shell uses JavaScript and a related API to issue commands.
4. See previous commands in the mongo shell with up and down arrow keys.
5. View possible command completions using the tab button after partially entering a
command.
6. Print error messages, so you know what went wrong with your commands.
7. MongoDB has recently introduced a new mongo shell known as mongosh. It has some
additional features, such as extensibility and embeddability—that is, the ability to use it
inside other products such as VS Code.
MongoDB Shell (mongosh)
The MongoDB Shell, mongosh, is a fully functional JavaScript and Node.js 16.x REPL environment for interacting with MongoDB deployments. You can use the MongoDB Shell to test queries and operations directly with your database.
mongosh is available as a standalone package in the MongoDB Download Center.
Download and Install mongosh
Connect to a MongoDB Deployment
Once you have installed the MongoDB Shell and added it to your system PATH, you
can connect to a MongoDB deployment. To learn more, see Connect to a
Deployment.
The MongoDB Shell versus the Legacy mongo Shell
The new MongoDB Shell, mongosh, offers numerous advantages over the
legacy mongo shell, such as:
Improved syntax highlighting.
Improved command history.
Improved logging.
Currently mongosh supports a subset of the mongo shell methods.
Achieving feature parity between mongosh and the mongo shell is an ongoing effort.
To maintain backwards compatibility, the methods that mongosh supports use the same
syntax as the corresponding methods in the mongo shell.
What Is Body-parser?
Body-parser is a Node.js middleware used to handle HTTP POST requests. It can parse a string-based client request body into a JavaScript object which we can use in our application.
Body-parser parses the HTTP request body, which usually helps when you need to know more than just the URL being hit.
Specifically in the context of a POST HTTP request where the information you want is
contained in the body.
Using body-parser allows you to access req.body from within routes and use that data.
For example: To create a user in a database.
What Does Body Parser Do?
Parse the request body into JS Object
Put above JS Object in req.body so that middleware can use the data.
Installation
npm install body-parser --save
API
To include body-parser in our application, use the following code. The bodyParser object
has two methods, bodyParser.json() and bodyParser.urlencoded(). The data will be
available in req.body property.
var bodyParser = require('body-parser');
req.body
req.body contains data ( in key-value form ) submitted in request body. The default value
is undefined.
req.json
const express=require('express');
const app=express();
const bodyParser=require('body-parser');
// parse application/json
app.use(bodyParser.json());
app.post('/post',(req,res)=>{
console.log(req.body);
res.json(req.body);
});
Form Data
Use bodyParser to parse HTML Form Data received through HTTP POST method.
Create a separate HTML Form with inputs. Use name attributes in all input controls. Set
the method (in html form ) to POST and action to path of Post request.
const express=require('express');
const app=express();
const bodyParser=require('body-parser');
// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }));
app.post('/formdata',(req,res)=>{
console.log(req.body);
res.json(req.body);
});
HTML Page
<form method="post" action="http://127.0.0.1:3000/formdata">
<input type="text" name="username" required>
<input type="password" name="userpass" required>
<button>Send</button>
</form>
Errors
The middlewares provided by this module create errors using the http-errors module.
The errors will typically have a status/statusCode property that contains the suggested
HTTP response code, an expose property to determine if the message property should
be displayed to the client, a type property to determine the type of error without
matching against the message, and a body property containing the read body, if
available.
The following are the common errors created, though any error can come through for
various reasons.
1. Content encoding unsupported
2. Entity parse failed
3. Entity verify failed
4. Request aborted
5. Request entity too large
6. Request size did not match content length
7. Stream encoding should not be set
8. Stream is not readable
9. Too many parameters
NODEJS AND MONGODB CONNECTION
14.HOW TO CONNECT MONGODB WITH NODEJS (PART B)
The MongoDB Node.js Driver allows you to easily interact with MongoDB databases
from within Node.js applications.
MongoDB is a NoSQL database for nodejs. We will use MongoDB driver for nodejs to
manage MongoDB database. MongoDB uses binary JSON to store data. We will also
use the mongoose tool to connect MongoDB with Node js and manage the database (i.e.
create, read, update, and delete documents). Compared to traditional databases, MongoDB is
easy to use and saves time.
Introduction
Consider, We have a table schema defined in a relational database as shown below. We
cannot insert new data into the table that contains the new field phoneNumber. Because
Field phoneNumber is not defined in the table schema.
+---------------+--------------+------+-----+---------+
| Field | Type | Null | Key | Default |
+---------------+--------------+------+-----+---------+
| id | int | NO | PRI | NULL |
| firstName | varchar(255) | YES | | NULL |
| lastName | varchar(255) | YES | | NULL |
+---------------+--------------+------+-----+---------+
But, MongoDB doesn't need a pre-defined schema. We can insert new data in the object
format with any additional fields. MongoDB store data as a document shown below:
// document 1
{
id:'1',
firstName:'Suhani',
lastName:'Singh'
}
// document 2
{
id:'2',
firstName:'Modi',
lastName:'Kumar',
phoneNumber:'+91999999'
}
// document 3
{
id:'3',
email:'[email protected]',
firstName:'Ambesh',
lastName:'Yadav',
phoneNumber:['+9199999999','+9199999999'],
}
Install the MongoDB Node.js Driver
Create a new directory named mongodb-nodejs and move into it with the CLI (Command Line Interface) as shown below:
mkdir mongodb-nodejs
cd mongodb-nodejs
Create a new node project with npm; this adds the package.json file inside the mongodb-nodejs folder
npm init -y
Install the mongodb driver for nodejs to use the MongoDB database with nodejs
npm install mongodb --save
mongodb driver helps us to connect and easily manage queries in MongoDB with
nodejs.
Connecting to The Local MongoDB Database
We will define a path to store data for MongoDB on the local machine. We will add
the path C:\Program Files\MongoDB\data\db. Also, We make sure that the specified
path/folder exists.
mongod --dbpath 'C:\Program Files\MongoDB\data\db'
In the above code block, We are using --dbpath to add a path for the MongoDB
database and start the server locally.
Configuring The MongoDB Node.js Connection
Create a file named server.js and add the following code to the server.js file
const { MongoClient } = require('mongodb')
// Create Instance of MongoClient for mongodb
const client = new MongoClient('mongodb://localhost:27017')
// Connect to database
client.connect()
.then(() => console.log('Connected Successfully'))
.catch(error => console.log('Failed to connect', error))
Run the node server.js command and We will see the following output
$ node server.js
Connected Successfully!
Closing The Connection
We will replace the previous code of the server.js file with the following code:
const { MongoClient } = require('mongodb')
const client = new MongoClient('mongodb://localhost:27017')

// Connect to the database, then close the connection when done
client.connect()
.then(() => {
    console.log('Connected Successfully!')
    return client.close()
})
.then(() => console.log('Connection closed'))
.catch((error) => console.log('Failed to connect', error))
The examples that follow come from an Express application that uses a Mongoose Student model and listens on http://localhost:4000. When a new student is added through that API, it responds with a document such as:
{
    "name": "abc",
    "email": "[email protected]",
    "_id": "635a46cf9e89ce7cd5e1473f",
    "__v": 0
}
We will use the get() method to fetch all the students from the local MongoDB
database.
// Get all students
app.get('/students', (req, res) => {
Student.find({})
.then(docs => {
console.log(docs)
res.json(docs)
})
.catch(err => console.log(err))
})
In the above code block, We are using the find() method to retrieve students' data.
When we hit the URL http://localhost:4000/students, It will display the output as
shown below:
[
{
"_id": "635a4a55b9e33f7ab6e09ed4",
"name": "abc",
"email": "[email protected]",
"__v": 0
},
{
"_id": "635a4a8dc7824ed2d957b882",
"name": "xyz",
"email": "[email protected]",
"__v": 0
},
{
"_id": "635a4b2e0c67336d6d866ca1",
"name": "hi",
"email": "[email protected]",
"__v": 0
}
]
Insert Documents
const { MongoClient } = require('mongodb')
const client = new MongoClient('mongodb://localhost:27017')

// Insert a document into the database
client.db('students').collection('students').insertOne({
    name: 'Amyporter',
    email: '[email protected]'
})
.then((res) => {
    console.log(res)
    client.close()
})
.catch((err) => console.log(err))
Update/Delete Documents
const { MongoClient } = require('mongodb')
const client = new MongoClient('mongodb://localhost:27017')

// Update a document in the database
client.db('students').collection('students')
.updateOne({ name: 'Amyporter' },
{
$set:
{ email: '[email protected]' }
})
.then((res) => {
console.log(res)
client.close()
})
.catch((err) => console.log(err))
In the above code block, we are using the updateOne() method to select the student by name or email and update it with the $set operator.
const { MongoClient } = require('mongodb')
const client = new MongoClient('mongodb://localhost:27017')

// Delete a document from the database
client.db('students').collection('students')
.deleteOne({ name: 'Amyporter' })
.then((res) => {
console.log(res)
client.close()
})
.catch((err) => console.log(err))
In the above code block, We are using the deleteOne() method to select the student
using name or email and delete it.
Mongoose
Mongoose is an ODM(Object Data Modeling) tool that helps to define the model
based on the schema. A schema is a kind of structure that defines how we can store
data in the database. It helps us to validate types like objects, strings, booleans,
numbers, etc.
Consider, We want to make a list of npm libraries, then we will use the following
format to store information in the database.
const npmSchema = {
id:'',
packageName:'',
homePage:'',
repository:''
}
In this way, we can store data in the MongoDB database, but it is not validated. Hence, we use mongoose to define a schema, validate data and create a model as shown below
// Define schema and validate using mongoose
const npmList = new mongoose.Schema({
id: { type: Number },
packageName: { type: String },
homePage: { type: String },
repository: { type: String }
})
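Continuing from the npmList schema above, and assuming mongoose has already been required and connected, a sketch of compiling the schema into a model and saving a document (the model name and sample values are illustrative):
const NpmPackage = mongoose.model('NpmPackage', npmList); // compile the schema into a model

const pkg = new NpmPackage({
    id: 1,
    packageName: 'express',
    homePage: 'https://expressjs.com',
    repository: 'https://github.com/expressjs/express'
});

// save() validates the document against the schema and stores it
pkg.save()
    .then((doc) => console.log('Saved:', doc))
    .catch((err) => console.log(err));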
In general, Node.js database driver modules (whether for MongoDB, MySQL or another database) let you do the following:
1. Manage the connection pooling – Here is where you can specify the number of MySQL database connections that should be maintained and saved by Node.js.
2. Create and close a connection to a database. In either case, you can provide a callback
function which can be called whenever the “create” and “close” connection methods
are executed.
3. Queries can be executed to get data from respective databases to retrieve data.
4. Data manipulation, such as inserting data, deleting, and updating data can also be
achieved with these modules.
Using MongoDB and Node.js
Database name: EmployeeDB
Collection name: Employee
Documents
{
{Employeeid : 1, Employee Name : Guru99},
{Employeeid : 2, Employee Name : Joe},
{Employeeid : 3, Employee Name : Martin},
}
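A minimal sketch of the connection code described in the explanation below, assuming a local MongoDB instance and an older, callback-style Mongoose API:
var mongoose = require('mongoose');

// 'mongodb' protocol, 'localhost' host and 'EmployeeDB' database name
mongoose.connect('mongodb://localhost/EmployeeDB', function (err) {
    if (err) {
        console.log('Connection failed', err);
        return;
    }
    console.log('Connection established');

    // Close the connection once we are done
    mongoose.connection.close();
});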
Code Explanation:
1. The first step is to include the mongoose module, which is done through the require
function. Once this module is in place, we can use the necessary functions available in
this module to create connections to the database.
2. Next, we specify our connection string to the database. In the connect string, there are
3 key values which are passed.
The first is ‘mongodb’ which specifies that we are connecting to a mongoDB database.
The next is ‘localhost’ which means we are connecting to a database on the local
machine.
The next is ‘EmployeeDB’ which is the name of the database defined in our MongoDB
database.
The next step is to actually connect to our database. The connect function takes in our
URL and has the facility to specify a callback function. It will be called when the
connection is opened to the database. This gives us the opportunity to know if the
database connection was successful or not.
In the function, we are writing the string “Connection established” to the console to
indicate that a successful connection was created.
Finally, we are closing the connection using the db.close statement.
If the above code is executed properly, the string “Connected” will be written to the console
as shown below.
3. Querying for data in a MongoDB database – Using the MongoDB driver we can
also fetch data from the MongoDB database. The below section will show how we can
use the driver to fetch all of the documents from our Employee collection in our
EmployeeDB database. This is the collection in our MongoDB database, which
contains all the employee-related documents. Each document has an object id,
Employee name, and employee id to define the values of the document.
Code Explanation:
1. In the first step, we are creating a cursor (A cursor is a pointer which is used to point
to the various records fetched from a database. The cursor is then used to iterate
through the different records in the database. Here we are defining a variable name
called cursor which will be used to store the pointer to the records fetched from the
database. ) which points to the records which are fetched from the MongoDb
collection. We also have the facility of specifying the collection ‘Employee’ from
which to fetch the records. The find() function is used to specify that we want to
retrieve all of the documents from the MongoDB collection.
2. We are now iterating through our cursor and for each document in the cursor we are
going to execute a function.
3. Our function is simply going to print the contents of each document to the console.
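Putting these steps together, a sketch of the query code being described, assuming db is the connected database object from the previous step and a legacy, callback-style driver:
// Cursor pointing to all documents in the Employee collection
var cursor = db.collection('Employee').find();

// Iterate through the cursor and print each document to the console
cursor.each(function (err, item) {
    if (item != null) {
        console.log(item);
    }
});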
For example, suppose you just wanted to fetch the record which has the employee name "raj"; then the statement can be written as follows:
var cursor = db.collection('Employee').find({ EmployeeName: "raj" })
If the above code is executed successfully, the following
output will be displayed in your console.
4. Inserting documents in a collection – Documents can be inserted into a collection
using the insertOne method provided by the MongoDB library. The below code
snippet shows how we can insert a document into a mongoDB collection.
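A sketch of the insert being described, assuming db is the connected database object and a legacy, callback-style driver:
db.collection('Employee').insertOne({
    Employeeid: 4,
    EmployeeName: 'NewEmployee'
}, function (err, res) {
    if (err) throw err;
    console.log('Document inserted');
});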
Code Explanation:
1. Here we are using the insertOne method from the MongoDB library to insert a
document into the Employee collection.
2. We are specifying the document details of what needs to be inserted into the
Employee collection.
If you now check the contents of your MongoDB database, you will find the record with
Employeeid of 4 and EmployeeName of “NewEmployee” inserted into the Employee
collection.
To check that the data has been properly inserted in the database, you need to execute the
following commands in MongoDB
1. Use EmployeeDB
2. db.Employee.find({Employeeid :4 })
The first statement ensures that you are connected to the EmployeeDb database. The second
statement searches for the record which has the employee id of 4.
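The next operation updates a document. A sketch of the update described in the explanation below, under the same assumptions as above:
db.collection('Employee').updateOne(
    { EmployeeName: 'NewEmployee' },          // which document to update
    { $set: { EmployeeName: 'Mohan' } },      // the new value to set
    function (err, res) {
        if (err) throw err;
        console.log('Document updated');
    }
);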
Code Explanation:
1. Here we are using the “updateOne” method from the MongoDB library, which is used
to update a document in a mongoDB collection.
2. We are specifying the search criteria of which document needs to be updated. In our
case, we want to find the document which has the EmployeeName of
“NewEmployee.”
3. We then want to set the value of the EmployeeName of the document from
“NewEmployee” to “Mohan”.
If you now check the contents of your MongoDB database, you will find the record with
Employeeid of 4 and EmployeeName of “Mohan” updated in the Employee collection.
To check that the data has been properly updated in the database, you need to execute the
following commands in MongoDB
1. Use EmployeeDB
2. db.Employee.find({Employeeid :4 })
The first statement ensures that you are connected to the EmployeeDb database. The second
statement searches for the record which has the employee id of 4.
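The next operation deletes a document. A sketch of the delete described in the explanation below, under the same assumptions as above:
db.collection('Employee').deleteOne(
    { EmployeeName: 'Mohan' },                // which document to delete
    function (err, res) {
        if (err) throw err;
        console.log('Document deleted');
    }
);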
Code Explanation:
1. Here we are using the “deleteOne” method from the MongoDB library, which is used
to delete a document in a mongoDB collection.
2. We are specifying the search criteria of which document needs to be deleted. In our
case, we want to find the document which has the EmployeeName of “Mohan” and
delete this document.
If you now check the contents of your MongoDB database, you will find the record with
Employeeid of 4 and EmployeeName of “Mohan” deleted from the Employee collection.
To check that the data has been properly updated in the database, you need to execute the
following commands in MongoDB
1. Use EmployeeDB
2. db.Employee.find()
The first statement ensures that you are connected to the EmployeeDB database. The second statement searches for and displays all of the records in the Employee collection. Here you can see if the record has been deleted or not.
How to build a node express app with MongoDB to store and serve content
Building an application with a combination of both using express and MongoDB is quite
common nowadays. When working with JavaScript web-based applications, one will normally hear of the term MEAN stack.
The term MEAN stack refers to a collection of JavaScript based technologies used to
develop web applications.
MEAN is an acronym for MongoDB, ExpressJS, AngularJS, and Node.js.
Step 1) Define all the libraries which need to be used in our application, which in our case is
both the MongoDB and express library.
Code Explanation:
1. We are defining our ‘express’ library, which will be used in our application.
2. We are defining our ‘MongoDB’ library, which will be used in our application for
connecting to our MongoDB database.
3. Here we are defining the URL of our database to connect to.
4. Finally, we are defining a string which will be used to store our collection of employee
id which need to be displayed in the browser later on.
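A sketch of what Step 1 might look like, based on the explanation above (the database URL and variable names are assumptions):
var express = require('express');                    // 1 - express library
var MongoClient = require('mongodb').MongoClient;    // 2 - MongoDB library
var app = express();

var url = 'mongodb://localhost/EmployeeDB';          // 3 - URL of the database to connect to
var str = '';                                        // 4 - will hold the employee ids to display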
Step 2) In this step, we are now going to get all of the records in our ‘Employee’ collection
and work with them accordingly.
Code Explanation:
1. We first connect to our MongoDB database.
2. We then get all of the records in our 'Employee' collection via the find() command and store them in a
variable called cursor. Using this cursor variable, we will be able to browse through all
of the records of the collection.
3. We are now using the cursor.each() function to navigate through all of the records of
our collection. For each record, we are going to define a code snippet on what to do
when each record is accessed.
4. Finally, we see that if the record returned is not null, then we are taking the employee id
via the command “item.Employeeid”. The rest of the code just constructs proper
HTML which will allow our results to be displayed properly in the browser.
Step 3) In this step, we are going to send our output to the web page and make our
application listen on a particular port.
Code Explanation:
1. Here we are sending the entire content which was constructed in the earlier step to our
web page. The ‘res’ parameter allows us to send content to our web page as a
response.
2. We are making our entire Node.js application listen on port 3000.
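The code for Steps 1-3 is not reproduced above; a minimal sketch of the whole application, assuming the 2.x MongoDB driver (where cursor.each() is available) and a single root route, might look like:
var express = require('express');
var MongoClient = require('mongodb').MongoClient;

var url = 'mongodb://localhost/EmployeeDB';
var app = express();

app.get('/', function (req, res) {
    MongoClient.connect(url, function (err, db) {
        if (err) throw err;
        var str = '';
        var cursor = db.collection('Employee').find();
        // Walk through every record of the Employee collection
        cursor.each(function (err, item) {
            if (item != null) {
                str = str + '<p>Employee id: ' + item.Employeeid + '</p>';
            } else {
                // A null item means the cursor is exhausted; send the accumulated HTML
                res.send(str);
                db.close();
            }
        });
    });
});

app.listen(3000);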
Output:
});
});
});
var server = app.listen(5000, function () {
console.log('Server is running..');
});
In the above example, we have imported the mssql module and called the connect() method to
connect to our SchoolDB database. We have passed a config object which includes
database information such as userName, password, database server and database name. On
successful connection with the database, we use a sql.Request object to execute a query against any
database table and fetch the records.
Run the above example using node server.js command and point your browser
to http://localhost:5000 which displays an array of all students from Student table.
Example
1. Create an empty folder and name it node.js mysql.
2. Open the newly created directory in VS Code inside the terminal, and type npm init to
initialize the project. Press Enter to leave the default settings as they are.
index.js
Create a file called index.js in the project directory. Since our main goal is to
understand the connection between Node.js and MySQL database, this will be the only
file that we create and work on in this project.
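The listing below starts at the connection step; it presumes the two modules have already been imported, a sketch with package names inferred from the calls that follow:
const express = require("express");
const mysql = require("mysql");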
// Create connection
const db = mysql.createConnection({
host: "localhost",
user: "root",
password: "simplilearn",
database: "nodemysql",
});
// Connect to MySQL
db.connect((err) => {
if (err) {
throw err;
}
console.log("MySql Connected");
});
Since we are using the Express module to create the web server, we create a variable named
app that behaves as an object of the express module.
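The corresponding line (assumed, matching the description above) would be:
const app = express();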
We use the GET API request to create the database. Notice that we have first defined the
SQL query for creating the database. We then pass this query to the query() method along
with the error callback method. The latter throws an error if the query wasn’t successfully
executed.
// Create DB
app.get("/createdb", (req, res) => {
let sql = "CREATE DATABASE nodemysql";
db.query(sql, (err) => {
if (err) {
throw err;
}
res.send("Database created");
});
});
After the database is created, create a table called “employee” to store the employee details
into the database. This table will have the attributes “id”, “name”, and “designation.” For the
last step, we pass this query to the query() method and the query executes.
// Create table
app.get("/createemployee", (req, res) => {
let sql =
"CREATE TABLE employee(id int AUTO_INCREMENT, name VARCHAR(255),
designation VARCHAR(255), PRIMARY KEY(id))";
db.query(sql, (err) => {
if (err) {
throw err;
}
res.send("Employee table created");
});
});
Now that we have created both the database and the table, let’s add an employee to the
database. We start by writing the relevant query for adding the record into the table. The
record is added and we get a message stating that employee 1 has been added if the query
gets successfully executed. Otherwise, we get an error message.
// Insert employee 1
app.get("/employee1", (req, res) => {
let post = { name: "Jake Smith", designation: "Chief Executive Officer" };
let sql = "INSERT INTO employee SET ?";
let query = db.query(sql, post, (err) => {
if (err) {
throw err;
}
res.send("Employee 1 added");
});
});
We can also update an employee record that resides in the database. For example, to update
the name of a particular employee record, we can use a GET request on that employee id,
and the name will get updated if the query executes successfully.
// Update employee
app.get("/updateemployee/:id", (req, res) => {
let newName = "Updated name";
let sql = `UPDATE employee SET name = '${newName}' WHERE id = ${req.params.id}`;
let query = db.query(sql, (err) => {
if (err) {
throw err;
}
res.send("Post updated...");
});
});
//Delete employee
app.get("/deleteemployee/:id", (req, res) => {
let sql = `DELETE FROM employee WHERE id = ${req.params.id}`;
let query = db.query(sql, (err) => {
if (err) {
throw err;
}
res.send("Employee deleted");
});
});
In the end, we set the server to listen at Port 3000.
app.listen("3000", () => {
console.log("Server started on port 3000");
});
Third-party cookies - are used by websites that show ads on their pages or track
website traffic. They grant access to external parties to decide the types of ads to show
depending on the user’s previous preferences.
The major difference between sessions and cookies is that sessions live on the server-
side (the webserver), and cookies live on the client-side (the user browser). Sessions
have sensitive information such as usernames and passwords. This is why they are
stored on the server. Sessions can be used to identify and validate which user is
making a request.
As we have explained, cookies are stored in the browser, and no sensitive information
can be stored in them. They are typically used to save a user’s preferences.
Setting up cookies with Node.js
We will use the following NPM packages:
Express - this is a minimal, unopinionated server-side framework for Node.js that helps you
create and manage HTTP server REST endpoints.
cookie-parser - cookie-parser looks at the headers in between the client and the server
transactions, reads these headers, parses out the cookies being sent, and saves them in
a browser. In other words, cookie-parser will help us create and manage cookies
depending on the request a user makes to the server.
Run the following command to install these NPM packages:
npm install express cookie-parser
To set up a server and save cookies, import the cookie parser and express modules to your
project. This will make the necessary functions and objects accessible.
const express = require('express')
const cookieParser = require('cookie-parser')
You need to use the above modules as middleware inside your application, as shown below.
//setup express app
const app = express()
app.use(cookieParser())
Step 4 - Set a port number
This is the port number that the server should listen to when it is running. This will help us
access our server locally. In this example, the server will listen to port 8000, as shown
below.
//server listening to port 8000
app.listen(8000, () => console.log('The server is running port 8000...'));
Now we have a simple server set. Run node app.js to test if it is working.
And if you access the localhost on port 8000 (http://localhost:8000/), you should
get an HTTP response sent by the server. Now we’re ready to start implementing cookies.
SETTING COOKIES
Add routes and endpoints that will help us create, update and delete a cookie.
We will set a route that will save a cookie in the browser. In this case, the cookies will be
coming from the server to the client browser.
app.get('/setcookie', (req, res) => {
res.cookie(`Cookie token name`,`encrypted cookie string Value`);
res.send('Cookie have been saved successfully');
});
Run node app.js to serve the above endpoint. Open http://localhost:8000/setcookie in your
browser and access the route. To confirm that the cookie was saved, go to your browser's
inspector tool 🡆 select the application tab 🡆 cookies 🡆 select your domain URL.
If the server sends this cookie to the browser, this means we can iterate the incoming
requests through req.cookies and check the existence of a saved cookie.
// get the cookie incoming request
app.get('/getcookie', (req, res) => {
//show the saved cookies
console.log(req.cookies)
res.send(req.cookies);
});
Again run the server using node app.js to
expose the above route
(http://localhost:8000/getcookie)
and you can see the response on the browser.
18.HOW TO AUTHENTICATE USER IN NODEJS(PART B)
WHAT IS AUTHENTICATION AND HOW IT WORKS?(PART A)
In authentication, the user or computer has to prove its identity to the server or
client. Usually, authentication by a server entails the use of a user name and password.
Other ways to authenticate can be through cards, retina scans, voice recognition, and
fingerprints.
Authentication vs Authorization
Authentication
It can appear that authorization and authentication are the same things. But there is a
significant distinction between entering a building (authentication) and what you can do
inside (authorization).
The act of authenticating a user involves getting credentials and utilizing those credentials
to confirm the person's identity. The authorization process starts if the credentials are
legitimate.
Authentication Techniques
1. Using a password for authentication
2. Password-free identification
A link or OTP (one-time password) is sent to the user's registered mobile number in
place of a password in this procedure. This is also referred to as OTP-based
authentication.
3. 2FA/MFA
The highest level of authentication is known as 2FA/MFA or two-factor
authentication/multi-factor authentication. Authenticating the user requires additional PIN
or security questions.
4. Single sign-on (SSO)
Access to numerous applications can be made possible by using single sign-on or SSO.
Once logged in, the user can immediately sign into all other online apps using the same
centralized directory.
5. Social Authentication
Social authentication checks the user using the already-existing login credentials for the
relevant social network; it does not call for additional security.
Authorization
Allowing authenticated users access to resources involves authorizing them after
determining whether they have system access permissions. You can also limit access
rights by approving or rejecting particular licenses for authenticated users.
After the system confirms your identification, authorization takes place, giving you
complete access to all the system's resources, including data, files, databases, money,
places, and anything else.
Authorization Methods
1. Role-based access controls (RBAC):
This type of authorization grants users access to data in accordance with their positions
within the company. For instance, all employees inside a corporation might have access to
personal data, such as pay, vacation time, but not be able to edit it. However, HR may be
granted access to all employee HR data and be given the authority to add, remove, and
modify this information. Organizations may make sure each user is active while limiting
access to sensitive information by granting permissions based on each person's function.
2. Attribute-based access control (ABAC):
ABAC uses several distinct attributes to grant users authorization at a finer level than
RBAC. User attributes including the user's name, role, organization, ID, and security
clearance may be included. Environmental factors including the time of access, the
location of the data, and the level of organizational danger currently in effect may be
included. Additionally, it might contain resource information like the resource owner, file
name, and data sensitivity level. The aim of ABAC, which is a more intricate authorization
process than RBAC, is to further restrict access. For instance, to preserve strict security
boundaries, access can be restricted to specific geographic regions or times of the day
rather than allowing all HR managers in a business to update employees' HR data.
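The signup example below begins at the middleware registrations; the setup it presumes would be roughly the following sketch, with module names inferred from the calls used, multer assumed for the upload object, and a simple in-memory Users array:
var express = require('express');
var bodyParser = require('body-parser');
var multer = require('multer');
var cookieParser = require('cookie-parser');
var session = require('express-session');

var app = express();
var upload = multer();
var Users = [];   // in-memory user store for this example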
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(upload.array());
app.use(cookieParser());
app.use(session({secret: "Your secret key"}));
app.post('/signup', function(req, res){
if(!req.body.id || !req.body.password){
res.status("400");
res.send("Invalid details!");
} else {
Users.filter(function(user){
if(user.id === req.body.id){
res.render('signup', {
message: "User Already Exists! Login or choose another user id"});
}
});
var newUser = {id: req.body.id, password: req.body.password};
Users.push(newUser);
req.session.user = newUser;
res.redirect('/protected_page');
}
});
app.listen(3000);
What is Pug used for?
Pug (formerly known as Jade) is a preprocessor which simplifies the task of writing HTML.
Using Pug is just as easy. Here are the three steps:
1. Install Pug into your project: npm install pug --save
2. Set up your view engine: app.set('view engine', 'pug')
3. Create a .pug file
SIGNUP
html
head
title Signup
body
if(message)
h4 #{message}
form(action = "/signup" method = "POST")
input(name = "id" type = "text" required placeholder = "User ID")
input(name = "password" type = "password" required placeholder = "Password")
button(type = "Submit") Sign me up!
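For this page to load, a GET route that renders the Pug view is assumed, along the lines of:
app.get('/signup', function (req, res) {
    res.render('signup');
});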
Check if this page loads by visiting localhost:3000/signup.
We have set the required attribute for both fields, so HTML5 enabled browsers will not let us
submit this form until we provide both id and password. If someone tries to register using a
curl request without a User ID or Password, an error will be displayed.
REFERENCE:
1. Learning Node.js - Marc Wandschneider
2. Node.js in Practice - Alex Young, Marc Harter
***********************UNIT III COMPLETED************************
POSSIBLE QUESTIONS
UNIT III
PART A
1. What is nosql?
2. What is the cap theorem?
3. Define base
4. Define MONGODB
5. What is a database?
6. What are collections?
7. Where is MONGODB used?
8. Define MONGODB shell
9. What is the MONGODB mongo shell?
10. Define body parser
11. What is authentication and how it works?
PART B
**************************
VASAVI VIDYA TRUST GROUP OF INSTITUTIONS, SALEM – 103
DEPARTMENT OF MCA
CLASS: I MCA UNIT: IV
SUBJECT: FULL STACK WEB DEVELOPMENT SUBJECT CODE: MC4201
React is a declarative, efficient, and flexible JavaScript library for building user interfaces. It lets
you compose complex UIs from small and isolated pieces of code called “components”.
Features of React.js: There are unique features available in React because of which it is widely popular.
Use JSX: It is faster than normal JavaScript as it performs optimizations while translating to regular
JavaScript. It makes it easier for us to create templates.
Virtual DOM: Virtual DOM exists which is like a lightweight copy of the actual DOM. So for every
object that exists in the original DOM, there is an object for that in React Virtual DOM. It is exactly
the same, but it does not have the power to directly change the layout of the document. Manipulating
DOM is slow, but manipulating Virtual DOM is fast as nothing gets drawn on the screen.
One-way Data Binding: This feature gives you better control over your application.
Component: A Component is one of the core building blocks of React. In other words, we can say
that every application you will develop in React will be made up of pieces called components.
Components make the task of building UIs much easier. You can see a UI broken down into multiple
individual pieces called components and work on them independently and merge them all in a parent
component which will be your final UI.
Performance: React.js uses JSX, which is faster compared to normal JavaScript and HTML, and the Virtual
DOM makes updating web page content much less time-consuming.
REACT DOM
3.WHAT IS DOM? (PART A)
What is DOM?
DOM, short for Document Object Model, is a World Wide Web Consortium standard for the
logical representation of any webpage.
In simpler words, the DOM is a tree-like structure that contains all the elements of a website and their
properties as its nodes. The DOM provides a language-neutral interface that allows accessing and
updating the content of any element of a webpage.
Before React, developers directly manipulated the DOM elements, which resulted in frequent
DOM manipulation; each time an update was made, the browser had to recalculate and
repaint the whole view according to the particular CSS of the page, which made the whole process
very time-consuming.
React brought the virtual DOM into the scene. The Virtual DOM can be referred to as a copy of
the actual DOM representation that is used to hold the updates made by the user and finally
reflect them onto the original browser DOM at once, consuming much less time.
What is ReactDOM?
ReactDOM is a package that provides DOM specific methods that can be used at the top level of a web app
to enable an efficient way of managing DOM elements of the web page.
In order to work with React in the browser, we need to include two libraries: React and
ReactDOM. React is the library for creating views.
ReactDOM is the library used to actually render the UI in the browser.
The browser DOM is made up of DOM elements. Similarly, the React DOM is made up of React
elements. DOM elements and React elements may look the same, but they’re actually quite
different. A React element is a description of what the actual DOM element should look like. In
other words, React elements are the instructions for how the browser DOM should be created.
We can create a React element to represent an h1 using
React.createElement("h1", { id: "recipe-0" }, "Baked Salmon");
The first argument defines the type of element we want to create. In this case, we want to create
an h1 element. The second argument represents the element's
properties. This h1 currently has an id of recipe-0. The third argument represents the element's
children: any nodes that are inserted between the opening and closing tag.
During rendering, React will convert this element to an actual DOM
element:
<h1 id="recipe-0">Baked Salmon</h1>
ReactDOM contains the tools necessary to render React elements in the browser. ReactDOM is where we’ll
find the render method.
We can render a React element, including its children, to the DOM with ReactDOM.render. The element we
want to render is passed as the first argument, and the second argument is the target node, where
we should render the element:
const dish = React.createElement("h1", null, "Baked Salmon");
ReactDOM.render(dish, document.getElementById("root"));
Children
React renders child elements using props.children. Earlier, we rendered a text element as a child of the h1 element,
and thus props.children was set to Baked Salmon. We could render other React elements as children, too,
creating a tree of elements. This is why we use the term element tree: the tree has one root element from
which many branches grow.
Let’s consider the unordered list that contains ingredients:
<ul>
<li>2 lb salmon</li>
<li>5 sprigs fresh rosemary</li>
<li>2 tablespoons olive oil</li>
<li>2 small lemons</li>
<li>1 teaspoon kosher salt</li>
<li>4 cloves of chopped garlic</li>
</ul>
In this sample, the unordered list is the root element, and it has six children. We can represent this ul and its
children with React.createElement:
React.createElement( "ul", null,
React.createElement("li", null, "2 lb salmon"),
React.createElement("li", null, "5 sprigs fresh rosemary"),
React.createElement("li", null, "2 tablespoons olive oil"),
React.createElement("li", null, "2 small lemons"),
React.createElement("li", null, "1 teaspoon kosher salt"),
React.createElement("li", null, "4 cloves of chopped garlic")
);
ReactDOM provides the developers with an API containing the following methods and a few more.
render()
findDOMNode()
unmountComponentAtNode()
hydrate()
createPortal()
To use the ReactDOM in any React web app we must first import ReactDOM from the react-dom package
by using the following code snippet:
import ReactDOM from 'react-dom'
hydrate() Function
This method is equivalent to the render() method but is implemented while using server-side rendering.
Syntax:
ReactDOM.hydrate(element, container, callback)
Parameters: This method can take a maximum of three parameters as described below.
element: This parameter expects a JSX expression or a React Component to be rendered.
container: This parameter expects the container in which the element has to be rendered.
callback: This is an optional parameter that expects a function that is to be executed once the render is
complete.
Return Type: This function attempts to attach event listeners to the existing markup and returns a
reference to the component or null if a stateless component was rendered.
createPortal() Function
Usually, when an element is returned from a component’s render method, it’s mounted on the DOM as a
child of the nearest parent node which in some cases may not be desired. Portals allow us to render a
component into a DOM node that resides outside the current DOM hierarchy of the parent component.
Syntax:
ReactDOM.createPortal(child, container)
Parameters: This method takes two parameters as described below.
child: This parameter expects a JSX expression or a React Component to be rendered.
container: This parameter expects the container in which the element has to be rendered.
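A small sketch of createPortal() in use, assuming the host HTML contains a separate node with id modal-root (the component name is illustrative):
import React from 'react';
import ReactDOM from 'react-dom';

function Modal(props) {
  // Render the children into #modal-root, outside the parent component's DOM hierarchy
  return ReactDOM.createPortal(
    <div className="modal">{props.children}</div>,
    document.getElementById('modal-root')   // assumed to exist in index.html
  );
}

export default Modal;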
JSX
5. DISCUSS JSX(PART B)
React JSX
JSX(JavaScript Extension), is a React extension which allows writing JavaScript code that looks like
HTML. In other words, JSX is an HTML-like syntax used by React that extends ECMAScript so
that HTML-like syntax can co-exist with JavaScript/React code. The syntax is used by preprocessors (i.e.,
transpilers like babel) to transform HTML-like syntax into standard JavaScript objects that a JavaScript
engine will parse.
JSX provides you to write HTML/XML-like structures (e.g., DOM-like tree structures) in the same file
where you write JavaScript code, then preprocessor will transform these expressions into actual JavaScript
code. Just like XML/HTML, JSX tags have a tag name, attributes, and children.
Example
Here, we will write JSX syntax in JSX file and see the corresponding JavaScript code which transforms by
preprocessor(babel).
JSX File
1. <div>Hello JavaTpoint</div>
Corresponding Output
1. React.createElement("div", null, "Hello JavaTpoint");
The above line creates a React element, passing three arguments: the first is the name of
the element, which is div; the second is the attributes passed to the div tag; and the last is the content
you pass, which is "Hello JavaTpoint".
o It is faster than regular JavaScript because it performs optimization while translating the code to
JavaScript.
o Instead of separating technologies by putting markup and logic in separate files, React uses
components that contain both. We will learn components in a further section.
o It is type-safe, and most of the errors can be found at compilation time.
o It makes it easier to create templates.
To use more than one element, you need to wrap it with one container element. Here, we use div as a
container element which has three nested elements inside it.
App.JSX
1. import React, { Component } from 'react';
2. class App extends Component{
3. render(){
4. return(
5. <div>
6. <h1>JavaTpoint</h1>
7. <h2>Training Institutes</h2>
8. <p>This website contains the best CS tutorials.</p>
9. </div>
10. );
11. }
12. }
13. export default App;
JSX Attributes
JSX uses attributes with the HTML elements, same as regular HTML. JSX uses the camelCase naming
convention for attributes rather than the standard naming convention of HTML; for example, class in HTML
becomes className in JSX because class is a reserved keyword in JavaScript. We can also use our
own custom attributes in JSX. For custom attributes, we need to use data- prefix. In the below example, we
have used a custom attribute data-demoAttribute as an attribute for the <p> tag.
Example
1. import React, { Component } from 'react';
2. class App extends Component{
3. render(){
4. return(
5. <div>
6. <h1>JavaTpoint</h1>
7. <h2>Training Institutes</h2>
8. <p data-demoAttribute = "demo">This website contains the best CS tutorials.</p>
9. </div>
10. );
11. }
12. }
13. export default App;
Example
1. import React, { Component } from 'react';
2. class App extends Component{
3. render(){
4. return(
5. <div>
6. <h1 className = "hello" >JavaTpoint</h1>
7. <p data-demoAttribute = "demo">This website contains the best CS tutorials.</p>
8. </div>
9. );
10. }
11. }
12. export default App;
2. As Expressions: We can specify the values of attributes as expressions using curly braces {}:
1. var element = <h2 className = {varName}>Hello JavaTpoint</h2>;
Example
1. import React, { Component } from 'react';
2. class App extends Component{
3. render(){
4. return(
5. <div>
6. <h1 className = "hello" >{25+20}</h1>
7. </div>
8. );
9. }
10. }
11. export default App;
JSX Comments
JSX allows us to use comments that begin with /* and end with */, wrapped in curly braces {},
just like JSX expressions. The below example shows how to use comments in JSX.
Example
1. import React, { Component } from 'react';
2. class App extends Component{
3. render(){
4. return(
5. <div>
6. <h1 className = "hello" >Hello JavaTpoint</h1>
7. {/* This is a comment in JSX */}
8. </div>
9. );
10. }
11. }
12. export default App;
JSX Styling
React recommends using inline styles. To set inline styles, you need to use camelCase syntax.
React automatically appends px after the number value for specific style properties. The following
example shows how to use styling in the element.
Example
1. import React, { Component } from 'react';
2. class App extends Component{
3. render(){
4. var myStyle = {
5. fontSize: 80,
6. fontFamily: 'Courier',
7. color: '#003300'
8. }
9. return (
10. <div>
11. <h1 style = {myStyle}>www.javatpoint.com</h1>
12. </div>
13. );
14. }
15. }
16. export default App;
Example
1. import React, { Component } from 'react';
2. class App extends Component{
3. render(){
4. var i = 5;
5. return (
6. <div>
7. <h1>{i == 1 ? 'True!' : 'False!'}</h1>
8. </div>
9. );
10. }
11. }
12. export default App;
COMPONENTS
6. EXPLAIN COMPONENTS IN REACT (PART B)
No matter its size, its contents, or what technologies are used to create it, a user interface is made up
of parts. Buttons, Lists, Headings. All of these parts, when put together, make up a user interface.
Consider a recipe application with three different recipes. The data is different in each box, but the
parts needed to create a recipe are the same.
In React, we describe each of these parts as a component. Components allow us to reuse the same
structure, and then we can populate those structures with different sets of data.
When considering a user interface you want to build with React, look for opportunities to break
down your elements into reusable pieces.
React Components
Earlier, developers wrote thousands of lines of code to develop a single-page application.
These applications followed the traditional DOM structure, and making changes to them was a very
challenging task. If any mistake was found, the developer had to manually search the entire application and update it accordingly.
The component-based approach was introduced to overcome this issue. In this approach, the entire
application is divided into small logical groups of code, known as components.
A Component is considered one of the core building blocks of a React application. It makes the task of building
UIs much easier. Each component exists in the same space, but they work independently from one another
and merge into a parent component, which will be the final UI of your application.
Every React component has its own structure, methods, as well as APIs. They can be reused as per your
need. For better understanding, consider the entire UI as a tree. Here, the root is the starting component, and
each of the other pieces becomes branches, which are further divided into sub-branches.
Functional Components
In React, function components are a way to write components that only contain a render method and don't
have their own state. They are simply JavaScript functions that may or may not receive data as parameters.
We can create a function that takes props(properties) as input and returns what should be rendered. A valid
functional component can be shown in the below example.
1. function WelcomeMessage(props) {
2. return <h1>Welcome, {props.name}</h1>;
3. }
The functional component is also known as a stateless component because it does not hold or manage state.
It can be explained in the below example.
Example
1. import React, { Component } from 'react';
2. class App extends React.Component {
3. render() {
4. return (
5. <div>
6. <First/>
7. <Second/>
8. </div>
9. );
10. }
11. }
12. class First extends React.Component {
13. render() {
14. return (
15. <div>
16. <h1>JavaTpoint</h1>
17. </div>
18. );
19. }
20. }
21. class Second extends React.Component {
22. render() {
23. return (
24. <div>
25. <h2>www.javatpoint.com</h2>
26. <p>This websites contains the great CS tutorial.</p>
27. </div>
28. );
29. }
30. }
31. export default App;
Class Components
Class components are more complex than functional components. They require you to extend from
React.Component and create a render function which returns a React element. You can pass data from one class
component to other class components. You can create a class component by defining a class that extends Component and has a render
function. A valid class component is shown in the below example.
1. class MyComponent extends React.Component {
2. render() {
3. return (
4. <div>This is main component.</div>
5. );
6. }
7. }
The class component is also known as a stateful component because it can hold or manage local state. It
can be explained in the below example.
Example
In this example, we are creating the list of unordered elements, where we will dynamically insert
StudentName for every object from the data array. Here, we are using ES6 arrow syntax (=>) which looks
much cleaner than the old JavaScript syntax. It helps us to create our elements with fewer lines of code. It is
especially useful when we need to create a list with a lot of items.
1. import React, { Component } from 'react';
2. class App extends React.Component {
3. constructor() {
4. super();
5. this.state = {
6. data:
7. [
8. {
9. "name":"Abhishek"
10. },
11. {
12. "name":"Saharsh"
13. },
14. {
15. "name":"Ajay"
16. }
17. ]
18. }
19. }
20. render() {
21. return (
22. <div>
23. <StudentName/>
24. <ul>
25. {this.state.data.map((item) => <List data = {item} />)}
26. </ul>
27. </div>
28. );
29. }
30. }
31. class StudentName extends React.Component {
32. render() {
33. return (
34. <div>
35. <h1>Student Name Detail</h1>
36. </div>
37. );
38. }
39. }
40. class List extends React.Component {
41. render() {
42. return (
43. <ul>
44. <li>{this.props.data.name}</li>
45. </ul>
46. );
47. }
48. }
49. export default App;
PROPERTIES
React Props
React Props are like function arguments in JavaScript and attributes in HTML.
To send props into a component, use the same syntax as HTML attributes:
function Car(props) {
return <h2>I am a { props.brand }!</h2>;
}
Pass Data
Props are also how you pass data from one component to another, as parameters.
Send the "brand" property from the Garage component to the Car component:
import React from 'react';
import ReactDOM from 'react-dom/client';
function Car(props) {
return <h2>I am a { props.brand }!</h2>;
}
function Garage() {
return (
<>
<h1>Who lives in my garage?</h1>
<Car brand="Ford" />
</>
);
}
FETCH API
Below is the stepwise implementation of how we fetch the data from an API in react. We will use the
fetch function to get the data from the API.
Step by step implementation to fetch data from an api in react.
Step 1: Create React Project
npx create-react-app my-app
Step 2: Change your directory and enter your main project folder:
cd my-app
Step 3: API endpoint
https://jsonplaceholder.typicode.com/users
Step 4: Write code in App.js to fetch data from API and we are using fetch function
APP.JS
import React from "react";
import './App.css';
class App extends React.Component {
// Constructor
constructor(props) {
super(props);
this.state = {
items: [],
DataisLoaded: false
};
}
// ComponentDidMount is used to
// execute the code
componentDidMount() {
fetch(
"https://jsonplaceholder.typicode.com/users")
.then((res) => res.json())
.then((json) => {
this.setState({
items: json,
DataisLoaded: true
});
})
}
render() {
const { DataisLoaded, items } = this.state;
if (!DataisLoaded) return <div>
<h1> Please wait some time.... </h1> </div> ;
return (
<div className = "App">
<h1> Fetch data from an api in react </h1> {
items.map((item) => (
<ol key = { item.id } >
User_Name: { item.username },
Full_Name: { item.name },
User_Email: { item.email }
</ol>
))
}
</div>
);
}
}
export default App;
APP.CSS
.App {
text-align: center;
color: Green;
}
.App-header {
background-color: #282c34;
min-height: 100vh;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
font-size: calc(10px + 2vmin);
color: white;
}
.App-link {
color: #61dafb;
}
@keyframes App-logo-spin {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}
Step to run the application: Open the terminal and type the following command.
npm start
Output: Open the browser and our project is shown in the URL http://localhost:3000/
STATES AND LIFECYCLE
React Components can be broadly classified into Functional and Class Components. It is also seen that
Functional Components are faster and much simpler than Class Components. The primary difference
between the two is the availability of the State.
React State
The state is an updatable structure that is used to contain data or information about the component.
The state in a component can change over time. The change in state over time can happen as a
response to user action or system event. A component with the state is known as stateful
components. It is the heart of the react component which determines the behavior of the component
and how it will render. They are also responsible for making a component dynamic and interactive.
A state must be kept as simple as possible. It can be set by using the setState() method and calling
setState() method triggers UI updates. A state represents the component's local state or information.
It can only be accessed or modified inside the component or by the component directly. To set an
initial state before any interaction occurs, we need to use the getInitialState() method.
For example, if we have five components that need data or information from the state, then we need to
create one container component that will keep the state for all of them.
Defining State
To define a state, you have to first declare a default set of values for defining the component's initial state.
To do this, add a class constructor which assigns an initial state using this.state. The 'this.state' property can
be rendered inside render() method.
Example
The below sample code shows how we can create a stateful component using ES6 syntax.
1. import React, { Component } from 'react';
2. class App extends React.Component {
3. constructor() {
4. super();
5. this.state = { displayBio: true };
6. }
7. render() {
8. const bio = this.state.displayBio ? (
9. <div>
10. <p><h3>Javatpoint is one of the best Java training institute in Noida, Delhi, Gurugram, Ghaziaba
d and Faridabad. We have a team of experienced Java developers and trainers from multinational companies
to teach our campus students.</h3></p>
11. </div>
12. ) : null;
13. return (
14. <div>
15. <h1> Welcome to JavaTpoint!! </h1>
16. { bio }
17. </div>
18. );
19. }
20. }
21. export default App;
We can change the component state by using the setState() method and passing a new state object as the
argument. Now, create a new method toggleDisplayBio() in the above example and bind this keyword to the
toggleDisplayBio() method otherwise we can't access this inside toggleDisplayBio() method.
this.toggleDisplayBio = this.toggleDisplayBio.bind(this);
Example
In this example, we are going to add a button to the render() method. Clicking on this button triggers the
toggleDisplayBio() method which displays the desired output.
1. import React, { Component } from 'react';
2. class App extends React.Component {
3. constructor() {
4. super();
5. this.state = { displayBio: false };
6. console.log('Component this', this);
7. this.toggleDisplayBio = this.toggleDisplayBio.bind(this);
8. }
9. toggleDisplayBio(){
10. this.setState({displayBio: !this.state.displayBio});
11. }
12. render() {
13. return (
14. <div>
15. <h1>Welcome to JavaTpoint!!</h1>
16. {
17. this.state.displayBio ? (
18. <div>
19. <p><h4>Javatpoint is one of the best Java training institute in Noida, Delhi, Gurugram, G
haziabad and Faridabad. We have a team of experienced Java developers and trainers from multinational co
mpanies to teach our campus students.</h4></p>
20. <button onClick={this.toggleDisplayBio}> Show Less </button>
21. </div>
22. ):(
23. <div>
24. <button onClick={this.toggleDisplayBio}> Read More </button>
25. </div>
26. )
27. }
28. </div>
29. )
30. }
31. }
32. export default App;
LIFE CYCLE
In ReactJS, every component creation process involves various lifecycle methods. These lifecycle methods
are termed the component's lifecycle. They are not very complicated and are called at various
points during a component's life. The lifecycle of the component is divided into four phases. They are:
1. Initial Phase
2. Mounting Phase
3. Updating Phase
4. Unmounting Phase
Each phase contains some lifecycle methods that are specific to the particular phase. Let us discuss each of
these phases one by one.
1. Initial Phase
It is the birth phase of the lifecycle of a ReactJS component. Here, the component starts its journey on a
way to the DOM. In this phase, a component contains the default Props and initial State. These default
properties are done in the constructor of a component. The initial phase only occurs once and consists of the
following methods.
o getDefaultProps()
It is used to specify the default value of this.props. It is invoked before the creation of the component
or any props from the parent is passed into it.
o getInitialState()
It is used to specify the default value of this.state. It is invoked before the creation of the component.
2. Mounting Phase
In this phase, the instance of a component is created and inserted into the DOM. It consists of the following
methods.
o componentWillMount()
This is invoked immediately before a component gets rendered into the DOM. If you
call setState() inside this method, the component will not re-render.
o componentDidMount()
This is invoked immediately after a component gets rendered and placed on the DOM. Now, you can
do any DOM querying operations.
o render()
This method is defined in each and every component. It is responsible for returning a single
root HTML node element. If you don't want to render anything, you can return a null or false value.
3. Updating Phase
It is the next phase of the lifecycle of a react component. Here, we get new Props and change State. This
phase also allows to handle user interaction and provide communication with the components hierarchy. The
main aim of this phase is to ensure that the component is displaying the latest version of itself. Unlike the
Birth or Death phase, this phase repeats again and again. This phase consists of the following methods.
o componentWillReceiveProps()
It is invoked when a component receives new props. If you want to update the state in response to
prop changes, you should compare this.props and nextProps to perform state transition by
using this.setState() method.
o shouldComponentUpdate()
It is invoked when a component decides any changes/updation to the DOM. It allows you to control
the component's behavior of updating itself. If this method returns true, the component will update.
Otherwise, the component will skip the updating.
o componentWillUpdate()
It is invoked just before the component updating occurs. Here, you can't change the component state
by invoking this.setState() method. It will not be called, if shouldComponentUpdate() returns false.
o render()
It is invoked to examine this.props and this.state and return one of the following types: React
elements, arrays and fragments, booleans or null, strings and numbers. If shouldComponentUpdate()
returns false, render() will not be invoked, since the component does not need to re-render.
o componentDidUpdate()
It is invoked immediately after the component updating occurs. In this method, you can put any code
inside this which you want to execute once the updating occurs. This method is not invoked for the
initial render.
4. Unmounting Phase
It is the final phase of the react component lifecycle. It is called when a component instance
is destroyed and unmounted from the DOM. This phase contains only one method and is given below.
o componentWillUnmount()
This method is invoked immediately before a component is destroyed and unmounted permanently.
It performs any necessary cleanup-related tasks such as invalidating timers, removing event listeners, canceling
network requests, or cleaning up DOM elements. Once a component instance is unmounted, it cannot
be mounted again.
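A minimal sketch tying a few of these methods together, a Clock component (illustrative, not from the text) that starts a timer in the mounting phase and cleans it up in the unmounting phase:
import React from 'react';

class Clock extends React.Component {
  constructor(props) {
    super(props);
    this.state = { time: new Date() };
  }
  componentDidMount() {
    // Mounting phase: safe to start timers or fetch data here
    this.timerId = setInterval(() => this.setState({ time: new Date() }), 1000);
  }
  componentWillUnmount() {
    // Unmounting phase: clean up the timer
    clearInterval(this.timerId);
  }
  render() {
    return <h2>{this.state.time.toLocaleTimeString()}</h2>;
  }
}

export default Clock;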
JS LOCAL STORAGE
LocalStorage is a data storage type of web storage. This allows the JavaScript sites and apps to store
and access the data without any expiration date. This means that the data will always be persisted and
will not expire. So, data stored in the browser will be available even after closing the browser window.
In short, all we can say is that the localStorage holds the data with no expiry date, which is available to
the user even after closing the browser window. It is useful in various ways, such as remembering the
shopping cart data or user login on any website.
In the past, cookies were the only option to remember this type of temporary and local
information, but now we have localStorage as well. Local storage comes with a much higher storage limit
than cookies (around 5 MB vs 4 KB). It also does not get sent with every HTTP request. So, it is a better choice
now for client-side storage. Some essential points of localStorage need to be noted:
localStorage is not secure to store sensitive data and can be accessed using any code. So, it is quite
insecure.
It is an advantage of localStorage over cookies that it can store more data than cookies. You can
store 5MB data on the browser using localStorage.
localStorage stores the information only in the browser, not in a database. Therefore, localStorage is
not a substitute for a server-based database.
localStorage is synchronous, which means that each operation executes one after another.
LOCALSTORAGE METHODS
The localStorage offers some methods to use it. We will discuss all these localStorage methods with
examples. Before that, a basic overview of these methods are as follows:
setItem() - adds data to localStorage using a key and a value.
getItem() - fetches or retrieves the value from storage using the key.
removeItem() - removes an item from storage using the key.
Each of these methods is called on the localStorage object using the dot (.) operator, for
example: localStorage.setItem().
1. Remember that the localStorage property is read-only.
2. The following snippets show how to add, retrieve, remove, and clear data in localStorage.
Add data
To add the data in localStorage, both key and value are required to pass in setItem() function.
1. localStorage.setItem("city", "Noida");
Retrieve data
It requires only the key to retrieve the data from storage and a JavaScript variable to store the returned data.
1. const res = localStorage.getItem("city");
Remove data
It also requires only the key to remove the value attached with it.
1. localStorage.removeItem("city");
Clear localStorage
It is a simple clear() function of localStorage, which is used to remove all the localStorage data:
1. localStorage.clear()
Limitation of localStorage
Although localStorage allows storing temporary, local data which remains even after closing the browser
window, it also has a few limitations. Some limitations of localStorage are given below:
o Do not store sensitive information like username and password in localStorage.
o localStorage has no data protection and can be accessed using any code. So, it is quite insecure.
o You can store only a maximum of about 5 MB of data in the browser using localStorage.
o localStorage stores the information only on browser not in server-based database.
o localStorage is synchronous, which means that each operation executes one after another.
Advantage of localStorage
The localStorage comes with several advantages. The first and most essential advantage of localStorage is that it
can store temporary but useful data in the browser, which remains even after the browser window is closed.
Below is a list of some advantages:
o The data collected by localStorage is stored in the browser. You can store 5 MB data in the browser.
o There is no expiry date of data stored by localStorage.
o You can remove all the localStorage items with a single line of code, i.e., clear().
o The localStorage data persists even after closing the browser window, like items in a shopping cart.
o It also has advantages over cookies because it can store more data than cookies.
REACT EVENTS
An event is an action that could be triggered as a result of the user action or system generated event. For
example, a mouse click, loading of a web page, pressing a key, window resizes, and other interactions
are called events.
React has its own event handling system which is very similar to handling events on DOM elements.
The react event handling system is known as Synthetic Events. The synthetic event is a cross-browser
wrapper of the browser's native event.
In React, we cannot return false to prevent the default behavior. We must call preventDefault
explicitly to prevent the default behavior. For example:
In plain HTML, to prevent the default link behavior of opening a new page, we can write:
1. <a href="#" onclick="console.log('You had clicked a Link.'); return false">
2. Click_Me
3. </a>
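The React counterpart is not shown above; a minimal sketch that calls preventDefault() explicitly inside the handler would be:
import React from 'react';

function ActionLink() {
  function handleClick(e) {
    // Prevent the default link navigation explicitly
    e.preventDefault();
    console.log('You had clicked a Link.');
  }
  return (
    <a href="#" onClick={handleClick}>Click_Me</a>
  );
}

export default ActionLink;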
LIFTING STATE UP
13.DISCUSS LIFTING STATE UP WITH EXAMPLE (PART B)
As we know, every component in React has its own state. Because of this, data can sometimes become
redundant and inconsistent. So, by lifting the state up, we make the state of the parent component a
single source of truth and pass the parent's data to its children.
When to lift the state up: when the data in “parent and children components” or in “cousin
components” is not in sync.
Example 1: If we have 2 components in our App. A -> B where, A is parent of B. keeping the same data
in both Component A and B might cause inconsistency of data.
Example 2: If we have 3 components in our App.
A
/\
B C
Where A is the parent of B and C. In this case, if there is some data only in component B but component
C also wants that data, component C cannot access it because a component can talk only
to its parent or child (not cousins).
Problem: Let’s Implement this with a simple but general example. We are considering the second
example.
Complete File Structure:
Approach: To solve this, we will Lift the state of component B and component C to component A. Make
A.js as our Main Parent by changing the path of App in the index.js file
Before:
import App from './App';
After:
import App from './A';
Filename- A.js:
import React,{ Component } from 'react';
import B from './B'
import C from './C'
class A extends Component {
constructor(props) {
super(props);
this.handleTextChange = this.handleTextChange.bind(this);
this.state = {text: ''};
}
handleTextChange(newText) {
this.setState({text: newText});
}
render() {
return (
<React.Fragment>
<B text={this.state.text}
handleTextChange={this.handleTextChange}/>
<C text={this.state.text} />
</React.Fragment>
);
}
}
export default A;
Filename- B.js:
import React,{ Component } from 'react';
class B extends Component {
constructor(props) {
super(props);
this.handleTextChange = this.handleTextChange.bind(this);
}
handleTextChange(e){
this.props.handleTextChange(e.target.value);
}
render() {
return (
<input value={this.props.text}
onChange={this.handleTextChange} />
);
}
}
export default B;
Filename- C.js:
import React,{ Component } from 'react';
class C extends Component {
render() {
return (
<h3>Output: {this.props.text}</h3>
);
}
} export default C;
Inheritance
One React component can extend another and reuse the parent's rendered output via super.render(). Consider a base UserNameForm component, which the two child components below extend:
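Its definition does not appear in these notes; a minimal sketch, assuming it simply renders a username input (the exact markup is an assumption), might be:
class UserNameForm extends React.Component {
   render() {
      return (
         <div>
            <input type="text" placeholder="Username" />
         </div>
      );
   }
}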
class CreateUserName extends UserNameForm {
render() {
const parent = super.render();
return (
<div>
{parent}
<button>Create</button>
</div>
)
}
}
class UpdateUserName extends UserNameForm {
render() {
const parent = super.render();
return (
<div>
{parent}
<button>Update</button>
</div>
)
}
}
ReactDOM.render(
(<div>
< CreateUserName />
< UpdateUserName />
</div>), document.getElementById('root')
);
We extended the UserNameForm component and reused its render output in the child components via
super.render();
Composition
In Object-Oriented Programming, composition is a well-known concept. It describes a class that can
refer to one or more objects of another class as instances rather than inheriting properties from a base
class.
For example, the composition can be used for building a car's engine.
What is the definition of composition in general?
It's all about the ingredients and how they're put together to become something more significant. A dish is
made up of food ingredients while cooking. The fruits are used to make the ideal smoothie. In a dance video,
it's the choreography of dancers. In programming, the internals of a function must be organized so that the
intended output is obtained.
The techniques for using several components together are done by composition and inheritance in
React. This facilitates code reuse. React recommends using composition instead of inheritance as far
as feasible, and inheritance should only be utilized in particular instances.
The ‘is-a relationship’ mechanism was used in inheritance. Derived components had to inherit the
properties of the base component, which made changing the behaviour of any component quite
difficult. The composition aspires to be better. Why not inherit only behaviour and add it to the
desired component instead of inheriting properties from other components?
Only the behaviour is passed down from composition without the inheritance of properties. Why is
this a plus point? It was challenging to add new behaviour via inheritance as the derived component
inherited all of the parent class's properties, making it impossible to add new behaviour. More use
cases had to be included. However, we only inherit behaviour in composition, and adding new
behaviour is relatively easy.
React proposes utilizing composition instead of inheritance to reuse code between components
because React has an advanced composition model. Between Composition and Inheritance in React,
we can distinguish the following points:
We can overuse ‘inheritance’.
‘Behavior’ composition can be made simpler and easier.
Composition is preferred over deep inheritance in React.
Inheritance inherits the properties of other components, whereas composition merely inherits the
behaviour of other components.
It was difficult to add new behaviour via inheritance since the derived component inherits all of the
parent class's properties, making it impossible to add new behaviour.
There are a few primary reasons why we should use composition over inheritance when developing React
apps.
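A minimal sketch of composition using the children prop (the component names here are illustrative, not from the text):
import React from 'react';

function Card(props) {
  // Card does not know its contents in advance; it composes whatever children it receives
  return <div className="card">{props.children}</div>;
}

export default function WelcomeCard() {
  return (
    <Card>
      <h2>Welcome</h2>
      <p>This content is passed into Card rather than inherited from it.</p>
    </Card>
  );
}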
REFERENCE:
1. www.geeksforgeeks.com
2. https://www.tutorialspoint.com/composition-vs-inheritance-in-react-js
3. https://www.geeksforgeeks.org/lifting-state-up-in-reactjs/
4. Learning React: Modern Patterns for Developing React Apps, 2nd Edition - Alex Banks &
Eve Porcello
*********************************UNIT IV COMPLETED*****************************
POSSIBLE QUESTIONS
PART A
1.What is REACT?
2.What is DOM?
PART B
**********************************
VASAVI VIDYA TRUST GROUP OF INSTITUTIONS, SALEM – 103
DEPARTMENT OF MCA
CLASS: I MCA UNIT: V
SUBJECT: FULL STACK WEB DEVELOPMENT SUBJECT CODE: MC4201
INTRODUCTION
Cloud Computing provides us means of accessing the applications as utilities over the Internet. It
allows us to create, configure, and customize the applications online.
What is Cloud?
The term Cloud refers to a Network or Internet. In other words, we can say that Cloud is something,
which is present at remote location. Cloud can provide services over public and private networks, i.e.,
WAN, LAN or VPN.
Applications such as e-mail, web conferencing, customer relationship management (CRM) execute on
cloud.
What is Cloud Computing?
Cloud Computing refers to manipulating, configuring, and accessing the hardware and software
resources remotely. It offers online data storage, infrastructure, and application.
Cloud computing offers platform independency, as the software is not required to be installed locally
on the PC. Hence, the Cloud Computing is making our business applications mobile and collaborative.
CLOUD PROVIDERS OVERVIEW
1.EXPLAIN ABOUT CLOUD OVERVIEW (PART B)
DEFINE CLOUD (PART A)
Cloud computing is a computing paradigm, where a large pool of systems are connected in
private or public networks, to provide dynamically scalable infrastructure for application, data
and file storage. With the advent of this technology, the cost of computation, application hosting,
content storage and delivery is reduced significantly.
Cloud computing is a practical approach to experience direct cost benefits and it has the potential to
transform a data center from a capital-intensive set up to a variable priced environment.
Basic Concepts
There are certain services and models working behind the scene making the cloud computing feasible and
accessible to end users. Following are the working models for cloud computing:
Deployment Models
Service Models
Deployment Models
Deployment models define the type of access
to the cloud, i.e., how the cloud is located?
Cloud can have any of the four types of
access: Public, Private, Hybrid, and
Community.
Public Cloud
The public cloud allows systems and services
to be easily accessible to the general public.
Public cloud may be less secure because of its openness.
One of the advantages of a public cloud is that it may be larger than an enterprise cloud, thus
providing the ability to scale seamlessly, on demand.
Private Cloud
The private cloud allows systems and services to be accessible within an organization. It is more secured
because of its private nature.
There are two variations to a private cloud:
- On-premise Private Cloud: On-premise private clouds, also known as internal clouds are
hosted within one's own data center. This model provides a more standardized process and
protection, but is limited in aspects of size and scalability. IT departments would also need to
incur the capital and operational costs for the physical resources. This is best suited for
applications which require complete control and configurability of the infrastructure and
security.
- Externally hosted Private Cloud: This type of private cloud is hosted externally with a cloud
provider, where the provider facilitates an exclusive cloud environment with full guarantee of
privacy. This is best suited for enterprises that don't prefer a public cloud due to sharing of
physical resources.
Community Cloud
The community cloud allows systems and services to be accessible by a group of organizations.
Hybrid Cloud
The hybrid cloud is a mixture of public and private cloud, in which the critical activities are performed
using private cloud while the non-critical activities are performed using public cloud.
Service Models
Cloud computing is based on service models. These are categorized into three basic service models which
are -
1. Infrastructure-as–a-Service (IaaS)
2. Platform-as-a-Service (PaaS)
3. Software-as-a-Service (SaaS)
4. Anything-as-a-Service (XaaS) is yet another service model, which includes Network-as-a-Service,
Business-as-a-Service, Identity-as-a-Service, Database-as-a-Service or Strategy-as-a-Service.
Cloud Computing Models
Cloud Providers offer services that can be grouped into three categories.
1. Software as a Service (SaaS): In this model, a complete application is offered to the customer,
as a service on demand. A single instance of the service runs on the cloud & multiple end users
are serviced. On the customers' side, there is no need for upfront investment in servers or
software licenses, while for the provider, the costs are lowered, since only a single application
needs to be hosted & maintained. Today SaaS is offered by companies such as Google,
Salesforce, Microsoft, Zoho, etc.
2. Platform as a Service (Paas): Here, a layer of software, or development environment is
encapsulated & offered as a service, upon which other higher levels of service can be built. The
customer has the freedom to build his own applications, which run on the provider‟s
infrastructure. To meet manageability and scalability requirements of the applications, PaaS
providers offer a predefined combination of OS and application servers, such as LAMP
platform (Linux, Apache, MySql and PHP), restricted J2EE, Ruby etc. Google‟s App Engine,
Force.com, etc are some of the popular PaaS examples.
3. Infrastructure as a Service (Iaas): IaaS provides basic storage and computing capabilities as
standardized services over the network. Servers, storage systems, networking equipment, data
centre space etc. are pooled and made available to handle workloads. The customer would
typically deploy his own software on the infrastructure. Some common examples are Amazon,
GoGrid, 3 Tera, etc.
VIRTUAL PRIVATE CLOUD (VPC)
WHAT IS A VIRTUAL PRIVATE CLOUD (VPC)? (PART A)
A virtual private cloud (VPC) is a logically isolated, configurable pool of resources within a public cloud,
in which clients run their workloads on a virtual network that they define. The key components of a VPC
are described below.
1. Internet gateway
This VPC component is horizontally scaled and features high availability as well as robust redundancy.
VPCs use internet gateways to communicate with the internet at large. The two purposes of an internet
gateway are:
- Executing network address translation for instances where a public IPv4 address has been assigned
- Setting a target in VPC route tables for internet-routable traffic
Internet gateways can support both forms of traffic—IPv4 and IPv6—without the risk of network traffic
being affected by bandwidth limitations or availability fluctuations. Normally, VPC vendors will provide
internet gateways to all clients without levying additional charges.
Egress-only internet gateways are a related component. Like an internet gateway, this component is also
horizontally scaled, features high availability, and is redundant in nature. Egress-only internet gateways
support IPv6-based outbound communication from VPC instances to the internet. At the same time, this
component prevents the internet from establishing an IPv6 link with VPC instances.
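As a minimal sketch in Python with the boto3 library (assuming AWS credentials are configured; the CIDR block and resource IDs are placeholders, not values from this text), a VPC can be connected to the internet by attaching an internet gateway and routing internet-bound traffic through it:

import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2")

# Create a VPC with a placeholder private address range.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Create an internet gateway and attach it to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route all internet-bound (0.0.0.0/0) traffic through the gateway.
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)

Instances in subnets associated with this route table can then reach the internet through the gateway.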
2. Network address translation (NAT) devices
Network address translation devices support connections of private subnet instances to the internet, on-
premise networks, and even other VPCs. These instances can establish communication with external
services. However, they cannot receive connection requests that are unsolicited in nature.
Network address translation devices replace the original IPv4 address belonging to the instances with the
devices’ own address. The addresses are then translated back to the source IPv4 addresses when transmitting
response traffic to the instances.
AWS offers managed network address translation devices known as NAT gateways. These managed devices
offer higher availability and better bandwidth when compared to NAT instances (NAT devices created by
clients on EC2 instances). They also need less effort dedicated toward administration.
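A hedged sketch of creating such a managed NAT gateway with Python and boto3 follows; the subnet ID is a placeholder chosen for illustration.

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP address for the NAT gateway to use.
alloc_id = ec2.allocate_address(Domain="vpc")["AllocationId"]

# Create the managed NAT gateway in a public subnet (placeholder ID).
nat_id = ec2.create_nat_gateway(SubnetId="subnet-0123456789abcdef0",
                                AllocationId=alloc_id)["NatGateway"]["NatGatewayId"]
print(nat_id)

Private subnets would then route their outbound traffic to this NAT gateway so their instances can reach the internet without being reachable from it.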
DHCP sets a standard for transmitting configuration data to TCP/IP network hosts. The DHCP message
contains an ‘options’ field for configuration parameters, such as NetBIOS-node-type, domain name, and
domain name server. Creating a VPC on AWS automatically leads to the creation of DHCP options that are
then associated with the VPC. Users can configure their own DHCP options set.
DNS sets the standard for resolving the names used on the internet according to their associated IP
addresses. DNS hostnames, which contain a domain name and a host name, are used to assign a unique
name to a computer. DNS servers process DNS hostnames and resolve them to their preset IP addresses.
Private IPv4 addresses are used for communicating within the network associated with the instance, while
public IPv4 addresses are used to communicate over the internet. AWS clients can use their own DNS server
by creating a fresh DHCP options set for their VPC.
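As a small sketch with Python and boto3 (the domain name, DNS server address, and VPC ID are illustrative placeholders), a custom DHCP options set can be created and associated with a VPC:

import boto3

ec2 = boto3.client("ec2")

# Create a DHCP options set with a placeholder domain name and DNS server.
opts_id = ec2.create_dhcp_options(DhcpConfigurations=[
    {"Key": "domain-name", "Values": ["example.internal"]},
    {"Key": "domain-name-servers", "Values": ["10.0.0.2"]},
])["DhcpOptions"]["DhcpOptionsId"]

# Associate the options set with an existing VPC (placeholder ID).
ec2.associate_dhcp_options(DhcpOptionsId=opts_id, VpcId="vpc-0123456789abcdef0")

Instances launched in the VPC afterwards receive these DHCP options, including the custom DNS server.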
6. Prefix lists
Prefix lists contain one or more classless inter-domain routing (CIDR) blocks and are used to configure and
manage route tables and security groups with ease. Prefix lists can be created from frequently used IP
addresses. They can be referenced as a set within routes and rules of security groups instead of being
referenced individually.
Security group rules with varying CIDR blocks but the same protocol and port can be consolidated into one
rule that uses a prefix list. When a network is scaled and needs to handle traffic from a different CIDR
block, updating the relevant prefix list updates every security group that references it.
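A short sketch with Python and boto3 (the list name, CIDR blocks, and security group ID are placeholders) creates a managed prefix list and references it from a single security group rule:

import boto3

ec2 = boto3.client("ec2")

# Create a prefix list holding frequently used CIDR blocks (placeholders).
pl_id = ec2.create_managed_prefix_list(
    PrefixListName="office-networks", AddressFamily="IPv4", MaxEntries=10,
    Entries=[{"Cidr": "203.0.113.0/24", "Description": "Branch office"},
             {"Cidr": "198.51.100.0/24", "Description": "Head office"}],
)["PrefixList"]["PrefixListId"]

# Reference the whole list in one rule instead of one rule per CIDR block.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "PrefixListIds": [{"PrefixListId": pl_id}]}],
)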
Popular VPC providers include:
1. Amazon (AWS)
2. Google Cloud
3. IBM Cloud
4. Microsoft Azure Virtual Network
5. VMware
SCALING IN THE CLOUD
DEFINE SCALING (PART A)
EXPLAIN TYPES OF SCALING IN CLOUD (PART B)
Scaling is the ability to increase or decrease the compute, storage, or network capacity of a system to match
its workload. The main benefit of a scalable architecture is performance and the ability to handle bursts of
traffic or heavy loads with little or no notice.
To scale horizontally (scaling in or out), you add more resources like virtual machines to your system to
spread out the workload across them. Horizontal scaling is especially important for companies that
need high availability services with a requirement for minimal downtime.
Horizontal scaling increases availability because, as long as you spread your infrastructure across multiple
areas, if one machine fails you can simply use one of the others.
Because you’re adding a machine, you need fewer periods of downtime and don’t have to switch the old
machine off while scaling. There may never be a need for downtime if you scale effectively.
The main disadvantage of horizontal scaling is that it increases the complexity of maintaining and operating
your architecture, but there are services in the AWS environment that help solve this issue.
Through vertical scaling (scaling up or down), you can increase or decrease the capacity of existing
services/instances by upgrading the memory (RAM), storage, or processing power (CPU). Usually, this
means that the expansion has an upper limit based on the capacity of the server or machine being expanded.
Vertical scaling benefits
- No changes have to be made to the application code and no additional servers need to be added;
you just make the server you have more powerful or downsize it again.
- Less complex network – when a single instance handles all the layers of your services, it does not
have to synchronize and communicate with other machines to work. This may result in faster responses.
- Less complicated maintenance – maintenance is easier and less complex because of the smaller number
of instances you need to manage.
Vertical scaling drawbacks
- A maintenance window with downtime is required – unless you have a backup server that can
handle operations and requests, you will need some considerable downtime to upgrade your machine.
- Single point of failure – having all your operations on a single server increases the risk of losing all
your data if a hardware or software failure occurs.
- Upgrade limitations – there is a limit to how much you can upgrade a machine/instance.
In the cloud, you will usually use both of these methods, but horizontal scaling is usually considered a long-
term solution, while vertical scaling is usually considered a short-term solution. The reason for this
distinction is that you can usually add as many servers to the infrastructure as you need, but sometimes
hardware upgrades are just not possible anymore.
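As a hedged illustration in Python with boto3 (the Auto Scaling group name, instance ID, and instance type are placeholders), horizontal scaling raises the desired capacity of an Auto Scaling group, while vertical scaling resizes a single stopped instance, which is why it involves downtime:

import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Horizontal scaling: add capacity by increasing the number of instances.
autoscaling.set_desired_capacity(AutoScalingGroupName="web-asg", DesiredCapacity=6)

# Vertical scaling: resize one instance (it must be stopped first, i.e. downtime).
instance_id = "i-0123456789abcdef0"
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(InstanceId=instance_id, InstanceType={"Value": "m5.2xlarge"})
ec2.start_instances(InstanceIds=[instance_id])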
Both horizontal and vertical scaling have their benefits and limitations. Here are some factors to consider:
- Upgradability and flexibility – if you run your application layer on separate machines (horizontally
scaled), they are easier to decouple and upgrade without downtime.
- Worldwide distribution – if you plan to have national or global customers, it is unreasonable to
expect them to access your services from one location. In this case, you need to scale resources
horizontally.
- Reliability and availability – horizontal scaling can provide you with a more reliable system. It
increases redundancy and ensures that you are not dependent on one machine.
- Performance – sometimes it’s better to leave the application as is and upgrade the hardware to meet
demand (vertically scale). Horizontal scaling may require you to rewrite code, which can add
complexity.
VIRTUAL MACHINE
7.DEFINE VM (PART A)
EXPLAIN VM IN DETAIL (PART B)
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
A cloud virtual machine is the digital version of a physical computer that can run in a cloud. Like a
physical machine, it can run an operating system, store data, connect to networks, and do all the
other computing functions.
A virtual machine is a software-based computer that exists within the operating system of another
computer. In simpler terms, it is a virtualization of an actual computer, except that it exists on another
system.
Typically you will have a hypervisor running on the physical machine, and virtual machines running on
top of the hypervisor. A hypervisor is a software layer that allows you to virtualize the environment. The
operating system running in the virtual machine is called the Guest Operating System.
Multiple virtual machines can share a single host; that is, they can run on the same physical machine.
This increases the usage efficiency of the physical machine and allows the virtual machines to be
completely oblivious to their physical environment. The number of virtual machines on a single host is
limited by the resources of the physical machine.
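For instance, launching a cloud virtual machine is a single API call. The sketch below uses Python with boto3; the AMI ID and instance type are placeholders, not values prescribed by this text.

import boto3

ec2 = boto3.client("ec2")

# Launch one small virtual machine from a machine image (placeholder AMI ID).
response = ec2.run_instances(ImageId="ami-0123456789abcdef0",
                             InstanceType="t3.micro",
                             MinCount=1, MaxCount=1)
print(response["Instances"][0]["InstanceId"])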
Advantages
There are many advantages to using cloud virtual machines instead of physical machines, including:
Low cost: It is cheaper to spin up a virtual machine in the cloud than to procure a physical machine.
Easy scalability: We can easily scale the infrastructure of a cloud virtual machine in or out based on load.
Ease of setup and maintenance: Spinning up virtual machines is very easy compared to buying
actual hardware. This helps us get set up quickly.
Shared responsibility: Disaster recovery becomes the responsibility of the cloud provider. We
don’t need a separate disaster recovery site in case our primary site goes down.
ETHERNET AND SWITCHES
8. EXPLAIN ETHERNET AND SWITCHES IN CLOUD (PART B)
Ethernet is the traditional technology for connecting devices in a wired local area network
(LAN) or wide area network (WAN).
It enables devices to communicate with each other via a protocol, which is a set of rules or common
network language.
Ethernet describes how network devices format and transmit data so other devices on the same
LAN or campus network can recognize, receive and process the information. An Ethernet cable
is the physical, encased wiring over which the data travels.
Each frame is wrapped in a packet that contains several bytes of information to establish the
connection and mark where the frame starts.
Engineers at Xerox first developed Ethernet in the 1970s. Ethernet initially ran over coaxial cables.
Early Ethernet connected multiple devices into network segments through hubs -- Layer 1 devices
responsible for transporting network data -- using either a daisy chain or star topology. Currently, a
typical Ethernet LAN uses special grades of twisted-pair cables or fiber optic cabling.
If two devices that share a hub try to transmit data at the same time, the packets can collide and create
connectivity problems. To alleviate these digital traffic jams, IEEE developed the Carrier Sense Multiple
Access with Collision Detection (CSMA/CD) protocol. This protocol enables devices to check whether a
given line is in use before initiating new transmissions.
Later, Ethernet hubs largely gave way to network switches. Because a hub cannot discriminate between
points on a network segment, it can't send data directly from point A to point B. Instead, whenever a
network device sends a transmission via an input port, the hub copies the data and distributes it to all
available output ports.
In contrast, a switch intelligently sends any given port only the traffic intended for its devices rather than
copies of any and all the transmissions on the network segment, thus improving security and efficiency.
As with other network types, the computers involved must include a network interface card (NIC) to
connect to Ethernet.
Types of Ethernet
An Ethernet device with CAT5/CAT6 copper cables can be connected to a fiber optic cable through fiber
optic media converters; this extension to fiber optic cable significantly increases the distance the network
can cover. Some common kinds of Ethernet networks are discussed below:
1. Fast Ethernet: This type of Ethernet is usually supported by a twisted pair or CAT5 cable and can
transfer or receive data at around 100 Mbps. It functions at 100Base, and at 10/100Base Ethernet on the
fiber side of the link, when a device such as a camera or laptop is connected to the network. Fast Ethernet
uses fiber optic cable and twisted pair cable to create communication. 100BASE-TX, 100BASE-FX, and
100BASE-T4 are the three categories of Fast Ethernet.
2. Gigabit Ethernet: This type of Ethernet network is an upgrade from Fast Ethernet and uses fiber
optic cable and twisted pair cable to create communication. It can transfer data at a rate of 1000
Mbps, or 1 Gbps. In modern times, Gigabit Ethernet is more common. This network type also uses
CAT5e or other more advanced cables. The primary intention behind developing Gigabit Ethernet was to
fulfill users' requirements, such as faster data transfer and a faster communication network.
3. 10-Gigabit Ethernet: This type of network can transmit data at a rate of 10 Gbps and is considered a
more advanced, high-speed network. It makes use of CAT6a or CAT7 twisted-pair cables as well as fiber
optic cables. This network can be extended up to nearly 10,000 meters with the help of fiber optic cable.
4. Switch Ethernet: This type of network involves adding switches or hubs, which helps to improve
network throughput because each workstation in the network can have its own dedicated 10 Mbps
connection instead of sharing the medium. A regular network cable, instead of a crossover cable, is used
when a switch is present in the network. It supports 1000 Mbps to 10 Gbps for the latest Ethernet and
10 Mbps to 100 Mbps for Fast Ethernet.
SWITCHES
A network switch is an essential component of any network building plan. In a network deployment, a
switch channels incoming data from any of multiple input ports to the specific output port that will take
the data toward its intended destination. To achieve the required performance levels, there are different
types of switches in networking.
LAN Switch
Local area network switches, or LAN switches, are usually used to connect points on a company’s
internal LAN. A LAN switch is also known as a data switch or an Ethernet switch. It prevents the overlap
of data packets running through a network by allocating bandwidth economically, and it checks a
transmitted data packet before directing it to its intended receiver. These types of switches reduce
network congestion and bottlenecks by distributing a packet of data only to its intended recipient.
Unmanaged Switch
Unmanaged network switches are frequently used in home networks, small companies and
businesses. It permits devices on the network to connect with each other, such as computer to
computer or printer to computer in one location.
An unmanaged switch does not necessarily need to be configured or watched. It is simple and easy to
set up. If you want to add more Ethernet ports, you can use these plug and play types of switches in
networking.
Managed Switch
Compared to unmanaged switches, the advantage of managed switches is that they can be
customized to enhance the functionality of a certain network. They offer some features like QoS
(Quality of Service), Simple Network Management Protocol (SNMP) and so on.
These types of switches in networking can support a range of advanced features designed to be
controlled by a professional administrator.
In addition, there is the smart switch, a type of managed switch. It has some of the features that a
managed switch has, but they are more limited. Smart network switches are usually used for networking
features such as VLANs.
PoE Switch
PoE Gigabit Ethernet switch is a network switch that utilizes Power over Ethernet technology.
When connected with multiple other network devices, PoE switches can support power and data
transmission over one network cable at the same time. This greatly simplifies the cabling process.
These types of switches in networking provide greater flexibility, and you will never have to worry
about power outlets when deploying network devices.
Stackable Switch
Stackable switches provide a way to simplify and increase the availability of the network.
For example, instead of configuring, managing, and troubleshooting eight 48-port switches
individually, you can manage all eight as a single unit by using stackable switches.
DOCKER CONTAINER
9.WHAT IS DOCKER IN CLOUD COMPUTING? (PART B)
Docker in cloud computing is a tool that is used to automate the deployment of applications in an
environment designed to manage containers. It is a container management service. These containers
help applications to work while they are being shifted from one platform to another. Docker’s
technology is distinctive because it focuses on the requirements of developers and systems. This modern
technology enables enterprises to create and run any product from any geographic location.
There are several problems associated with cloud environments, and Docker tries to solve those issues by
creating a systematic way to distribute and augment applications. It helps to separate applications
from other containers, resulting in a smooth flow. With the help of Docker, it is possible to manage our
infrastructure in the same way we manage our applications.
HOW DOES DOCKER WORK
A Docker container image is structured in terms of 'layers.'
Example: a process for building an image is sketched below.
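As a minimal sketch of this process, assuming the Docker SDK for Python (the docker package) is installed, a local Docker daemon is running, and a Dockerfile exists in the current directory; the image tag, container name, and port mapping are placeholders chosen for illustration:

import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()  # connect to the local Docker daemon

# Build an image from the Dockerfile in the current directory; each
# instruction in the Dockerfile produces one read-only image layer.
image, build_logs = client.images.build(path=".", tag="demo-app:latest")

# Run a container from the image, mapping container port 80 to host port 8080.
container = client.containers.run("demo-app:latest", detach=True,
                                  ports={"80/tcp": 8080}, name="demo-app")
print(container.short_id)

The same image can then be shifted between laptops, servers, and clouds and run identically on each, which is the portability benefit described above.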
Advantages of Docker
1. Tailor-made: Most industries want a purpose-built solution, and Docker in cloud computing enables its
clients to use Docker to organize their software infrastructure.
2. Accessibility: As Docker is a cloud framework, it is accessible from anywhere, at any time, and has high
efficiency.
3. Operating system support: Containers take up less space, are lightweight, and several can operate
simultaneously.
4. Performance: Containers have better performance as they are hosted in a single docker engine.
5. Speed: No requirement for OS to boot. Applications are made online in seconds. As the business
environment is constantly changing, technological up-gradation needs to keep pace for smoother workplace
transitions. Docker helps organizations with the speedy delivery of service.
6. Flexibility: Docker is a very agile container platform. It is deployed easily across clouds, providing
users with an integrated view of all their applications across different environments, and it is easily
portable across different platforms.
7. Scalable: It helps create immediate impact by saving on recoding time, reducing costs, and limiting
the risk of operations. Containerization helps scale easily from the pilot stage to large-scale
production.
8. Automation: Docker works on Software as a service and Platform as a service model, which enables
organizations to streamline and automate diverse applications. Docker improves the efficiency of
operations as it works with a unified operating model.
9. Space Allocation: Data volumes can be shared and reused among multiple containers.
Even though there are a lot of benefits associated with Docker, it has some limitations as well, which are
as follows:
1. Missing features: Many features, like container self-registration and self-inspection, are still in progress.
2. Limited cross-platform compatibility: One of the issues with Docker is that if an application is designed
to run on Windows, it cannot work on other operating systems.
KUBERNETES
10. DISCUSS KUBERNETES (PART B)
Kubernetes is also known as 'k8s'. The word comes from the Greek language and means
a pilot or helmsman.
Kubernetes is an extensible, portable, and open-source platform designed by Google in 2014. It is
mainly used to automate the deployment, scaling, and operations of the container-based applications
across the cluster of nodes. It is also designed for managing the services of containerized apps using
different methods which provide the scalability, predictability, and high availability.
It is actually an enhanced version of 'Borg' for managing long-running processes and batch jobs.
Nowadays, many cloud services offer a Kubernetes-based infrastructure on which it can be deployed
as a platform-providing service. This concept works with many container tools,
like Docker, and follows the client-server architecture.
KUBERNETES ARCHITECTURE
The architecture of Kubernetes follows the client-server model and consists of a master node and
worker (slave) nodes. The master node consists of the following components:
1. API Server
2. Scheduler
3. Controller Manager
4. ETCD
API Server
The Kubernetes API server receives the REST commands which are sent by the user. After receiving them,
it validates the REST requests, processes them, and then executes them. After the execution of the REST
commands, the resulting state of the cluster is saved in 'etcd', a distributed key-value store.
Scheduler
The scheduler in a master node schedules tasks to the worker nodes and stores resource usage information
for every worker node. In other words, it is the process responsible for assigning pods to the available
worker nodes.
Controller Manager
The Controller Manager is also known as a controller. It is a daemon that executes in non-terminating
control loops. The controllers in a master node perform tasks and manage the state of the cluster. In
Kubernetes, the Controller Manager executes various types of controllers for handling nodes,
endpoints, etc.
ETCD
It is an open-source, simple, distributed key-value store which is used to store the cluster data. It is a part
of the master node and is written in the Go programming language.
Now that we have learned about the functioning and components of the master node, let's see the function
of a slave/worker node and its components.
Worker/Slave node
The worker node in Kubernetes is also known as a minion. A worker node is a physical or virtual machine
that executes the applications using pods. It contains all the essential services which allow a user to assign
resources to the scheduled containers.
The following are the different components which are present in the worker or slave node:
Kubelet
This component is an agent service that executes on each worker node in a cluster. It ensures that the pods
and their containers are running smoothly. Every kubelet in each worker node communicates with the
master node. It also starts, stops, and maintains the containers, which are organized into pods, as directed
by the master node.
Kube-proxy
It is a proxy service of Kubernetes, which is executed simply on each worker node in the cluster. The main
aim of this component is request forwarding. Each node interacts with the Kubernetes services
through Kube-proxy.
Pods
A pod is a combination of one or more containers which logically execute together on nodes. One worker
node can easily execute multiple pods.
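As a minimal sketch (assuming the official Kubernetes Python client, the kubernetes package, is installed and a kubeconfig file such as ~/.kube/config is available), the API server can be asked which pods are currently scheduled on the worker nodes:

from kubernetes import client, config  # official Kubernetes Python client

# Load credentials from the local kubeconfig file.
config.load_kube_config()

v1 = client.CoreV1Api()  # talks to the Kubernetes API server

# List every pod in the cluster, regardless of namespace.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.pod_ip)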
REFERENCE
1. https://www.tutorialspoint.com/
2. https://www.geeksforgeeks.org/
****************************UNIT V COMPLETED******************************
POSSIBLE QUESTIONS
PART A
1. Define cloud
2. What is a virtual private cloud (VPC)?
3. Define scaling
4. Define VM
PART B
1. Explain about cloud overview
2. Explain cloud computing models
3. Explain types of scaling in cloud
4. Explain VM in detail
5. Explain Ethernet and switches in cloud
6. What is Docker in cloud computing?
7. Discuss Kubernetes
*******************