MC4201 - Full Stack Web Development

The document provides an introduction to web development, detailing the roles of servers and clients in the client-server model, and explaining how servers respond to requests from clients. It discusses various types of servers, server structures, and the software components necessary for web servers, including programming languages for both server-side and client-side development. Additionally, it covers the Hypertext Transfer Protocol (HTTP), its features, and the structure of HTML documents used in web pages.

DEPARTMENT OF MCA

CLASS: I MCA UNIT: I


SUBJECT: FULL STACK WEB DEVELOPMENT SUBJECT CODE: MC4201

INTRODUCTION TO WEB:
 "World Wide Web" or simply "Web" is the name given to all the resources of the internet.
The special software or application program with which you can access web is called
"Web Browser".
 Web consists of billions of clients and server connected through wires and wireless
networks. The web clients make requests to web server.
 The web server receives the request, finds the resources and returns the response to the
client. When a server answers a request, it usually sends some type of content to the
client.
 The client uses a web browser to send requests to the server. The server often sends a
response to the browser with a set of instructions written in HTML (Hypertext Markup
Language).
 All browsers know how to display HTML page to the client.

SERVER
 The Server is responsible for serving the web pages depending on the client/end-user
requirement. It can be either static or dynamic.
CLIENT
 A client is a party that requests pages from the server and displays them to the end-user.
In general, a client program is a web browser.
SERVERS
1.DEFINE SERVER (PART A)
EXPLAIN ABOUT SERVER (PART B/C)
A server is a computer or system that provides resources, data, services, or programs to other
computers, known as clients, over a network. In theory, whenever computers share resources
with client machines they are considered servers. There are many types of servers, including
web servers, mail servers, and virtual servers.
An individual system can provide resources and use them from another system at the same
time. This means that a device could be both a server and a client at the same time.
Historically, servers were often single, powerful computers connected over a network to a set of
less-powerful client computers. This network architecture is often referred to as the client-
server model, in which both the client computer and the server possess computing power, but
certain tasks are delegated to servers.

How a server works


 To function as a server, a device must be configured to listen to requests from clients
on a network connection. This functionality can exist as part of the operating system,
as an installed application or role, or a combination of the two.
 For example, Microsoft’s Windows Server operating system provides the functionality
to listen to and respond to client requests. Additionally installed roles or services
increase which kinds of client requests the server can respond to.
 In another example, an Apache web server responds to Internet browser requests via
an additional application, Apache, installed on top of an operating system.
 When a client requires data or functionality from a server, it sends a request over the
network. The server receives this request and responds with the appropriate
information. This is the request and response model of client-server networking, also
known as the call and response model.
 A server will often perform numerous additional tasks as part of a single request and
response, including verifying the identity of the requestor, ensuring that the client has
permission to access the data or resources requested, and properly formatting or
returning the required response in an expected way.
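
As a rough illustration of this listen/request/respond cycle, the following minimal sketch uses Node.js and its built-in http module; the port number and the reply text are arbitrary choices used only for illustration:

// server.js - a minimal request/response sketch using Node's http module
const http = require("http");

const server = http.createServer((request, response) => {
  // The server receives the request and responds with appropriate content.
  response.writeHead(200, { "Content-Type": "text/plain" });
  response.end("Hello from the server\n");
});

// Listen for client requests on a network connection (port 8000 here).
server.listen(8000);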
2.EXPLAIN THE TYPES OF SERVER (PART B)
Types of servers
There are many types of servers that all perform different functions. Many networks contain
one or more of the common server types:
 File servers
 Print servers
 Application servers
 DNS servers
 Mail servers
 Database servers
 Virtual server
 Proxy server
Server structures
The concept of servers is nearly as old as networking itself. After all, the point of a network
is to allow one computer to talk to another computer and distribute either work or resources.
Computing has evolved since then, resulting in several types of server structures and
hardware.
While we could simply focus on how to create web pages and websites, none of this is possible
without the underlying hardware and software components that support the pages we create.

Hardware

The mention of phrases like data center, hosting provider, or even big-name companies like
Microsoft and Google can invoke mental images of large, sterile rooms full of tall racks of
hardware with blinking lights and a maze of wires.

Those more familiar with such rooms will also know the chill resulting from the heavily
air-conditioned atmosphere and the droning whir of fans that typically accompany them. This,
however, is neither a requirement nor an accurate portrayal of a great deal of servers connected
to the Internet. With the addition of the right software (assuming you are consuming this text
digitally), the device you are using to read this could become an Internet-connected server.

Even though we have reached this point, it is difficult to forget the mental picture conjured
by the thoughts of the data center. In the current “traditional” model, thin, physically
compact servers are stacked vertically. These are referred to as rack mount hardware. Many
rack mount systems today contain hardware similar to what we have in our desktops, despite
the difference in appearance.
A number of companies, including Google, Yahoo, and Facebook, are looking to reinvent
this concept. Google for instance has already used custom-built servers in parts of its
network in an effort to improve efficiency and reduce costs.

Software

A typical web server today contains four elements in addition to the physical hardware.

 Operating system
 Web server
 A database
 Scripting language.

One of the most popular combinations of these systems has been abbreviated to LAMP,
standing for Linux, Apache, MySQL, and PHP, named in that order.

There are many combinations of solutions that meet these needs, resulting in a number of
variations of the acronym, such as WAMP for Windows, Apache, MySQL, PHP, or MAMP,
identical with the exception of Mac (or, more precisely, a Macintosh-developed operating
system). Among the plethora of combinations, the use of LAMP prevails as the catch-all
reference to a server with these types of services.

All that is ultimately required to convey static pages to an end user are the operating system
and HTTP server, the first half of the WAMP acronym. The balance adds the capability for
interactivity and for the information to change based on the result of user interactions.

Server-side Languages


There are several languages that can be used for server-side programming:
 PHP
 ASP.NET (C# or Visual Basic)
 C++
 Java and JSP
 Python
 Ruby on Rails and so on.
 To improve user experience and related functionalities on the client side, JavaScript is
usually used. It is an excellent client-side platform for designing and implementing Web
applications.
 HTML5 and CSS3 support most of the client-side functionality provided by other
application frameworks.

 The server side needs programming mostly related to data retrieval, security and
performance. Some of the tools used here include ASP, Lotus Notes, PHP, Java and
MySQL.
 There are certain tools/platforms that aid in both client- and server-side programming.
Server-side Programming:
It is the program that runs on the server, dealing with the generation of the content of the web page. Typical tasks include:
 Querying the database
 Operations over databases
 Access/Write a file on server.
 Interact with other servers.
 Structure web applications.
 Process user input.
 For example, if the user input is text in a search box, the server runs a search algorithm
on data stored on the server and sends back the results, as in the sketch below.
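
The following is a rough sketch of such a search endpoint; it assumes the Express framework, and the route name /search, the products array, and the query parameter q are hypothetical examples:

const express = require("express");
const app = express();

// Hypothetical data "stored on the server".
const products = ["coffee", "tea", "green tea", "coffee mug"];

// Process user input: a request such as /search?q=tea runs a simple
// search over the data and sends back the matching results as JSON.
app.get("/search", (req, res) => {
  const query = (req.query.q || "").toLowerCase();
  const results = products.filter(item => item.includes(query));
  res.json(results);
});

app.listen(3000);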
CLIENT

3.BRIEFLY EXPLAIN ABOUT THE CLIENT SIDE PROGRAMMING (PART B)

 A client application can be written using Java, C, C++, Visual Basic, or any compatible
programming language.
 A client application sends a request to an application server at a given URL. The server
receives the request, processes it, and returns a response.
 These client programs execute remote procedures and functions in an application server
instance.

Client side programming


 It is the program that runs on the client machine (browser) and deals with the user
interface/display and any other processing that can happen on client machine like
reading/writing cookies.
 Interact with temporary storage
 Make interactive web pages
 Interact with local storage
 Send requests for data to the server
 Work as an interface between the server and the user
 The programming languages for client-side programming are:
o Javascript
o VBScript
o HTML
o CSS

o AJAX
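
A small illustrative sketch of these client-side tasks in plain JavaScript; the storage key "theme" and the URL /api/items are hypothetical placeholders:

<script>
// Interact with local storage on the client machine.
localStorage.setItem("theme", "dark");
console.log(localStorage.getItem("theme")); // "dark"

// Send a request for data to the server (the URL is a placeholder).
fetch("/api/items")
  .then(response => response.json())
  .then(data => console.log(data));
</script>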

COMMUNICATION PROTOCOL Hypertext Transfer Protocol (HTTP)


4.DISCUSS HTTP WITH NEAT DIAGRAM (PART B)
 The interconnection of systems and computer networks is the foundation of
communications today and is designed using multiple communication protocols.
 For example, there are many protocols when establishing an internet connection and
depending on the type that needs to be established, these protocols will vary.
 Furthermore, communication with the Internet is not the only type
of communication when we refer to data transmission and exchange of messages
across networks. In all cases, network protocols define the characteristics of the
connection.
 A protocol is a set of rules: the network protocols are formal standards and policies,
made up of restrictions, procedures, and formats, that define the exchange of
data packets to achieve communication between two servers or more devices over a
network.
 Network protocols include mechanisms for device identification and the establishment
of connections between them, as well as formatting rules that specify how packets and
data are structured in messages that are sent and received.
 Some protocols support message recognition and data compression, designed
for reliable, high-performance network communication.

Types of Network Protocols

The most important protocols for data transmission across the Internet are TCP
(Transmission Control Protocol) and IP (Internet Protocol). Using these jointly (TCP/IP),
we can link devices that access the network; some other communication
protocols associated with the Internet are POP, SMTP and HTTP.
We use these practically every day, although most users don't know it, and don't
understand how they work. These protocols allow the transfer of data from our devices so
that we can browse websites, send emails, listen to music online, etc.
There are several types of network protocols:

 Network communication protocols: basic packet communication protocols such as
TCP/IP and HTTP.
 Network security protocols: these implement security in network communications
between servers; they include HTTPS, SSL, and SFTP.
 Network management protocols: these provide network maintenance and
governance; they include SNMP and ICMP.

 A group of network protocols that work together at the upper and lower levels is
commonly referred to as a protocol family.
 The OSI model (Open System Interconnection) conceptually organizes network protocol
families into specific network layers.
 This Open System Interconnection aims to establish a context to base the communication
architectures between different systems.
HTTP
 The Hypertext Transfer Protocol (HTTP) is an application-level protocol for
distributed, collaborative, hypermedia information systems. Basically, HTTP is a
TCP/IP based communication protocol, that is used to deliver data (HTML files,
image files, query results, etc.) on the World Wide Web.
 The default port is TCP 80, but other ports can be used as well. It provides a
standardized way for computers to communicate with each other.
 The HTTP specification defines how clients' requests are constructed and sent
to the server, and how servers respond to these requests.
What is HTTP?
 Every website address begins with “http://” (or “https://”). This refers to the HTTP
protocol, which your web browser uses to request a website.
Basic Features
 HTTP is connectionless: The HTTP client, i.e., a browser, initiates an HTTP request
and, after the request is made, the client waits for the response.
 The server processes the request and sends a response back, after which the client
disconnects the connection.
 So the client and server know about each other during the current request and response only.
 HTTP is media independent: It means any type of data can be sent by HTTP as long
as both the client and the server know how to handle the data content.
 HTTP is stateless: As mentioned above, HTTP is connectionless, and this is a direct
result of HTTP being a stateless protocol. The server and client are aware of each
other only during a current request. Afterwards, both of them forget about each other.
 Connectionless: The client establishes a connection to the server, sends a request, the
server responds, and then the connection is terminated. For the next request, the client
will have to re-establish the connection.
 This is inconvenient because a website usually consists of several files and each of
them has to be retrieved using a separate request.
 Stateless: The two parties (i.e. the client and server) “forget” about each other
immediately. The next time the client logs on to the server, the server will not
remember that a client previously sent a request.
 Media-independent: Any type of file can be sent via HTTP as long as both parties
know how to handle the respective file type.
Purpose of HTTP
 If you enter an internet address in your web browser and a website is displayed
shortly thereafter, your browser has communicated with the web server via HTTP.
 HTTP is the language your web browser uses to speak with the web server in order to
inform it of a request.
How does HTTP work?
 The user types example.com into the address bar of their internet browser.
 The browser sends the respective request (i.e. the HTTP request) to the web server
that manages the domain example.com.
 Usually, the request is, "Please send me the file." Alternatively, the client can also
ask, "Do you have this file?"
 The web server receives the HTTP request, searches for the desired file (in this
example, the homepage example.com, meaning the file index.html), and begins by
sending back the header which informs the requesting client of the search result with
a status code.
 If the file was found and the client wants it to be sent (and did not just wish to know
whether it existed), the server sends the message body after the header (i.e. the actual
content).
 In our example, this is the file index.html.
 The browser receives the file and displays it as a website.

HTTP Transactions

An HTTP transaction takes place between the client and the server. The client initiates
a transaction by sending a request message to the server. The server replies to the request
message by sending a response message.

Messages

HTTP messages are of two types: request and response. Both the message types follow the
same message format.

Request Message: The request message is sent by the client that consists of a request line,
headers, and sometimes a body.

Response Message: The response message is sent by the server to the client that consists of
a status line, headers, and sometimes a body.
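
To make the two message formats concrete, a simplified sketch of the example.com exchange described earlier is shown below (most headers are omitted and the values are illustrative):

Request message:
GET /index.html HTTP/1.1        <- request line
Host: example.com               <- header

Response message:
HTTP/1.1 200 OK                 <- status line
Content-Type: text/html         <- header

<html> ... </html>              <- body (the file index.html)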

Uniform Resource Locator (URL)


o A client that wants to access a document on the internet needs an address, and to
facilitate access to documents, HTTP uses the concept of the Uniform Resource
Locator (URL).
o The Uniform Resource Locator (URL) is a standard way of specifying any kind of
information on the internet.
o The URL defines four parts: method, host computer, port, and path.

o Method: The method is the protocol used to retrieve the document from a server. For
example, HTTP.
o Host: The host is the computer where the information is stored, and the computer is
given an alias (domain) name. For web pages this alias name usually begins with the
characters "www", but this prefix is not mandatory.
o Port: The URL can also contain the port number of the server, but it's an optional
field. If the port number is included, then it must come between the host and path and
it should be separated from the host by a colon.
o Path: Path is the pathname of the file where the information is stored. The path itself
contains slashes that separate the directories from the subdirectories and files.
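
For example, in the hypothetical URL below the four parts can be identified as follows:

http://www.example.com:80/docs/index.html

o Method (protocol): http
o Host: www.example.com
o Port: 80 (optional)
o Path: /docs/index.html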

STRUCTURE OF HTML DOCUMENTS

5.DISCUSS IN DETAIL ABOUT HTML DOCUMENT STRUCTURE (PART B)

 HTML stands for Hypertext Markup Language, and it is the most widely used
language to write Web Pages.
o Hypertext refers to the way in which Web pages (HTML documents) are linked
together. Thus, the link available on a webpage is called Hypertext.
o As its name suggests, HTML is a Markup Language which means you use
HTML to simply "mark-up" a text document with tags that tell a Web browser
how to structure it to display.
 Originally, HTML was developed with the intent of defining the structure of
documents like headings, paragraphs, lists, and so forth to facilitate the sharing of
scientific information between researchers.
 Now, HTML is being widely used to format web pages with the help of the different tags
available in the HTML language.
Basic HTML Document
In its simplest form, the following is an example of an HTML document:

Example:
<html>
<head>
<title>This is document title </title>
</head>
<body>
<h1>This is a heading</h1>
<p>Document content goes here……</p>
</body>
</html>

HTML Document Structure

The basic structure of an HTML document consists of 5 elements:

1. <!DOCTYPE>
2. <html>
3. <head>
4. <title>
5. <body>
The <!DOCTYPE> DECLARATION
The <!DOCTYPE> declaration tag is used by the web browser to understand the version of
the HTML used in the document. Current version of HTML is 5 and it makes use of the
following declaration:
<!DOCTYPE html>
There are many other declaration types which can be used in an HTML document depending on
what version of HTML is being used. We will see more details on this while discussing the
<!DOCTYPE> tag along with other HTML tags.
An HTML document has two main parts:
 head. The head element contains title and meta data of a web document.
 body. The body element contains the information that you want to display on a web
page.
To make your web pages compatible with HTML 4, you need to add a document type
declaration (DTD) before the HTML element. Many web authoring software add DTD and
basic tags automatically when you create a new web page.

BASIC MARKUP TAGS


6.EXPLAIN THE DIFFERENT TYPES OF HTML TAGS (PART B)
HTML TAGS
 HTML tags are used to mark-up HTML elements
 HTML tags are surrounded by the two characters < and >
 The surrounding characters are called angle brackets
 HTML tags normally come in pairs like <B> and </B>
 The first tag in a pair is the start tag, the second tag is the end tag
 The text between the start and end tags is the element content
 HTML tags are not case sensitive: <b> means the same as <B>

TAG ATTRIBUTES
Tags can have attributes. Attributes can provide additional information about the HTML
elements on your page.
This tag defines the body element of your HTML page: <BODY> . With an added bgcolor
attribute, you can tell the browser that the background color of your page should be red, like
this:
<BODY BGCOLOR="RED">
This tag defines an HTML table: <TABLE>. With an added border attribute, you can tell
the browser that the table should have no borders: <TABLE BORDER="0">. Attributes
always come in name/value pairs like this: name="value". Attributes are always added to
the start tag of an HTML element.
TAG LIST
 <!DOCTYPE html>: This tag is used to tell the browser the HTML version. This currently
indicates that the version is HTML 5.
 <html> </html> : <html> is the root element of HTML. It is the outermost element of an
HTML document; all other tags, elements and attributes are enclosed in (wrapped by) it,
and it is used to structure a web page. The <html> tag is the parent tag of the <head> and
<body> tags; other tags are enclosed within <head> and <body>. In the <html> tag we can
use the "lang" attribute to define the language of the HTML page, such as <html lang="en">,
where en represents the English language. Some other codes are: es = Spanish,
zh-Hans = Chinese, fr = French and el = Greek.
 <head>: The head tag contains metadata, the title, page CSS, etc. Data stored in the <head>
tag is not displayed to the user; it is written for reference purposes and as information
about the page and its owner.
 <title> = to store the website name or title content to be displayed.
 <link> = to add/link a CSS (Cascading Style Sheet) file.
 <meta> = 1. to store data about the website, organisation, creator/owner
2. to make the website responsive via attributes
3. to tell the compatibility of HTML with the browser
 <script> = to add a JavaScript file.
 <body>: The body tag is used to enclose all the data which a web page has, from text
to links. All the content that you see rendered in the browser is contained within this
element. The following tags and elements are used in the body.
 Headings are defined with the <H1> to <H6> tags. <H1> defines the largest heading.
<H6> defines the smallest heading.

 Paragraphs are defined with the <P> tag.


EG. <P> THIS IS THE PARAGRAPH</P>
 The <BR> tag is used when you want to end a line, but don't want to start a new
paragraph. The <BR>tag forces a line break wherever you place it.
EG. <P> THIS IS <BR> THE PARA <BR> GRAPH</P>
 The comment tag is used to insert a comment in the HTML source code. A comment
will be ignored by the browser. You can use comments to explain your code, which
can help you when you edit the source code at a later date.
 EG: <!-- THIS IS THE COMMENT -->
 HTML FORMATTING TAGS: HTML defines a lot of elements for formatting
output, like bold or italic text. The tags are,
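For example, a few of the standard formatting tags:

<b>Bold text</b> and <strong>Important text</strong>
<i>Italic text</i> and <em>Emphasized text</em>
<small>Small text</small>
<sub>Subscript</sub> and <sup>Superscript</sup>
<del>Deleted text</del> and <ins>Inserted text</ins>
<mark>Marked (highlighted) text</mark>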

 HTML links: HTML uses a hyperlink to link to another document on the Web. HTML
uses the <a> (anchor) tag to create a link to another document. An anchor can point to any
resource on the Web: an HTML page, an image, a sound file, a movie, etc.
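Example (the URL here is only a placeholder):
<a href="https://www.example.com">Visit Example</a>
This anchor displays the text "Visit Example" and, when clicked, takes the user to the page at the address given in the href attribute.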

 HTML LISTS: HTML supports ordered and unordered list. They are,

Unordered Lists: An unordered list is a list of items. The list items are marked with bullets
(typically small black circles). An unordered list starts with the <ul> tag. Each list item starts
with the <li> tag.
Example:
<ul>
<li>coffee</li>
<li> tea</li>
</ul>

Ordered Lists: An ordered list is also a list of items. The list items are marked with
numbers. An ordered list starts with the <ol> tag. Each list item starts with the <li> tag.
Example:
<ol>
<li>coffee</li>
<li> tea</li>
</ol>

INTRODUCTION TO CSS
7.EXPAND CSS (PART A)
EXPLAIN THE USE OF CSS (PART A)
DESCRIBE CSS WITH EXAMPLE (PART B)
CSS stands for Cascading Style Sheets. CSS describes how HTML elements
are to be displayed on screen, paper, or in other media. CSS saves a lot of work. It
can control the layout of multiple web pages all at once. External style sheets are
stored in ".css" files. CSS is used to define styles for your web pages, including the
design, layout and variations in display for different devices and screen sizes.

Types of CSS:

There are three types of CSS available. They are:


1) Inline CSS
2) Internal CSS
3) External CSS
1. Inline CSS:
If the styles are mentioned along with the tag then this type of CSS is known as inline
CSS.
Example:
<p style="text-align: justify; background-color: red">Hi this is the Inline CSS.</p>
2. Internal CSS:
For internal CSS, we write all the desired "styles" for the "selectors", along with
the properties and values, in the "head" section. In the body section, the newly
defined selector tags are then used with the actual contents.
Example
<html>
<head>
<style type="text/css">
p
{
text-align: justify;
background-color: red;
}
</style>
</head>
<body>
<p>This is Internal CSS.</p>
</body>
</html>
3. External CSS:
Sometimes we need to apply particular style to more than one web page; in such
cases external CSS can be used. The main idea in this type of CSS is that the
desired styles can be written in one “.css” file. And this file can be called in our
web pages to apply the styles.

Example
<html>
<head>
<link rel="stylesheet" type="text/css" href="external.css">
</head>
<body>
<p>This is External CSS.</p>
</body>
</html>
external.css:

p
{
text-align: justify;
background-color: red;
}

WORKING WITH TEXT AND IMAGES WITH CSS


8.HOW CAN WE HANDLE IMAGES IN CSS WITH EXAMPLE (PART B)
 Images are not just space fillers for our web pages.
A picture can tell a thousand words. Images are a quick way to convey messages, and we
have used them long before the web existed.
 We use them to induce emotions that make us more receptive to certain messaging,
showcase the product or services, break down content into digestible information, spice
up content, and more.
Styling
We will now go through the following six ways that will improve the design of our web page:
1. Add text on image
2. Adjust opacity or clarity of image
3. Add background colour to text
4. Change the positioning of text
5. Use colour blending of image and text
6. Darken image and use light text
The two sets of starter code will create the following web page:

The difference between the two sets of starter code is that in Method 1, the text
content is the child element of the div with the background image, and in Method
2, these two elements are sibling elements.
1. Add text on image
You can add text as part of the image with a photo editor or a slide application like
PowerPoint, Google Slides, or KeyNote. However, this method doesn’t work in this design as
the content gets cut off on smaller screens.
2. Adjust opacity or clarity of image
You can adjust the opacity and clarity of the background image with either a photo editor or
in CSS. Let’s focus on the latter.
If your text is nested inside the element with the background image, your text gets faded
when you adjust the opacity of the image. Likewise, text will be blurred if you apply
the blur property on the image. That probably isn’t the effect you want.

3. Add background colour to text
You can add a white background to the text. However, some may not like this design as the
contrast between the text and the background is too stark. You can’t use opacity, either, as it
will make the text fade out.

4. Change positioning of text


You can consider manually adjusting the text to position it against a clear background on your
image, but this may not be a feasible solution as the position of the text will shift according to
different screen sizes.
5. Use colour blending of image and text
mix-blend-mode is an interesting property that you can use to style the text while maintaining
readability. It works if you have a clean image.

6. Darken image and use light text

Lastly, instead of keeping to black text and light background, you can darken the image
with linear-gradient and use white text.
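
A minimal sketch of this approach, assuming a container class named .hero and a background photo named photo.jpg (both hypothetical names):

.hero {
  background-image: linear-gradient(rgba(0, 0, 0, 0.6), rgba(0, 0, 0, 0.6)),
                    url("photo.jpg");
  background-size: cover;
  color: white;
}

The semi-transparent black gradient is layered over the photo, darkening it enough for the white text on top to remain readable.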

Rounded Images

Use the border-radius property to create rounded images:

Example
Rounded Image:

img {
border-radius: 8px;
}

Example
Circle Image:

img {
border-radius: 50%;
}

CSS SELECTORS

9.DEFINE SELECTORS (PART A)

DISCUSS SELECTORS IN DETAIL (PART B)

 A CSS selector is the first part of a CSS Rule. It is a pattern of elements and other terms
that tell the browser which HTML elements should be selected to have the CSS property
values inside the rule applied to them.
 The cascade part of CSS means that more than one style sheet can be attached to a
document, and all of them can influence the presentation.
 For example, a designer can have a global style sheet for the whole site, but a local one for,
say, controlling the link color and background of a specific page. Or, a user can use their own
style sheet if they have problems seeing the page, or if they just prefer a certain look.

A CSS style rule is made of three parts:

1. Selector: A selector is an HTML tag at which a style will be applied. This could be any tag
like <H1>, <P> or <TABLE>, etc.

2. Property: A property is a type of attribute of HTML tag. Put simply, all the HTML
attributes are converted into CSS properties. They could be color, border, bgcolor etc.

3. Value: Values are assigned to properties. For example, color property can have the value
either red or #F1F1F1 etc.

For example, consider the rule h1 { color: red; font-size: 15px; }. Here h1 is the selector,
color and font-size are properties, and red and 15px are the values of those properties.

 The selector is normally the HTML element you want to style.


 Each declaration consists of a property and a value.
 The property is the style attribute you want to change. Each property has a value.

Parts of style sheet

A style sheet consists of one or more rules that describe how document elements should be
displayed. A rule in CSS has two parts: the selector and the declaration. The declaration also
has two parts, the property and the value.

Let's take a look at a rule for a heading 1 style:
h1 { font-family: verdana, "sans serif"; font-size: 1.3em }
This expression is a rule that says every h1 tag will be in verdana or another sans-serif font
and the font size will be 1.3em.

The declaration contains the property and value for the selector. The property is the attribute
you wish to change and each property can take a value. The property and value are separated
by a colon and surrounded by curly braces:
body { background-color: black}
If the value of a property is more than one word, put quotes around that value:
body { font-family: "sans serif"; }
If you wish to specify more than one property, you must use a semicolon to separate each
property. The following rule defines a paragraph that will have blue text that is centered.
p { text-align: center; color: blue }
You can group selectors. Separate each selector with a comma. The example below groups
headers 1, 2, and 3 and makes them all yellow:
h1, h2, h3 { color: yellow }
TYPES OF SELECTORS
 Element Type Selectors
 Descendant Selectors
 Class selectors
 Id Selectors
 Child Selectors
 Adjacent sibling selectors
 Pseudo Selectors
 Universal Selectors
Element Type Selectors
A CSS declaration always ends with a semicolon, and declaration groups are surrounded by
curly brackets: Example
p {color:red;text-align:center;}
To make the CSS more readable, you can put one declaration on each line, like this:
p
{
color:red;
text-align:center;
}
Descendant Selectors
Match an element that is a descendant of another element. This uses two separate selectors,
separated by a space.
For example, if we wanted all emphasized text in our paragraphs to be green text, we
would use the following CSS rule:
EXAMPLE
p em { color: green; }
Class Selectors
Match an element that has the specified class.
To match a specific class attribute, we always start the selector with a period, to signify that
we are looking for a class value. The period is followed by the class attribute value we want
to match.
For example, if we wanted all elements with a class of "highlight" to have a different
background color, we would use the following CSS rule:

EXAMPLE 1
.highlight { background-color: #ffcccc; }

EXAMPLE 2
.center {
text-align: center;
color: red;
}

Id Selectors
The id selector is used to specify a style for a single, unique element. The id selector uses the
id attribute of the HTML element, and is defined with a "#".
The hash is followed by the id attribute value we want to match. Remember, we can only use
the same id attribute value once, so the id selector will always only match one element in our
document.
Example
Imagine within the body element of our html page, we have the following paragraph element
<p id="welcome">Welcome to the 1st CSS Document</p>
We can then create a CSS rule with the id selector:
#welcome
{
color:red;
text-align:center;
}
Child selectors
Match an element that is an immediate child of another element. For example, if we
wanted all emphasized text in our paragraphs to have green text, but not emphasized
text in other elements, we would use the following CSS rule:
Example: p > em{color: green;}
Adjacent sibling selectors
Match an element that is immediately after another element, but not a child of it.
For example, if we wanted all paragraphs that immediately followed an h4 to have green
text, but not other paragraphs, we would use the following CSS rule:
h4 + p {color: green;}
Attribute Selector
You can also apply styles to HTML elements with particular attributes. The style rule below
will match all the input elements having a type attribute with a value of text:
input[type="text"]
{
color: #000000;
}
The advantage of this method is that the <input type="submit" /> element is unaffected, and
the color is applied only to the desired text fields. The following rules apply to attribute
selectors:
p[lang] - Selects all paragraph elements with a lang attribute.
p[lang="fr"] - Selects all paragraph elements whose lang attribute has a value of exactly
"fr".
p[lang~="fr"] - Selects all paragraph elements whose lang attribute contains the word
"fr".
p[lang|="en"] - Selects all paragraph elements whose lang attribute contains values that
are exactly "en", or begin with "en-"
Pseudo Selectors
An Aside about Link States
 Anchor elements are special. You can style the <a> element with an Element Type
Selector, but it might not do exactly what you expect.
 This is because links have different states, that relate to how they are interacted with.
The four primary states of a link are: link, visited, hover, active.
 Pseudo selectors come in different sizes and shapes. By far the most common pseudo
selectors are used to style our links. There are four different pseudo selectors to be
used in conjunction with links:
:link
A link that has not been previously visited (visited is defined by the browser history)
:visited
A link that has been visited
:hover
A link that the mouse cursor is "hovering" over
:active
A link that is currently being clicked

a:link { color: red } /* unvisited links */


a:visited { color: blue } /* visited links */
a:hover { color: green } /* user hovers */
a:active { color: lime } /* active links */

Universal Selector
Matches every element on the page. For example, if we wanted every element to have a
solid 1px wide border, we would use the following CSS rule:
Example:
* { border: 1px solid blue; }
The CSS Grouping Selector
The grouping selector selects all the HTML elements with the same style definitions. Look at
the following CSS code (the h1, h2, and p elements have the same style definitions):

h1 {
text-align: center;
color: red;
}

h2 {
text-align: center;
color: red;
}
p{
text-align: center;
color: red;
}

It will be better to group the selectors, to minimize the code. To group selectors, separate
each selector with a comma.
h1, h2, p {
text-align: center;
color: red;
}

CSS FLEX BOX

10.BRIEFLY EXPLAIN ABOUT CSS FLEXBOX (PART B)


DEFINE CSS FLEXBOX (PART A)
 Cascading Style Sheets (CSS) is a simple design language intended to simplify the
process of making web pages presentable. CSS handles the look and feel part of a web
page.
 Flexbox is a one-dimensional layout method for arranging items in rows or columns.
Items flex (expand) to fill additional space or shrink to fit into smaller spaces. This article
explains all the fundamentals.

Why Flexbox?

For a long time, the only reliable cross-browser compatible tools available for creating CSS
layouts were features like floats and positioning. These work, but in some ways they're also
limiting and frustrating.
To determine the position and dimensions of the boxes, in CSS, you can use one of the
layout modes available

 The block layout: This mode is used in laying out documents.


 The inline layout: This mode is used in laying out text.
 The table layout: This mode is used in laying out tables.
 The positioned layout: This mode is used in positioning the elements.
What is Flexbox?
In addition to the above-mentioned modes, CSS3 provides another layout mode Flexible
Box, commonly called as Flexbox.
Using this mode, you can easily create layouts for complex applications and web pages.
Unlike floats, Flexbox layout gives complete control over the direction, alignment, order,
size of the boxes.
Features of Flexbox
Following are the notable features of Flexbox layout:
 Direction: You can arrange the items on a web page in any direction such as left to
right, right to left, top to bottom, and bottom to top.
 Order: Using Flexbox, you can rearrange the order of the contents of a web page.
 Wrap: In case of insufficient space for the contents of a web page (in a single line),
you can wrap them onto multiple lines, both horizontally and vertically.

 Alignment: Using Flexbox, you can align the contents of the webpage with respect
to their container.
 Resize: Using Flexbox, you can increase or decrease the size of the items in the
page to fit in available space.
Supporting browsers
Following are the browsers that support Flexbox.
 Chrome 29+
 Firefox 28+
 Internet Explorer 11+
 Opera 17+
 Safari 6.1+
 Android 4.4+
 iOS 7.1+
THE FLEX MODEL

 The main axis is the axis running in the direction the flex items are laid out in (for
example, as rows across the page, or columns down the page.) The start and end of
this axis are called the main start and main end.
 The cross axis is the axis running perpendicular to the direction the flex items are laid
out in. The start and end of this axis are called the cross start and cross end.
 The parent element that has display: flex set on it (the <section> in the example below) is
called the flex container.
 The items laid out as flexible boxes inside the flex container are called flex
items (the <article> elements in the example below).
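
A minimal sketch of this terminology, where the <section> element is the flex container and the <article> elements inside it are the flex items:

<section style="display: flex;">
  <article>First item</article>
  <article>Second item</article>
  <article>Third item</article>
</section>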

FLEX CONTAINERS

FLEX

To use Flexbox in your application, you need to define a flex container using the
display property.
Usage: display: flex | inline-flex

This property accepts two values

1. flex: Generates a block level flex container.


2. inline-flex: Generates an inline flex container box

Flex containers are not block containers, and so some properties that were designed with the
assumption of block layout don’t apply in the context of flex layout. In particular:

 float and clear do not create floating or clearance of flex item, and do not take it out-
of-flow.
 vertical-align has no effect on a flex item.
 the ::first-line and ::first-letter pseudo-elements do not apply to flex containers,
and flex containers do not contribute a first formatted line or first letter to their
ancestors.
EXAMPLE USING FLEX

<!doctype html>
<html lang="en">
<style>
.box1{background:green;}
.box2{background:blue;}
.box3{background:red;}
.box4{background:magenta;}
.box5{background:yellow;}
.box6{background:pink;}
.container{
display:flex;}

.box{
font-size:35px;
padding:15px;
}
</style>

<body>
<div class="container">
<div class="box box1">One</div>
<div class="box box2">two</div>
<div class="box box3">three</div>
<div class="box box4">four</div>
<div class="box box5">five</div>
<div class="box box6">six</div>
</div>
</body>
<html>
Since we have given the value flex to the display property, the container uses the full width of
the browser window. You can observe this by adding a border to the container as shown
below.
.container{
display:flex;
border:3px solid black;
}
INLINE FLEX

On passing this value to the display property, an inline-level flex container will be created.
It just takes the space required for the content.
The following example demonstrates how to create an inline flex container. Here, we are
creating six boxes with different colors and we have used the inline-flex container to hold
them.

EXAMPLE USING INLINE FLEX

<!doctype html>
<html lang="en">
<style>
.box1{background:green;}
.box2{background:blue;}
.box3{background:red;}
.box4{background:magenta;}
.box5{background:yellow;}
.box6{background:pink;}

.container{ display:inline-flex;
border:3px solid black;

}
.box{
font-size:35px;
padding:15px;
}

</style>
<body>
<div class="container">
<div class="box box1">One</div>
<div class="box box2">two</div>
<div class="box box3">three</div>
<div class="box box4">four</div>
<div class="box box5">five</div>
<div class="box box6">six</div>
</div>
</body>
<html>
Flex Properties:
 flex-direction
 flex-wrap
 flex-flow
 justify-content
 align-items
 align-content
flex-direction: The flex-direction property is used to define the direction of the flex items. The
default axis is horizontal in flexbox, so the items flow into a row.
Syntax:
// Stacking flex items column wise
flex-direction: column;

// Stacking flex items from bottom to top


flex-direction: column-reverse;

// Stacking flex items row wise


flex-direction: row;

// Stacking flex items from right to left

flex-direction: row-reverse;
EXAMPLE
.gfg_flex {
display: flex;
flex-direction: row;
background-color: green;
text-align:center;
}

flex-wrap: The flex-wrap property is used to define the wrapping of flex items. If the flex-wrap
property is set to wrap, then the items wrap within the browser window; if the browser window
is smaller than the elements, the elements go down to the next line.
Syntax:
// Wrap flex items when necessary
flex-wrap: wrap;

// Flex items will not wrap


flex-wrap: nowrap;
EXAMPLE

.gfg_flex {
display: flex;
flex-wrap: wrap;
text-align:center;
background-color: green;
}
justify-content: The justify-content property is used to align the flex items according to the
main axis within a flexbox container.
Syntax:
// Aligns the flex items at the center
justify-content: center;

// The space is distributed around the flexbox items


//and it also adds space before the first item and after the last one.
justify-content: space-around;

// Space between the lines


justify-content: space-between;

// Aligns the flex items at the beginning of the container


justify-content: flex-start;

// Aligns the flex items at the end of the container


justify-content: flex-end;
EXAMPLE

.flex1 {
display: flex;
justify-content: center;
background-color: green;
}
align-items: This property is used to align flex items vertically according to the cross axis.
Syntax:
// Aligns the flex items in the middle of the container
align-items: center;

// Flexbox items are aligned at the baseline of the cross axis


align-items: baseline;

// Stretches the flex items


align-items: stretch;

// Aligns the flex items at the top of the container


align-items: flex-start;

// Aligns elements at the bottom of the container


align-items: flex-end;
EXAMPLE

.flex1 {
display: flex;
height: 200px;
align-items: center;
background-color: green;
}
align-content: This property defines how each flex line is aligned within a flexbox, and it is
only applicable if flex-wrap: wrap is applied, i.e. if there are multiple lines of flexbox items
present.
Syntax :
// Displays the flex lines with equal space between them
align-content: space-between;
// Displays the flex lines at the start of the container
align-content: flex-start;

// Displays the flex lines at the end of the container


align-content: flex-end;

// By using the space-around property, space will be
// distributed equally around the flex lines
align-content: space-around;

// Stretches the flex lines


align-content: stretch;
EXAMPLE

.main-container {
display: flex;
height: 400px;
flex-wrap: wrap;
align-content: space-between;
background-color: green;
}
JAVASCRIPT
12.EXPLAIN IN DETAIL JAVASCRIPT DATA TYPES (PART B)
EXPLAIN IN DETAIL JAVASCRIPT VARIABLES (PART B)

 JavaScript is the world's most popular programming language.


 JavaScript is the programming language of the Web.
 JavaScript is easy to learn
Why Study JavaScript?
 JavaScript is one of the 3 languages all web developers must learn:
 HTML to define the content of web pages
 CSS to specify the layout of web pages
 JavaScript to program the behavior of web pages
<!DOCTYPE html>
<html>
<body>
<h2>What Can JavaScript Do?</h2>
<p id="demo">JavaScript can change HTML content.</p>
<button
type="button"
onclick='document.getElementById("demo").innerHTML = "Hello JavaScript!"'>Click
Me!</button>
</body>
</html>
<!DOCTYPE html>
<html>
<body>
<h2>JavaScript in Body</h2>
<p id="demo"></p>
<script>
document.getElementById("demo").innerHTML = "My First JavaScript";
</script>
</body>
</html>
Output:
JavaScript in Body
My First JavaScript
JAVASCRIPT: DATA TYPES AND VARIABLES
JavaScript Data Types
 JavaScript types can be divided into two categories: primitive types and object types.
JavaScript’s primitive types include numbers, strings of text (known as strings), and
Boolean truth values
 JavaScript variables can hold many data types: numbers, strings, objects and more:
var length = 16; // Number
var lastName = "Johnson"; // String
var x = {firstName:"John", lastName:"Doe"}; // Object

JavaScript Types are Dynamic


JavaScript has dynamic types. This means that the same variable can be used to hold
different data types:
Example
var x; // Now x is undefined
x = 5; // Now x is a Number
x = "John"; // Now x is a string
<!DOCTYPE html>
<html>
<body>
<h2>JavaScript Data Types</h2>

<p>JavaScript has dynamic types. This means that the same variable can be used to hold
different data types:</p>
<p id="demo"></p>
<script>
var x; // Now x is undefined
x = 5; // Now x is a Number
x = "John"; // Now x is a String
document.getElementById("demo").innerHTML = x;
</script>
</body>
</html>
JavaScript Strings
 A string (or a text string) is a series of characters like "John Doe".
 Strings are written with quotes. You can use single or double quotes:
<!DOCTYPE html>
<html>
<body>
<h2>JavaScript Strings</h2>
<p>Strings are written with quotes. You can use single or double quotes:</p>
<p id="demo"></p>
<script>
var carName1 = "Volvo XC60";
var carName2 = 'Volvo XC60';
document.getElementById("demo").innerHTML =carName1 + "<br>" + carName2;
</script>
</body>
</html>
JavaScript Numbers
JavaScript has only one type of number. Numbers can be written with, or without, decimals:
EXAMPLE
let x1 = 34.00; // Written with decimals
let x2 = 34; // Written without decimals
JavaScript Function
<!DOCTYPE html>
<html>
<head>
<script>
function JEEVA()
{
document.getElementById("demo").innerHTML = "Paragraph changed.";
}
</script>
</head>
<body>
<h2>JavaScript in Head</h2>
<p id="demo">A Paragraph.</p>
<button type="button" onclick="JEEVA()">Try it</button>
</body>
</html>
External JavaScript
 Scripts can also be placed in external files:
External file: myScript.js
 function myFunction() {
document.getElementById("demo").innerHTML = "Paragraph changed.";
}
 External scripts are practical when the same code is used in many different web pages.
 JavaScript files have the file extension .js.
 To use an external script, put the name of the script file in the src (source) attribute of
a <script> tag:
Example
 <script src="myScript.js"></script>

JavaScript Display Possibilities


 JavaScript can "display" data in different ways:
 Writing into an HTML element, using innerHTML.
 Writing into the HTML output using document.write().
 Writing into an alert box, using window.alert().
 Writing into the browser console, using console.log().
Using window.alert()
 You can use an alert box to display data:
Example
<!DOCTYPE html>
<html>
<body>
<h1>My First Web Page</h1>
<p>My first paragraph.</p>
<script>
window.alert(5 + 6);
</script>
</body>
</html>
JavaScript Print
 JavaScript does not have any print object or print methods.
 You cannot access output devices from JavaScript.
 The only exception is that you can call the window.print() method in the browser to
print the content of the current window.
Example
<!DOCTYPE html>
<html>
<body>
<button onclick="window.print()">Print this page</button>
</body>
</html>
Semicolons;
 Semicolons separate JavaScript statements.
 Add a semicolon at the end of each executable statement:
var a, b, c; // Declare 3 variables
a = 5; // Assign the value 5 to a
b = 6; // Assign the value 6 to b
c = a + b; // Assign the sum of a and b to c
When separated by semicolons, multiple statements on one line are allowed:
a = 5; b = 6; c = a + b;

JavaScript Syntax
 JavaScript syntax is the set of rules, how JavaScript programs are constructed:
var x, y, z; // Declare Variables
x = 5; y = 6; // Assign Values
z = x + y; // Compute Values

JavaScript Values
 The JavaScript syntax defines two types of values:
 Fixed values
 Variable values
 Fixed values are called Literals.
 Variable values are called Variables.
JavaScript Literals
 The two most important syntax rules for fixed values are:
1. Numbers are written with or without decimals:
10.50
1001
2. Strings are text, written within double or single quotes:
"John Doe"
'John Doe'
JavaScript Variables
 In a programming language, variables are used to store data values.
 JavaScript uses the var keyword to declare variables.
 An equal sign is used to assign values to variables.
 In this example, x is defined as a variable. Then, x is assigned (given) the value 6:
var x;
x = 6;
FUNCTIONS
13.EXPLAIN IN DETAIL ABOUT JAVASCRIPT FUNCTIONS (PART B)
 Functions are a fundamental building block for JavaScript programs and a common
feature in almost all programming languages. We may already be familiar with the
concept of a function under a name such as subroutine or procedure.
 A function is a block of JavaScript code that is defined once but may be executed, or
invoked, any number of times.
 A JavaScript function is a block of code designed to perform a particular task.
 A JavaScript function is executed when "something" invokes it (calls it).
 JavaScript functions are parameterized: a function definition may include a list of
identifiers, known as parameters, that work as local variables for the body of the
function. Function invocations provide values, or arguments, for the function's
parameters.
 Functions often use their argument values to compute a return value that becomes the
value of the function invocation expression. In addition to the arguments, each invocation
has another value, the invocation context, that is the value of the this keyword.
Function Declarations
 Function declarations consist of the function keyword, followed by these components:
1. An identifier that names the function. The name is a required part of function
declarations: it is used as the name of a variable, and the newly defined function object
is assigned to the variable.
2. A pair of parentheses around a comma-separated list of zero or more identifiers. These
identifiers are the parameter names for the function, and they behave like local
variables within the body of the function.
3. A pair of curly braces with zero or more JavaScript statements inside. These
statements are the body of the function: they are executed whenever the function is
invoked.
Example
function myFunction(p1, p2) {
return p1 * p2; // The function returns the product of p1 and p2
}
NAMING CONVENTION
 A JavaScript function is defined with the function keyword, followed by a name,
followed by parentheses ().
 Function names can contain letters, digits, underscores, and dollar signs (same rules as
variables).
 The parentheses may include parameter names separated by commas:
(parameter1, parameter2, ...).
 The code to be executed, by the function, is placed inside curly brackets: {}

function name(parameter1, parameter2, parameter3) {


// code to be executed
}
 Function parameters are listed inside the parentheses () in the function definition.
 Function arguments are the values received by the function when it is invoked.
 Inside the function, the arguments (the parameters) behave as local variables.
 A Function is much the same as a Procedure or a Subroutine, in other programming
languages.
Function Expressions
 Function expressions look a lot like function declarations, but they appear within the
context of a larger expression or statement, and the name is optional.
EG: // This function expression defines a function that squares its argument.
// Note that we assign it to a variable
const square = function(x) { return x*x; };
Arrow Functions

We can define functions using a particularly compact syntax known as “arrow functions.”
This syntax is reminiscent of mathematical notation and uses an => “arrow” to separate the
function parameters from the function body. The function keyword is not used, and, since
arrow functions are expressions instead of statements, there is no need for a function name,
either.

EG: const sum = (x, y) => { return x + y; };


const f = x => { return { value: x }; };// Arrow Function Return A Value

Invoking Functions

The JavaScript code that makes up the body of a function is not executed when the function
is defined, but rather when it is invoked. JavaScript functions can be invoked in five ways:

 As functions
 As methods
 As constructors
 Indirectly through their call() and apply() methods
 Implicitly, via JavaScript language features that do not appear like normal function
invocations
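
The following brief sketch illustrates the first four invocation styles (the object, function, and property names are arbitrary examples):

// 1. As a function
function square(x) { return x * x; }
square(4);                                  // 16

// 2. As a method (the function is a property of an object)
const counter = { value: 0, increment() { this.value++; } };
counter.increment();                        // "this" refers to counter

// 3. As a constructor, with the new keyword
function Point(x, y) { this.x = x; this.y = y; }
const p = new Point(1, 2);

// 4. Indirectly, through call() and apply()
function greet(greeting) { return greeting + ", " + this.name; }
greet.call({ name: "Asha" }, "Hello");      // "Hello, Asha"
greet.apply({ name: "Asha" }, ["Hello"]);   // "Hello, Asha"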
Function Invocation
 The code inside the function will execute when "something" invokes (calls) the
function:
 An invocation expression consists of a function expression that evaluates to a function
object followed by an open parenthesis, a comma-separated list of zero or more
argument expressions, and a close parenthesis.
 When an event occurs (when a user clicks a button)
 When it is invoked (called) from JavaScript code
 Automatically (self invoked)
Function Return
 When JavaScript reaches a return statement, the function will stop executing.
 If the function was invoked from a statement, JavaScript will "return" to execute the
code after the invoking statement. Functions often compute a return value. The return
value is "returned" back to the "caller":
Example
 Calculate the product of two numbers, and return the result:
let x = myFunction(4, 3); // Function is called, return value will end up in x
function myFunction(a, b)
{
return a * b; // Function returns the product of a and b
}
 The result in x will be: 12
EVENTS
14.EXPLAIN IN DETAIL ABOUT JAVASCRIPT EVENTS (PART B)
 HTML events are "things" that happen to HTML elements.
 When JavaScript is used in HTML pages, JavaScript can "react" on these events.
 Client-side JavaScript programs use an asynchronous event-driven programming
model. In this style of programming, the web browser generates an event whenever
something interesting happens to the
document or browser or to some element or object associated with it.

 For example, the web browser generates an event when it finishes loading a document,
when the user moves the mouse over a hyperlink, or when the user strikes a key on the
keyboard.
 In client-side JavaScript, events can occur on any element within an HTML document,
and this fact makes the event model of web browsers significantly more complex than
Node’s event model.
EVENT CATEGORIES
1.Device-dependent input events
 These events are directly tied to a specific input device, such as the mouse or keyboard.
They include event types such as mouse down, mouse move, mouse up, touch start,
touch move, touch end, key down, and key up.
2.Device-independent input events
 These input events are not directly tied to a specific input device.
Example: The “click” event, for example, indicates that a link or button (or other document
element) has been activated. This is often done via a mouse click, but it could also be done
by keyboard or (on touch sensitive devices) with a tap.
3.User interface events
 UI events are higher-level events, often occurring on the HTML form elements that define a user interface for
a web application. They include:
1. Focus event
2. Change event
3. Submit event
4.State-change events
 Some events are not triggered directly by user activity, but by network or browser
activity, and indicate some kind of life-cycle or state-related change.
1. Load event
2. Network state changes (online or offline)
3. Popstate event (back button)
5.API-specific events
 A number of web APIs defined by HTML and related specifications include their own
event types.
 The HTML <video> and <audio> elements define a long list of associated event types
 such as “waiting,” “playing,” “seeking,” “volume change,” and so on, and you can use
them to customize media playback.
REGISTERING EVENT HANDLERS
There are two basic ways to register event handlers. The first, from the early days of
the web, is to set a property on the object or document element that is the event target. The
second (newer and more general) technique is to pass the handler to the addEventListener()
method of the object or element.

SETTING EVENT HANDLER PROPERTIES
 The simplest way to register an event handler is by setting a property of the event
target to the desired event handler function.
 These property names consist of "on" followed by the event name: onclick, onchange, onload, onmouseover,
and so on.
SETTING EVENT HANDLER ATTRIBUTES
 The event handler properties of document elements can also be defined directly in the
HTML file as attributes on the corresponding HTML tag.
 When defining an event handler as an HTML attribute, the attribute value should be a
string of JavaScript code. That code should be the body of the event handler function,
not a complete function declaration.
 HTML event handler code should not be surrounded by curly braces and prefixed with
the function keyword.
For example:
<button onclick="console.log('Thank you');">Please Click</button>
ADDEVENTLISTENER()
 Any object that can be an event target—this includes the Window and Document
objects and all document Elements—defines a method named addEventListener() that you
can use to register an event handler for that target.
It takes three arguments.
1. The first is the event type for which the handler is being registered. The event type
(or name) is a string that does not include the “on” prefix used when setting event
handler properties.
2. The second argument to addEventListener() is the function that should be invoked
when the specified type of event occurs.
3. The third argument is optional
Example:
<button id="mybutton">Click me</button>
<script>
let b = document.querySelector("#mybutton");
b.onclick = function() { console.log("Thanks for clicking me!"); };
b.addEventListener("click", () => { console.log("Thanks again!"); });
</script>
 addEventListener() is paired with a removeEventListener() method that expects the
same two arguments (plus an optional third) but removes an event handler function
from an object rather than adding it.
EVENT HANDLER ARGUMENT

Event handlers are invoked with an Event object as their single argument. The properties of
the Event object provide details about the event:
1. type
2. target
3. currentTarget
4. timeStamp
5. isTrusted
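For example, a minimal sketch (reusing the "mybutton" element from the earlier example) that logs some of these properties inside a handler:
let btn = document.querySelector("#mybutton");
btn.addEventListener("click", (event) => {
console.log(event.type); // "click" - the type of the event
console.log(event.target); // the object on which the event occurred
console.log(event.currentTarget); // the object on which the handler was registered
console.log(event.timeStamp); // time at which the event was generated
console.log(event.isTrusted); // true if the event was dispatched by the browser itself
});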

AJAX
15.WHAT IS AJAX? (PART A)
EXPLAIN IN DETAIL ABOUT AJAX (PART B)
 AJAX is an acronym for Asynchronous JavaScript and XML. AJAX is a new
technique for creating better, faster and interactive web applications with the help of
JavaScript, DOM, XML, HTML, CSS etc.
 AJAX allows you to send and receive data asynchronously without reloading the
entire web page. So it is fast. AJAX allows you to send only important information to
the server not the entire page. So only valuable data from the client side is routed to
the server side. It makes your application interactive and faster.
 Ajax is the most viable Rich Internet Application(RIA) technique so far
Where it is used?
There are too many web applications running on the web that are using AJAX Technology.
Some are: 1. Gmail 2. Facebook 3. Twitter 4. Google Maps 5. YouTube etc.
AJAX is Based on Internet Standards
AJAX is based on internet standards, and uses a combination of:
1. XMLHttpRequest object (to exchange data asynchronously with a server)
2. JavaScript/DOM (to display/interact with the information)
3. CSS (to style the data)
4. XML (often used as the format for transferring data)
AJAX Components
 AJAX is not a technology but a group of inter-related technologies. AJAX
technologies include:
1. HTML/XHTML and CSS
2. DOM
3. XML or JSON(JavaScript Object Notation)
4. XMLHttpRequest Object
5. JavaScript
Understanding XMLHttpRequest
It is the heart of the AJAX technique. An object of XMLHttpRequest is used for
asynchronous communication between client and server. It provides a set of useful methods
and properties that are used to send HTTP requests to and retrieve data from the web
server. It performs the following operations:
1. Sends data from the client in the background
2. Receives the data from the server
3. Updates the webpage without reloading it.
Methods of XMLHttpRequest object
Method - Description
void open(method, URL) - Opens the request specifying GET or POST method and URL.
void open(method, URL, async) - Same as above but specifies asynchronous or not.
void open(method, URL, async, username, password) - Same as above but specifies username and password.
void send() - Sends GET request.
void send(string) - Sends POST request.
setRequestHeader(header, value) - It adds request headers.
Syntax of open() method:
xmlHttp.open("GET", "conn.php", true); which takes three attributes:
1. An HTTP method such as GET, POST, or HEAD
2. The URL of the server resource
3. A Boolean flag that indicates whether the request should be made asynchronously (true) or synchronously (false)
Properties of XMLHttpRequest Object:
Property - Description
readyState - Represents the state of the request. It ranges from 0 to 4.
    0 UNINITIALIZED - After creating the XMLHttpRequest object, before calling the open() method.
    1 CONNECTION ESTABLISHED - open() is called but send() is not called.
    2 REQUEST SENT - send() is called.
    3 PROCESSING - Downloading data; responseText holds the data.
    4 DONE - The operation is completed successfully.
onreadystatechange - It is called whenever the readyState attribute changes. It must not be used with synchronous requests.
responseText - Returns the response as text.
responseXML - Returns the response as XML.


How AJAX Works?

AJAX communicates with the server using the XMLHttpRequest object. Let's understand the flow of AJAX with the following figure:
1. User sends a request from the UI and a JavaScript call goes to the XMLHttpRequest object.
2. HTTP request is sent to the server by the XMLHttpRequest object.
3. Server interacts with the database using JSP, PHP, Servlet, ASP.NET etc.
4. Data is retrieved.
5. Server sends XML data or JSON data to the XMLHttpRequest callback function.
6. HTML and CSS data is displayed on the browser.
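A minimal sketch of this flow in JavaScript (the URL "conn.php" and the element id "result" are placeholders used only for illustration):
var xmlHttp = new XMLHttpRequest();
xmlHttp.open("GET", "conn.php", true); // asynchronous GET request
xmlHttp.onreadystatechange = function () {
if (xmlHttp.readyState == 4 && xmlHttp.status == 200) {
// readyState 4 = DONE; update the page without reloading it
document.getElementById("result").innerHTML = xmlHttp.responseText;
}
};
xmlHttp.send();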

REFERENCE:
1. Professional JavaScript for Web Developers, 4th Edition – Matt Frisbie
2. JavaScript: The Definitive Guide – David Flanagan

***************************UNIT I COMPLETED**********************

POSSIBLE QUESTIONS

UNIT I
PART A
1. Define server
2. Expand css
3. Explain the use of css
4. Define selectors
5. Define css flexbox
6. What is ajax?
PART B

1. Explain about server


2. Explain the types of server
3. Briefly explain about the client side programming
4. Discuss http with neat diagram
5. Discuss in detail about html document structure
6. Explain the different types of html tags
7. Describe css with example
8. How can we handle images in css with example
9. Discuss selectors in detail
10. Briefly explain about css flexbox
11. Explain in detail javascript datatypes
12. Explain in detail javascript variables
13. Explain in detail about javascript functions
14. Explain in detail about javascript events
PART C
1. Develop a webpage using JAVASCRIPT a validation and CSS
2. Describe css selectors with example
3. Explain in detail about ajax
*************

VASAVI VIDYA TRUST GROUP OF INSTITUTIONS, SALEM – 103
DEPARTMENT OF MCA
CLASS: I MCA UNIT: II
SUBJECT: FULL STACK WEB DEVELOPMENT SUBJECTCODE: MC4201

INTRODUCTION TO WEB SERVERS

1.DEFINE WEB SERVER (PART A)


EXPLAIN IN DETAIL ABOUT WEB SERVERS (PART B)
INTRODUCTION TO WEB SERVERS

Web server is a computer where the web content is stored. Basically, a web server is used to host websites,
but there exist other servers as well, such as gaming, storage, FTP and email servers.
A website is a collection of web pages, while a web server is software that responds to requests for web
resources.
A web server can be referred to as either the hardware (the computer) or the software (the computer
application) that helps to deliver content that can be accessed through the Internet.

A web server is what makes it possible to be able to access content like web pages or other data from
anywhere as long as it is connected to the internet. The hardware houses the content, while the software
makes the content accessible through the internet.

The most common use of web servers is to host websites but there are other uses like data storage or for
running enterprise applications. There are also different ways to request content from a web server. The
most common request is the Hypertext Transfer Protocol (HTTP), but there are also other requests like the
Internet Message Access Protocol (IMAP) or the File Transfer Protocol (FTP).

HOW WEB SERVERS WORK

The browser breaks the URL into three parts:


1. The protocol ("http")
2. The server name ("www.howstuffworks.com")
3. The file name ("web-server.htm")

Web Server Working


Web server respond to the client request in either of the following two ways:
1. Sending the file to the client associated with the requested URL.
2. Generating response by invoking a script and communicating with database
Key Points
 When a client sends a request for a web page, the web server searches for the requested page; if the
requested page is found, it sends it to the client with an HTTP response.
 If the requested web page is not found, the web server sends an HTTP response: Error 404 Not Found.
 If the client has requested some other resource, the web server contacts the application server and
data store to construct the HTTP response.

Architecture
Web Server Architecture follows the following two approaches:
1. Concurrent Approach
2. Single-Process-Event-Driven Approach.
1.Concurrent Approach
Concurrent approach allows the web server to handle multiple client requests at the same time. It can be
achieved by following methods:
1. Multi-process
2. Multi-threaded
3. Hybrid method.

Multi-processing
 In this approach, a single parent process initiates several single-threaded child processes and distributes
incoming requests to these child processes. Each child process is responsible for handling a
single request.
 It is the responsibility of the parent process to monitor the load and decide whether processes should be
killed or forked.
Multi-threaded
 Unlike the multi-process approach, it creates a single process with multiple threads, and each thread handles one request.
Hybrid
 It is a combination of the above two approaches. In this approach, multiple processes are created and each
process initiates multiple threads. Each of the threads handles one connection. Using multiple threads
in a single process results in less load on system resources.
TYPES OF SERVER
1. Application server a server dedicated to running certain software applications.
2. Catalog server a central search point for information across a distributed network.
3. Communications server carrier-grade computing platform for communications networks.
4. Compute server, a server intended for intensive (esp. scientific) computations.
5. Database server provides database services to other computer programs or computers.
6. Fax server provides fax services for clients.
7. File server provides remote access to files.
8. Game server a server that video game clients connect to in order to play online together.
9. Home server a server for the home.
10. Mail server handles transport of and access to email.
11. Mobile Server or Server on the Go is an Intel Xeon processor based server class laptop form factor
computer.
12. Name server or DNS.
13. Print server Provides printer services.
14. Proxy server acts as an intermediary for requests from clients seeking resources from other servers.
15. Sound server provides multimedia broadcasting, streaming.
16. Stand-alone server a server on a Windows network that neither belongs to nor governs a Windows
domain
17. Web server a server that HTTP clients connect to in order to send commands and receive responses
along with data contents
JAVASCRIPT IN THE DESKTOP WITH NODE JS

2.EXPLAIN JAVASCRIPT IN NODE JS (PART B)


DEFINE NODE JS (PART A)
 Node.js can be used to build cross-platform desktop apps.
 It’s becoming a popular choice for developers, in particular web developers with little experience in
desktop application development—even Microsoft has built and shipped an IDE (Visual Studio Code)
using Node.js.

What is Node.js?
 Node.js is a server-side platform built on Google Chrome's JavaScript Engine (V8 Engine). Node.js was
developed by Ryan Dahl in 2009 and its latest version is v0.10.36.
 Node.js is a platform built on Chrome's JavaScript runtime for easily building fast and scalable network
applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and
efficient, perfect for data-intensive real-time applications that run across distributed devices.
 Node.js is an open source, cross-platform runtime environment for developing server-side and
networking applications. Node.js applications are written in JavaScript, and can be run within the
Node.js runtime on OS X, Microsoft Windows, and Linux.
Node.js = Runtime Environment + JavaScript Library
 In the Node.js ecosystem, there are two major frameworks for creating desktop apps: NW.js and
Electron. both have large communities around them, and both share similar approaches to building
desktop apps.
 Node.js is an open-source server side runtime environment built on Chrome's V8 JavaScript engine. It
provides an event driven, non-blocking (asynchronous) I/O and cross-platform runtime environment
for building highly scalable server-side application using JavaScript.

 Node.js can be used to build different types of applications such as command line application, web
application, real-time chat application, REST API server etc. However, it is mainly used to build
network programs like web servers, similar to PHP, Java, or ASP.NET.
 Node.js eliminates the waiting, and simply continues with the next request.
 Node.js runs single-threaded, non-blocking, asynchronous programming, which is very memory
efficient.

WHAT CAN NODE.JS DO?

 Node.js can generate dynamic page content


 Node.js can create, open, read, write, delete, and close files on the server
 Node.js can collect form data
 Node.js can add, delete, modify data in your database

WHAT IS A NODE.JS FILE?

 Node.js files contain tasks that will be executed on certain events


 A typical event is someone trying to access a port on the server
 Node.js files must be initiated on the server before having any effect
 Node.js files have extension ".js"

FEATURES OF NODE.JS
Following are some of the important features that make Node.js the first choice of software architects.
 Asynchronous and Event Driven − All APIs of Node.js library are asynchronous, that is, non-
blocking. It essentially means a Node.js based server never waits for an API to return data. The server
moves to the next API after calling it and a notification mechanism of Events of Node.js helps the
server to get a response from the previous API call.
 Very Fast − Being built on Google Chrome's V8 JavaScript Engine, Node.js library is very fast in
code execution.
 Single Threaded but Highly Scalable − Node.js uses a single threaded model with event looping.
Event mechanism helps the server to respond in a non-blocking way and makes the server highly
scalable as opposed to traditional servers which create limited threads to handle requests. Node.js uses
a single threaded program and the same program can provide service to a much larger number of
requests than traditional servers like Apache HTTP Server.
 No Buffering − Node.js applications never buffer any data. These applications simply output the data
in chunks.
 License − Node.js is released under the MIT license.

ADVANTAGES OF NODE.JS

1. Node.js is an open-source framework under MIT license. (MIT license is a free software license
originating at the Massachusetts Institute of Technology (MIT).)
2. Uses JavaScript to build entire server side application.
3. Lightweight framework that includes bare minimum modules. Other modules can be included as per
the need of an application.
4. Asynchronous by default. So it performs faster than other frameworks.
5. Cross-platform framework that runs on Windows, MAC or Linux

3.EXPLAIN NODE JS PROCESS MODEL (PART B)


NODE.JS PROCESS MODEL

1.Traditional Web Server Model

In the traditional web server model, each request is handled by a dedicated thread from the thread pool. If
no thread is available in the thread pool at any point of time then the request waits till the next available
thread. Dedicated thread executes a particular request and does not return to thread pool until it
completes the execution and returns a response.

2.Node.js Process Model

Node.js processes user requests differently when compared to a traditional web server model. Node.js
runs in a single process and the application code runs in a single thread and thereby needs less resources
than other platforms. All the user requests to your web application will be handled by a single thread and
all the I/O work or long running job is performed asynchronously for a particular request. So, this single
thread doesn't have to wait for the request to complete and is free to handle the next request. When
asynchronous I/O work completes then it processes the request further and sends the response.

An event loop constantly watches for events to be raised for an asynchronous job and executes the
callback function when the job completes. Internally, Node.js uses libuv for the event loop, which in turn
uses an internal C++ thread pool to provide asynchronous I/O.

The following figure illustrates asynchronous web server model using Node.js.

Node.js process model increases the performance and scalability with a few caveats. Node.js is not fit for
an application which performs CPU-intensive operations like image processing or other heavy
computation work because it takes time to process a request and thereby blocks the single thread.
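For example, a minimal sketch (the file name data.txt is only an illustration) showing how the single thread continues with the next statement while the I/O work is performed asynchronously:
var fs = require('fs'); // Node.js core file system module
console.log('Before readFile');
// Non-blocking call: the callback runs later, when the event loop sees that the I/O has completed
fs.readFile('data.txt', 'utf8', function (err, data) {
if (err) throw err;
console.log('File contents: ' + data);
});
console.log('After readFile'); // printed before the file contents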
Install Node.js

Node.js development environment can be setup in Windows, Mac, Linux and Solaris. The following
tools/SDK are required for developing a Node.js application on any platform.

1. Node.js
2. Node Package Manager (NPM)
3. IDE (Integrated Development Environment) or TextEditor

NPM (Node Package Manager) is included in Node.js installation since Node version 0.6.0., so there is
no need to install it separately.

COMPONENTS OF NODE.JS
A Node.js application consists of the following three important components:
1. Import required modules: We use the require directive to load Node.js modules.
2. Create server: A server which will listen to client's requests similar to Apache HTTP Server.
3. Read request and return response: The server created in an earlier step will read the HTTP request made
by the client which can be a browser or a console and return the response.
NPM
4.WHAT IS NPM? (PART A)
EXPLAIN ABOUT NPM IN DETAIL(PART B)

What is NPM?

NPM – or "Node Package Manager" – is the default package manager for JavaScript's runtime Node.js.
NPM consists of two main parts:
 a CLI (command-line interface) tool for publishing and downloading packages, and
 an online repository that hosts JavaScript packages
 Official website: https://www.npmjs.com
 NPM is included with Node.js installation. After you install Node.js, verify NPM installation by
writing the following command in terminal or command prompt.

 C:\> npm -v

2.11.3

 If you have an older version of NPM then you can update it to the latest version using the following
command.

 C:\> npm install npm -g

 To access NPM help, write npm help in the command prompt or terminal window.

 C:\> npm help

 NPM performs the operation in two modes: global and local. In the global mode, NPM performs
operations which affect all the Node.js applications on the computer whereas in the local mode,
NPM performs operations for the particular local directory which affects an application in that
directory only.

 Install Package Locally


 Use the following command to install any third party module in your local Node.js project folder.

 C:\>npm install <package name>

 For example, the following command will install ExpressJS into MyNodeProj folder.

 C:\MyNodeProj> npm install express

 All the modules installed using NPM are installed under node_modules folder. The above
command will create ExpressJS folder under node_modules folder in the root folder of your project
and install Express.js there.
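Once a package is installed locally, it can be loaded in your code with require(). A minimal sketch, assuming Express has already been installed in the project folder as shown above:
// index.js - load the locally installed express module from the node_modules folder
var express = require('express');
console.log(typeof express); // prints 'function', confirming that the module loaded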

SERVING FILES WITH THE HTTP MODULE

5.HOW TO SERVE A FILE USINH HTTP MODULE (PART B)

Create Node.js Web Server

Node.js makes it easy to create a simple web server that processes incoming requests asynchronously.
The following example is a simple Node.js web server contained in server.js file
var http = require('http'); // 1 - Import Node.js core module
var server = http.createServer(function (req, res) { // 2 - creating server
//handle incomming requests here..
});
server.listen(5000); //3 - listen for any incoming requests
console.log('Node.js web server at port 5000 is running..')
Run the above web server by writing node server.js command in command prompt or terminal
window and it will display message as shown below.
C:\> node server.js
Node.js web server at port 5000 is running..

Handle HTTP Request

The http.createServer() method includes request and response parameters which is supplied by Node.js.
The request object can be used to get information about the current HTTP request e.g., url, request
header, and data. The response object can be used to send a response for a current HTTP request.
The following example demonstrates handling HTTP request and response in Node.js.

var http = require('http'); // Import Node.js core module


var server = http.createServer(function (req, res) { //create web server
if (req.url == '/') { //check the URL of the current request
// set response header
res.writeHead(200, { 'Content-Type': 'text/html' });
// set response content
res.write('<html><body><p>This is home Page.</p></body></html>');
res.end();
}
else if (req.url == "/student") {
res.writeHead(200, { 'Content-Type': 'text/html' });
res.write('<html><body><p>This is student Page.</p></body></html>');
res.end();
}
else if (req.url == "/admin") {
res.writeHead(200, { 'Content-Type': 'text/html' });
res.write('<html><body><p>This is admin Page.</p></body></html>');
res.end();
}
else
res.end('Invalid Request!');
});
server.listen(5000); //6 - listen for any incoming requests
console.log('Node.js web server at port 5000 is running..')
In the above example, req.url is used to check the url of the current request and based on that it sends
the response. To send a response, first it sets the response header using writeHead() method and then
writes a string as a response body using write() method. Finally, Node.js web server sends the response
using end() method. Now, run the above web server as shown below.
C:\> node server.js
To test it, you can use the command-line program curl, which most Mac and Linux machines have pre-
installed.

You should see the following response.

For Windows users, point your browser to http://localhost:5000 and see the following result.

INTRODUCTION TO EXPRESS FRAME WORK


6.WHAT IS EXPRESS? (PART A)
DISCUSS IN DETAIL ABOUT EXPRESS FRAMEWORK. (PART B)
What is Express?
Express is a small framework that sits on top of Node.js's web server functionality to simplify its APIs
and add helpful new features. It makes it easier to organize your application's functionality with middleware
and routing; it adds helpful utilities to Node.js's HTTP objects; and it facilitates the rendering of dynamic
HTML views.
Express is a part of MEAN stack, a full stack JavaScript solution used in building fast, robust, and
maintainable production web applications.
 MongoDB(Database)
 ExpressJS(Web Framework)
 AngularJS(Front-end Framework)
 NodeJS(Application Server)
Installing Express on Windows (WINDOWS 10)
Assuming that you have installed node.js on your system, the following steps should be followed to install
express on your Windows:
STEP-1: Creating a directory for our project and make that our working directory.
$ mkdir gfg
$ cd gfg
STEP-2: Using npm init command to create a package.json file for our project.
$ npm init
This command describes all the dependencies of our project. The file will be updated when adding further
dependencies during the development process, for example when you set up your build system.

Keep pressing enter and answer "yes/no" accordingly at the prompts.
STEP-3: Installing Express
Now in your gfg (name of your folder) folder type the following command line:
$ npm install express --save
NOTE- Here “WARN” indicates the fields that must be entered in STEP-2.
STEP-4: Verify that Express.js was installed on your Windows:
To check that express.js was installed on your system or not, you can run the following command line on
cmd:
C:\Users\Admin\gfg\node_modules> npm --version express
The version of express.js will be displayed on successful installation.
Features of express
o It can be used to design single-page, multi-page and hybrid web applications.
o It allows to setup middlewares to respond to HTTP Requests.
o It defines a routing table which is used to perform different actions based on HTTP method and
URL.
o It allows to dynamically render HTML Pages based on passing arguments to templates.

Middleware functions are functions that have access to the request object (req), the response
object (res), and the next middleware function in the application’s request-response cycle. The next
middleware function is commonly denoted by a variable named next.
Middleware functions can perform the following tasks:

 Execute any code.


 Make changes to the request and the response objects.
 End the request-response cycle.
 Call the next middleware function in the stack.

If the current middleware function does not end the request-response cycle, it must call next() to
pass control to the next middleware function. Otherwise, the request will be left hanging.
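A minimal sketch of an application-level middleware function that passes control onward with next() (the logging behaviour is only an illustration):
var express = require('express');
var app = express();
// Application-level middleware: runs for every incoming request
app.use(function (req, res, next) {
console.log('Request received at:', Date.now());
next(); // without this call the request would be left hanging
});
app.get('/', function (req, res) {
res.send('Hello from the route handler');
});
app.listen(3000);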
An Express application can use the following types of middleware:

1. Application-level middleware
2. Router-level middleware
3. Error-handling middleware
4. Built-in middleware
5. Third-party middleware

Why use Express

o Ultra fast I/O


o Asynchronous and single threaded
o MVC like structure
o Robust API makes routing easy
Example
var express = require('express');
var app = express();
app.get('/', function (req, res) {
res.send('Welcome to JavaTpoint!');
});

var server = app.listen(8000, function () {
var host = server.address().address;
var port = server.address().port;
console.log('Example app listening at http://%s:%s', host, port);
});
SERVER SIDE RENDERING WITH TEMPLATE ENGINE

7.WHAT IS SERVER-SIDE RENDERING? (PART A)

EXPLAIN SERVER SIDE RENDERING WITH TEMPLATE ENGINE (PART B)

What is Server-Side rendering?



 Server-side rendering means using a server to generate HTML from JavaScript modules in
response to a URL request. That’s in contrast to client-side rendering, which uses the browser to
create HTML using the DOM.
 Server-side rendering with JavaScript works similarly to other server-side languages such as PHP or
.NET, but with Node.js as the runtime environment.
 When the server receives a request, it parses the JavaScript modules and data required to generate a
response, and returns a rendered HTML page to the browser.
 Server-side rendering (SSR), is the ability of an application to contribute by displaying the
web-page on the server instead of rendering it in the browser. Server-side sends a fully
rendered page to the client; the client’s JavaScript bundle takes over and allows the SPA
framework to operate. There is also client-side rendering which slows down the procedure of
viewing and interacting with the web page.
Some server-side rendering advantages include:
 A server-side rendered application enables pages to load faster, improving the user experience.
 When rendering server-side, search engines can easily index and crawl content because the content
can be rendered before the page is loaded, which is ideal for SEO(search engine optimization).
 Web pages are correctly indexed because web browsers prioritize web pages with faster load
times.
 Rendering server-side helps efficiently load web pages for users with slow internet connection or
outdated devices.

Why do we need it?
 With modern JavaScript frameworks / libraries that focus on creating interactive websites or Single Page
Applications, the way that pages are displayed to a visitor has changed a lot.
 There's one more important thing you should consider when going with a client-side rendered
app: search engine and social network presence.
Solution
A — Consider having your key pages as static
When you're creating a platform that requires users to log in and does not provide content to users who are
not signed in, you might decide to create your public-facing pages (like the index, "about us", "contact
us" etc.) as static HTML, and not have them rendered by JS.
B — Generate parts of your application as HTML pages when running the build process
Libraries like react-snapshot can be added to your project, used to generate HTML copies of your
application pages and save them to a specified folder.
C — Create a server-side rendered application in JS
One of the big selling points of the current generation of JS applications is the fact that they can be run on
both the client (browser) and the server. This allows us to generate HTML for pages that are more dynamic,
whose content is not known at build time.
Template Engine
 Template engine helps us to create an HTML template with minimal code. Also, it can inject data
into HTML template at client side and produce the final HTML.
 Templates also enable fast rendering of the server-side data that needs to be passed to the
application. For example, you might want to have components such as body, navigation, footer,
dashboard, etc.
 The following figure illustrates how template engine works in Node.js.

As per the above figure, client-side browser loads HTML template, JSON/XML data and template engine
library from the server. Template engine produces the final HTML using template and data in client's
browser. However, some HTML templates process data and generate final HTML page at server side also.
There are many template engines available for Node.js. Each template engine uses a different language to
define HTML template and inject data into it.
The following is a list of important (but not limited) template engines for Node.js
1. Jade
2. Vash
3. EJS
4. Mustache
5. Dust.js
6. Nunjucks
7. Handlebars
8. atpl
9. haml
Advantages of Template engine in Node.js
1. Improves developer's productivity.
2. Improves readability and maintainability.
3. Faster performance.
4. Maximizes client side processing.
5. Single template for multiple pages.
6. Templates can be accessed from CDN (Content Delivery Network).
Using template engines with Express
Template engine makes you able to use static template files in your application. To render template files
you have to set the following application setting properties:

Views: It specifies a directory where the template files are located.
For example: app.set('views', './views').
view engine: It specifies the template engine that you use. For example, to use the Pug template engine:
app.set('view engine', 'pug').

Pug Template Engine


Pug is a template engine for Node.js. Pug uses whitespaces and indentation as the part of the syntax. Its
syntax is easy to learn.
Install pug
Execute the following command to install pug template engine:
npm install pug --save

The pug template engine takes the input in a simple way and produces the output in HTML. See how it
renders HTML:

Simple input:
doctype html
html
  head
    title A simple pug example
  body
    h1 This page is produced by pug template engine
    p some paragraph here..

Output produced by pug template:


<!DOCTYPE html>
<html>
<head>
<title>A simple pug example</title>
</head>
<body>
<h1>This page is produced by pug template engine</h1>
<p>some paragraph here..</p>
</body>
</html>

Express.js can be used with any template engine. Let's take an example to deploy how pug template creates
HTML page dynamically.
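A minimal sketch of such an example (the file names app.js and views/index.pug are assumptions used only for illustration):
// app.js
var express = require('express');
var app = express();
app.set('views', './views'); // directory where the template files are located
app.set('view engine', 'pug'); // use the pug template engine
app.get('/', function (req, res) {
// renders views/index.pug and injects the data into the template
res.render('index', { message: 'This page is produced by pug template engine' });
});
app.listen(3000);
The corresponding views/index.pug template (indentation is part of the pug syntax):
doctype html
html
  head
    title A simple pug example
  body
    h1= message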
STATIC FILES
9.HOW TO SERVE STATIC FILES USING NODE JS (PART B)
 Static files are files that clients download as they are from the server.
 Static files are files that don’t change when your application is running.
 These files do a lot to improve your application, but they aren't dynamically generated by your
web server like a usual HTML response.
 In a typical web application, your most common static files will be the following types:
1. Cascading Style Sheets, CSS
2. JavaScript
3. Images
How static content works
 Fetching a static asset from a server is one of the basic functions of the web. For example, typing the
following URL in a web browser (http://www.example.com/index.html) fetches the
file index.html from the server hosting example.com.
 There are three steps to requesting static content from a server:
 A user sends a request for a file to the web server.
 The web server retrieves the file from disk.
 The web server sends the file to the user.
Benefits of static content
 Static content does not change. Once a static file is uploaded to a server, it does not change until
you replace it with another file. In the meantime, users who return to your website will see exactly
the same content.
 Static content is easier to cache. Although there are tricks to caching dynamic content, it often can’t
be cached effectively because it’s hard to predict when it’s needed. Since static content is the same
for all users, it can be cached very easily.
 Static content is less power-hungry. Dynamic websites contain layers of application logic that
run before the user receives a response. Static websites only need to pull files from the disk.
Additionally, techniques such as compression only need to be applied once to static content,
making it very resource-efficient.
Create a new directory, public. Express, by default does not allow you to serve static files. You need to
enable it using the following built-in middleware.
app.use(express.static('public'));
Serving static files in Express
To serve static files such as images, CSS files, and
JavaScript files, use the express.static built-in
middleware function in Express.
The function signature is:
express.static(root, [options])
The root argument specifies the root directory
from which to serve static assets. For more
information on the options argument,
see express.static.
For example, use the following code to serve
images, CSS files, and JavaScript files in a
directory named public:
app.use(express.static('public'))

Now, you can load the files that are in the public directory:
http://localhost:3000/images/kitten.jpg
http://localhost:3000/css/style.css
http://localhost:3000/js/app.js
http://localhost:3000/images/bg.png
http://localhost:3000/hello.html
Express looks up the files relative to the static directory, so the name of the static directory is not part of the
URL.
To use multiple static assets directories, call the express.static middleware function multiple times:
app.use(express.static('public'))
app.use(express.static('files'))
Express looks up the files in the order in which you set the static directories with
the express.static middleware function.
To create a virtual path prefix (where the path does not actually exist in the file system) for files that are
served by the express.static function, specify a mount path for the static directory, as shown below:
app.use('/static', express.static('public'))
Now, you can load the files that are in the public directory from the /static path prefix.
http://localhost:3000/static/images/kitten.jpg
http://localhost:3000/static/css/style.css
http://localhost:3000/static/js/app.js
http://localhost:3000/static/images/bg.png
http://localhost:3000/static/hello.html
However, the path that you provide to the express.static function is relative to the directory from where you
launch your node process. If you run the express app from another directory, it’s safer to use the absolute
path of the directory that you want to serve:
const path = require('path')
app.use('/static', express.static(path.join(__dirname, 'public')))
Example
Multiple Static Directories
We can also set multiple static assets directories using the following program −
var express = require('express');
var app = express();
app.use(express.static('public'));
app.use(express.static('images'));
app.listen(3000);
Virtual Path Prefix
We can also provide a path prefix for serving static files. For example, if you want to provide a path prefix
like '/static', you need to include the following code in your index.js file −
var express = require('express');
var app = express();
app.use('/static', express.static('public'));
app.listen(3000);
Now whenever you need to include a file, for example, a script file called main.js residing in your public
directory, use the following script tag −
<script src = "/static/main.js" />

This technique can come in handy when providing multiple directories as static files. These prefixes can
help distinguish between multiple directories.
Absolute Path to Static Files Directory
// Set up Express
var express = require('express');
var app = express();
// Serve files from the absolute path of the directory
app.use(express.static(__dirname + '/public'));
// Start Express server
app.listen(3030);
Absolute Path to Directory & Virtual Path Prefix
// Set up Express
var express = require('express');
var app = express();
/* Serve from the absolute path of the directory that you want to serve with a
virtual path prefix */
app.use('/static', express.static(__dirname + '/public'));
// Start Express server
app.listen(3030);
ASYNC/AWAIT
10.DEFINE ASYNC (PART A)
DEFINE AWAIT (PART A)
 async programming is critical to learn if you want to use JavaScript and Node.js to build web
applications and servers – because JS code is asynchronous by default.
The Node.js Event Loop
 Node.js is single-threaded. However, to be exact, only the event loop in Node.js, which interacts with
a pool of background C++ worker threads, is single-threaded.
 There are four important components to the Node.js processing model:
1. Event Queue: Tasks that are declared in a program, or returned from the processing thread pool
via callbacks. (The equivalent of this in our Santa's workshop is the pile of letters for Santa.)
2. Event Loop: The main Node.js thread that facilitates event queues and worker thread pools to carry
out operations – both async and synchronous.
3. Background thread pool: These threads do the actual processing of tasks, which
might be I/O blocking.

console.log("Hello");
https.get("https://httpstat.us/200", (res) => {
console.log(`API returned status: ${res.statusCode}`);
});
console.log("from the other side");
If we execute the above piece of code, we would get this in our standard output:
Output
Hello
from the other side
API returned status: 200
 Before Node version 7.6, callbacks were the only official way provided by Node to run one
function after another. As the Node architecture is single-threaded and asynchronous, the community
relied on callback functions, which fire (run) after the function to which they were passed has
completed.
Example of a Callback:
app.get('/', function(){
function1(arg1, function(){
...
})
});
The problem with this kind of code is that such situations can cause a lot of trouble and the code
can get messy when there are several nested functions. This situation is commonly known as
callback hell.
So, to find a way out, the idea of Promises and function chaining was introduced.
Example: Before async/await
function fun1(req, res){
return request.get('http://localhost:3000')
.catch((err) =>{
console.log('found error');
}).then((res) =>{
console.log('get request returned.');
});
}
Explanation:
The above code demos a function implemented with function chaining instead of callbacks. It can be
observed that the code is now more easy to understand and readable. The code basically says that GET
localhost:3000, catch the error if there is any; if there is no error then implement the following statement:
console.log(‘get request returned.’);

With Node v8, the async/await feature was officially rolled out by the Node to deal with Promises and
function chaining. The functions need not to be chained one after another, simply await the function that
returns the Promise. But the function async needs to be declared before awaiting a function returning a
Promise. The code now looks like below.
Example: After async/await
async function fun1(req, res){
let response = await request.get('http://localhost:3000');
if (response.err) { console.log('error'); }
else { console.log('fetched response'); }
}
Explanation:

 The code above basically asks the javascript engine running the code to wait for
the request.get() function to complete before moving on to the next line to execute it.
 The request.get() function returns a Promise which the user will await. Before async/await, to make
sure that the functions run in the desired sequence, that is, one after the other, you had to chain
them one after the other or register callbacks.

 Code writing and understanding becomes easy with async/await as can be observed from both the
examples.
Async: It simply allows us to write promise-based code as if it were synchronous, without blocking the
execution thread. It operates asynchronously via the event loop. An async function always returns a value.
It makes sure that a promise is returned, and if one is not returned, JavaScript
automatically wraps the value in a promise which is resolved with that value.
Example-1:

const getData = async() => {


var data = "Hello World";
return data;
}
getData().then(data => console.log(data));

Await: The await keyword is used to wait for a promise. It can be used within an async function only. It
makes the code wait until the promise returns a result. It only makes the async function block wait.
Example-2:

const getData = async() => {


var y = await "Hello World";
console.log(y);
}

console.log(1);
getData();
console.log(2);
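If we run this sketch, the output would be 1, then 2, and then Hello World, because the statement after await only runs once the current synchronous code has finished executing.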

FETCHING JSON FROM EXPRESS


11.HOW WE FETCH JSON FROM EXPRESS (PART B)
 JavaScript Object Notation, referred to as JSON in short, is one of the most popular formats for data
storage and data interchange over the internet. The simplicity of the JSON syntax makes it very easy for
humans and machines to read and write.
 Despite its name, the use of the JSON data format is not limited to JavaScript. Most programming
languages implement data structures that you can easily convert to JSON and vice versa.
 JavaScript, and therefore the Node.js runtime environment, is no exception. More often than not, this
JSON data needs to be read from or written to a file for persistence. The Node runtime environment has
the built-in fs module specifically for working with files.
 The fs module is built in, you don’t need to install it. It provides functions that you can use to read and
write data in JSON format, and much more.
JSON or JavaScript Object Notation is a lightweight, text-based data interchange format. Like XML, it is
one of the ways of exchanging information between applications. This format of data is widely used by
web applications/APIs to communicate with each other.
Reading a JSON file:

 Method 1: Using require method: The simplest method to read a JSON file is to require it in a
node.js file using require() method.
Syntax:
const data = require('path/to/file/filename');
Example: Create a users.json file in the same directory where index.js file present. Add following
data to the json file.
users.json file:

[ {
"name": "John",
"age": 21,
"language": ["JavaScript", "PHP", "Python"]
},
{
"name": "Smith",
"age": 25,
"language": ["PHP", "Go", "JavaScript"]
}]

Now, add the following code to your index.js file.


index.js file:

// Requiring users file
const users = require("./users");

console.log(users);

Now, run the file using the command:

node index.js

Output:

 Method 2: Using the fs module: We can also use node.js fs module to read a file. The fs module
returns a file content in string format so we need to convert it into JSON format by
using JSON.parse() in-built method.
Add the following code into your index.js file:
index.js file:

const fs = require("fs");

// Read users.json file


fs.readFile("users.json", function(err, data) {

// Check for errors


if (err) throw err;

// Converting to JSON
const users = JSON.parse(data);

console.log(users); // Print users


});

 Now run the file again and you’ll see an output like this:
Output:

Writing to a JSON file: We can write data into a JSON file by using the node.js fs module. We can
use writeFile method to write data into a file.
Syntax:
fs.writeFile("filename", data, callback);
Example: We will add a new user to the existing JSON file, we have created in the previous example.
This task will be completed in three steps:
 Read the file using one of the above methods.
 Add the data using .push() method.
 Write the new data to the file using JSON.stringify() method to convert data into string.

const fs = require("fs");

// STEP 1: Reading JSON file


const users = require("./users");

// Defining new user


let user = {
name: "New User",
age: 30,
language: ["PHP", "Go", "JavaScript"]
}; // STEP 2: Adding new data to users object
users.push(user);

// STEP 3: Writing to a file


fs.writeFile("users.json", JSON.stringify(users), err => {

// Checking for errors


if (err) throw err;

console.log("Done writing"); // Success


});

Run the file again and you will see a message into the console:

Now check your users.json file; it will look something like below:

REFERENCE:

1. Learning Node.js – Marc Wandschneider


2. www.tutorialspoint.com
3. Node.js in Practice - Alex Young, Marc Harter
***************************UNIT II COMPLETED**************************
POSSIBLE QUESTIONS
UNIT II
PART A
1. Define web server
2. Define node JS
3. What is NPM?
4. What is express?
5. What is server-side rendering?
6. What is a template engine?
7. Define Async
8. Define Await

PART B
1. Explain in detail about web servers
2. Explain Javascript in node JS
3. Explain node JS process model
4. Explain about NPM in detail
5. How to serve a file using http module
6. Discuss in detail about express framework.
7. Explain server side rendering with template engine
8. How to serve static files using node JS
9. How we fetch JSON from express

PART C
1. Discuss about how to create a node JS server that serves the following
a. Static files
b. JSON file
2. Discuss about how to create a server USING EXPRESS
************************

VASAVI VIDYA TRUST GROUP OF INSTITUTIONS, SALEM – 103
DEPARTMENT OF MCA
CLASS: I MCA UNIT: III
SUBJECT: FULL STACK WEB DEVELOPMENT SUBJECT CODE: MC4201

INTRODUCTION TO NOSQL DATABASES


1.WHAT IS NOSQL? (PART A)
What is NoSQL?
 NoSQL Database is a non-relational Data Management System that does not require a
fixed schema. It avoids joins, and is easy to scale. The major purpose of using a NoSQL
database is for distributed data stores with humongous data storage needs. NoSQL is used
for Big data and real-time web apps. For example, companies like Twitter, Facebook and
Google collect terabytes of user data every single day.
 NoSQL database stands for “Not Only SQL” or “Not SQL.” Though a better term would
be “NoREL”, NoSQL caught on. Carlo Strozzi introduced the NoSQL concept in 1998.
 Traditional RDBMS uses SQL syntax to store and retrieve data for further insights.
Instead, a NoSQL database system encompasses a wide range of database technologies
that can store structured, semi-structured, unstructured and polymorphic data.

Why NoSQL?
 The concept of NoSQL databases became popular with Internet giants like Google,
Facebook, Amazon, etc. who deal with huge volumes of data. The system response time
becomes slow when you use RDBMS for massive volumes of data.
 To resolve this problem, we could “scale up” our systems by upgrading our existing
hardware. This process is expensive.
 The alternative for this issue is to distribute database load on multiple hosts whenever
the load increases. This method is known as “scaling out.”

NoSQL database is non-relational, so it scales out better than relational databases as they are
designed with web applications in mind.
2.EXPLAIN HISTORY OF NOSQL DATABASE (PART B)
EXPLAIN FEATURES OF NOSQL DATABASE (PART B)
Brief History of NoSQL Databases
 1998- Carlo Strozzi use the term NoSQL for his lightweight, open-source relational
database
 2000- Graph database Neo4j is launched
 2004- Google BigTable is launched
 2005- CouchDB is launched
 2007- The research paper on Amazon Dynamo is released
 2008- Facebook open sources the Cassandra project
 2009- The term NoSQL was reintroduced
Features of NoSQL
Non-relational
 NoSQL databases never follow the relational model
 Never provide tables with flat fixed-column records
 Work with self-contained aggregates or BLOBs
 Doesn’t require object-relational mapping and data normalization
 No complex features like query languages, query planners, referential integrity joins, or
ACID
Schema-free
 NoSQL databases are either schema-free or have relaxed schemas
 Do not require any sort of definition of the schema of the data
 Offers heterogeneous structures of data in the same domain

NoSQL is Schema-Free
Simple API
 Offers easy to use interfaces for storage and querying data provided
 APIs allow low-level data manipulation & selection methods
 Text-based protocols mostly used with HTTP REST with JSON
 Mostly used no standard based NoSQL query language
 Web-enabled databases running as internet-facing services
Distributed

 Multiple NoSQL databases can be executed in a distributed fashion


 Offers auto-scaling and fail-over capabilities
 Often ACID concept can be sacrificed for scalability and throughput
 Mostly no synchronous replication between distributed nodes; instead, asynchronous multi-
master replication, peer-to-peer and HDFS replication are used
 Only providing eventual consistency
 Shared Nothing Architecture. This enables less coordination and higher distribution.

NoSQL is Shared Nothing.

3.EXPLAIN DIFFERENT TYPES OF NOSQL DATABASE (PART B)


Types of NoSQL Databases
NoSQL Databases are mainly categorized into four types: Key-value pair, Column-oriented,
Graph-based and Document-oriented. Every category has its unique attributes and
limitations. None of the above-specified database is better to solve all the problems. Users
should select the database based on their product needs.

 Key-value Pair Based


 Column-oriented
 Graphs based
 Document-oriented

Key Value Pair Based
 Data is stored in key/value pairs. It is designed in such a way to
handle lots of data and heavy load.
 Key-value pair storage databases store data as a hash table where each
key is unique, and the value can be a JSON, BLOB(Binary Large
Objects), string, etc.
 It is one of the most basic NoSQL database examples. This kind of
NoSQL database is used as a collection, dictionaries, associative arrays, etc. Key value
stores help the developer to store schema-less data. They work best for shopping cart
contents.
 Redis, Dynamo, Riak are some NoSQL examples of key-value store DataBases. They
are all based on Amazon’s Dynamo paper.
Column-based
 Column-oriented databases work on columns and are based on
BigTable paper by Google. Every column is treated separately.
Values of single column databases are stored contiguously.
 They deliver high performance on aggregation queries like
SUM, COUNT, AVG, MIN etc. as the data is readily available
in a column.
 Column-based NoSQL databases are widely used to manage
data warehouses, business intelligence, CRM, library card catalogs, etc.
 HBase, Cassandra and Hypertable are NoSQL examples of column-based
databases.
Document-Oriented:
Document-Oriented NoSQL DB stores and retrieves data as a key value pair but the value
part is stored as a document. The document is stored in JSON or XML formats. The value is
understood by the DB and can be queried.

Relational Vs. Document


 In this diagram, on the left you can see we have rows and columns, and on the right, we
have a document database which has a similar structure to JSON. For the relational
database, you have to know what columns you have and so on. However, for a
document database, you have a data store like a JSON object. You do not need to define the
structure in advance, which makes it flexible.

 The document type is mostly used for CMS systems, blogging platforms, real-time
analytics & e-commerce applications. It should not be used for complex transactions which
require multiple operations or for queries against varying aggregate structures.
 Amazon SimpleDB, CouchDB, MongoDB, Riak and Lotus Notes are popular
document-oriented DBMS systems.
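For example, a single customer record that would be spread across several tables and columns in a relational database can be stored as one self-describing JSON document (a purely hypothetical illustration):
{
"_id": "1001",
"name": "John",
"age": 21,
"address": { "city": "Salem", "country": "India" },
"orders": [
{ "item": "Laptop", "price": 55000 },
{ "item": "Mouse", "price": 500 }
]
}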
Graph-Based
 A graph type database stores entities as well the relations amongst those entities. The
entity is stored as a node with the relationship as edges. An edge gives a relationship
between nodes. Every node and edge has a unique identifier.

 Compared to a relational database where tables are loosely connected, a graph database
is multi-relational in nature. Traversing relationships is fast as they are already captured
into the DB, and there is no need to calculate them.
 Graph-based databases are mostly used for social networks, logistics and spatial data.
 Neo4J, Infinite Graph, OrientDB, FlockDB are some popular graph-based databases.

Query Mechanism tools for NoSQL


 The most common data retrieval mechanism is the REST-based retrieval of a value based
on its key/ID with a GET resource. Document store databases offer more difficult queries
as they understand the value in a key-value pair. For example, CouchDB allows defining
views with MapReduce.
 REST stands for Representational State Transfer and API stands for Application Program
Interface. REST is a software architectural style that defines the set of rules to be used for
creating web services. Web services which follow the REST architectural style are known
as RESTful web services. It allows requesting systems to access and manipulate web
resources by using a uniform and predefined set of rules. Interaction in REST based
systems happen through Internet’s Hypertext Transfer Protocol (HTTP).
 A Restful system consists of a:
1. Client who requests for the resources.
2. Server who has the resources.
Constraints
I. Uniform Interface
II. Stateless
III. Cacheable
IV. Client-Server
V. Layered System
VI. Code on Demand

4.WHAT IS THE CAP THEOREM? (PART A)


What is the CAP Theorem?
 The CAP theorem, originally introduced as the CAP principle, can be used to explain
some of the competing requirements in a distributed system with replication. It is a tool
used to make system designers aware of the trade-offs while designing networked
shared-data systems.
 CAP theorem is also called Brewer's theorem. It states that it is impossible for a
distributed data store to offer more than two out of the following three guarantees:
 Consistency
 Availability
 Partition Tolerance
Consistency:
 Consistency means that the nodes will have the same copies of a replicated data item
visible for various transactions.
 The data should remain consistent even after the execution of an operation.
 This means once data is written, any future read request should contain that data. For
example, after updating the order status, all the clients should be able to see the same
data.

Availability:
 The database should always be available and responsive. It should not have any
downtime.
 Availability means that each read or write request for a data item will either be
processed successfully or will receive a message that the operation cannot be
completed.
Partition Tolerance:
 Partition Tolerance means that the system should continue to function even if the
communication among the servers is not stable. For example, the servers can be
partitioned into multiple groups which may not communicate with each other. Here, if
part of the database is unavailable, other parts are always unaffected.
 The use of the word consistency in CAP and its use in ACID do not refer to the same
identical concept.
 In CAP, the term consistency refers to the consistency of the values in different copies
of the same data item in a replicated distributed system. In ACID, it refers to the fact
that a transaction will not violate the integrity constraints specified on the database
schema.

CA (Consistency and Availability)-
 The system guarantees consistency and availability, but it cannot tolerate network
partitions, so it is practical only when the data is not distributed.
 Example databases: traditional single-node relational databases such as MySQL and PostgreSQL.
AP (Availability and Partition Tolerance)-
 The system prioritizes availability over consistency and can respond with possibly stale
data.
 The system can be distributed across multiple nodes and is designed to operate reliably
even in the face of network partitions.
 Example databases: Cassandra, CouchDB, Riak, Voldemort, Amazon DynamoDB.
CP (Consistency and Partition Tolerance)-
 The system prioritizes consistency over availability and responds with the latest updated
data.
 The system can be distributed across multiple nodes and is designed to operate reliably
even in the face of network partitions.
 Example databases: Apache HBase, MongoDB, Redis, Google Cloud Spanner.
Eventual Consistency
 The term “eventual consistency” means keeping copies of data on multiple machines to
get high availability and scalability. Thus, changes made to any data item on one
machine have to be propagated to the other replicas.
 Data replication may not be instantaneous, as some copies will be updated immediately
while others are updated in due course of time. These copies may be mutually inconsistent
for a while, but in due course of time they become consistent. Hence the name eventual
consistency.
DEFINE BASE (PART A)
BASE: Basically Available, Soft state, Eventual consistency

 Basically Available means the DB is available all the time as per the CAP theorem
 Soft state means that even without an input, the system state may change over time
 Eventual consistency means that the system will become consistent over time

5.EXPLAIN THE ADVANTAGES AND DISADVANTAGES OF NOSQL (PART B)


Advantages of NoSQL
 Can be used as Primary or Analytic Data Source
 Big Data Capability
 No Single Point of Failure
 Easy Replication
 No Need for Separate Caching Layer
 It provides fast performance and horizontal scalability.
 Can handle structured, semi-structured, and unstructured data with equal effect
 Object-oriented programming which is easy to use and flexible
 NoSQL databases don’t need a dedicated high-performance server
 Support Key Developer Languages and Platforms
 Simpler to implement than RDBMS
 It can serve as the primary data source for online applications.
 Handles big data which manages data velocity, variety, volume, and complexity
 Excels at distributed database and multi-data center operations
 Eliminates the need for a specific caching layer to store data
 Offers a flexible schema design which can easily be altered without downtime or
service disruption
Disadvantages of NoSQL
 No standardization rules
 Limited query capabilities
 RDBMS databases and tools are comparatively mature
 It does not offer traditional database capabilities, like consistency when multiple
transactions are performed simultaneously.
 When the volume of data increases, it becomes difficult to maintain unique keys across
the data set
 Doesn’t work as well with relational data
 The learning curve is steep for new developers
 Many options are open source and not yet widely adopted by enterprises.

MONGODB SYSTEM OVERVIEW
6.DEFINE MONGODB (PART A)
EXPLAIN ABOUT MONGODB SYSTEM (PART B)
 MongoDB is an open source NoSQL database management program. NoSQL (Not only
SQL) is used as an alternative to traditional relational databases.
 Like any other database management language, MongoDB is based on a NoSQL
database that is used for storing data in a key-value pair.
 Its working is based on the concept of document and collection.
 Collections, the equivalent of SQL tables, contain document sets. MongoDB offers
support for many programming languages, such as C, C++, C#, Go, Java, Python, Ruby
and Swift.
 It is also an open-source, a document-oriented, cross-platform database system that is
written using C++.
 It also provides high availability, high performance, along with automatic scaling.
 This open-source product was developed by the company - 10gen in October 2007, and
the company also maintains it.
 MongoDB exists under a free public license as a database
management tool, and is also available under a commercial license from the manufacturer.
 MongoDB was also intended to function with commodity servers. Companies of
different sizes all over the world across all industries are using MongoDB as their
database.
 Here are some key terminologies that you must know to get into the in-depth of
MongoDB:
What is a Database?
 In MongoDB, a database can be defined as a physical container for collections of data.
 Every database has its own set of files on the file system. Usually, a single
MongoDB server contains numerous databases.
What are Collections?
 Collections can be defined as a cluster of MongoDB documents that exist within a
single database. You can relate this to that of a table in a relational database
management system.
 Collection is a group of MongoDB documents. It is the equivalent of an RDBMS table.
A collection exists within a single database. Collections do not enforce a schema.
Documents within a collection can have different fields. Typically, all documents in a
collection are of similar or related purpose.
What are documents?
 A document is a set of key-value pairs. Documents have dynamic schema.
 Dynamic schema means that documents in the same collection do not need to have the
same set of fields or structure, and common fields in a collection's documents may hold
different types of data.
Here is a table showing the relation between the terminologies used in RDBMS and
MongoDB:
RDBMS              MongoDB
Database           Database
Table              Collection
Tuple or Row       Document
Column             Field
Table Join         Embedded Documents
Primary Key        Primary key / Default key
mysqld / oracle    mongod

Popular Organizations That Use MongoDB


Here is a list of some popular and multinational companies and organizations that are using
MongoDB as their official database to perform and manage different business applications.

 Adobe
 McAfee
 LinkedIn
 FourSquare
 MetLife
 eBay
 SAP
8.WHERE IS MONGODB USED? (PART A)
Where Is MongoDB Used?
Beginners need to know why MongoDB is used and what need it meets in contrast to SQL and
other database systems. In simple words, almost every modern-day application involves big
data, analysis of different forms of data, fast feature improvement, flexible handling of data
and deployment flexibility, which older database systems are not competent enough to handle.
Hence, MongoDB is the natural next choice.
Why Use MongoDB?
Some basic requirements are supported by this NoSQL database, which is lacking in other
database systems. These collective reasons make MongoDB popular among other database
systems:

 Storage. MongoDB can store large structured and unstructured data volumes and is
scalable vertically and horizontally. Indexes are used to improve search performance.
Searches are also done by field, range and expression queries.
 Data integration. This integrates data for applications, including for hybrid and multi-
cloud applications.
 Complex data structures descriptions. Document databases enable the embedding of
documents to describe nested structures (a structure within a structure) and can tolerate
variations in data.
 Load balancing. MongoDB can be used to run over multiple servers.
9.EXPLAIN FEATURE OF MONGODB (PART B)
Features of MongoDB:
 Document Oriented: MongoDB stores the main subject in the minimal number of
documents and not by breaking it up into multiple relational structures like RDBMS.
For example, it stores all the information of a computer in a single document called
Computer and not in distinct relational structures like CPU, RAM, Hard disk, etc.
 Indexing: Without indexing, a database would have to scan every document of a
collection to select those that match the query which would be inefficient. So, for
efficient searching, indexing is a must, and MongoDB uses it to process huge volumes of
data in very little time.
 Scalability: MongoDB scales horizontally using sharding (partitioning data across
various servers). Data is partitioned into data chunks using the shard key, and these data
chunks are evenly distributed across shards that reside across many physical servers.
Also, new machines can be added to a running database.
 Replication and High Availability: MongoDB increases the data availability with
multiple copies of data on different servers. By providing redundancy, it protects the
database from hardware failures. If one server goes down, the data can be retrieved
easily from other active servers which also had the data stored on them.
 Aggregation: Aggregation operations process data records and return the computed
results. It is similar to the GROUPBY clause in SQL. A few aggregation expressions are
sum, avg, min, max, etc
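As a small sketch of aggregation (using the orders collection that appears later in this unit; the field names are taken from that example), the following shell command groups orders by PaymentMode and computes the total and average order value per group:

db.orders.aggregate([
  { $group: { _id: "$PaymentMode",
              total: { $sum: "$OrderTotal" },
              average: { $avg: "$OrderTotal" } } }
])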

10.EXPLAIN THE ADVANTAGES OF MONGODB (PART B)


Advantages of Using MongoDB
 Schema-less. Like other NoSQL databases, MongoDB doesn't require
predefined schemas. It stores any type of data. This gives users the flexibility to create
any number of fields in a document, making it easier to scale MongoDB databases
compared to relational databases.
 Document-oriented. One of the advantages of using documents is that these objects map
to native data types in several programming languages. Having embedded documents
also reduces the need for database joins, which can lower costs.
 Scalability. A core function of MongoDB is its horizontal scalability, which makes it a
useful database for companies running big data applications. In addition, sharding lets
the database distribute data across a cluster of machines. MongoDB also supports the
creation of zones of data based on a shard key.

 Third-party support. MongoDB supports several storage engines and provides pluggable
storage engine APIs that let third parties develop their own storage engines for
MongoDB.
 Aggregation. The DBMS also has built-in aggregation capabilities, which lets users
run MapReduce code directly on the database rather than running MapReduce
on Hadoop. MongoDB also includes its own file system called GridFS, akin to
the Hadoop Distributed File System. The use of the file system is primarily for storing
files larger than BSON's size limit of 16 MB per document. These similarities let
MongoDB be used instead of Hadoop, though the database software does integrate with
Hadoop, Spark and other data processing frameworks.
Disadvantages of MongoDB
 Continuity. With its automatic failover strategy, a user sets up just one master node in a
MongoDB cluster. If the master fails, another node will automatically convert to the
new master. This switch promises continuity, but it isn't instantaneous -- it can take up
to a minute. By comparison, the Cassandra NoSQL database supports multiple master
nodes. If one master goes down, another is standing by, creating a highly available
database infrastructure.
 Write limits. MongoDB's single master node also limits how fast data can be written to
the database. Data writes must be recorded on the master, and writing new information
to the database is limited by the capacity of that master node.
 Data consistency. MongoDB doesn't provide full referential integrity through the use of
foreign-key constraints.
 Security. In addition, user authentication isn't enabled by default in MongoDB databases
BASIC QUERYING WITH MONGODB SHELL

11.DEFINE MONGODB SHELL (PART A)


EXPLAIN ABOUT MONGODB SHELL (PART B)
 MongoDB is an unstructured database which, in the form of documents, stores data.
 In addition, MongoDB is very effective in handling enormous amounts of data and is
the most commonly used NoSQL database as it provides rich query language and
versatile and easy data access.
 MongoDB Shell is the quickest way to connect, configure, query, and work with your
MongoDB database. It acts as a command-line client of the MongoDB server.
 The MongoDB Shell is a standalone, open-source product and developed separately
from the MongoDB Server under the Apache 2 license.
 MongoDB Shell is already installed with MongoDB.

Step 1 — Connecting to the MongoDB Server
 To open up the MongoDB shell, run the mongo command from your server prompt. By
default, the mongo command opens a shell connected to a locally-installed MongoDB
 $mongo
 This will print
 Output

MongoDB shell version v4.4.6

connecting to:
mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb

Implicit session: session { "id" : UUID("b9a48dc7-e821-4b09-a753-429eedf072c5") }

MongoDB server version: 4.4.6


 Now try executing the show dbs command once again:
show dbs
 This time, the command will return a list of all available databases in the system:
Output
admin 0.000GB
config 0.000GB
local 0.000GB
Step 2 — Executing Commands
 As with other command line interfaces, the MongoDB shell accepts commands and
returns the desired results to standard output. As mentioned previously, in MongoDB
shell, all commands are typed into the command prompt denoted with the greater-than
sign (>). Pressing ENTER after the command immediately executes it and returns the
command output to the screen.

 Most commands in MongoDB database are executed on a database or on a collection in
a selected database. The currently-selected database is represented by the db object
accessible through the shell. You can check which database is currently selected by
typing db into the shell:
db
 On a freshly-connected shell instance, the selected database is always called test:
Output
test
 You can safely use this database to experiment with MongoDB and the MongoDB shell.
To switch to another database, you can run the use command followed by the new
database name. Try switching to a database called fruits:
use fruits
 The shell will inform you that you’re now using the new database:
Output
switched to db fruits
 Type the following line into the MongoDB shell and press ENTER. Notice the
highlighted collection name (apples):
db.apples.insert(
 Pressing ENTER after an open parenthesis will start a multi-line command prompt,
allowing you to enter longer commands in more than one line. The insert command
won’t register as complete until you enter a closing parenthesis. Until you do, the
prompt will change from the greater-than sign to an ellipsis (...).
 You don’t need to break up MongoDB commands into multiple lines like this, but doing
so can make long commands easier to read and understand.
 On the next line, enter the object within a pair of curly brackets ({ and }). This example
document has only one field and value pair:
{name: 'Red Delicious'}
 you can end your input and run the operation by entering a closing parenthesis and
pressing ENTER:
)
 This time, the Mongo shell will register the ending of the insert command and execute
the whole statement.
Output
WriteResult({ "nInserted" : 1 })
Create a Sample Database
 Before the start, we will create a sample DB with some sample data to perform all
operations.
 We will create a database with name myDB and will create a collection with
name orders. For this, the statement would be as follows.
> use myDB
>db.createCollection("orders")
>
 MongoDB doesn't use the rows and columns. It stores the data in a document format. A
collection is a group of documents.
 You can check all collections in a database by using the following statement.
> use myDB
>show collections
orders
 Let's insert some documents by using the following statement.
>db.orders.insert([
{
Customer: "abc",
Address:{"City":"Jaipur","Country":"India"},
PaymentMode":"Card",
Email:"[email protected]",
OrderTotal: 1000.00,
OrderItems:[
{"ItemName":"notebook","Price":"150.00","Qty":10},
{"ItemName":"paper","Price":"10.00","Qty":5},
{"ItemName":"journal","Price":"200.00","Qty":2},
{"ItemName":"postcard","Price":"10.00","Qty":500}
]
},
{
Customer: "xyz",
Address:{"City":"Delhi","Country":"India"},
PaymentMode":"Cash",
OrderTotal: 800.00,
OrderItems:[
{"ItemName":"notebook","Price":"150.00","Qty":5},
{"ItemName":"paper","Price":"10.00","Qty":5},
{"ItemName":"postcard","Price":"10.00","Qty":500}
]
},
{
Customer: "ron",
Address:{"City":"NewYork","Country":"USA"},
PaymentMode":"Card",
Email:"[email protected]",
OrderTotal: 800.00,
OrderItems:[
{"ItemName":"notebook","Price":"150.00","Qty":5},
{"ItemName":"postcard","Price":"10.00","Qty":00}
]
}
])
Query Documents
find() method
 We need to use the find() method to query documents from MongoDB collections. The
following statement will retrieve all documents from the collection.
>db.orders.find()
db.movies.find( { "title": "Titanic" } )
 This operation corresponds to the following SQL statement:
SELECT * FROM movies WHERE title = "Titanic"
 Filter the Documents by Specifying a Condition
 Now we will learn how we can fetch the documents that match a specified condition.
MongoDB provides many comparison operators for this.
1. $eq Operator
 The $eq operator checks the equality of the field value with the specified value. To fetch
the order where PaymentMode is 'Card' you can use the following statement
>db.orders.find( { PaymentMode: { $eq: "Card" } } )
This query can be written also like below
>db.orders.find( { PaymentMode: "Card" } )
$eq Operator with embedded document
 You may have noticed that we inserted an embedded document Address in
the Orders collection. If you want to fetch the order where Country is 'India' you can use
a dot notation like the following statement.
 >db.orders.find( { "Address.Country": { $eq: "India" } } )
 This query can also be written as below
 >db.orders.find( { "Address.Country":"India" } )
$gt Operator
 You can use the $gt operator to retrieve the documents where a field’s value is greater
than the specified value. The following statement will fetch the documents
where OrderTotal is greater than 800.
>db.orders.find( { OrderTotal: { $gt: 800.00 } } )
$gte Operator
 You can use the $gte operator to retrieve the documents where a field’s value is greater
than or equal to the specified value. The following statement will fetch the documents
where OrderTotal is greater than or equal to 800.
 >db.orders.find( { OrderTotal: { $gte: 800.00 } } )

$lt Operator
 You can use the $lt operator to retrieve the documents where a field’s value is less than
the specified value. The following statement will fetch the documents
where OrderTotal is less than 800.
>db.orders.find( { OrderTotal: { $lt: 800.00 } } )
$lte Operator
 You can use the $lte operator to retrieve the documents where a field’s value is less than
or equal to the specified value. The following statement will fetch the documents
where OrderTotal is less than or equal to 800.
>db.orders.find( { OrderTotal: { $lte: 800.00 } } )
$ne Operator
 You can use the $ne operator to retrieve the documents where a field’s value is not
equal to the specified value.
>db.orders.find( { PaymentMode: { $ne: "Card" } } )
$in Operator
 You can use the $in operator to retrieve the documents where a field’s value is equal to
any value in the specified array.
>db.orders.find( { "OrderItems.ItemName": { $in: ["journal","paper"] } } )
$nin Operator
 You can use the $nin operator to retrieve the documents where a field’s value is not
equal to any value in the specified array. It will also select the documents where the
field does not exist.
>db.orders.find( { "OrderItems.ItemName": { $nin: ["journal","paper"] } } )
Indexing
 We know that indexing is very important if we are performing the queries on a large
database. Without indexing execution of a query can be expensive. We can add a simple
ascending index on a single field by using the following statement.
>db.orders.createIndex({"Customer":1})
Select All Documents in a Collection
 To select all documents in the collection, pass an empty document as the query filter
parameter to the find method. The query filter parameter determines the select criteria:
db.inventory.find( {} )
 This operation corresponds to the following SQL statement:
 SELECT * FROM inventory
12.WHAT IS THE MONGODB MONGO SHELL? (PART A)
MongoDB Shell
What is the MongoDB Mongo shell?
 MongoDB Mongo shell is an interactive JavaScript interface that allows you to interact
with MongoDB instances through the command line. The shell can be used for:
Data manipulation
 Administrative operations such as maintenance of database instances
 MongoDB Mongo shell features
 MongoDB Mongo shell is the default client for the MongoDB database server. It’s a
command-line interface (CLI), where the input and output are all console-based. The
Mongo shell is a good tool to manipulate small sets of data.
 Here are the top features that Mongo shell offers:
1. Run all MongoDB queries from the Mongo shell.
2. Manipulate data and perform administration operations.
3. Mongo shell uses JavaScript and a related API to issue commands.
4. See previous commands in the mongo shell with up and down arrow keys.
5. View possible command completions using the tab button after partially entering a
command.
6. Print error messages, so you know what went wrong with your commands.
7. MongoDB has recently introduced a new mongo shell known as mongosh. It has some
additional features, such as extensibility and embeddability—that is, the ability to use it
inside other products such as VS Code.
MongoDB Shell (mongosh)

 The MongoDB Shell, mongosh, is a fully functional JavaScript and Node.js 16.x REPL
environment for interacting with MongoDB deployments. You can use the MongoDB Shell to test
queries and operations directly with your database.
 mongosh is available as a standalone package in the MongoDB Download Center.
Download and Install mongosh
Connect to a MongoDB Deployment
 Once you have installed the MongoDB Shell and added it to your system PATH, you
can connect to a MongoDB deployment. To learn more, see Connect to a Deployment.
The MongoDB Shell versus the Legacy mongo Shell
 The new MongoDB Shell, mongosh, offers numerous advantages over the
legacy mongo shell, such as:
 Improved syntax highlighting.
 Improved command history.
 Improved logging.
 Currently mongosh supports a subset of the mongo shell methods.
 Achieving feature parity between mongosh and the mongo shell is an ongoing effort.
 To maintain backwards compatibility, the methods that mongosh supports use the same
syntax as the corresponding methods in the mongo shell.

REQUEST BODY PARSING IN EXPRESS


13.DEFINE BODY PARSER (PART A)
EXPLAIN BODY PARSER (PART B)
What is parser in Node JS?
 Express body-parser is an npm module used to process data sent in an HTTP request
body. It provides four express middleware for parsing JSON, Text, URL-encoded, and
raw data sets over an HTTP request body. Before the target controller receives an
incoming request, these middleware routines handle it.

What Is Body-parser?
 Body Parser is a middleware of Node JS used to handle HTTP POST request. Body
Parser can parse string based client request body into JavaScript Object which we can
use in our application.
 Body-parser parses an HTTP request body, which usually helps when you need to know
more than just the URL being hit.
 Specifically in the context of a POST HTTP request where the information you want is
contained in the body.
 Using body-parser allows you to access req.body from within routes and use that data.
 For example: To create a user in a database.
What Does Body Parser Do?
 Parses the request body into a JS object
 Puts the above JS object in req.body so that middleware can use the data.

Installation

$ npm install body-parser

API
 To include body-parser in our application, use following code. The bodyParser object
has two methods, bodyParser.json() and bodyParser.urlencoded(). The data will be
available in req.body property.
var bodyParser = require('body-parser');
req.body
 req.body contains data ( in key-value form ) submitted in request body. The default value
is undefined.
req.json
const express=require('express');
const app=express();
const bodyParser=require('body-parser');
// parse application/json
app.use(bodyParser.json());

app.post('/post',(req,res)=>{
console.log(req.body);
res.json(req.body);
});

Form Data
 Use bodyParser to parse HTML Form Data received through HTTP POST method.
Create a separate HTML Form with inputs. Use name attributes in all input controls. Set
the method (in html form ) to POST and action to path of Post request.
const express=require('express');
const app=express();
const bodyParser=require('body-parser');
// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }));
app.post('/formdata',(req,res)=>{
console.log(req.body);
res.json(req.body);
});
HTML Page
<form method="post" action="127.0.0.01:3000/formdata">
88
<input type="text" name="username" required>
<input type="password" name="userpass" required>
<button>Send</button>
<form>
Errors
 The middlewares provided by this module create errors using the http-errors module.
 The errors will typically have a status/statusCode property that contains the suggested
HTTP response code, an expose property to determine if the message property should
be displayed to the client, a type property to determine the type of error without
matching against the message, and a body property containing the read body, if
available.
 The following are the common errors created, though any error can come through for
various reasons.
1. Content encoding unsupported
2. Entity parse failed
3. Entity verify failed
4. Request aborted
5. Request entity too large
6. Request size did not match content length
7. Stream encoding should not be set
8. Stream is not readable
9. Too many parameters
NODEJS AND MONGODB CONNECTION
14.HOW TO CONNECT MOGODB WITH NODEJS(PART B)
 The MongoDB Node.js Driver allows you to easily interact with MongoDB databases
from within Node.js applications.

 MongoDB is a NoSQL database. We will use the MongoDB driver for nodejs to
manage a MongoDB database. MongoDB uses binary JSON (BSON) to store data. We will also
use the mongoose tool to connect MongoDB with Node js and manage the database (i.e.
create, read, update, and delete documents). Compared to traditional databases, MongoDB is
easy to use and saves time.
Introduction
 Consider, We have a table schema defined in a relational database as shown below. We
cannot insert new data into the table that contains the new field phoneNumber. Because
Field phoneNumber is not defined in the table schema.
+-----------+--------------+------+-----+---------+
| Field     | Type         | Null | Key | Default |
+-----------+--------------+------+-----+---------+
| id        | int          | NO   | PRI | NULL    |
| firstName | varchar(255) | YES  |     | NULL    |
| lastName  | varchar(255) | YES  |     | NULL    |
+-----------+--------------+------+-----+---------+
But, MongoDB doesn't need a pre-defined schema. We can insert new data in object
format with any additional fields. MongoDB stores data as documents, as shown below:
// document 1
{
id:'1',
firstName:'Suhani',
lastName:'Singh'
}

// document 2
{
id:'2',
firstName:'Modi',
lastName:'Kumar',
phoneNumber:'+91999999'
}

// document 3
{
id:'3',
email:'[email protected]',
firstName:'Ambesh',
lastName:'Yadav',
phoneNumber:['+9199999999','+9199999999'],
}
Install the MongoDB Node.js Driver
 Create a new folder named mongodb-nodejs and move into it
with the CLI (Command Line Interface) as shown below:
mkdir mongodb-nodejs
cd mongodb-nodejs
 Create a new node project with npm that add the package.json file inside mongodb-
nodejs folder
npm init -y
 Install the mongodb driver for nodejs to use the MongoDB database with nodejs
npm install mongodb --save
 mongodb driver helps us to connect and easily manage queries in MongoDB with
nodejs.
Connecting to The Local MongoDB Database
 We will define a path to store data for MongoDB on the local machine. We will add
the path C:\Program Files\MongoDB\data\db. Also, We make sure that the specified
path/folder exists.
mongod --dbpath 'C:\Program Files\MongoDB\data\db'
 In the above code block, We are using --dbpath to add a path for the MongoDB
database and start the server locally.
Configuring The MongoDB Node.js Connection
 Create a file named server.js and add the following code to the server.js file
const { MongoClient } = require('mongodb')

// Create Instance of MongoClient for mongodb
const client = new MongoClient('mongodb://localhost:27017')

// Connect to database
client.connect()
.then(() => console.log('Connected Successfully'))
.catch(error => console.log('Failed to connect', error))

 Run the node server.js command and We will see the following output
$ node server.js
Connected Successfully!
Closing The Connection
 We will replace the previous code of the server.js file with the following code:
const { MongoClient } = require('mongodb')

// Create Instance of MongoClient for mongodb


const client = new MongoClient('mongodb://localhost:27017')

// Connect to database
client.connect()
.then(() => {
console.log('Connected Successfully!')

//Close the database connection


console.log('Exiting..')
client.close()
})
.catch(error => console.log('Failed to connect!', error))
 In the above code block, We are using the close() method to disconnect the database
from the node app.
 Node app will exit from the MongoDB instance after connecting the node to the
database.
 Run node server.js
$ node server.js
Connected Successfully!
Exiting..
Example
 First, We will install the following npm packages:
 Nodemon: It watches file changes in nodejs and restarts the node app if any changes
happen to files. We will install nodemon using the command npm install nodemon --
save-dev.
 Express: It is a nodejs framework that helps to build node APIs. We will install
express using the command npm install express --save.
 Mongoose: It is a Tool for the MongoDB database. It helps to create schema, model,
and manage database queries for MongoDB. We will install mongoose using the
command npm install mongoose --save.
 Bodyparser: It will parse the data that is coming from the HTML body. We will install
body-parser using the command npm install body-parser --save.
 We will follow the file structure to create files and folders as shown below:
mongodb-nodejs/
├─ public/
│ ├─ index.html
├─ index.js
├─ package.json
└─ server.js
 In package.json, We will add the start property to scripts.
"scripts": {
"start": "nodemon index.js",
}

 We will require all modules to index.js file as shown below:


const express = require('express')
const mongoose = require('mongoose')
const bodyParser = require('body-parser')
 We will listen to the express app at local port 4000 and add the following code to the
index.js file
const app = express()
app.listen(4000, () => {
console.log('Listening on port 4000')
})
 We will add the following code to index.html. It is a simple student form that will get
the data from the web and store the data in the local MongoDB database.
<!DOCTYPE html>
<html lang="en">
<head>
<title>Student Form</title>
</head>
<body>
<form action="/student" method="post">
<input type="text" placeholder="Enter Your Name" name="name">
<input type="text" placeholder="Enter Your Email" name="email">
<button type="submit">Submit</button>
</form>
</body>
</html>
 In the above code block, We are using the action and method attribute in the form tag.
action attribute sends the data to the server and the method attribute tells whether it is
a POST request or GET request.
 We will tell express to serve the index.html on local port 4000 and use the body-parser
to get the data from the HTTP request in a proper format.
app.use(express.static(`${__dirname}/public`))
app.use(bodyParser.json())
app.use(bodyParser.urlencoded({
extended: true
}))
 In the above code block, We are enabling middlewares with the help of the use()
method of the express app. We are using the static() method to add a path for static
files.
 We need a database to store the data. Hence, We will connect the local MongoDB
database using the connect() method of mongoose.
mongoose.connect('mongodb://localhost:27017/students')
 In the above code block, We pass the database URL; the database named
students is created if it doesn't exist.
 We will create and validate the schema using the Schema() method of mongoose. We
will also create a model using the model() method of mongoose.
const studentSchema = new mongoose.Schema({
name: String,
email: String
})
const Student = mongoose.model('Student', studentSchema)
 We will use the post() method of the express app to serve the HTTP request POST to
the /student route.
app.post('/student', (req, res) => {
let student = new Student(req.body);
student.save()
.then(doc => {
res.send(doc)
console.log(doc)
})
.catch(err => console.log(err))
})
 When we hit the submit button, it will store the student data in the local MongoDB
database, navigate to the http://localhost:4000/student URL, and display the output
as shown below:

"name": "abc",
"email": "[email protected]",
"_id": "635a46cf9e89ce7cd5e1473f",
"__v": 0
}
 We will use the get() method to fetch all the students from the local MongoDB
database.
// Get all students
app.get('/students', (req, res) => {
Student.find({})
.then(docs => {
console.log(docs)
res.json(docs)
})
.catch(err => console.log(err))
})
 In the above code block, We are using the find() method to retrieve students' data.
When we hit the URL http://localhost:4000/students, it will display the output as
shown below:
[
{
"_id": "635a4a55b9e33f7ab6e09ed4",
"name": "abc",
"email": "[email protected]",
"__v": 0
},
{
"_id": "635a4a8dc7824ed2d957b882",
"name": "xyz",
"email": "[email protected]",
"__v": 0
},
{
"_id": "635a4b2e0c67336d6d866ca1",
"name": "hi",
"email": "[email protected]",
"__v": 0
}
]
Insert Documents
const { MongoClient } = require('mongodb')

// Create Instance of MongoClient for mongodb


const client = new MongoClient('mongodb://localhost:27017')

// Insert to database
client.db('students').collection('students').insertOne({
name: 'Amyporter',
email: '[email protected]'
})
.then((res) => {
console.log(res)
client.close()
})
.catch((err) => console.log(err))
Update/Delete Documents
const { MongoClient } = require('mongodb')

// Create Instance of MongoClient for mongodb


const client = new MongoClient('mongodb://localhost:27017')

// Update a document in the database
client.db('students').collection('students')
.updateOne({ name: 'Amyporter' },
{
$set:
{ email: '[email protected]' }
})
.then((res) => {
console.log(res)
client.close()
})
.catch((err) => console.log(err))

 In the above code block, We are using the updateOne() method to select the student
using name or email and update it with the $set variable
const { MongoClient } = require('mongodb')

// Create Instance of MongoClient for mongodb


const client = new MongoClient('mongodb://localhost:27017')

// Delete a document from the database
client.db('students').collection('students')
.deleteOne({ name: 'Amyporter' })
.then((res) => {
console.log(res)
client.close()
})
.catch((err) => console.log(err))

 In the above code block, We are using the deleteOne() method to select the student
using name or email and delete it.
Mongoose
 Mongoose is an ODM(Object Data Modeling) tool that helps to define the model
based on the schema. A schema is a kind of structure that defines how we can store
data in the database. It helps us to validate types like objects, strings, booleans,
numbers, etc.
 Consider, We want to make a list of npm libraries, then we will use the following
format to store information in the database.
const npmSchema = {
id:'',
packageName:'',
homePage:'',
repository:''
}
 In this way, We can store data in the MongoDB database but it is not validated. Hence,
We use mongoose to define a schema, validate data and create a model as shown
below
// Define schema and validate using mongoose
const npmList = new mongoose.Schema({
id: { type: Number },
packageName: { type: String },
homePage: { type: String },
repository: { type: String }
})

// Create model for schema `npmList` using mongoose


const NpmList = mongoose.model('NpmList', npmList)
 Mongoose maps our JSON data to the mongoose model, validates the data against the
defined schema, and interacts with the MongoDB database through the methods available in
mongoose.
ADDING AND RETRIEVING DATA TO MONGODB FROM NODE JS
15.HOW TO PERFORM CRUD OPERATIONS IN NODEJS(PART B)
Node.js and NoSQL Databases
 Over the years, NoSQL databases such as MongoDB, alongside relational databases such
as MySQL, have become quite popular for storing data. The ability of NoSQL databases to
store any type of content, and particularly in any type of format, is what makes them so
popular.
 Node.js has the ability to work with both MySQL and MongoDB as databases. In
order to use either of these databases, you need to download and use the required
modules using the Node package manager.
 For MySQL, the required module is called “mysql” and for using MongoDB the
required module to be installed is “Mongoose.”
 With these modules, you can perform the following operations in Node.js

1. Manage the connection pooling – Here is where you can specify the number of
MySQL database connections that should be maintained and saved by Node.js (a
small sketch is shown after this list).
2. Create and close a connection to a database. In either case, you can provide a callback
function which can be called whenever the “create” and “close” connection methods
are executed.
3. Queries can be executed to get data from respective databases to retrieve data.
4. Data manipulation, such as inserting data, deleting, and updating data can also be
achieved with these modules.
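As a sketch of the connection pooling mentioned in point 1 (assuming the mysql module and reusing the SchoolDB/Student names that appear later in this unit):

var mysql = require('mysql');

// create a pool that keeps up to 10 MySQL connections open and reuses them
var pool = mysql.createPool({
  connectionLimit: 10,
  host: 'localhost',
  user: 'root',
  password: 'mypassword',
  database: 'SchoolDB'
});

// the pool hands out a free connection for each query and returns it afterwards
pool.query('SELECT * FROM Student', function (err, rows) {
  if (err) throw err;
  console.log(rows);
});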
Using MongoDB and Node.js
Database name: EmployeeDB
Collection name: Employee
Documents
{
{Employeeid : 1, Employee Name : Guru99},
{Employeeid : 2, Employee Name : Joe},
{Employeeid : 3, Employee Name : Martin},
}

1. Installing the NPM Modules


You need a driver to access Mongo from within a Node application. There are a number of
Mongo drivers available, but the official mongodb module is among the most popular. To
install the MongoDB module, run the below command
npm install mongodb

2. Creating and closing a connection to a MongoDB database. The below code


snippet shows how to create and close a connection to a MongoDB database.
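The original snippet is not reproduced in this text; a minimal sketch that matches the explanation below (assuming an older mongoose version that accepts a connection callback) could look like this:

var mongoose = require('mongoose');

// connection string: protocol://host/database
var url = 'mongodb://localhost/EmployeeDB';

// open the connection; the callback runs once the connection attempt finishes
mongoose.connect(url, function (err) {
  if (err) throw err;
  console.log('Connection established');

  // close the connection again
  mongoose.connection.close();
});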

Code Explanation:

1. The first step is to include the mongoose module, which is done through the require
function. Once this module is in place, we can use the necessary functions available in
this module to create connections to the database.
2. Next, we specify our connection string to the database. In the connect string, there are
3 key values which are passed.

 The first is ‘mongodb’ which specifies that we are connecting to a mongoDB database.
 The next is ‘localhost’ which means we are connecting to a database on the local
machine.
 The next is ‘EmployeeDB’ which is the name of the database defined in our MongoDB
database.

 The next step is to actually connect to our database. The connect function takes in our
URL and has the facility to specify a callback function. It will be called when the
connection is opened to the database. This gives us the opportunity to know if the
database connection was successful or not.
 In the function, we are writing the string “Connection established” to the console to
indicate that a successful connection was created.
 Finally, we are closing the connection using the db.close statement.
If the above code is executed properly, the string “Connection established” will be written to
the console.

3. Querying for data in a MongoDB database – Using the MongoDB driver we can
also fetch data from the MongoDB database.The below section will show how we can
use the driver to fetch all of the documents from our Employee collection in our
EmployeeDB database. This is the collection in our MongoDB database, which
contains all the employee-related documents. Each document has an object id,
Employee name, and employee id to define the values of the document.
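The snippet itself is not included in this text; a sketch consistent with the explanation below (assuming the 2.x native MongoDB driver, where the connect callback receives the database object directly and whose cursors provide an each() method) might be:

var MongoClient = require('mongodb').MongoClient;
var url = 'mongodb://localhost/EmployeeDB';

MongoClient.connect(url, function (err, db) {
  if (err) throw err;

  // cursor pointing to all documents in the Employee collection
  var cursor = db.collection('Employee').find();

  // iterate over the cursor and print each document to the console
  cursor.each(function (err, doc) {
    if (doc != null) {
      console.log(doc);
    }
  });
});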

Code Explanation:

1. In the first step, we are creating a cursor (A cursor is a pointer which is used to point
to the various records fetched from a database. The cursor is then used to iterate
through the different records in the database. Here we are defining a variable name
called cursor which will be used to store the pointer to the records fetched from the
database. ) which points to the records which are fetched from the MongoDb
collection. We also have the facility of specifying the collection ‘Employee’ from
which to fetch the records. The find() function is used to specify that we want to
retrieve all of the documents from the MongoDB collection.
2. We are now iterating through our cursor and for each document in the cursor we are
going to execute a function.
3. Our function is simply going to print the contents of each document to the console.
For example, suppose you just wanted to fetch the record
which has the employee name "raj"; the statement can
be written as follows:
var cursor = db.collection('Employee').find({ EmployeeName: "raj" })
If the above code is executed successfully, the matching
documents will be displayed in your console.

From the output,


 You will be able to clearly see that all the documents from the collection are retrieved.
This is possible by using the find() method of the mongoDB connection (db) and
iterating through all of the documents using the cursor.

4. Inserting documents in a collection – Documents can be inserted into a collection
using the insertOne method provided by the MongoDB library. The below code
snippet shows how we can insert a document into a mongoDB collection.
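A sketch of the missing snippet, again in the 2.x driver style used above:

var MongoClient = require('mongodb').MongoClient;
var url = 'mongodb://localhost/EmployeeDB';

MongoClient.connect(url, function (err, db) {
  if (err) throw err;

  // insert a single document into the Employee collection
  db.collection('Employee').insertOne({
    Employeeid: 4,
    EmployeeName: 'NewEmployee'
  }, function (err, result) {
    if (err) throw err;
    console.log('Document inserted');
    db.close();
  });
});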

Code Explanation:

1. Here we are using the insertOne method from the MongoDB library to insert a
document into the Employee collection.
2. We are specifying the document details of what needs to be inserted into the
Employee collection.
If you now check the contents of your MongoDB database, you will find the record with
Employeeid of 4 and EmployeeName of “NewEmployee” inserted into the Employee
collection.
To check that the data has been properly inserted in the database, you need to execute the
following commands in MongoDB
1. Use EmployeeDB
2. db.Employee.find({Employeeid :4 })
The first statement ensures that you are connected to the EmployeeDb database. The second
statement searches for the record which has the employee id of 4.

5. Updating documents in a collection – Documents can be updated in a collection


using the updateOne method provided by the MongoDB library. The below code
snippet shows how to update a document in a mongoDB collection.
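A sketch of the missing snippet, in the same 2.x driver style:

var MongoClient = require('mongodb').MongoClient;
var url = 'mongodb://localhost/EmployeeDB';

MongoClient.connect(url, function (err, db) {
  if (err) throw err;

  // find the document with EmployeeName "NewEmployee" and change the name to "Mohan"
  db.collection('Employee').updateOne(
    { EmployeeName: 'NewEmployee' },
    { $set: { EmployeeName: 'Mohan' } },
    function (err, result) {
      if (err) throw err;
      console.log('Document updated');
      db.close();
    });
});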

Code Explanation:

1. Here we are using the “updateOne” method from the MongoDB library, which is used
to update a document in a mongoDB collection.
2. We are specifying the search criteria of which document needs to be updated. In our
case, we want to find the document which has the EmployeeName of
“NewEmployee.”
3. We then want to set the value of the EmployeeName of the document from
“NewEmployee” to “Mohan”.
If you now check the contents of your MongoDB database, you will find the record with
Employeeid of 4 and EmployeeName of “Mohan” updated in the Employee collection.
To check that the data has been properly updated in the database, you need to execute the
following commands in MongoDB

1. Use EmployeeDB
2. db.Employee.find({Employeeid :4 })
The first statement ensures that you are connected to the EmployeeDb database. The second
statement searches for the record which has the employee id of 4.

6. Deleting documents in a collection – Documents can be deleted in a collection using


the “deleteOne” method provided by the MongoDB library. The below code snippet
shows how to delete a document in a mongoDB collection.
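A sketch of the missing snippet, in the same 2.x driver style:

var MongoClient = require('mongodb').MongoClient;
var url = 'mongodb://localhost/EmployeeDB';

MongoClient.connect(url, function (err, db) {
  if (err) throw err;

  // delete the document whose EmployeeName is "Mohan"
  db.collection('Employee').deleteOne(
    { EmployeeName: 'Mohan' },
    function (err, result) {
      if (err) throw err;
      console.log('Document deleted');
      db.close();
    });
});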

Code Explanation:

1. Here we are using the “deleteOne” method from the MongoDB library, which is used
to delete a document in a mongoDB collection.
2. We are specifying the search criteria of which document needs to be deleted. In our
case, we want to find the document which has the EmployeeName of “Mohan” and
delete this document.
If you now check the contents of your MongoDB database, you will find the record with
Employeeid of 4 and EmployeeName of “Mohan” deleted from the Employee collection.
To check that the data has been properly updated in the database, you need to execute the
following commands in MongoDB

1. Use EmployeeDB
2. db.Employee.find()
The first statement ensures that you are connected to the EmployeeDb database. The second
statement searches and display all of the records in the employee collection. Here you can
see if the record has been deleted or not.
How to build a node express app with MongoDB to store and serve content
Building an application with a combination of both using express and MongoDB is quite
common nowadays. When working with JavaScript web based applications, one will
normally hear of the term MEAN stack.

 The term MEAN stack refers to a collection of JavaScript based technologies used to
develop web applications.
 MEAN is an acronym for MongoDB, ExpressJS, AngularJS, and Node.js.
Step 1) Define all the libraries which need to be used in our application, which in our case is
both the MongoDB and express library.
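The step 1 snippet is not reproduced in this text; based on the explanation below it would contain roughly the following declarations (variable names are illustrative):

var express = require('express');                   // 1. express library
var app = express();
var MongoClient = require('mongodb').MongoClient;   // 2. MongoDB library
var url = 'mongodb://localhost/EmployeeDB';         // 3. URL of the database to connect to
var str = '';                                       // 4. string used to accumulate the employee ids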

Code Explanation:

1. We are defining our ‘express’ library, which will be used in our application.
2. We are defining our ‘MongoDB’ library, which will be used in our application for
connecting to our MongoDB database.
3. Here we are defining the URL of our database to connect to.
4. Finally, we are defining a string which will be used to store our collection of employee
id which need to be displayed in the browser later on.
Step 2) In this step, we are now going to get all of the records in our ‘Employee’ collection
and work with them accordingly.

Code Explanation:

1. We are creating a route to our application called ‘Employeeid.’ So whenever anybody


browses to http://localhost:3000/Employeeid of our application, the code snippet
defined for this route will be executed.
2. Here we are getting all of the records in our ‘Employee’ collection through the
db.collection(‘Employee’).find() command. We are then assigning this collection to a

variable called cursor. Using this cursor variable, we will be able to browse through all
of the records of the collection.
3. We are now using the cursor.each() function to navigate through all of the records of
our collection. For each record, we are going to define a code snippet on what to do
when each record is accessed.
4. Finally, we see that if the record returned is not null, then we are taking the employee
via the command “item.Employeeid”. The rest of the code is just to construct a proper
HTML code which will allow our results to be displayed properly in the browser.
Step 3) In this step, we are going to send our output to the web page and make our
application listen on a particular port.

Code Explanation:

1. Here we are sending the entire content which was constructed in the earlier step to our
web page. The ‘res’ parameter allows us to send content to our web page as a
response.
2. We are making our entire Node.js application listen on port 3000.
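Putting steps 1 to 3 together, a minimal self-contained sketch of the application (again assuming the 2.x driver style and illustrative names) could be:

var express = require('express');
var app = express();
var MongoClient = require('mongodb').MongoClient;
var url = 'mongodb://localhost/EmployeeDB';

// Step 2: route that reads all Employee documents and builds an HTML string
app.get('/Employeeid', function (req, res) {
  var str = '';
  MongoClient.connect(url, function (err, db) {
    if (err) throw err;
    var cursor = db.collection('Employee').find();
    cursor.each(function (err, item) {
      if (item != null) {
        str = str + 'Employee id: ' + item.Employeeid + '<br>';
      } else {
        // cursor exhausted: Step 3 - send the accumulated content as the response
        res.send(str);
        db.close();
      }
    });
  });
});

// Step 3: make the application listen on port 3000
app.listen(3000, function () {
  console.log('Listening on port 3000');
});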
Output:

HANDLING SQL DATABASES FROM NODEJS


16.HOW TO HANDLE SQL DATABASES IN NODEJS(PART B)
Access SQL Server in Node.js
In order to access MS SQL database, we need to install drivers for it. There are many
drivers available for SQL server in NPM. We will use mssql driver here.
Install Driver
Install mssql driver using npm command, npm install mssql in the command
prompt.
This will add mssql module folder in node_modules folder in your Node.js application.
This tutorial uses mssql v2.3.1, which is latest version as of now.
After installing the driver, we are ready to access MS SQL server database. We will
connect to a local SQL Express database server and fetch all the records from the Student table
in the SchoolDB database shown below.
Now, create server.js and write the following code.
Server.js
var express = require('express');
var app = express();
app.get('/', function (req, res) {

var sql = require("mssql");


// config for your database
var config = {
user: 'sa',
password: 'mypassword',
server: 'localhost',
database: 'SchoolDB'
};
// connect to your database
sql.connect(config, function (err) {
if (err) console.log(err);
// create Request object
var request = new sql.Request();
        // query to the database and get the records
        request.query('select * from Student', function (err, recordset) {
if (err) console.log(err)
// send records as a response
res.send(recordset);

});
});
});
var server = app.listen(5000, function () {
console.log('Server is running..');
});
In the above example, we have imported mssql module and called connect() method to
connect with our SchoolDB database. We have passed config object which includes
database information such as userName, password, database server and database name. On
successful connection with the database, use sql.request object to execute query to any
database table and fetch the records.
Run the above example using the node server.js command and point your browser
to http://localhost:5000, which displays an array of all students from the Student table.

Example
1. Create an empty folder and name it node.js mysql.
2. Open the newly created directory in VS Code inside the terminal, and type npm init to
initialize the project. Press Enter to leave the default settings as they are.

index.js
 Create a file called index.js in the project directory. Since our main goal is to
understand the connection between Node.js and MySQL database, this will be the only
file that we create and work on in this project.

const express = require("express");


const mysql = require("mysql");

 We can add these modules using the terminal inside VSCode


 After we add the modules, we use the createConnection() method to create a
bridge between the Node.js and MySQL.

Host: Specifies the host that runs the database


User: Sets the user’s name
Password: Sets up a password
Database: Names the database

// Create connection
const db = mysql.createConnection({
host: "localhost",
user: "root",
password: "simplilearn",
database: "nodemysql",
});

// Connect to MySQL
db.connect((err) => {
if (err) {
throw err;
}
console.log("MySql Connected");
});
Since we are using the Express module to create the web server, we create a variable named
app that behaves as an object of the express module.

const app = express();

We use the GET API request to create the database. Notice that we have first defined the
SQL query for creating the database. We then pass this query to the query() method along
with the error callback method. The latter throws an error if the query wasn’t successfully
executed.

// Create DB
app.get("/createdb", (req, res) => {
let sql = "CREATE DATABASE nodemysql";
db.query(sql, (err) => {
if (err) {
throw err;
}
res.send("Database created");
});
});

After the database is created, create a table called “employee” to store the employee details
into the database. This table will have the attributes “id”, “name”, and “designation.” For the
last step, we pass this query to the query() method and the query executes.

// Create table
app.get("/createemployee", (req, res) => {
let sql =
"CREATE TABLE employee(id int AUTO_INCREMENT, name VARCHAR(255),
designation VARCHAR(255), PRIMARY KEY(id))";
db.query(sql, (err) => {
if (err) {
throw err;
}
res.send("Employee table created");
});
});

Now that we have created both the database and the table, let’s add an employee to the
database. We start by writing the relevant query for adding the record into the table. The
record is added and we get a message stating that employee 1 has been added if the query
gets successfully executed. Otherwise, we get an error message.

// Insert employee 1
app.get("/employee1", (req, res) => {
let post = { name: "Jake Smith", designation: "Chief Executive Officer" };
let sql = "INSERT INTO employee SET ?";
let query = db.query(sql, post, (err) => {
if (err) {
throw err;
}
res.send("Employee 1 added");
});
});
We can also update an employee record that resides in the database. For example, to update
the name of a particular employee record, we can use a GET request on that employee id,
and the name will get updated if the query executes successfully.
// Update employee
app.get("/updateemployee/:id", (req, res) => {
let newName = "Updated name";
let sql = `UPDATE employee SET name = '${newName}' WHERE id = ${req.params.id}`;
let query = db.query(sql, (err) => {
if (err) {
throw err;
}
res.send("Post updated...");
});
});

//Delete employee
app.get("/deleteemployee/:id", (req, res) => {
let sql = `DELETE FROM employee WHERE id = ${req.params.id}`;
let query = db.query(sql, (err) => {
if (err) {
throw err;
}
res.send("Employee deleted");
});
});

app.listen("3000", () => {
106
console.log("Server started on port 3000");
});
In the end, we set the server to listen at Port 3000.
app.listen("3000", () => {
console.log("Server started on port 3000");
});

HANDLING COOKIES IN NODEJS


17.HOW TO HANDLE COOKIES IN NODEJS(PART B)
What are cookies?
A cookie is usually a tiny text file stored in your web browser. A cookie was initially used to
store information about the websites that you visit. But with the advances in technology, a
cookie can track your web activities and retrieve your content preferences.
For example;
 Cookies save your language preferences. This way, when you visit that website in the
future, the language you used will be remembered.
 You have most likely visited an e-commerce website. When you include items into
your shopping cart, a cookie will remember your choices. Your shopping list item will
still be there whenever you revisit the site. Basically, a cookie is used to remember
data from the user.
A brief history of cookies
The first HTTP cookie was created in 1994 by Lou Montulli, an employee of Netscape
Communications, the company that created the Netscape browser.
Lou recreated this concept and implemented it in a web browser. In 1994, the Netscape
browser implemented cookies, followed by Internet Explorer in 1995 and that marked the
birth of HTTP cookies.
How cookies work
 When a user visits a cookie-enabled website for the first time, the browser will prompt
the user that the web page uses cookies and request the user to accept cookies to be
saved on their computer. Typically, when a user makes a request, the server responds
by sending back a cookie (among many other things).
 This cookie is going to be stored in the user’s browser. When a user visits the website
or sends another request, that request will be sent back together with the cookies. The
cookie will have certain information about the user that the server can use to make
decisions on any other subsequent requests.
 A perfect example is accessing Facebook from a browser.
The different types of cookies include:
 Session cookies - store user’s information for a short period. When the current session
ends, that session cookie is deleted from the user’s computer.
 Persistent cookies - a persistent cookie does not expire when the browser session ends. It
is saved for as long as the webserver administrator sets it to live.
 Secure cookies - are used by encrypted websites to offer protection from any possible
threats from a hacker.

 Third-party cookies - are used by websites that show ads on their pages or track
website traffic. They grant access to external parties to decide the types of ads to show
depending on the user’s previous preferences.
 The major difference between sessions and cookies is that sessions live on the server-
side (the webserver), and cookies live on the client-side (the user browser). Sessions
have sensitive information such as usernames and passwords. This is why they are
stored on the server. Sessions can be used to identify and validate which user is
making a request.
 As we have explained, cookies are stored in the browser, and no sensitive information
can be stored in them. They are typically used to save a user’s preferences.
Setting up cookies with Node.js
We will use the following NPM packages:
 Express - this is a minimal and flexible server-side framework for Node.js that helps you
create and manage HTTP server REST endpoints.
 cookie-parser - middleware that looks at the headers exchanged between the client and the
server, parses out the cookies being sent, and exposes them on the request object. In other
words, cookie-parser will help us create and manage cookies depending on the request a user
makes to the server.
Run the following command to install these NPM packages:
npm install express cookie-parser

Step 1 - Import the installed packages

To set up a server and save cookies, import the cookie parser and express modules to your
project. This will make the necessary functions and objects accessible.
const express = require('express')
const cookieParser = require('cookie-parser')

Step - 2 Get your application to use the packages

You need to use the above modules as middleware inside your application, as shown below.
//setup express app
const app = express()

// let’s you use the cookieParser in your application


app.use(cookieParser());
This will make your application use the cookie parser and Express modules.

Step - 3 Set a simple route to start the server

We use the following code to set up a route for the homepage:


//set a simple for homepage route
app.get('/', (req, res) => {
res.send('welcome to a simple HTTP cookie server');
});

Step 4 - Set a port number

This is the port number that the server should listen to when it is running. This will help us
access our server locally. In this example, the server will listen to port 8000, as shown
below.
//server listening to port 8000
app.listen(8000, () => console.log('The server is running port 8000...'));
Now we have a simple server set. Run node app.js to test if it is working.

And if you access the localhost on port 8000 (http://localhost:8000/), you should
get an HTTP response sent by the server. Now we’re ready to start implementing cookies.
SETTING COOKIES
Add routes and endpoints that will help us create, update and delete a cookie.

Step 1 - Set a cookie

We will set a route that will save a cookie in the browser. In this case, the cookies will be
coming from the server to the client browser.
app.get('/setcookie', (req, res) => {
res.cookie(`Cookie token name`,`encrypted cookie string Value`);
res.send('Cookie has been saved successfully');
});
Run node app.js to serve the above endpoint. Open http://localhost:8000/setcookie in your
browser to access the route. To confirm that the cookie was saved, go to your browser's
inspector tool 🡆 select the application tab 🡆 cookies 🡆 select your domain URL.
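The res.cookie() call above uses placeholder names for the key and value. In practice an options object is usually passed as a third argument; a minimal sketch with illustrative values (the cookie name, age and flags are assumptions, not part of the original example):
app.get('/setcookie', (req, res) => {
  res.cookie('language', 'en', {
    maxAge: 24 * 60 * 60 * 1000, // keep the cookie for one day (a persistent cookie)
    httpOnly: true,              // not readable from client-side JavaScript
    secure: false                // set to true when the site is served over HTTPS
  });
  res.send('Cookie has been saved successfully');
});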

Step 2 - Using the req.cookies method to check the saved cookies

If the server sends this cookie to the browser, this means we can iterate the incoming
requests through req.cookies and check the existence of a saved cookie.
// get the cookie incoming request
app.get('/getcookie', (req, res) => {
//show the saved cookies
console.log(req.cookies)
res.send(req.cookies);
});
Again run the server using node app.js to expose the above route (http://localhost:8000/getcookie)
and you can see the response in the browser.

HANDLING USER AUTHENTICATION WITH NODE JS

18.HOW TO AUTHENTICATE USER IN NODEJS(PART B)
WHAT IS AUTHENTICATION AND HOW IT WORKS?(PART A)
 In authentication, the user or computer has to prove its identity to the server or
client. Usually, authentication by a server entails the use of a user name and password.
Other ways to authenticate can be through cards, retina scans, voice recognition, and
fingerprints.
Authentication vs Authorization
Authentication
It can appear that authorization and authentication are the same things. But there is a
significant distinction between entering a building (authentication) and what you can do
inside (authorization).
The act of authenticating a user involves getting credentials and utilizing those credentials
to confirm the person's identity. The authorization process starts if the credentials are
legitimate.

Authentication Techniques
1. Using a password for authentication
2. Password-free identification
A link or OTP (one-time password) is sent to the user's registered mobile number or
phone number in place of a password in this procedure. This is also referred to as
OTP-based authentication.
3. 2FA/MFA
The highest level of authentication is known as 2FA/MFA or two-factor
authentication/multi-factor authentication. Authenticating the user requires additional PIN
or security questions.
4. Single sign-on (SSO)
Access to numerous applications can be made possible by using single sign-on or SSO.
Once logged in, the user can immediately sign into all other online apps using the same
centralized directory.
5. Social Authentication
Social authentication checks the user using the already-existing login credentials for the
relevant social network; it does not call for additional security.
Authorization
Allowing authenticated users access to resources involves authorizing them after
determining whether they have system access permissions. You can also limit access
rights by approving or rejecting particular licenses for authenticated users.
After the system confirms your identification, authorization takes place, giving you
complete access to all the system's resources, including data, files, databases, money,
places, and anything else.
Authorization Methods
1. Role-based access controls (RBAC):
This type of authorization grants users access to data in accordance with their positions
within the company. For instance, all employees inside a corporation might have access to
personal data, such as pay, vacation time, but not be able to edit it. However, HR may be
granted access to all employee HR data and be given the authority to add, remove, and
modify this information. Organizations may make sure each user is active while limiting
access to sensitive information by granting permissions based on each person's function.
2. Attribute-based access control (ABAC):
ABAC uses several distinct attributes to grant users authorization at a finer level than
RBAC. User attributes including the user's name, role, organization, ID, and security
clearance may be included. Environmental factors including the time of access, the
location of the data, and the level of organizational danger currently in effect may be
included. Additionally, it might contain resource information like the resource owner, file
name, and data sensitivity level. The aim of ABAC, which is a more intricate authorization
process than RBAC, is to further restrict access. For instance, to preserve strict security
boundaries, access can be restricted to specific geographic regions or times of the day
rather than allowing all HR managers in a business to update employees' HR data.
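As an illustration of role-based access control in an Express application, the following is a minimal sketch of a middleware that only lets the listed roles reach a route. The role names and the assumption that the logged-in user (with a role field) is kept in req.session are hypothetical and not part of the example that follows.
// Middleware factory: allow only the given roles to continue to the handler
function requireRole(...roles) {
  return (req, res, next) => {
    const user = req.session && req.session.user;
    if (user && roles.includes(user.role)) {
      return next();                     // authenticated and authorized
    }
    res.status(403).send('Forbidden');   // not allowed to access this resource
  };
}

// Example usage: only HR can modify employee HR data
app.put('/employees/:id', requireRole('hr'), (req, res) => {
  res.send('Employee record updated');
});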

var express = require('express');


var app = express();
var bodyParser = require('body-parser');
var multer = require('multer');
var upload = multer();
var session = require('express-session');
var cookieParser = require('cookie-parser');

app.set('view engine', 'pug');


app.set('views','./views');

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(upload.array());
app.use(cookieParser());
app.use(session({secret: "Your secret key"}));

var Users = [];

app.get('/signup', function(req, res){


res.render('signup');
});

app.post('/signup', function(req, res){
if(!req.body.id || !req.body.password){
res.status("400");
res.send("Invalid details!");
} else {
Users.filter(function(user){
if(user.id === req.body.id){
res.render('signup', {
message: "User Already Exists! Login or choose another user id"});
}
});
var newUser = {id: req.body.id, password: req.body.password};
Users.push(newUser);
req.session.user = newUser;
res.redirect('/protected_page');
}
});
app.listen(3000);
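The signup handler stores the new user in req.session and redirects to /protected_page, but that route is not shown above. A minimal sketch (registered before app.listen(3000) in the same file, and assuming the same express-session setup) of how the protected page and a logout route could look:
// Only reachable when a user object exists in the session
app.get('/protected_page', function(req, res){
  if (req.session.user) {
    res.send('Hello ' + req.session.user.id + ', you are logged in.');
  } else {
    res.status(401).send('Please sign up or log in first.');
  }
});

// Destroy the session to log the user out
app.get('/logout', function(req, res){
  req.session.destroy(function(){
    res.send('Logged out');
  });
});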
What is Pug used for?
Pug (formerly known as Jade) is a preprocessor which simplifies the task of writing HTML.
Using Pug is just as easy. Here are the three steps:
1. Install Pug into your project: npm install pug --save
2. Set up your view engine: app.set(‘view engine’, ‘pug’)
3. Create a .pug file
SIGNUP
html
head
title Signup
body
if(message)
h4 #{message}
form(action = "/signup" method = "POST")
input(name = "id" type = "text" required placeholder = "User ID")
input(name = "password" type = "password" required placeholder = "Password")
button(type = "Submit") Sign me up!
Check if this page loads by visiting localhost:3000/signup.
We have set the required attribute for both fields, so HTML5 enabled browsers will not let us
submit this form until we provide both id and password. If someone tries to register using a
curl request without a User ID or Password, an error will be displayed.
REFERENCE:
1. Learning Node.js - Marc Wandschneider
2. Node.js in Practice - Alex Young, Marc Harter
***********************UNIT III COMPLETED************************
POSSIBLE QUESTIONS
UNIT III
PART A

1. What is nosql?
2. What is the cap theorem?
3. Define base
4. Define MONGODB
5. What is a database?
6. What are collections?
7. Where is MONGODB used?
8. Define MONGODB shell
9. What is the MONGODB mongo shell?
10. Define body parser
11. What is authentication and how it works?
PART B

1. Explain history of NOSQL database


2. Explain features of NOSQL database
3. Explain different types of NOSQL database
4. Explain the advantages and disadvantages of NOSQL
5. Explain about MONGODB system
6. Explain feature of MONGODB
7. Explain the advantages and disadvantages of MONGODB
8. Explain about MONGODB shell
9. Explain body parser
10. How to connect MONGODB with NODEJS
11. How to perform crud operations in NODEJS
12. How to handle SQL databases in NODEJS
13. How to handle cookies in NODEJS
14. How to authenticate user in NODEJS

**************************

VASAVI VIDYA TRUST GROUP OF INSTITUTIONS, SALEM – 103
DEPARTMENT OF MCA
CLASS: I MCA UNIT: IV
SUBJECT: FULL STACK WEB DEVELOPMENT SUBJECT CODE: MC4201

ADVANCED CLIENT-SIDE PROGRAMMING


Client-Side Processing can be used for a variety of purposes.
Complex user interface interactions are done with Client-Side Processing including
 Dragging items around a webpage (e.g., Google Maps)
 Nested menus – while basic HTML form elements can provide a single level of pulldown menu,
anything more complex such as a nested menu requires client-side processing.
 Search result refinement where typing into a text field results in a list of possible completions. In
general if the webpage changes or reconfigures as the user interacts with it, that’s client-side
processing. This is sometimes done in conjunction with the server, but ultimately the server by itself
can’t change part of the webpage without client-side processing.
Single Page Applications require Client-Side Processing
 SPAs use JavaScript programs to dynamically change a webpage adding and removing elements, as
the user interacts with them.
 Client-Side Processing can also be used for basic calculations such as determining loan payments for
a given interest rate or converting from metric to imperial units (a short sketch follows).
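For instance, a monthly loan payment can be computed entirely in the browser with a few lines of JavaScript. A minimal sketch using the standard amortization formula (the figures are illustrative):
// Monthly payment for a loan of P at a yearly rate (e.g. 0.06) over a number of months
function monthlyPayment(P, annualRate, months) {
  const i = annualRate / 12;                      // monthly interest rate
  return P * i / (1 - Math.pow(1 + i, -months));  // amortization formula
}

console.log(monthlyPayment(10000, 0.06, 36).toFixed(2)); // roughly 304.22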
CLIENT-SIDE PROCESSING LANGUAGES
1.EXPLAIN ABOUT CLIENT SIDE PROGRAMMING LANGUAGE (PART B)
 The vast majority of Client-Side Processing is done with JavaScript
 Java can also run in a web browser, although most security experts recommend that you disable your
web browser’s ability to run Java, as this is a known vector for attacks on your computer.
 Adobe Flash was also used on the Client-Side, although it fell out of favor, and Adobe ended support
for it at the end of 2020.
 Several other languages can be compiled to JavaScript: developers write their code in them, compile it
to JavaScript, and their web server serves the converted JavaScript code to visitors.
 These languages include TypeScript, Google Dart, and Scala
REACT
REACT
 React is a relatively new framework that has been gaining a lot of popularity.
 React has a very declarative style.
 We can think of it as allowing us to define new types of HTML-like elements.
 ReactJS is an open-source, component-based front-end library responsible only for the view layer of the
application. It is maintained by Facebook.
2.WHAT IS REACT? (PART A)

 React is a declarative, efficient, and flexible JavaScript library for building user interfaces. It lets
you compose complex UIs from small and isolated pieces of code called “components
Features of React.js: There are unique features are available on React because that it is widely popular.
 Use JSX: It is faster than normal JavaScript as it performs optimizations while translating to regular
JavaScript. It makes it easier for us to create templates.
 Virtual DOM: Virtual DOM exists which is like a lightweight copy of the actual DOM. So for every
object that exists in the original DOM, there is an object for that in React Virtual DOM. It is exactly
the same, but it does not have the power to directly change the layout of the document. Manipulating
DOM is slow, but manipulating Virtual DOM is fast as nothing gets drawn on the screen.
 One-way Data Binding: This feature gives you better control over your application.

 Component: A Component is one of the core building blocks of React. In other words, we can say
that every application you will develop in React will be made up of pieces called components.
Components make the task of building UIs much easier. You can see a UI broken down into multiple
individual pieces called components and work on them independently and merge them all in a parent
component which will be your final UI.
 Performance: React.js uses JSX, which is faster compared to normal JavaScript and HTML, and the
Virtual DOM is a less time-consuming way to update web page content.
REACT DOM
3.WHAT IS DOM? (PART A)
What is DOM?
 DOM, abbreviated as Document Object Model, is a World Wide Web Consortium standard
logical representation of any webpage.
 In easier words, DOM is a tree-like structure that contains all the elements and it’s properties of a
website as its nodes. DOM provides a language-neutral interface that allows accessing and
updating of the content of any element of a webpage.
 Before React, Developers directly manipulated the DOM elements which resulted in frequent
DOM manipulation, and each time an update was made the browser had to recalculate and
repaint the whole view according to the particular CSS of the page, which made the total process
to consume a lot of time.
 React brought into the scene the virtual DOM. The Virtual DOM can be referred to as a copy of
the actual DOM representation that is used to hold the updates made by the user and finally
reflect it over to the original Browser DOM at once consuming much lesser time.
What is ReactDOM?
ReactDOM is a package that provides DOM specific methods that can be used at the top level of a web app
to enable an efficient way of managing DOM elements of the web page.
 In order to work with React in the browser, we need to include two libraries: React and
ReactDOM. React is the library for creating views.
 ReactDOM is the library used to actually render the UI in the browser.
 The browser DOM is made up of DOM elements. Similarly, the React DOM is made up of React
elements. DOM elements and React elements may look the same, but they’re actually quite
different. A React element is a description of what the actual DOM element should look like. In
other words, React elements are the instructions for how the browser DOM should be created.
We can create a React element to represent an h1 using
React.createElement("h1", { id: "recipe-0" }, "Baked Salmon");
 The first argument defines the type of element we want to create; in this case, we want to
create an h1 element. The second argument represents the element's
properties. This h1 currently has an id of recipe-0. The third argument represents the element’s
children: any nodes that are inserted between the opening and closing tag
 During rendering, React will convert this element to an actual DOM element:
<h1 id="recipe-0">Baked Salmon</h1>
ReactDOM contains the tools necessary to render React elements in the browser. ReactDOM is where we’ll
find the render method.
We can render a React element, including its children, to the DOM with ReactDOM.render. The element we
want to render is passed as the first argument, and the second argument is the target node, where
we should render the element:
const dish = React.createElement("h1", null, "Baked Salmon");
ReactDOM.render(dish, document.getElementById("root"));
Children
React renders child elements using props.children. Earlier, we rendered a text element as a child of the h1
element, and thus props.children was set to Baked Salmon. We could render other React elements as children,
too, creating a tree of elements. This is why we use the term element tree: the tree has one root element from
which many branches grow.
Let’s consider the unordered list that contains ingredients:
<ul>
<li>2 lb salmon</li>
<li>5 sprigs fresh rosemary</li>
<li>2 tablespoons olive oil</li>
<li>2 small lemons</li>
<li>1 teaspoon kosher salt</li>
<li>4 cloves of chopped garlic</li>
</ul>
In this sample, the unordered list is the root element, and it has six children. We can represent this ul and its
children with React.createElement:
React.createElement( "ul", null,
React.createElement("li", null, "2 lb salmon"),
React.createElement("li", null, "5 sprigs fresh rosemary"),
React.createElement("li", null, "2 tablespoons olive oil"),
React.createElement("li", null, "2 small lemons"),
React.createElement("li", null, "1 teaspoon kosher salt"),
React.createElement("li", null, "4 cloves of chopped garlic")
);
ReactDOM provides the developers with an API containing the following methods and a few more.
 render()
 findDOMNode()
 unmountComponentAtNode()
 hydrate()
 createPortal()
To use the ReactDOM in any React web app we must first import ReactDOM from the react-dom package
by using the following code snippet:
 import ReactDOM from 'react-dom'

4.EXPLAIN THE METHODS USED IN REACT API (PART B)


METHODS IN REACT API:
render() Function
This is one of the most important methods of ReactDOM. This function is used to render a single React
Component or several Components wrapped together in a Component or a div element. This function uses
the efficient methods of React for updating the DOM by being able to change only a subtree, efficient diff
methods, etc.
Syntax:
ReactDOM.render(element, container, callback)
Parameters: This method can take a maximum of three parameters as described below.
 element: This parameter expects a JSX expression or a React Element to be rendered.
 container: This parameter expects the container in which the element has to be rendered.
 callback: This is an optional parameter that expects a function that is to be executed once the render is
complete.
findDOMNode() Function
 This function is generally used to get the DOM node where a particular React component was
rendered. This method is rarely used, because the same thing can be achieved by adding a ref
attribute to the component itself.
Syntax:
 ReactDOM.findDOMNode(component)
 Parameters: This method takes a single parameter component that expects a React Component to be
searched in the Browser DOM.
unmountComponentAtNode() Function
This function is used to unmount or remove the React Component that was rendered to a particular
container. As an example, you may think of a notification component, after a brief amount of time it is
better to remove the component making the web page more efficient.
Syntax:
ReactDOM.unmountComponentAtNode(container)
Parameters: This method takes a single parameter container which expects the DOM container from
which the React component has to be removed.
Return Type: This function returns true on success otherwise false.

hydrate() Function
This method is equivalent to the render() method but is implemented while using server-side rendering.
Syntax:
ReactDOM.hydrate(element, container, callback)
Parameters: This method can take a maximum of three parameters as described below.
 element: This parameter expects a JSX expression or a React Component to be rendered.
 container: This parameter expects the container in which the element has to be rendered.
 callback: This is an optional parameter that expects a function that is to be executed once the render is
complete.
Return Type: This function attempts to attach event listeners to the existing markup and returns a
reference to the component or null if a stateless component was rendered.

createPortal() Function
Usually, when an element is returned from a component’s render method, it’s mounted on the DOM as a
child of the nearest parent node which in some cases may not be desired. Portals allow us to render a
component into a DOM node that resides outside the current DOM hierarchy of the parent component.
Syntax:
ReactDOM.createPortal(child, container)
Parameters: This method takes two parameters as described below.
 child: This parameter expects a JSX expression or a React Component to be rendered.
 container: This parameter expects the container in which the element has to be rendered.
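A minimal sketch of how createPortal might be used, assuming the page's HTML contains an extra container such as <div id="modal-root"> next to the usual root node (the Modal component name and the container id are illustrative):
import React from 'react';
import ReactDOM from 'react-dom';

// Renders its children into #modal-root instead of the parent component's DOM subtree
class Modal extends React.Component {
  render() {
    return ReactDOM.createPortal(
      <div className="modal">{this.props.children}</div>,
      document.getElementById('modal-root')
    );
  }
}

// Usage inside any component tree:
// <Modal><p>This paragraph is mounted under #modal-root.</p></Modal>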
JSX
5. DISCUSS JSX(PART B)
React JSX

JSX(JavaScript Extension), is a React extension which allows writing JavaScript code that looks like
HTML. In other words, JSX is an HTML-like syntax used by React that extends ECMAScript so
that HTML-like syntax can co-exist with JavaScript/React code. The syntax is used by preprocessors (i.e.,
transpilers like babel) to transform HTML-like syntax into standard JavaScript objects that a JavaScript
engine will parse.
JSX provides you to write HTML/XML-like structures (e.g., DOM-like tree structures) in the same file
where you write JavaScript code, then preprocessor will transform these expressions into actual JavaScript
code. Just like XML/HTML, JSX tags have a tag name, attributes, and children.
Example
Here, we will write JSX syntax in JSX file and see the corresponding JavaScript code which transforms by
preprocessor(babel).

JSX File
<div>Hello JavaTpoint</div>

Corresponding Output
React.createElement("div", null, "Hello JavaTpoint");
The above line creates a React element by passing three arguments: the first is the name of the element,
which is div; the second is the attributes passed to the div tag; and the last is the content you pass,
which is "Hello JavaTpoint".

Why use JSX?

o It is faster than regular JavaScript because it performs optimization while translating the code to
JavaScript.
o Instead of separating technologies by putting markup and logic in separate files, React uses
components that contain both. We will learn components in a further section.
o It is type-safe, and most of the errors can be found at compilation time.
o It makes easier to create templates.

Nested Elements in JSX

To use more than one element, you need to wrap it with one container element. Here, we use div as a
container element which has three nested elements inside it.

App.JSX
import React, { Component } from 'react';
class App extends Component{
  render(){
    return(
      <div>
        <h1>JavaTpoint</h1>
        <h2>Training Institutes</h2>
        <p>This website contains the best CS tutorials.</p>
      </div>
    );
  }
}
export default App;

JSX Attributes

JSX uses attributes with HTML elements just as regular HTML does. However, JSX uses the camelCase naming
convention for attributes rather than the standard HTML naming convention; for example, class in HTML
becomes className in JSX because class is a reserved keyword in JavaScript. We can also use our own custom
attributes in JSX. For custom attributes, we need to use the data- prefix. In the below example, we have used
a custom attribute data-demoAttribute as an attribute for the <p> tag.
Example
import React, { Component } from 'react';
class App extends Component{
  render(){
    return(
      <div>
        <h1>JavaTpoint</h1>
        <h2>Training Institutes</h2>
        <p data-demoAttribute = "demo">This website contains the best CS tutorials.</p>
      </div>
    );
  }
}
export default App;

In JSX, we can specify attribute values in two ways:

1. As String Literals: We can specify the values of attributes in double quotes:


var element = <h2 className = "firstAttribute">Hello JavaTpoint</h2>;

Example
import React, { Component } from 'react';
class App extends Component{
  render(){
    return(
      <div>
        <h1 className = "hello" >JavaTpoint</h1>
        <p data-demoAttribute = "demo">This website contains the best CS tutorials.</p>
      </div>
    );
  }
}
export default App;

2. As Expressions: We can specify the values of attributes as expressions using curly braces {}:
var element = <h2 className = {varName}>Hello JavaTpoint</h2>;

Example
import React, { Component } from 'react';
class App extends Component{
  render(){
    return(
      <div>
        <h1 className = "hello" >{25+20}</h1>
      </div>
    );
  }
}
export default App;

JSX Comments

JSX allows us to use comments that begin with /* and end with */, wrapped in curly braces {}, just like
JSX expressions. The below example shows how to use comments in JSX.

Example
import React, { Component } from 'react';
class App extends Component{
  render(){
    return(
      <div>
        <h1 className = "hello" >Hello JavaTpoint</h1>
        {/* This is a comment in JSX */}
      </div>
    );
  }
}
export default App;

JSX Styling

React always recommends to use inline styles. To set inline styles, you need to use camelCase syntax.
React automatically allows appending px after the number value on specific elements. The following
example shows how to use styling in the element.

Example
import React, { Component } from 'react';
class App extends Component{
  render(){
    var myStyle = {
      fontSize: 80,
      fontFamily: 'Courier',
      color: '#003300'
    }
    return (
      <div>
        <h1 style = {myStyle}>www.javatpoint.com</h1>
      </div>
    );
  }
}
export default App;
Example
import React, { Component } from 'react';
class App extends Component{
  render(){
    var i = 5;
    return (
      <div>
        <h1>{i == 1 ? 'True!' : 'False!'}</h1>
      </div>
    );
  }
}
export default App;
COMPONENTS
6. EXPLAIN COMPONENTS IN REACT (PART B)
 No matter its size, its contents, or what technologies are used to create it, a user interface is made up
of parts. Buttons, Lists, Headings. All of these parts, when put together, make up a user interface.
 Consider a recipe application with three different recipes. The data is different in each box, but the
parts needed to create a recipe are the same.

 In React, we describe each of these parts as a component. Components allow us to reuse the same
structure, and then we can populate those structures with different sets of data.
 When considering a user interface you want to build with React, look for opportunities to break
down your elements into reusable pieces.

React Components

Earlier, developers wrote thousands of lines of code to develop a single-page application. These applications
followed the traditional DOM structure, and making changes in them was a very challenging task. If any mistake
was found, the developer had to manually search the entire application and update it accordingly.
The component-based approach was introduced to overcome this issue. In this approach, the entire
application is divided into small logical groups of code, which are known as components.

A Component is considered as the core building blocks of a React application. It makes the task of building
UIs much easier. Each component exists in the same space, but they work independently from one another
and merge all in a parent component, which will be the final UI of your application.

Every React component have their own structure, methods as well as APIs. They can be reusable as per your
need. For better understanding, consider the entire UI as a tree. Here, the root is the starting component, and
each of the other pieces becomes branches, which are further divided into sub-branches.

In ReactJS, we have mainly two types of components. They are


1. Functional Components
2. Class Components

Functional Components

In React, function components are a way to write components that only contain a render method and don't
have their own state. They are simply JavaScript functions that may or may not receive data as parameters.
We can create a function that takes props(properties) as input and returns what should be rendered. A valid
functional component can be shown in the below example.
function WelcomeMessage(props) {
  return <h1>Welcome to the {props.name}</h1>;
}

The functional component is also known as a stateless component because they do not hold or manage state.
It can be explained in the below example.
Example
import React, { Component } from 'react';
class App extends React.Component {
  render() {
    return (
      <div>
        <First/>
        <Second/>
      </div>
    );
  }
}
class First extends React.Component {
  render() {
    return (
      <div>
        <h1>JavaTpoint</h1>
      </div>
    );
  }
}
class Second extends React.Component {
  render() {
    return (
      <div>
        <h2>www.javatpoint.com</h2>
        <p>This website contains the great CS tutorial.</p>
      </div>
    );
  }
}
export default App;

Class Components

Class components are more complex than functional components. They require you to extend from
React.Component and create a render function which returns a React element. You can pass data from one class
component to other class components. You can create a class component by defining a class that extends
Component and has a render function. A valid class component is shown in the below example.
class MyComponent extends React.Component {
  render() {
    return (
      <div>This is main component.</div>
    );
  }
}

The class component is also known as a stateful component because they can hold or manage local state. It
can be explained in the below example.
Example
In this example, we are creating the list of unordered elements, where we will dynamically insert
StudentName for every object from the data array. Here, we are using ES6 arrow syntax (=>) which looks
much cleaner than the old JavaScript syntax. It helps us to create our elements with fewer lines of code. It is
especially useful when we need to create a list with a lot of items.
import React, { Component } from 'react';
class App extends React.Component {
  constructor() {
    super();
    this.state = {
      data:
      [
        {
          "name":"Abhishek"
        },
        {
          "name":"Saharsh"
        },
        {
          "name":"Ajay"
        }
      ]
    }
  }
  render() {
    return (
      <div>
        <StudentName/>
        <ul>
          {this.state.data.map((item) => <List data = {item} />)}
        </ul>
      </div>
    );
  }
}
class StudentName extends React.Component {
  render() {
    return (
      <div>
        <h1>Student Name Detail</h1>
      </div>
    );
  }
}
class List extends React.Component {
  render() {
    return (
      <ul>
        <li>{this.props.data.name}</li>
      </ul>
    );
  }
}
export default App;

PROPERTIES

7. DISCUSS PROPS IN REACT (PART B)


 Props are arguments passed into React components.
 Props are passed to components via HTML attributes.
 props stands for properties.

React Props

React Props are like function arguments in JavaScript and attributes in HTML.
To send props into a component, use the same syntax as HTML attributes:

import React from 'react';


import ReactDOM from 'react-dom/client';

function Car(props) {
return <h2>I am a { props.brand }!</h2>;
}

const myElement = <Car brand="Ford" />;

const root = ReactDOM.createRoot(document.getElementById('root'));


root.render(myElement);

Pass Data

Props are also how you pass data from one component to another, as parameters.
Send the "brand" property from the Garage component to the Car component:
import React from 'react';
import ReactDOM from 'react-dom/client';

function Car(props) {
return <h2>I am a { props.brand }!</h2>;
}

function Garage() {
return (
<>
<h1>Who lives in my garage?</h1>
<Car brand="Ford" />
</>
);
}

const root = ReactDOM.createRoot(document.getElementById('root'));


root.render(<Garage />);

FETCH API

8.DISCUSS FETCH API WITH EXAMPLE (PART B)

API is an abbreviation for Application Programming Interface which is a collection of communication


protocols and subroutines used by various programs to communicate between them. A programmer can
make use of various API tools to make its program easier and simpler. Also, an API facilitates the
programmers with an efficient way to develop their software programs.

Below is the stepwise implementation of how we fetch the data from an API in react. We will use the
fetch function to get the data from the API.
Step by step implementation to fetch data from an api in react.
 Step 1: Create a React project
npx create-react-app my-app
 Step 2: Change your directory and enter your main project folder:
cd my-app
 Step 3: API endpoint
https://jsonplaceholder.typicode.com/users

Step 4: Write code in App.js to fetch data from API and we are using fetch function

App.js
import React from "react";
import './App.css';
class App extends React.Component {

// Constructor
constructor(props) {
super(props);

this.state = {
items: [],
DataisLoaded: false
};
}

// ComponentDidMount is used to
// execute the code
componentDidMount() {
fetch(
"https://jsonplaceholder.typicode.com/users")
.then((res) => res.json())
.then((json) => {
this.setState({
items: json,
DataisLoaded: true
});
})
}
render() {
const { DataisLoaded, items } = this.state;
if (!DataisLoaded) return <div>
<h1> Please wait some time... </h1> </div>;

return (
<div className = "App">
<h1> Fetch data from an api in react </h1> {
items.map((item) => (
<ol key = { item.id } >
User_Name: { item.username },
Full_Name: { item.name },
User_Email: { item.email }
</ol>
))
}
</div>
);
}
}

export default App;

APP.CSS
.App {
text-align: center;
color: Green;
}
.App-header {
background-color: #282c34;
min-height: 100vh;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
font-size: calc(10px + 2vmin);
color: white;
}
.App-link {
color: #61dafb;
}

@keyframes App-logo-spin {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}
Step to run the application: Open the terminal and type the following command.
npm start
Output: Open the browser and our project is shown in the URL http://localhost:3000/

STATES AND LIFECYCLE

9.EXPLAIN THE STATES AND LIFECYCLE OF REACT (PART B)

React Components can be broadly classified into Functional and Class Components. It is also seen that
Functional Components are faster and much simpler than Class Components. The primary difference
between the two is the availability of the State.
React State

 The state is an updatable structure that is used to contain data or information about the component.
The state in a component can change over time. The change in state over time can happen as a
response to user action or system event. A component with the state is known as stateful
components. It is the heart of the react component which determines the behavior of the component
and how it will render. They are also responsible for making a component dynamic and interactive.
 A state must be kept as simple as possible. It can be set by using the setState() method and calling
setState() method triggers UI updates. A state represents the component's local state or information.
It can only be accessed or modified inside the component or by the component directly. To set an
initial state before any interaction occurs, we need to use the getInitialState() method.

For example, if we have five components that need data or information from the state, then we need to
create one container component that will keep the state for all of them.

Defining State

To define a state, you have to first declare a default set of values for defining the component's initial state.
To do this, add a class constructor which assigns an initial state using this.state. The 'this.state' property can
be rendered inside render() method.

Example

The below sample code shows how we can create a stateful component using ES6 syntax.
import React, { Component } from 'react';
class App extends React.Component {
  constructor() {
    super();
    this.state = { displayBio: true };
  }
  render() {
    const bio = this.state.displayBio ? (
      <div>
        <p><h3>Javatpoint is one of the best Java training institute in Noida, Delhi, Gurugram, Ghaziabad and Faridabad. We have a team of experienced Java developers and trainers from multinational companies to teach our campus students.</h3></p>
      </div>
    ) : null;
    return (
      <div>
        <h1> Welcome to JavaTpoint!! </h1>
        { bio }
      </div>
    );
  }
}
export default App;

Changing the State

We can change the component state by using the setState() method and passing a new state object as the
argument. Now, create a new method toggleDisplayBio() in the above example and bind this keyword to the
toggleDisplayBio() method otherwise we can't access this inside toggleDisplayBio() method.

this.toggleDisplayBio = this.toggleDisplayBio.bind(this);

Example

In this example, we are going to add a button to the render() method. Clicking on this button triggers the
toggleDisplayBio() method which displays the desired output.
import React, { Component } from 'react';
class App extends React.Component {
  constructor() {
    super();
    this.state = { displayBio: false };
    console.log('Component this', this);
    this.toggleDisplayBio = this.toggleDisplayBio.bind(this);
  }
  toggleDisplayBio(){
    this.setState({displayBio: !this.state.displayBio});
  }
  render() {
    return (
      <div>
        <h1>Welcome to JavaTpoint!!</h1>
        {
          this.state.displayBio ? (
            <div>
              <p><h4>Javatpoint is one of the best Java training institute in Noida, Delhi, Gurugram, Ghaziabad and Faridabad. We have a team of experienced Java developers and trainers from multinational companies to teach our campus students.</h4></p>
              <button onClick={this.toggleDisplayBio}> Show Less </button>
            </div>
          ):(
            <div>
              <button onClick={this.toggleDisplayBio}> Read More </button>
            </div>
          )
        }
      </div>
    )
  }
}
export default App;
LIFE CYCLE
In ReactJS, every component creation process involves various lifecycle methods. These lifecycle methods
are termed as component's lifecycle. These lifecycle methods are not very complicated and called at various
points during a component's life. The lifecycle of the component is divided into four phases. They are:
1. Initial Phase
2. Mounting Phase
3. Updating Phase
4. Unmounting Phase

Each phase contains some lifecycle methods that are specific to the particular phase. Let us discuss each of
these phases one by one.

1. Initial Phase

It is the birth phase of the lifecycle of a ReactJS component. Here, the component starts its journey on a
way to the DOM. In this phase, a component contains the default Props and initial State. These default
properties are done in the constructor of a component. The initial phase only occurs once and consists of the
following methods.
o getDefaultProps()
It is used to specify the default value of this.props. It is invoked before the creation of the component
or any props from the parent is passed into it.
o getInitialState()
It is used to specify the default value of this.state. It is invoked before the creation of the component.

2. Mounting Phase

In this phase, the instance of a component is created and inserted into the DOM. It consists of the following
methods.
o componentWillMount()
This is invoked immediately before a component gets rendered into the DOM. In the case, when you
call setState() inside this method, the component will not re-render.
o componentDidMount()
This is invoked immediately after a component gets rendered and placed on the DOM. Now, you can
do any DOM querying operations.
o render()
This method is defined in each and every component. It is responsible for returning a single
root HTML node element. If you don't want to render anything, you can return a null or false value.

3. Updating Phase

It is the next phase of the lifecycle of a react component. Here, we get new Props and change State. This
phase also allows to handle user interaction and provide communication with the components hierarchy. The
main aim of this phase is to ensure that the component is displaying the latest version of itself. Unlike the
Birth or Death phase, this phase repeats again and again. This phase consists of the following methods.
o componentWillRecieveProps()
It is invoked when a component receives new props. If you want to update the state in response to
prop changes, you should compare this.props and nextProps to perform state transition by
using this.setState() method.
o shouldComponentUpdate()
It is invoked when a component decides any changes/updation to the DOM. It allows you to control
the component's behavior of updating itself. If this method returns true, the component will update.
Otherwise, the component will skip the updating.
o componentWillUpdate()
It is invoked just before the component updating occurs. Here, you can't change the component state
by invoking this.setState() method. It will not be called, if shouldComponentUpdate() returns false.
o render()
It is invoked to examine this.props and this.state and return one of the following types: React
elements, Arrays and fragments, Booleans or null, String and Number. If shouldComponentUpdate()
returns false, the code inside render() will be invoked again to ensure that the component displays
itself properly.
o componentDidUpdate()
It is invoked immediately after the component updating occurs. In this method, you can put any code
inside this which you want to execute once the updating occurs. This method is not invoked for the
initial render.

4. Unmounting Phase

It is the final phase of the react component lifecycle. It is called when a component instance
is destroyed and unmounted from the DOM. This phase contains only one method and is given below.
o componentWillUnmount()
This method is invoked immediately before a component is destroyed and unmounted permanently.
It performs any necessary cleanup related task such as invalidating timers, event listener, canceling
network requests, or cleaning up DOM elements. If a component instance is unmounted, you cannot
mount it again.
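To tie the mounting and unmounting phases together, here is a minimal sketch of a clock component that starts a timer in componentDidMount and clears it in componentWillUnmount (the component name and the one-second interval are illustrative):
import React, { Component } from 'react';

class Clock extends Component {
  constructor(props) {
    super(props);
    this.state = { time: new Date().toLocaleTimeString() };
  }
  // Mounting phase: start the timer once the component is in the DOM
  componentDidMount() {
    this.timerID = setInterval(
      () => this.setState({ time: new Date().toLocaleTimeString() }),
      1000
    );
  }
  // Unmounting phase: clean up the timer before the component is destroyed
  componentWillUnmount() {
    clearInterval(this.timerID);
  }
  render() {
    return <h2>Current time: {this.state.time}</h2>;
  }
}

export default Clock;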
JS LOCAL STORAGE

10. EXPLAIN JS LOCAL STORAGE (PART B)


JAVASCRIPT LOCALSTORAGE

 LocalStorage is a data storage type of web storage. This allows the JavaScript sites and apps to store
and access the data without any expiration date. This means that the data will always be persisted and
will not expire. So, data stored in the browser will be available even after closing the browser window.
 In short, all we can say is that the localStorage holds the data with no expiry date, which is available to
the user even after closing the browser window. It is useful in various ways, such as remembering the
shopping cart data or user login on any website.
 In the past, cookies were the only option to remember this type of temporary and local
information, but now we have localStorage as well. Local storage comes with a higher storage limit
than cookies (about 5MB vs 4KB). It also does not get sent with every HTTP request. So, it is a better choice
now for client-side storage. Some essential points of localStorage need to be noted:
 localStorage is not secure to store sensitive data and can be accessed using any code. So, it is quite
insecure.
 It is an advantage of localStorage over cookies that it can store more data than cookies. You can
store 5MB data on the browser using localStorage.
 localStorage stores the information only in the browser, not in a database. Therefore, localStorage is
not a substitute for a server-based database.
 localStorage is synchronous, which means that each operation executes one after another.

LOCALSTORAGE METHODS

The localStorage offers some methods to use it. We will discuss all these localStorage methods with
examples. Before that, a basic overview of these methods are as follows:
Methods Description

setItem() This method is used to add the data through key and value to
localStorage.

getItem() It is used to fetch or retrieve the value from the storage using the key.

removeItem() It removes an item from storage by using the key.

clear() It is used to clear all the storage.

Each of these methods is used with localStorage keyword connecting with dot(.) character. For
Example: localStorage.setItem().
1. Remember that localStorage property is read-only.
2. Following some codes given, which are used to add, retrieve, remove, and clear the data in localStorage.

Add data

To add the data in localStorage, both key and value are required to pass in setItem() function.
localStorage.setItem("city", "Noida");

Retrieve data

It requires only the key to retrieve the data from storage and a JavaScript variable to store the returned data.
const res = localStorage.getItem("city");

Remove data

It also requires only the key to remove the value attached with it.
localStorage.removeItem("city");

Clear localStorage

It is a simple clear() function of localStorage, which is used to remove all the localStorage data:
localStorage.clear()
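Since localStorage stores only strings, objects such as a shopping cart are usually serialized with JSON before saving and parsed after reading. A minimal sketch (the key name and object are illustrative):
// Save an object: serialize it to a JSON string first
const cart = { items: ["salmon", "lemons"], total: 2 };
localStorage.setItem("cart", JSON.stringify(cart));

// Read it back: parse the stored string into an object again
const savedCart = JSON.parse(localStorage.getItem("cart"));
console.log(savedCart.items.length); // 2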

Limitation of localStorage

As the localStorage allows to store temporary, local data, which remains even after closing the browser
window, but it also has few limitations. Below are some limitations of localStorage are given:
o Do not store sensitive information like username and password in localStorage.
o localStorage has no data protection and can be accessed using any code. So, it is quite insecure.
o You can store only maximum 5MB data on the browser using localStorage.
o localStorage stores the information only in the browser, not in a server-based database.
o localStorage is synchronous, which means that each operation executes one after another.

Advantage of localStorage

localStorage comes with several advantages. The first and most essential advantage is that it can store
temporary but useful data in the browser, which remains even after the browser window is closed.
Below is a list of some advantages:
o The data collected by localStorage is stored in the browser. You can store 5 MB data in the browser.
o There is no expiry date of data stored by localStorage.
o You can remove all the localStorage item by a single line code, i.e., clear().
o The localStorage data persists even after closing the browser window, like items in a shopping cart.
o It also has advantages over cookies because it can store more data than cookies.
REACT EVENTS

12. EXPLAIN REACT EVENTS (PART B)

 An event is an action that could be triggered as a result of the user action or system generated event. For
example, a mouse click, loading of a web page, pressing a key, window resizes, and other interactions
are called events.
 React has its own event handling system which is very similar to handling events on DOM elements.
The react event handling system is known as Synthetic Events. The synthetic event is a cross-browser
wrapper of the browser's native event.

1. Event declaration in plain HTML:
<button onclick="showMessage()">
  Hello JavaTpoint
</button>

2. Event declaration in React:
<button onClick={showMessage}>
  Hello JavaTpoint
</button>

3. In react, we cannot return false to prevent the default behavior. We must call preventDefault event
explicitly to prevent the default behavior. For example:

In plain HTML, to prevent the default link behavior of opening a new page, we can write:
<a href="#" onclick="console.log('You had clicked a Link.'); return false">
  Click_Me
</a>

In React, we can write it as:


function ActionLink() {
  function handleClick(e) {
    e.preventDefault();
    console.log('You had clicked a Link.');
  }
  return (
    <a href="#" onClick={handleClick}>
      Click_Me
    </a>
  );
}
In the above example, e is a Synthetic Event which defines according to the W3C spec.
Example
In the below example, we have used only one component and adding an onChange event. This event will
trigger the changeText function, which returns the company name.
import React, { Component } from 'react';
class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      companyName: ''
    };
  }
  changeText(event) {
    this.setState({
      companyName: event.target.value
    });
  }
  render() {
    return (
      <div>
        <h2>Simple Event Example</h2>
        <label htmlFor="name">Enter company name: </label>
        <input type="text" id="companyName" onChange={this.changeText.bind(this)}/>
        <h4>You entered: { this.state.companyName }</h4>
      </div>
    );
  }
}
export default App;

LIFTING STATE UP
13.DISCUSS LIFTING STATE UP WITH EXAMPLE (PART B)

 As we know, every component in React can have its own state. Because of this, data can sometimes
become redundant and inconsistent. So, by lifting the state up, we make the state of the parent component
a single source of truth and pass the parent's data to its children.
 When to lift the state up: when the data in "parent and child components" or in "cousin
components" is not in sync.
Example 1: If we have 2 components in our App. A -> B where, A is parent of B. keeping the same data
in both Component A and B might cause inconsistency of data.
Example 2: If we have 3 components in our App.
A
/\
B C
Where A is the parent of B and C. In this case, If there is some Data only in component B but, component
C also wants that data. We know Component C cannot access the data because a component can talk only
to its parent or child (Not cousins).
Problem: Let’s Implement this with a simple but general example. We are considering the second
example.

Approach: To solve this, we will Lift the state of component B and component C to component A. Make
A.js as our Main Parent by changing the path of App in the index.js file
Before:
import App from './App';
After:
import App from './A';
Filename- A.js:
import React,{ Component } from 'react';
import B from './B'
import C from './C'

class A extends Component {

constructor(props) {
super(props);
this.handleTextChange = this.handleTextChange.bind(this);
this.state = {text: ''};
}

handleTextChange(newText) {
this.setState({text: newText});
}

render() {
return (
<React.Fragment>
<B text={this.state.text}
handleTextChange={this.handleTextChange}/>
<C text={this.state.text} />
</React.Fragment>
);
}
}

export default A;
Filename- B.js:
import React,{ Component } from 'react';

class B extends Component {

constructor(props) {
super(props);
this.handleTextChange = this.handleTextChange.bind(this);
}

handleTextChange(e){
this.props.handleTextChange(e.target.value);
}

render() {
return (
<input value={this.props.text}
onChange={this.handleTextChange} />
);
}
}

export default B;
Filename- C.js:
import React,{ Component } from 'react';

class C extends Component {

render() {
return (
<h3>Output: {this.props.text}</h3>

);
}
}

export default C;

COMPOSITION AND INHERITANCE


13. EXPLAIN COMPOSITION AND INHERITANCE IN REACT (PART B)
 Composition and inheritance are the approaches to use multiple components together in React.js . This
helps in code reuse. React recommend using composition instead of inheritance as much as possible and
inheritance should be used in very specific cases only.
INHERITANCE:
 Inheritance is an Object-Oriented Programming concept in JavaScript that allows us to inherit the
features of a parent from the child.
 For example, suppose we can design a parent that looks like a vehicle and has properties like wheels,
engine, gearbox and lights.
 Then, as a vehicle child, we may make a car that inherits the vehicle's(Parent) properties.
 It means that the car(Child) is a vehicle that will have wheels, lights, an engine, and a gearbox from the
start.
We can also extend the functionality of our object(here, car) by adding more features.

class UserNameForm extends React.Component {


render() {
return (
<div>
<input type="text" />
</div>
);
}
}
ReactDOM.render(
< UserNameForm />,
document.getElementById('root'));
This is a simple component that just takes a name as input. We will have two more components to create and
update the username field.
Using inheritance, we can do it like this:
class UserNameForm extends React.Component {
render() {
return (
<div>
<input type="text" />
</div>
);
}

}
class CreateUserName extends UserNameForm {
render() {
const parent = super.render();
return (
<div>
{parent}
<button>Create</button>
</div>
)
}
}
class UpdateUserName extends UserNameForm {
render() {
const parent = super.render();
return (
<div>
{parent}
<button>Update</button>
</div>
)
}
}
ReactDOM.render(
(<div>
< CreateUserName />
< UpdateUserName />
</div>), document.getElementById('root')
);
We extended the UserNameForm component and reused its render output in the child components using
super.render();

Composition

In Object-Oriented Programming, composition is a well-known concept. It describes a class that can
refer to one or more objects of another class as instances rather than inheriting properties from a base
class.
 For example, the composition can be used for building a car's engine.
What is the definition of composition in general?
It's all about the ingredients and how they're put together to become something more significant. A dish is
made up of food ingredients while cooking. The fruits are used to make the ideal smoothie. In a dance video,
140
it's the choreography of dancers. In programming, the internals of a function must be organized so that the
intended output is obtained.

class UserNameForm extends React.Component {


render() {
return (
<div>
<input type="text" />
</div>
);
}
}
class CreateUserName extends React.Component {
render() {
return (
<div>
< UserNameForm />
<button>Create</button>
</div>
)
}
}
class UpdateUserName extends React.Component {
render() {
return (
<div>
< UserNameForm />
<button>Update</button>
</div>
)
}
}
ReactDOM.render(
(<div>
<CreateUserName />
<UpdateUserName />
</div>), document.getElementById('root') );
Using composition is simpler than inheritance and makes complexity easier to manage.
Composition VS Inheritance

 The techniques for using several components together are done by composition and inheritance in
React. This facilitates code reuse. React recommends using composition instead of inheritance as far
as feasible, and inheritance should only be utilized in particular instances.

 The ‘is-a relationship’ mechanism was used in inheritance. Derived components had to inherit the
properties of the base component, which made changing the behaviour of any component quite
difficult. The composition aspires to be better. Why not inherit only behaviour and add it to the
desired component instead of inheriting properties from other components?

 Only the behaviour is passed down from composition without the inheritance of properties. Why is
this a plus point? It was challenging to add new behaviour via inheritance as the derived component
inherited all of the parent class's properties, making it impossible to add new behaviour. More use
cases had to be included. However, we only inherit behaviour in composition, and adding new
behaviour is relatively easy.

 React proposes utilizing composition instead of inheritance to reuse code between components
because React has an advanced composition model. Between Composition and Inheritance in React,
we can distinguish the following points:
 Inheritance is easy to overuse.
 Composing behaviour is simpler and easier than inheriting it.
 Composition is preferred over deep inheritance hierarchies in React.
 Inheritance takes on the properties of other components, whereas composition reuses only their behaviour.
 Adding new behaviour via inheritance is difficult because the derived component inherits all of the parent class's properties.

Why Composition over Inheritance?

There are a few primary reasons why we should use composition over inheritance when developing React
apps.

 The first is the option of avoiding excessively nested components.


 We can place code in different places, thanks to {props.children}. We don't need to go too deeply into the components and create a lot of ‘ifs’ (see the sketch below).
 The next point is crucial: with composition we follow React's ‘everything is a component’ concept.
 Because components do not reach into each other's internals, composition is safer to use in React.
 A little inheritance can still be useful in places, such as when building composed higher-order components (HOC).
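To make the {props.children} point concrete, here is a minimal sketch of composition in the same style as the examples above. The Panel component and its props are our own illustrative names, not part of the earlier example:

function Panel(props) {
  // Whatever is written between <Panel> ... </Panel> arrives here as props.children
  return (
    <div className="panel">
      <h3>{props.title}</h3>
      {props.children}
    </div>
  );
}

function CreateUserPanel() {
  // Composition: Panel is used as a building block, not extended
  return (
    <Panel title="Create user">
      <input type="text" />
      <button>Create</button>
    </Panel>
  );
}

ReactDOM.render(<CreateUserPanel />, document.getElementById('root'));

The same Panel can wrap an update form (or anything else) without any class hierarchy, which is exactly the reuse that composition gives us.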

REFERENCE:
1. www.geeksforgeeks.org
2. https://www.tutorialspoint.com/composition-vs-inheritance-in-react-js
3. https://www.geeksforgeeks.org/lifting-state-up-in-reactjs/
4. Learning React: Modern Patterns for Developing React Apps, 2nd Edition – Alex Banks & Eve Porcello

*********************************UNIT IV COMPLETED*****************************

POSSIBLE QUESTIONS
PART A
1. What is React?
2. What is DOM?
PART B

1.Explain about client side programming language


2. Explain the methods used in react API
3. Discuss JSX

4. Explain components in react


5. Discuss props in react
6. Discuss fetch API with example
7. Explain the states and lifecycle of react
8. Explain JS local storage
9. Explain react events
10. Discuss lifting state up with example
11. Explain composition and inheritance in react

**********************************

VASAVI VIDYA TRUST GROUP OF INSTITUTIONS, SALEM – 103
DEPARTMENT OF MCA
CLASS: I MCA UNIT: V
SUBJECT: FULL STACK WEB DEVELOPMENT SUBJECT CODE: MC4201

INTRODUCTION
Cloud Computing provides us means of accessing the applications as utilities over the Internet. It
allows us to create, configure, and customize the applications online.
What is Cloud?
The term Cloud refers to a network or the Internet. In other words, we can say that the Cloud is something which is present at a remote location. The Cloud can provide services over public and private networks, i.e., WAN, LAN or VPN.
Applications such as e-mail, web conferencing, customer relationship management (CRM) execute on
cloud.
What is Cloud Computing?
Cloud Computing refers to manipulating, configuring, and accessing the hardware and software
resources remotely. It offers online data storage, infrastructure, and application.

Cloud computing offers platform independency, as the software is not required to be installed locally
on the PC. Hence, the Cloud Computing is making our business applications mobile and collaborative.
CLOUD PROVIDERS OVERVIEW
1.EXPLAIN ABOUT CLOUD OVERVIEW (PART B)
DEFINE CLOUD (PART A)
Cloud computing is a computing paradigm, where a large pool of systems are connected in
private or public networks, to provide dynamically scalable infrastructure for application, data
and file storage. With the advent of this technology, the cost of computation, application hosting,
content storage and delivery is reduced significantly.

Cloud computing is a practical approach to experience direct cost benefits and it has the potential to
transform a data center from a capital-intensive set up to a variable priced environment.

The idea of cloud computing is based on a very fundamental principle of reusability of IT capabilities. The difference that cloud computing brings compared to traditional concepts of "grid computing", "distributed computing", "utility computing", or "autonomic computing" is to broaden horizons across organizational boundaries.

Basic Concepts
There are certain services and models working behind the scene making the cloud computing feasible and
accessible to end users. Following are the working models for cloud computing:
 Deployment Models
 Service Models
Deployment Models
Deployment models define the type of access to the cloud, i.e., how the cloud is located. A cloud can have any of four types of access: Public, Private, Hybrid, and Community.

Public Cloud
The public cloud allows systems and services
to be easily accessible to the general public.
A public cloud may be less secure because of its openness.
One of the advantages of a public cloud is that it may be larger than an enterprise's cloud, thus providing the ability to scale seamlessly, on demand.
Private Cloud
The private cloud allows systems and services to be accessible within an organization. It is more secured
because of its private nature.
There are two variations to a private cloud:
- On-premise Private Cloud: On-premise private clouds, also known as internal clouds, are hosted within one's own data center. This model provides a more standardized process and protection, but is limited in aspects of size and scalability. IT departments would also need to incur the capital and operational costs for the physical resources. This is best suited for applications which require complete control and configurability of the infrastructure and security.
- Externally hosted Private Cloud: This type of private cloud is hosted externally with a cloud provider, where the provider facilitates an exclusive cloud environment with a full guarantee of privacy. This is best suited for enterprises that don't prefer a public cloud due to sharing of physical resources.
Community Cloud
The community cloud allows systems and services to be accessible by a group of organizations.
Hybrid Cloud
The hybrid cloud is a mixture of public and private cloud, in which the critical activities are performed
using private cloud while the non-critical activities are performed using public cloud.
Service Models
Cloud computing is based on service models. These are categorized into three basic service models which
are -
1. Infrastructure-as–a-Service (IaaS)
2. Platform-as-a-Service (PaaS)
3. Software-as-a-Service (SaaS)
4. Anything-as-a-Service (XaaS) is yet another service model, which includes Network-as-a-Service,

Business-as-a-Service, Identity-as-a-Service, Database-as-a-Service or Strategy-as-a-Service.
Cloud Computing Models
Cloud Providers offer services that can be grouped into three categories.
1. Software as a Service (SaaS): In this model, a complete application is offered to the customer as a service on demand. A single instance of the service runs on the cloud and multiple end users are serviced. On the customers' side, there is no need for upfront investment in servers or software licenses, while for the provider, the costs are lowered, since only a single application needs to be hosted and maintained. Today SaaS is offered by companies such as Google, Salesforce, Microsoft, Zoho, etc.
2. Platform as a Service (PaaS): Here, a layer of software, or a development environment, is encapsulated and offered as a service, upon which other higher levels of service can be built. The customer has the freedom to build his own applications, which run on the provider's infrastructure. To meet manageability and scalability requirements of the applications, PaaS providers offer a predefined combination of OS and application servers, such as the LAMP platform (Linux, Apache, MySQL and PHP), restricted J2EE, Ruby, etc. Google's App Engine, Force.com, etc. are some of the popular PaaS examples.
3. Infrastructure as a Service (IaaS): IaaS provides basic storage and computing capabilities as standardized services over the network. Servers, storage systems, networking equipment, data centre space, etc. are pooled and made available to handle workloads. The customer would typically deploy his own software on the infrastructure. Some common examples are Amazon, GoGrid, 3Tera, etc.

Cloud Computing Benefits


Enterprises would need to align their applications, so as to exploit the architecture models that Cloud
Computing offers. Some of the typical benefits are listed below:
1. Reduced Cost
There are a number of reasons to attribute Cloud technology with lower costs. The billing model is pay per usage, and the infrastructure is not purchased, thus lowering maintenance costs. Initial and recurring expenses are much lower than traditional computing.
2. Increased Storage
With the massive Infrastructure that is offered by Cloud providers today, storage &
maintenance of large volumes of data is a reality. Sudden workload spikes are also managed
effectively & efficiently, since the cloud can scale dynamically.
3. Flexibility
This is an extremely important characteristic. With enterprises having to adapt ever more rapidly to changing business conditions, speed of delivery is critical. Cloud computing stresses getting applications to market very quickly, by using the most appropriate building blocks necessary for deployment.
Cloud Computing Challenges
Despite its growing influence, concerns regarding cloud computing still remain. In our opinion, the
benefits outweigh the drawbacks and the model is worth exploring. Some common challenges are:
1. Data Protection
Data Security is a crucial element that warrants scrutiny. Enterprises are reluctant to buy an
assurance of business data security from vendors. They fear losing data to competition and the
data confidentiality of consumers. In many instances, the actual storage location is not
disclosed, adding onto the security concerns of enterprises. In the existing models, firewalls
across data centers (owned by enterprises) protect this sensitive information. In the cloud
model, Service providers are responsible for maintaining data security and enterprises would
have to rely on them.
2. Data Recovery and Availability
All business applications have Service level agreements that are stringently followed.
Operational teams play a key role in management of service level agreements and runtime
governance of applications. In production environments, operational teams support
 Appropriate clustering and failover
 Data replication
 System monitoring (transaction monitoring, log monitoring and others)
 Maintenance (runtime governance)
 Disaster recovery
 Capacity and performance management
3. Management Capabilities
Despite there being multiple cloud providers, the management of platform and infrastructure is still in its infancy. Features like 'Auto-scaling', for example, are a crucial requirement for many enterprises. There is huge potential to improve on the scalability and load balancing features provided today.
4. Regulatory and Compliance Restrictions
In some of the European countries, Government regulations do not allow customer's
personal information and other sensitive information to be physically located outside the
state or country. In order to meet such requirements, cloud providers need to setup a data
center or a storage site exclusively within the country to comply with regulations. Having such
an infrastructure may not always be feasible and is a big challenge for cloud providers.
VIRTUAL PRIVATE CLOUD

4. WHAT IS A VIRTUAL PRIVATE CLOUD (VPC)? (PART A)


What is a virtual private cloud (VPC)?
A virtual private cloud (VPC) is a secure, isolated private cloud hosted within a public cloud. VPC
customers can run code, store data, host websites, and do anything else they could do in an ordinary private
cloud, but the private cloud is hosted remotely by a public
cloud provider. (Not all private clouds are hosted in this
fashion.) VPCs combine the scalability and convenience of
public cloud computing with the data isolation of private
cloud computing.
Imagine a public cloud as a crowded restaurant, and a virtual
private cloud as a reserved table in that crowded restaurant.
Even though the restaurant is full of people, a table with a
"Reserved" sign on it can only be accessed by the party who made the reservation. Similarly, a public cloud
is crowded with various cloud customers accessing computing resources – but a VPC reserves some of those
resources for use by only one customer.
What is a public cloud? What is a private cloud?
 A public cloud is shared cloud infrastructure. Multiple customers of the cloud vendor access that
same infrastructure, although their data is not shared – just like every person in a restaurant orders
from the same kitchen, but they get different dishes. Public cloud service providers include AWS,
Google Cloud Platform, and Microsoft Azure, among others.
 The technical term for multiple separate customers accessing the same cloud infrastructure is
"multitenancy" (see What Is Multitenancy? to learn more).
 A private cloud, however, is single-tenant. A private cloud is a cloud service that is exclusively
offered to one organization. A virtual private cloud (VPC) is a private cloud within a public cloud;
no one else shares the VPC with the VPC customer.
How is a VPC isolated within a public cloud?
A VPC isolates computing resources from the other computing resources available in the public cloud. The
key technologies for isolating a VPC from the rest of the public cloud are:
 Subnets: A subnet is a range of IP addresses within a network that are reserved so that they're not
available to everyone within the network, essentially dividing part of the network for private use. In a
VPC these are private IP addresses that are not accessible via the public Internet, unlike typical IP
addresses, which are publicly visible.
 VLAN: A LAN is a local area network, or a group of computing devices that are all connected to
each other without the use of the Internet. A VLAN is a virtual LAN. Like a subnet, a VLAN is a
way of partitioning a network, but the partitioning takes place at a different layer within the OSI
model (layer 2 instead of layer 3).
 VPN: A virtual private network (VPN) uses encryption to create a private network over the top of a
public network. VPN traffic passes through publicly shared Internet infrastructure – routers,
switches, etc. – but the traffic is scrambled and not visible to anyone.
 A VPC will have a dedicated subnet and VLAN that are only accessible by the VPC customer. This
prevents anyone else within the public cloud from accessing computing resources within the VPC –
effectively placing the "Reserved" sign on the table. The VPC customer connects via VPN to their
VPC, so that data passing into and out of the VPC is not visible to other public cloud users.
Some VPC providers offer additional customization with:
 Network Address Translation (NAT): This feature matches private IP addresses to a public IP
address for connections with the public Internet. With NAT, a public-facing website or
application could run in a VPC.
 BGP route configuration: Some providers allow customers to customize BGP routing tables for
connecting their VPC with their other infrastructure. (Learn how BGP works.)
ADVANTAGES OF USING A VPC INSTEAD OF A PRIVATE CLOUD
 Scalability: Because a VPC is hosted by a public cloud provider, customers can add more computing
resources on demand.
 Easy hybrid cloud deployment: It's relatively simple to connect a VPC to a public cloud or to on-
premises infrastructure via the VPN. (Learn about hybrid clouds and their advantages.)
 Better performance: Cloud-hosted websites and applications typically perform better than those
hosted on local on-premises servers.
 Better security: The public cloud providers that offer VPCs often have more resources for updating
and maintaining the infrastructure, especially for small and mid-market businesses. For large
enterprises or any companies that face extremely tight data security regulations, this is less of an
advantage.
KEY COMPONENTS OF VPC SYSTEMS AND NETWORKS

1. Internet gateway

This VPC component is horizontally scaled and features high availability as well as robust redundancy.
VPCs use internet gateways to communicate with the internet at large. The two purposes of an internet
gateway are:
 Executing network address translation for instances where a public IPv4 address has been
assigned
 Setting a target in VPC route tables for internet-routable traffic
Internet gateways can support both forms of traffic—IPv4 and IPv6—without the risk of network traffic
being affected by bandwidth limitations or availability fluctuations. Normally, VPC vendors will provide
internet gateways to all clients without levying additional charges.

Egress-only internet gateways are a related component. Like an internet gateway, this component is also
horizontally scaled, features high availability, and is redundant in nature. Egress-only internet gateways
support IPv6-based outbound communication from VPC instances to the internet. At the same time, this
component prevents the internet from establishing an IPv6 link with VPC instances.

2. Carrier gateways

Carrier gateways serve a dual purpose in a VPC setup of:


 Supporting inbound traffic from a carrier network at a particular location
 Supporting outbound traffic to the carrier network as well as the internet
Carrier gateways provide support for IPv4 traffic and work with VPCs that include subnets in a wavelength
zone (a form of AWS infrastructure deployment). Carrier gateways connect wavelength zones with telecom
carriers and the devices on their networks.

3. Network address translation devices

Network address translation devices support connections of private subnet instances to the internet, on-
premise networks, and even other VPCs. These instances can establish communication with external
services. However, they cannot receive connection requests that are unsolicited in nature.

Network address translation devices replace the original IPv4 address belonging to the instances with the
devices’ own address. The addresses are then translated back to the source IPv4 addresses when transmitting
response traffic to the instances.

AWS offers managed network address translation devices known as NAT gateways. These managed devices
offer higher availability and better bandwidth when compared to NAT instances (NAT devices created by
clients on EC2 instances). They also need less effort dedicated toward administration.

4. Dynamic host configuration protocol (DHCP) options sets

DHCP sets a standard for transmitting configuration data to TCP/IP network hosts. The DHCP message
contains an ‘options’ field for configuration parameters, such as NetBIOS-node-type, domain name, and
domain name server. Creating a VPC on AWS automatically leads to the creation of DHCP options that are
then associated with the VPC. Users can configure their own DHCP options set.

5. Domain name system (DNS) support

DNS sets the standard for resolving the names used on the internet according to their associated IP
addresses. DNS hostnames, which contain a domain name and a host name, are used to assign a unique
name to a computer. DNS servers process DNS hostnames and resolve them to their preset IP addresses.

Private IPv4 addresses are used for communicating within the network associated with the instance, while
public IPv4 addresses are used to communicate over the internet. AWS clients can use their own DNS server
by creating a fresh DHCP options set for their VPC.

6. Prefix lists

Prefix lists contain one or more classless inter-domain routing (CIDR) blocks and are used to configure and
manage route tables and security groups with ease. Prefix lists can be created from frequently used IP
addresses. They can be referenced as a set within routes and rules of security groups instead of being
referenced individually.

Security group rules with varying CIDR blocks but having the same protocol and port can be consolidated
into one rule that uses a prefix list. In networks that have been scaled and are required to transmit traffic
from a different CIDR block, the relevant prefix list can be updated to update all security groups that use the
prefix-list.

Prefix lists come in two types:


 Managed by AWS: Sets containing IP address ranges that are used for AWS services and cannot be created, shared, modified, or deleted by users.
 Customer-managed prefix lists: Sets containing IP address ranges that are defined and managed by users and can be shared with other AWS accounts to enable referencing.
POPULAR VPC PROVIDERS

1. Amazon (AWS)
2. Google Cloud
3. IBM Cloud
4. Microsoft Azure Virtual Network
5. VMware

SCALING (HORIZONTAL AND VERTICAL)


6.DEFINE SCALING (PART A)
EXPLAIN TYPES OF SCALING IN CLOUD (PART B)
What is scalability?
Cloud scalability refers to the ability to increase or decrease IT resources (virtual machines, databases,
networks) as needed to meet changing needs. Scalability is one of the main advantages of the cloud and
the main driving force for its popularity in businesses.
Public cloud providers such as AWS (Amazon Web Services) already have all the infrastructure in
place; in the past, when scaling had to be done using on-premises infrastructure, the process could take
weeks or months and require capital investment.
Systems have four general areas that scalability can apply to:
 CPU
 Disk I/O
 Memory
 Network I/O

The main benefit of the scalable architecture is performance and the ability to handle bursts of traffic or
heavy loads with little or no notice.

What is horizontal scaling?

To scale horizontally (scaling in or out), you add more resources like virtual machines to your system to
spread out the workload across them. Horizontal scaling is especially important for companies that
need high availability services with a requirement for minimal downtime.

Benefits of horizontal scaling

Horizontal scaling increases high availability because as long as you are spreading your infrastructure
across multiple areas, if one machine fails, you can just use one of the other ones.
Because you’re adding a machine, you need fewer periods of downtime and don’t have to switch the old
machine off while scaling. There may never be a need for downtime if you scale effectively.
And here are some simpler advantages of horizontal scaling:

 Easy to resize according to your needs


 Immediate and continuous availability
 Cost can be linked to usage and you don’t always have to pay for peak demand

Disadvantages of horizontal scaling

The main disadvantage of horizontal scaling is that it increases the complexity of the maintenance and
operations of your architecture, but there are services in the AWS environment to solve this issue.

 Architecture design and deployment can be very complicated


 Only a limited amount of software can take full advantage of horizontal scaling

What is vertical scaling?

Through vertical scaling (scaling up or down), you can increase or decrease the capacity of existing
services/instances by upgrading the memory (RAM), storage, or processing power (CPU). Usually, this
means that the expansion has an upper limit based on the capacity of the server or machine being expanded.

Vertical scaling benefits

 No changes have to be made to the application code and no additional servers need to be added;
you just make the server you have more powerful or downsize again.
 Less complex network – when a single instance handles all the layers of your services, it will not
have to synchronize and communicate with other machines to work. This may result in faster responses.
 Less complicated maintenance – the maintenance is easier and less complex because of the number
of instances you will need to manage.

Vertical scaling disadvantages

 A maintenance window with downtime is required – unless you have a backup server that can
handle operations and requests, you will need some considerable downtime to upgrade your machine.
 Single point of failure – having all your operations on a single server increases the risk of losing all
your data if a hardware or software failure were to occur.
 Upgrade limitations – there is a limitation to how much you can upgrade a machine/instance.

Horizontal scaling vs. vertical scaling

In the cloud, you will usually use both of these methods, but horizontal scaling is usually considered a long-
term solution, while vertical scaling is usually considered a short-term solution. The reason for this
distinction is that you can usually add as many servers to the infrastructure as you need, but sometimes
hardware upgrades are just not possible anymore.

Both horizontal and vertical scaling have their benefits and limitations. Here are some factors to consider:
 Upgradability and flexibility – if you run your application layer on separate machines (horizontally
scaled), they are easier to decouple and upgrade without downtime.
 Worldwide distribution – if you plan to have national or global customers, it is unreasonable to
expect them to access your services from one location. In this case, you need to scale resources
horizontally.
 Reliability and availability – horizontal scaling can provide you with a more reliable system. It
increases redundancy and ensures that you are not dependent on one machine.
 Performance – sometimes it’s better to leave the application as is and upgrade the hardware to meet
demand (vertically scale). Horizontal scaling may require you to rewrite code, which can add
complexity.
VIRTUAL MACHINE
7.DEFINE VM (PART A)
EXPLAIN VM IN DETAIL (PART B)

Virtualization in Cloud Computing


 Virtualization is the "creation of a virtual (rather than actual) version of something, such as a
server, a desktop, a storage device, an operating system or network resources".
 In other words, Virtualization is a technique, which allows to share a single physical instance of a
resource or an application among multiple customers and organizations. It does by assigning a
logical name to a physical storage and providing a pointer to that physical resource when demanded.
What is the concept behind the Virtualization?
 Creation of a virtual machine over existing operating system and hardware is known as Hardware
Virtualization. A Virtual machine provides an environment that is logically separated from the
underlying hardware.
 The machine on which the virtual machine is created is known as the Host Machine, and that virtual machine is referred to as the Guest Machine.

Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.

 A cloud virtual machine is the digital version of a physical computer that can run in a cloud. Like a
physical machine, it can run an operating system, store data, connect to networks, and do all the
other computing functions.
 A virtual machine is a software-based computer that exists within the operating system of another computer. In simpler terms, it is a virtualization of an actual computer, except that it exists on another system.

 Typically you will have a hypervisor running on the physical machine, and you will have virtual
machines running on top of the hypervisor. Hypervisor is a software layer that allows you to
virtualize the environment. The operating system running in the virtual machine is called as the
Guest Operating System.
 Multiple virtual machines can share a single host; that is, they can run on the same physical machine.
This increases the usage efficiency of the physical machine and allows the virtual machines to be
completely oblivious to their physical environment. The number of virtual machines on a single host is
limited by the resources of the physical machine.

(Diagram: multiple virtual machines running on a hypervisor on top of a single physical host.)

Advantages
There are many advantages to using cloud virtual machines instead of physical machines, including:
 Low cost: It is cheaper to spin up a virtual machine in the cloud than to procure a physical machine.
 Easy scalability: We can easily scale the infrastructure of a cloud virtual machine in or out based on load.
 Ease of setup and maintenance: Spinning up virtual machines is very easy compared to buying actual hardware. This helps us get set up quickly.
 Shared responsibility: Disaster recovery becomes the responsibility of the cloud provider. We don't need a separate disaster recovery site in case our primary site goes down.
ETHERNET AND SWITCHES
8. EXPLAIN ETHERNET AND SWITCHES IN CLOUD (PART B)
 Ethernet is the traditional technology for connecting devices in a wired local area network
(LAN) or wide area network (WAN).

 It enables devices to communicate with each other via a protocol, which is a set of rules or common
network language.
 Ethernet describes how network devices format and transmit data so other devices on the same
LAN or campus network can recognize, receive and process the information. An Ethernet cable
is the physical, encased wiring over which the data travels.

How Ethernet works


 IEEE specifies in the family of standards called IEEE 802.3 that the Ethernet protocol touches both
Layer 1 (physical layer) and Layer 2 (data link layer) on the Open Systems Interconnection
(OSI) model.
 Ethernet defines two units of transmission: packet and frame. The frame includes the payload of
data being transmitted as well as the following:
 the physical media access control (MAC) addresses of both the sender and receiver;
 virtual LAN (VLAN) tagging and quality of service (QoS) information; and
 error correction information to detect transmission problems.

 Each frame is wrapped in a packet that contains several bytes of information to establish the
connection and mark where the frame starts.
 Engineers at Xerox first developed Ethernet in the 1970s. Ethernet initially ran over coaxial cables.
Early Ethernet connected multiple devices into network segments through hubs -- Layer 1 devices
responsible for transporting network data -- using either a daisy chain or star topology. Currently, a
typical Ethernet LAN uses special grades of twisted-pair cables or fiber optic cabling.
 If two devices that share a hub try to transmit data at the same time, the packets can collide and create
connectivity problems. To alleviate these digital traffic jams, IEEE developed the Carrier Sense Multiple
Access with Collision Detection (CSMA/CD) protocol. This protocol enables devices to check whether a
given line is in use before initiating new transmissions.
 Later, Ethernet hubs largely gave way to network switches. Because a hub cannot discriminate between
points on a network segment, it can't send data directly from point A to point B. Instead, whenever a
network device sends a transmission via an input port, the hub copies the data and distributes it to all
available output ports.
 In contrast, a switch intelligently sends any given port only the traffic intended for its devices rather than
copies of any and all the transmissions on the network segment, thus improving security and efficiency.
 Like with other network types, involved computers must include a network interface card (NIC) to
connect to Ethernet.
Types of Ethernet
 An Ethernet device with CAT5/CAT6 copper cables is connected to a fiber optic cable through fiber
optic media converters. The distance covered by the network is significantly increased by this
extension for fiber optic cable. There are some kinds of Ethernet networks, which are discussed below:
1. Fast Ethernet: This type of Ethernet is usually supported by a twisted-pair or CAT5 cable and can transfer or receive data at around 100 Mbps. Devices such as a camera or laptop connect at 100Base or 10/100Base speeds on the fiber side of the link. Fast Ethernet uses fiber optic cable and twisted-pair cable to create communication. 100BASE-TX, 100BASE-FX, and 100BASE-T4 are the three categories of Fast Ethernet.

2. Gigabit Ethernet: This type of Ethernet network is an upgrade from Fast Ethernet, which uses fiber optic cable and twisted-pair cable to create communication. It can transfer data at a rate of 1000 Mbps or 1 Gbps. In modern times, Gigabit Ethernet is more common. This network type can also use CAT5e or other more advanced cables, some of which support data rates up to 10 Gbps.

 The primary intention of developing Gigabit Ethernet was to fulfil the user's requirements, such as faster data transfer, a faster communication network, and more.

3. 10-Gigabit Ethernet: This type of network can transmit data at a rate of 10 gigabits per second and is considered a more advanced, high-speed network. It makes use of CAT6a or CAT7 twisted-pair cables and fiber optic cables as well. This network can be extended up to nearly 10,000 meters with the help of fiber optic cable.
4. Switch Ethernet: This type of network involves adding switches or hubs, which helps to improve
network throughput as each workstation in this network can have its own dedicated 10 Mbps connection
instead of sharing the medium. Instead of using a crossover cable, a regular network cable is used when
a switch is used in a network. For the latest Ethernet, it supports 1000Mbps to 10 Gbps and 10Mbps to
100Mbps for fast Ethernet.
SWITCHES
 A network switch is an essential component in a networking build plan. In a network deployment, a switch channels incoming data from any of multiple input ports to the specific output port that will take the data toward its intended destination. Besides, to achieve a high performance level, there are different types of switches in networking.

 LAN Switch
 Local area network switches or LAN switches are usually used to connect points on a company’s
internal LAN. It is also known as a data switch or an Ethernet switch. It blocks the overlap of data
packets running through a network by the economical allocation of bandwidth. The LAN switch
delivers the transmitted data packet before directing it to its planned receiver. These types of
switches reduce network congestion or bottlenecks by distributing a package of data only to its
intended recipient.
Unmanaged Switch
 Unmanaged network switches are frequently used in home networks, small companies and
businesses. It permits devices on the network to connect with each other, such as computer to
computer or printer to computer in one location.
 An unmanaged switch does not necessarily need to be configured or watched. It is simple and easy to
set up. If you want to add more Ethernet ports, you can use these plug and play types of switches in
networking.
Managed Switch
 Compared to unmanaged switches, the advantage of managed switches is that they can be
customized to enhance the functionality of a certain network. They offer some features like QoS
(Quality of Service), Simple Network Management Protocol (SNMP) and so on.
 These types of switches in networking can support a range of advanced features designed to be
controlled by a professional administrator.
 In addition, there is the smart switch, a type of managed switch. It has some of the features a managed switch has, but they are more limited. A smart network switch is usually used for features such as VLANs.
PoE Switch
 PoE Gigabit Ethernet switch is a network switch that utilizes Power over Ethernet technology.
 When connected with multiple other network devices, PoE switches can support power and data
transmission over one network cable at the same time. This greatly simplifies the cabling process.
 These types of switches in networking provide greater flexibility and you will never have to worry
about power outlet when deploying network devices.
Stackable Switch
 Stackable switches provide a way to simplify and increase the availability of the network.
 For example, instead of configuring, managing, and troubleshooting eight 48-port switches individually, you can manage all eight as a single unit using stackable switches.
DOCKER CONTAINER
9.WHAT IS DOCKER IN CLOUD COMPUTING? (PART B)
Docker in cloud computing is a tool that is used to automate the deployment of applications in an
environment designed to manage containers. It is a container management service. These containers
help applications to work while they are being shifted from one platform to another. Docker’s
technology is distinctive because it focuses on the requirements of developers and systems. This modern
technology enables enterprises to create and run any product from any geographic location.
There are several problems associated with cloud environments, and Docker tries to solve those issues by creating a systematic way to distribute and run applications. It keeps applications separated from other containers, resulting in a smooth flow. With the help of Docker, it is possible to manage our infrastructure in the same way we manage our applications.

HOW DOES DOCKER WORK
A Docker container image is structured in terms of 'layers'.
Example: A process for building an image
1. Start with a base image
2. Load the desired software
3. Commit the base image plus the loaded software to form a new image
4. The new image can then be the base for more software
5. The image is what is transferred
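For illustration only, the following is a minimal, hypothetical Dockerfile that follows the steps above; the base image name and the application files are assumptions, not part of the syllabus example. Each instruction commits a new layer on top of the previous one:

# Step 1: start with a base image
FROM node:18-alpine

# Steps 2-3: load software and commit it as new layers
WORKDIR /app
COPY package.json ./
RUN npm install          # layer holding the installed dependencies
COPY . .                 # layer holding the application source

# Metadata: the command the container runs when it starts
CMD ["node", "server.js"]

The image built from this file (step 5) is what gets transferred to other hosts, and it can itself become the base image for further images (step 4).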
 Components of Docker Software
1. Software
2. Objects
3. Registry
REASON FOR USING DOCKER
Listed below are some of the benefits of Docker container:

1. Tailor-made: Most industries want a purpose-built setup. Docker in cloud computing enables its clients to use Docker to organize their software infrastructure.
2. Accessibility: As Docker is a cloud framework, it is accessible from anywhere at any time, with high efficiency.
3. Operating System Support: It takes less space. They are lightweight and can operate several containers
simultaneously.
4. Performance: Containers have better performance as they are hosted in a single docker engine.
5. Speed: No requirement for OS to boot. Applications are made online in seconds. As the business
environment is constantly changing, technological up-gradation needs to keep pace for smoother workplace
transitions. Docker helps organizations with the speedy delivery of service.
6. Flexibility: They are a very agile container platform. It is deployed easily across clouds, providing
users with an integrated view of all their applications across different environments. Easily portable
across different platforms.
7. Scalable: It helps create immediate impact by saving on recoding time, reducing costs, and limiting
the risk of operations. Containerization helps scale easily from the pilot stage to large-scale
production.
8. Automation: Docker works on Software as a service and Platform as a service model, which enables
organizations to streamline and automate diverse applications. Docker improves the efficiency of
operations as it works with a unified operating model.
9. Space Allocation: Data volumes can be shared and reused among multiple containers.
Even though there are a lot of benefits associated with docker, it has some limitations as well, which are
as follows:
1. Missing Features: Many features like container self-registration and self-inspects are in progress.
2. Provide cross-platform compatibility: One of the issues in docker is if an application is designed to run for
windows, then it cannot work on other operating systems.
KUBERNETES

10. DISCUSS KUBERNETES (PART B)


What is Kubernetes?

 Kubernetes is also known as 'k8s'. This word comes from the Greek language, which means
a pilot or helmsman.
Kubernetes is an extensible, portable, and open-source platform designed by Google in 2014. It is
mainly used to automate the deployment, scaling, and operations of the container-based applications
across the cluster of nodes. It is also designed for managing the services of containerized apps using
different methods which provide the scalability, predictability, and high availability.
 It is actually an enhanced version of 'Borg' for managing the long-running processes and batch jobs.
Nowadays, many cloud services offer a Kubernetes-based infrastructure on which it can be deployed
as the platform-providing service. This technique or concept works with many container tools,
like docker, and follows the client-server architecture.

KEY OBJECTS OF KUBERNETES

Following are the key objects which exist in the Kubernetes:


 Pod
It is the smallest and simplest basic unit of the Kubernetes application. This object indicates the
processes which are running in the cluster.
 Node
A node is nothing but a single host, which can be a virtual or a physical machine. A node in the Kubernetes cluster is also known as a minion.
 Service
A service in a Kubernetes is a logical set of pods, which works together. With the help of services,
users can easily manage load balancing configurations.
 ReplicaSet
A ReplicaSet in the Kubernetes is used to identify the particular number of pod replicas are running
at a given time. It replaces the replication controller because it is more powerful and allows a user to
use the "set-based" label selector.
 Namespace
Kubernetes supports various virtual clusters, which are known as namespaces. It is a way of
dividing the cluster resources between two or more users.
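To illustrate the objects listed above, here is a minimal, hypothetical pod manifest of the kind submitted to the Kubernetes API server; the names and the nginx image are assumptions used only for this example:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # hypothetical pod name
  namespace: default      # the namespace this pod belongs to
  labels:
    app: demo             # a label that a Service or ReplicaSet selector can match
spec:
  containers:
    - name: web
      image: nginx:1.25   # the container image the pod runs
      ports:
        - containerPort: 80

A Service or ReplicaSet would then select this pod through the app: demo label rather than by its name.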

FEATURES OF KUBERNETES

Following are the essential features of Kubernetes:

1. Pod: It is a deployment unit in Kubernetes with a single Internet protocol address.


2. Horizontal Scaling: It is an important feature of Kubernetes. This feature uses a HorizontalPodAutoscaler to automatically increase or decrease the number of pods in a deployment, replication controller, replica set, or stateful set on the basis of observed CPU utilization.
3. Automatic Bin Packing: Kubernetes helps the user to declare the maximum and minimum
resources of computers for their containers.
4. Service Discovery and load balancing: Kubernetes assigns the IP addresses and a Name of DNS
for a set of containers, and also balances the load across them.
5. Automated rollouts and rollbacks: Using rollouts, Kubernetes distributes changes and updates to an application or its configuration. If any problem occurs in the system, this technique rolls back those changes for you immediately.
6. Persistent Storage: Kubernetes provides an essential feature called 'persistent storage' for storing
the data, which cannot be lost after the pod is killed or rescheduled. Kubernetes supports various
storage systems for storing the data, such as Google Compute Engine's Persistent Disks (GCE
PD) or Amazon Elastic Block Storage (EBS). It also provides the distributed file systems: NFS or
GFS.
7. Self-Healing: This feature plays an important role in Kubernetes. Kubernetes automatically restarts containers that fail during execution, and it automatically stops containers that do not respond to the user-defined health check.

The architecture of Kubernetes actually follows the client-server architecture. It consists of the following
two main components:

1. Master Node (Control Plane)


2. Slave/worker node

Master Node or Kubernetes Control Plane


The master node in a Kubernetes architecture is used to manage the states of a cluster. It is actually an entry
point for all types of administrative tasks. In a Kubernetes cluster, more than one master node can be present to provide fault tolerance.
Following are the four different components which exist in the Master node or Kubernetes Control plane:

1. API Server
2. Scheduler
3. Controller Manager
4. ETCD
API Server
The Kubernetes API server receives the REST commands which are sent by the user. After receiving them, it validates the REST requests, processes them, and then executes them. After the execution of the REST commands, the resulting state of the cluster is saved in 'etcd' as a distributed key-value store.

Scheduler

The scheduler in a master node schedules the tasks to the worker nodes. And, for every worker node, it is
used to store the resource usage information.
In other words, it is a process that is responsible for assigning pods to the available worker nodes.
Controller Manager
The Controller manager is also known as a controller. It is a daemon that executes in the non-terminating
control loops. The controllers in a master node perform a task and manage the state of the cluster. In the
Kubernetes, the controller manager executes the various types of controllers for handling the nodes,
endpoints, etc.
ETCD
It is an open-source, simple, distributed key-value store which is used to store the cluster data. It is a part of the master node and is written in the Go programming language.
Now, we have learned about the functioning and components of a master node; let's see what is the function
of a slave/worker node and what are its components.

Worker/Slave node

The Worker node in a Kubernetes is also known as minions. A worker node is a physical machine that
executes the applications using pods. It contains all the essential services which allow a user to assign the
resources to the scheduled containers.
Following are the different components which are presents in the Worker or slave node:
Kubelet

This component is an agent service that executes on each worker node in a cluster. It ensures that the pods
and their containers are running smoothly. Every kubelet in each worker node communicates with the
master node. It also starts, stops, and maintains the containers which are organized into pods directly by the
master node.
Kube-proxy
It is a proxy service of Kubernetes, which is executed simply on each worker node in the cluster. The main
aim of this component is request forwarding. Each node interacts with the Kubernetes services
through Kube-proxy.
Pods
A pod is a combination of one or more containers which logically execute together on nodes. One worker
node can easily execute multiple pods.

REFERENCE
1. www.geeksforgeeks.org
2. https://www.tutorialspoint.com/
3. https://www.geeksforgeeks.org/

****************************UNIT V COMPLETED******************************

POSSIBLE QUESTIONS
PART A
1. Define cloud
2. What is a virtual private cloud (VPC)?
3. Define scaling
4. Define VM

PART B
1. Explain about cloud overview
2. Explain cloud computing models
3. Explain types of scaling in cloud
4. Explain VM in detail
5. Explain Ethernet and switches in cloud
6. What is Docker in cloud computing?
7. Discuss Kubernetes

*******************

