Unit 5 Updated

Introduction to Node.js

What Is Node.js?
 Node.js, sometimes referred to as just Node, is a development framework based on Google’s V8 JavaScript engine. Node.js code is written in JavaScript, which V8 compiles into machine code for execution.
 It is possible to write all of your server-side code in Node.js, including the webserver, the server-side scripts, and any supporting web application functionality.
 Because the webserver and the supporting web application scripts run together in the same server-side application, much tighter integration is possible between the webserver and the scripts.

The following are just a few reasons Node.js is a great framework:


 JavaScript end-to-end: One of the biggest advantages of Node.js is that it allows you to write both server-
and client-side scripts in JavaScript. There have always been difficulties in deciding whether to put logic
in client-side scripts or server-side scripts. With Node.js you can take JavaScript written on the client and
easily adapt it for the server, and vice versa. An added plus is that client developers and server developers
are speaking the same language.
 Event-driven scalability: Node.js applies a unique logic to handling web requests. Rather than having
multiple threads waiting to process web requests, with Node.js they are processed on the same thread,
using a basic event model. This allows Node.js webservers to scale in ways that traditional webservers
can’t.
 Extensibility: Node.js has a great following and an active development community. People are providing
new modules to extend Node.js functionality all the time. Also, it is simple to install and include new
modules in Node.js; you can extend a Node.js project to include new functionality in minutes.
 Fast implementation: Setting up Node.js and developing in it are super easy. In only a few minutes you
can install Node.js and have a working webserver.

Events
 In the Node.js event model, work is added to an event queue and then picked up by a single thread running an event loop. The event loop grabs the top item in the event queue, executes it, and then grabs the next item. When executing long-running code or code that performs blocking I/O, instead of calling the function directly, the function is added to the event queue along with a callback that is executed after the function completes. When all events on the Node.js event queue have been executed, the Node.js application terminates.

 The GetFile request first opens the file, reads the contents, and then sends the data back in a response. The
GetData request connects to the DB, queries the necessary data, and then sends the data in the response.
 The GetFile and GetData requests are added to the event queue. Node.js first picks up the GetFile request,
executes it, and then completes by adding the Open() callback function to the event queue.
 Next, it picks up the GetData request, executes it, and completes by adding the Connect() callback function
to the event queue. This continues until there are no callback functions to be executed.
 Notice in the figure that the events for each thread do not necessarily follow a direct interleaved order. For example, the Connect request takes longer to complete than the Read request, so Send(file) is called before Query(db).

Blocking I/O in Node.js:
The Node.js event model of using the event callbacks is great until you run into the problem of functions that
block waiting for I/O. Blocking I/O stops the execution of the current thread and waits for a response before
continuing.
Some examples of blocking I/O are
 Reading a file
 Querying a database
 Socket request
 Accessing a remote service

 The reason Node.js uses event callbacks is to avoid having to wait for blocking I/O. Therefore, any requests that perform blocking I/O are performed on a different thread in the background.
 Node.js implements a thread pool in the background. When an event that requires blocking I/O is retrieved
from the event queue, Node.js retrieves a thread from the thread pool and executes the function there
instead of on the main event loop thread. This prevents the blocking I/O from holding up the rest of the
events in the event queue.
 The function executed on the blocking thread can still add events back to the event queue to be processed.
For example, a database query call is typically passed a callback function that parses the results and may
schedule additional work on the event queue before sending a response.

Adding Work to the Event Queue:

Once you have designed your code correctly, you can then use the event model to schedule work on the event
queue. In Node.js applications, work is scheduled on the event queue by passing a callback function using one of
these methods:
 Make a call to one of the blocking I/O library calls such as writing to a file or connecting to a database.
 Add a built-in event listener to a built-in event such as an http.request or server.connection.
 Create your own event emitters and add custom listeners to them.
 Use the process.nextTick option to schedule work to be picked up on the next cycle of the event loop.
 Use timers to schedule work to be done after a particular amount of time or at periodic intervals.

Timers
A useful feature of Node.js and JavaScript is the ability to delay execution of code for a period of time. This can
be useful for cleanup or refresh work that you do not want to always be running. There are three types of timers
you can implement in Node.js: timeout, interval, and immediate.

1. Delaying Work with Timeouts
Timeout timers are used to delay work for a specific amount of time. When that time expires, the callback function
is executed and the timer goes away. Use timeouts for work that only needs to be performed once.

Timeout timers are created using the setTimeout(callback, delayMilliSeconds, [args]) method built into Node.js.
When you call setTimeout(), the callback function is executed after delayMilliSeconds expires. For example,
the following executes myFunc() after 1 second:

setTimeout(myFunc, 1000);

The setTimeout() function returns a timer object ID. You can pass this ID to clearTimeout(timeoutId) at any time
before the delayMilliSeconds expires to cancel the
timeout function. For example:

myTimeout = setTimeout(myFunc, 100000);



clearTimeout(myTimeout);

2. Performing Periodic Work with Intervals


Interval timers are used to perform work on a regular delayed interval. When the delay time expires, the callback
function is executed and is then rescheduled for the delay interval again. Use intervals for work that needs to be
performed on a regular basis.
Interval timers are created using the setInterval(callback, delayMilliSeconds, [args]) method built into Node.js.
When you call setInterval(), the callback function is executed every interval after delayMilliSeconds has
expired. For example, the following executes myFunc() every second:

setInterval(myFunc, 1000);

The setInterval() function returns a timer object ID. You can pass this ID to clearInterval(intervalId) at any time before the delayMilliSeconds expires to cancel the interval function. For example:

myInterval = setInterval(myFunc, 100000);



clearInterval(myInterval);

3. Performing Immediate Work with an Immediate Timer


Immediate timers are used to execute work as soon as the current I/O event callbacks have run, but before any timeout or interval events are executed. This allows you to schedule work to be done after the current events in the event queue are completed. Use immediate timers to yield long-running execution segments to other callbacks to prevent starving the I/O events.
Immediate timers are created using the setImmediate(callback,[args]) method built into Node.js. When you call
setImmediate(), the callback function is placed on the event queue and popped off once for each iteration
through the event queue loop after I/O events have a chance to be called. For example, the following
schedules myFunc() to execute on the next cycle through the event queue:

setImmediate(myFunc);

The setImmediate() function returns a timer object ID. You can pass this ID to clearImmediate(immediateId) at
any time before it is picked up off the event queue. For example:

myImmediate = setImmediate(myFunc);

clearImmediate(myImmediate);

Using nextTick to Schedule Work


A useful method of scheduling work on the event queue is the process.nextTick(callback) function. This function
schedules work to be run on the next cycle of the event loop. Unlike the setImmediate() method, nextTick()
executes before the I/O events are fired. This can result in starvation of the I/O events, so Node.js limits the
number of nextTick() events that can be executed each cycle through the event queue by the value of
process.maxTickDepth, which defaults to 1000.
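The deferral can be seen in a short sketch. The order array below is purely illustrative; the key point is that neither callback runs at the moment it is scheduled, and nextTick callbacks run before setImmediate callbacks:

```javascript
// Sketch: nextTick callbacks run before setImmediate callbacks.
// The "order" array records the execution order for illustration.
var order = [];

setImmediate(function () {
  order.push("immediate"); // runs after I/O callbacks, later in the loop
});

process.nextTick(function () {
  order.push("nextTick");  // runs before the event loop continues
});

// At this point neither callback has executed: both were only
// scheduled, which is the essence of deferring work to the event loop.
console.log(order.length); // 0 here; "nextTick" runs first, then "immediate"
```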

Event Emitters and Listeners


Adding Custom Events to Your JavaScript Objects

Events are emitted using an EventEmitter object. This object is included in the events module. The
emit(eventName, [args]) function triggers the eventName event and includes any arguments provided. The
following code snippet shows how to implement a simple event emitter:

var events = require('events');


var emitter = new events.EventEmitter();
emitter.emit("simpleEvent");

You can then emit events directly from instances of your own objects, provided they inherit from EventEmitter. For example:

var myObj = new MyObj();


myObj.emit("someEvent");

Adding Event Listeners to Objects

Once you have an instance of an object that can emit events, you can add listeners for the events that you care
about. Listeners are added to an EventEmitter object using one of the following functions:
 .addListener(eventName, callback): Attaches the callback function to the object’s listeners. Every time the
eventName event is triggered, the callback function is placed in the event queue to be executed.
 .on(eventName, callback): Same as .addListener().
 .once(eventName, callback): Only the first time the eventName event is triggered, the callback function
is placed in the event queue to be executed.
For example, to add a listener to an instance of the MyObj EventEmitter class, use the following:

function myCallback(){
}
var myObject = new MyObj();
myObject.on("someEvent", myCallback);

Removing Listeners from Objects


Listeners are useful and vital parts of Node.js programming. However, they do cause overhead, and you should use them only when necessary. Node.js provides several helper functions on the EventEmitter object that allow you to manage the listeners that are attached. These include
 .listeners(eventName): Returns an array of listener functions attached to the eventName event.
 .setMaxListeners(n): Triggers a warning if more than n listeners are added to an EventEmitter object. The
default is 10.
 .removeListener(eventName, callback): Removes the callback function from the eventName event of the
EventEmitter object.

Callbacks
There are three specific implementations of callbacks: passing parameters to a callback function, handling callback function parameters inside a loop, and nesting callbacks.

Passing Additional Parameters to Callbacks


Most callbacks have automatic parameters passed to them, such as an error or result buffer. A common question when working with callbacks is how to pass additional parameters to them from the calling function. You do this by wrapping the actual callback in an anonymous function and passing the extra parameters to it from the anonymous function’s scope.
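A minimal sketch of the pattern follows. The logCar/logColor names are made up, and the callback is invoked directly rather than asynchronously to keep the example self-contained; the wrapping technique is the same either way:

```javascript
// Sketch: passing an extra parameter to a callback by wrapping it in
// an anonymous function that closes over the extra value.
var results = [];

function logCar(car, callback) {
  // Stands in for an asynchronous call; the callback receives the
  // automatic "message" parameter.
  callback("Saw a " + car);
}

function logColor(color, message) {
  results.push(message + " that was " + color);
}

// The anonymous function closes over "color" and forwards it along
// with the automatic "message" parameter.
["red", "blue"].forEach(function (color) {
  logCar("Ferrari", function (message) {
    logColor(color, message);
  });
});
// results: ["Saw a Ferrari that was red", "Saw a Ferrari that was blue"]
```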

Implementing Closure in Callbacks


Closure is a JavaScript term that indicates that variables are bound to a function’s scope and not the parent
function’s scope. When you execute an asynchronous callback, the parent function’s scope may have changed;
for example, when iterating through a list and altering values in each iteration.
If your callback needs access to variables in the parent function’s scope, then you need to provide closure so that
those values are available when the callback is pulled off the event queue. A basic way of doing that is by
encapsulating the asynchronous call inside a function block and passing in the variables that are needed.
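The effect can be demonstrated with deferred callbacks held in an array (standing in for the event queue); the variable names are illustrative:

```javascript
// Sketch: providing closure so a deferred callback sees the value a
// variable had when the work was scheduled, not its later value.
var deferred = [];
var message = "first";

// Without closure: the callback reads the outer variable at call time.
deferred.push(function () { return message; });

// With closure: an immediately invoked wrapper binds the current value.
(function (msg) {
  deferred.push(function () { return msg; });
})(message);

message = "second"; // the parent scope changes before the callbacks run

// Later, when the deferred callbacks are finally invoked:
deferred[0](); // "second" - saw the mutated outer variable
deferred[1](); // "first"  - the closure preserved the original value
```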

Chaining Callbacks
With asynchronous functions you are not guaranteed the order that they will run if two are placed on the event
queue. The best way to resolve that is to implement callback chaining by having the callback from the
asynchronous function call the function again until there is no more work to do. That way the asynchronous
function is never on the event queue more than once.
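The chaining pattern can be sketched as follows. Here doWork() stands in for an asynchronous operation and invokes its callback directly so the sketch is self-contained; with a genuinely asynchronous call the control flow is identical:

```javascript
// Sketch: callback chaining. Each completion callback schedules the
// next item, so the work is handled one item at a time and the
// "asynchronous" function is never pending more than once.
var processed = [];

function doWork(item, callback) {
  processed.push(item); // stands in for real asynchronous work
  callback();
}

function processItems(items) {
  if (items.length === 0) {
    return; // no more work to do - the chain ends here
  }
  var item = items.shift();
  doWork(item, function () {
    processItems(items); // chain: start the next item only now
  });
}

processItems(["one", "two", "three"]);
// processed: ["one", "two", "three"], in guaranteed order
```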

Handling Data I/O


1. Using the Buffer Module to Buffer Data
While JavaScript is Unicode friendly, it is not good at managing binary data. However, binary data is useful when
implementing some web applications and services. For example:
 Transferring compressed files
 Generating dynamic images
 Sending serialized binary data

Buffered data is made up of a series of octets in big endian or little endian format. That means they take up
considerably less space than textual data. Therefore, Node.js provides the Buffer module that gives you the
functionality to create, read, write, and manipulate binary data in a buffer structure.

Buffer objects are actually raw memory allocations; therefore, their size must be determined when they are created. The three methods for creating Buffer objects using the new keyword are as follows. (Note that in current versions of Node.js, new Buffer() is deprecated in favor of Buffer.alloc() and Buffer.from().)

new Buffer(sizeInBytes)
new Buffer(octetArray)
new Buffer(string, [encoding])

For example, the following lines of code define buffers using a byte size, octet buffer, and a UTF8 string:

var buf256 = new Buffer(256);


var bufOctets = new Buffer([0x6f, 0x63, 0x74, 0x65, 0x74, 0x73]);
var bufUTF8 = new Buffer("Some UTF8 Text \u00b6 \u30c6 \u20ac", 'utf8');

Writing to Buffers
You cannot extend the size of a Buffer object after it has been created, but you can write data to any location in
the buffer.

buffer.write(string, [offset], [length], [encoding]) - Writes length number of bytes from the string starting at
the offset index inside the buffer using encoding.

Reading from Buffers


There are several methods for reading from buffers. The simplest is to use the toString() method to convert all
or part of a buffer to a string. However, you can also access specific indexes in the buffer directly or by using
read().

buffer.toString([encoding], [start], [end]) - Returns a string containing the decoded characters specified by
encoding from the start index to the end index of the buffer. If start or end is not specified, then toString() uses
the beginning or end of the buffer.
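A short sketch of writing into and reading back from a buffer, using the Buffer.alloc() method that current Node.js versions provide in place of the deprecated new Buffer() constructor:

```javascript
// Allocate a 12-byte buffer pre-filled with space characters.
var buf = Buffer.alloc(12, ' ');

// write() returns the number of bytes actually written.
var written = buf.write("Hello", 0);   // fills bytes 0-4
buf.write("World", 6, 5);              // fills bytes 6-10

// toString() decodes a slice of the buffer back into a string.
var text = buf.toString('utf8', 0, 11); // "Hello World"
```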

2. Using the Stream Module to Stream Data


The purpose of streams is to provide a common mechanism to transfer data from one location to another. They
also expose events, such as when data is available to be read, when an error occurs, and so on. You can then
register listeners to handle the data when it becomes available in a stream or is ready to be written to.
Some common uses for streams are HTTP data and files. You can open a file as a readable stream or access the
data from an HTTP request as a readable stream and read bytes out as needed. Additionally, you can create your
own custom streams.

Readable Streams
Readable streams provide a mechanism to easily read data coming into your application from another source.
Some common examples of readable streams are
 HTTP responses on the client
 HTTP requests on the server
 fs read streams
 zlib streams
 crypto streams
 TCP sockets
 Child processes stdout and stderr
 process.stdin

Writable Streams
Writable streams are designed to provide a mechanism to write data into a form that can easily be consumed in
another area of code. Some common examples of Writable streams are
 HTTP requests on the client
 HTTP responses on the server
 fs write streams
 zlib streams
 crypto streams
 TCP sockets
 Child process stdin
 process.stdout, process.stderr

File Access
For all the file system calls, you need to have loaded the fs module, for example:
var fs = require('fs');

Synchronous Versus Asynchronous File System Calls

Synchronous file system calls
 Block until the call completes, and then control is released back to the thread. This has advantages but can also cause severe performance issues in Node.js if synchronous calls block the main event thread or too many of the background thread pool threads. Therefore, synchronous file system calls should be limited in use when possible.
 Exceptions must be handled by your own try/catch blocks of code.
 Run immediately, and execution does not return to the current thread until they are complete.

Asynchronous file system calls
 Are placed on the event queue to be run later. This allows the calls to fit into the Node.js event model; however, this can be tricky when writing your code because the calling thread continues to run before the asynchronous call gets picked up by the event loop.
 Exceptions are handled automatically; an error object is passed as the first parameter to the callback if an exception occurs.
 Return execution to the running thread immediately, but the actual call does not execute until it is picked up by the event loop.

Opening and Closing Files


Node provides synchronous and asynchronous methods for opening files. Once a file is opened, you can read data
from it or write data to it depending on the flags used to open the
file. To open files in a Node.js app, use one of the following statements for asynchronous or synchronous:
fs.open(path, flags, [mode], callback)
fs.openSync(path, flags, [mode])

The path parameter specifies a standard path string for your file system. The flags parameter specifies what mode
to open the file in—read, write, append, and so on. The optional mode parameter sets the file access mode and
defaults to 0666, which is readable and writable.

Flags that determine how files are opened include the following:
 'r': Open for reading. Fails if the file does not exist.
 'r+': Open for reading and writing. Fails if the file does not exist.
 'w': Open for writing. The file is created if it does not exist and truncated if it does.
 'w+': Open for reading and writing. The file is created if it does not exist and truncated if it does.
 'a': Open for appending. The file is created if it does not exist.
 'a+': Open for reading and appending. The file is created if it does not exist.

The following shows an example of opening and closing a file in asynchronous mode. Notice that a callback
function is specified that receives an err and an fd parameter. The fd parameter is the file descriptor that you can
use to read or write to the file:

fs.open("myFile", 'w', function(err, fd){


if (!err){
fs.close(fd);
}
});

The following shows an example of opening and closing a file in synchronous mode. Notice that there is no callback function and that the file descriptor used to read and write to the file is returned directly from fs.openSync():

var fd = fs.openSync("myFile", 'w');


fs.closeSync(fd);

Writing Files
The simplest method for writing data to a file is to use one of the writeFile() methods. These methods write the
full contents of a String or Buffer to a file. The following shows the syntax for the writeFile() methods:

fs.writeFile(path, data, [options], callback)


fs.writeFileSync(path, data, [options])

The path parameter specifies the path to the file. The path can be relative or absolute. The data parameter specifies
the String or Buffer object to be written to the file. The optional options parameter is an object that can contain
encoding, mode, and flag properties that define the string encoding as well as the mode and flags used when
opening the file.

Reading Files

The simplest method for reading data from a file is to use one of the readFile() methods. These methods read the full contents of a file into a data buffer. The following shows the syntax for the readFile() methods:

fs.readFile(path, [options], callback)


fs.readFileSync(path, [options])

The path parameter specifies the path to the file and can be relative or absolute. The optional options parameter
is an object that can contain encoding, mode, and flag properties that define the string encoding as well as the
mode and flags used when opening the file.

Deleting Files

To delete a file from Node.js, use one of the following commands:

fs.unlink(path, callback)
fs.unlinkSync(path)

The unlinkSync(path) version blocks until the delete completes and throws an exception if the delete fails, whereas the asynchronous unlink() reports failure through the err parameter of its callback. The following code snippet illustrates the process of deleting a file named new.txt using the unlink() asynchronous fs call:

fs.unlink("new.txt", function(err){
console.log(err ? "File Delete Failed" : "File Deleted");
});

HTTP Access
The Uniform Resource Locator (URL) acts as an address label for the HTTP server to handle requests from the
client. It provides all the information needed to get the request to the correct server on a specific port and access
the proper data.
The URL can be broken down into several different components, each providing a basic piece of information for
the webserver on how to route and handle the HTTP request from the client.

To use the URL information more effectively, Node.js provides the url module that provides functionality to
convert the URL string into a URL object.
To create a URL object from the URL string, pass the URL string as the first parameter to the following method:
url.parse(urlStr, [parseQueryString], [slashesDenoteHost])

The url.parse() method takes the URL string as the first parameter. The parseQueryString parameter is a Boolean
that when true also parses the query string portion of the URL into an object literal. The default is false. The
slashesDenoteHost is also a Boolean that when true parses a URL with the format of //host/path to {host: 'host',
pathname: '/path'} instead of {pathname: '//host/path'}. The default is false.

The following shows an example of parsing a URL string into an object and then converting it back into a string:

var url = require('url');

var urlStr = 'http://user:pass@host.com:80/resource/path?query=string#hash';
var urlObj = url.parse(urlStr, true, false);
var urlString = url.format(urlObj);

The http.ClientRequest Object


The ClientRequest object is created internally when you call http.request() when building the HTTP client. This
object represents the request while it is in progress to the server. You use the ClientRequest object to initiate,
monitor, and handle the response from the server.
The ClientRequest implements a Writable stream, so it provides all the functionality of a Writable stream object.
For example, you can use the write() method to write to it as well as pipe a Readable stream into it.
To implement a ClientRequest object, you use a call to http.request() using the following syntax:

http.request(options, callback)

The options parameter is an object whose properties define how to open and send the client HTTP request to the server. Common options include host, hostname, port, localAddress, socketPath, method, path, headers, auth, and agent.

The http.ServerResponse Object


The ServerResponse object is created internally by the HTTP server when a request event is received. It is passed to the request event handler as the second argument. You use the ServerResponse object to formulate and send a response to the client.
The ServerResponse implements a Writable stream, so it provides all the functionality of a Writable stream object.
For example, you can use the write() method to write to it as well as pipe a Readable stream into it to write data
back to the client. When handling the client request, you use the properties, events, and methods of the
ServerResponse object to build and send headers, write data, and send the response.

Socket Service
 Network sockets are endpoints of communication that flow across a computer network. Sockets live below
the HTTP layer and provide the actual point-to-point communication between servers. Virtually all
Internet communication is based on Internet sockets that flow data between two points on the Internet.
 A socket works using a socket address, which is a combination of an IP address and port. There are two
types of points in a socket connection: a server that listens for connections and a client that opens a
connection to the server. Both the server and the client require a unique IP address and port combination.
 The Node.js net module sockets communicate by sending raw data using the Transmission Control
Protocol (TCP). This protocol is responsible for packaging the data and guaranteeing that it is sent from
point to point successfully. Node.js sockets implement the Duplex stream, which allows you to read and
write streamed data between the server and client.
 Sockets are the underlying structure for the http module. If you do not need the functionality for handling
web requests like GET and POST and you just need to stream data from point to point, then using sockets
gives you a lighter weight solution and a bit more control.

The net.Socket Object


Socket objects are created on both the socket server and the socket client and allow data to be written and
read back and forth between them. The Socket object implements a Duplex stream, so it provides all the
functionality that Writable and Readable streams provide. For example, you can use the write()method to
stream writes of data to the server or client and a data event handler to stream data from the server or
client.
To create a Socket object, you use one of the following methods. All the calls return a Socket object. The
only difference is the first parameters that they accept. The final parameter for all of them is a callback
function that is executed when a connection is opened to the server. Notice that for each method there is
a net.connect() and a net.createConnection() form. These work exactly the same way:

net.connect(options, [connectionListener])
net.createConnection(options, [connectionListener])
net.connect(port, [host], [connectListener])
net.createConnection(port, [host], [connectListener])
net.connect(path, [connectListener])
net.createConnection(path, [connectListener])

The first method to create a Socket object is to pass an options parameter, which is an object that contains properties that define the socket connection. The table below lists the properties that can be specified when creating the Socket object.
The second method accepts port and host values, described in Table, as direct parameters.
The third option accepts a path parameter that specifies a file system location that is a Unix socket to use
when creating the Socket object.

MongoDB
What Is MongoDB?
 MongoDB is an agile and scalable NoSQL database. The name Mongo comes from the word
“humongous,” emphasizing the scalability and performance MongoDB provides.
 MongoDB provides great website backend storage for high-traffic websites that need to store data such as
user comments, blogs, or other items because it is quickly scalable and easy to implement.

The following are some of the reasons that MongoDB really fits well in the Node.js stack:
 Document orientation: Because MongoDB is document-oriented, data is stored in the database in a
format that is very close to what you deal with in both server-side and client-side scripts. This eliminates
the need to transfer data from rows to objects and back.
 High performance: MongoDB is one of the highest-performing databases available. Especially today,
with more and more people interacting with websites, it is important to have a backend that can support
heavy traffic.
 High availability: MongoDB’s replication model makes it easy to maintain scalability while keeping high
performance.
 High scalability: MongoDB’s structure makes it easy to scale horizontally by sharding the data across multiple servers.
 No SQL injection: MongoDB is not susceptible to SQL injection (that is, putting SQL statements in web
forms or other input from the browser and thereby compromising database security). This is the case
because objects are stored as objects, not using SQL strings.

SQL Vs NoSQL

 The concept of NoSQL (Not Only SQL) consists of technologies that provide storage and retrieval
without the tightly constrained models of traditional SQL relational databases. The motivation
behind NoSQL is mainly simplified designs, horizontal scaling, and finer control of the availability of
data.

 NoSQL breaks away from the traditional structure of relational databases and allows developers to
implement models in ways that more closely fit the data flow needs of their systems. This allows
NoSQL databases to be implemented in ways that traditional relational databases could never be
structured.

 MongoDB is a NoSQL database based on a document model where data objects are stored as separate
documents inside a collection. The motivation of the MongoDB language is to implement a data store
that provides high performance, high availability, and automatic scaling.

 MongoDB groups data together through collections. A collection is simply a grouping of documents that
have the same or a similar purpose. A collection acts similarly to a table in a traditional SQL database,
with one major difference. In MongoDB, a collection is not enforced by a strict schema; instead,
documents in a collection can have a slightly different structure from one another as needed. This
reduces the need to break items in a document into several different tables, which is often done in SQL
implementations.

 A document is a representation of a single entity of data in the MongoDB database. A collection is made
up of one or more related objects. A major difference between MongoDB and SQL is that documents
are different from rows. Row data is flat, meaning there is one column for each value in the row.
However, in MongoDB, documents can contain embedded subdocuments, thus providing a much
closer inherent data model to your applications.

 In fact, the records in MongoDB that represent documents are stored as BSON, which is a lightweight
binary form of JSON, with field:value pairs corresponding to JavaScript property:value pairs.

For example, a document in MongoDB may be structured similarly to the following with name, version,
languages, admin, and paths fields:

{
name: "New Project",
version: 1,
languages: ["JavaScript", "HTML", "CSS"],
admin: {name: "Brad", password: "****"},
paths: {temp: "/tmp", project: "/opt/project", html: "/opt/project/html"}
}

Notice that the document structure contains fields/properties that are strings, integers, arrays, and objects, just
like a JavaScript object.

Accessing and Manipulating DB with Node js

 The first step in implementing MongoDB access from your Node.js applications is to add the MongoDB
driver to your application project. The MongoDB Node.js driver is the officially supported native
Node.js driver for MongoDB. A great feature of the MongoDB Node.js driver is that it provides the
ability to create and manage databases from your Node.js applications.
 Once you have installed the Mongodb module, you can begin accessing MongoDB from your Node.js
applications by opening up a connection to the MongoDB server. The connection acts as your interface to
create, update, and access data in the MongoDB database.
 To list the databases in your system, you use the listDatabases() method on an Admin object. That means
that you need to create an instance of an Admin object first.

var MongoClient = require('mongodb').MongoClient;
MongoClient.connect("mongodb://localhost/admin", function(err, db) {


var adminDB = db.admin();
adminDB.listDatabases(function(err, databases){
console.log("Before Add Database List: ");
console.log(databases);
});
});

 Databases are created automatically whenever a collection or document is added to them. Therefore, to
create a new database all you need to do is to use the db() method on the Db object provided by the
MongoClient connection to create a new Db object instance. Then call createCollection() on the new Db
object instance to create the database. The following code shows an example of creating a new database
named newDB after connecting to the server:
var MongoClient = require('mongodb').MongoClient;
MongoClient.connect("mongodb://localhost/", function(err, db) {
var newDB = db.db("newDB");
newDB.createCollection("newCollection", function(err, collection){
if(!err){
console.log("New Database and Collection Created");
}
});
});

 To delete a database from MongoDB, you need to get a Db object instance that points to that database.
Then call the dropDatabase() method on that object. It may take a while for MongoDB to finalize the
deletion. If you need to verify that the deletion occurred, you can use a timeout to wait for the database
delete to occur. For example:
newDB.dropDatabase(function(err, results){
<handle database delete here>
});

Methods on the Db object include open(), close(), admin(), db(dbName), collection(collectionName), collections(), createCollection(collectionName), dropCollection(collectionName), dropDatabase(), and renameCollection(oldName, newName).

Methods on the Admin object include listDatabases(), serverStatus(), ping(), buildInfo(), addUser(username, password), and removeUser(username).

Basic methods on the Collection object include insert(docs), save(doc), find(query), findOne(query), update(query, update), remove(query), count(query), distinct(key), drop(), and rename(newName).
DB data Types
The BSON data format provides several different types that are used when storing the JavaScript objects to binary
form. MongoDB assigns each of the data types an integer ID number from 1 to 255 that is used when querying
by type.

Data Life cycles


 One of the most commonly overlooked aspects of database design is the data life cycle. Specifically, how long should documents exist in a specific collection? Some collections contain documents that should persist indefinitely, for example, active user accounts.
 Every document in a collection adds a cost to queries against that collection, so you should define a TTL (time-to-live) value for the documents in each of your collections.
 There are several ways to implement a time-to-live mechanism in MongoDB. One way is to implement
code in your application to monitor and clean up old data.
 Another way is to use the MongoDB TTL setting on a collection, which allows you to define a profile
where documents are automatically deleted after a certain number of seconds or at a specific clock time.
 For collections where you only need the most recent documents, you can implement a capped collection
that automatically keeps the size of the collection small.
