Tiny Node Projects
MEAP VERSION 3
Welcome
Thank you for purchasing the MEAP for Tiny Node Projects. To get the most
out of this book, you’ll want to have some familiarity with ES6-style
JavaScript syntax and concepts. It will help to have some understanding of
how web protocols like HTTP work and the basics of a web server.
Knowledge of HTML and CSS is not required but may help you enrich your
projects. Read this book with creative energy to explore Node.js, and you’ll
be prepared to apply the concepts in practice.
When I first wrote Get Programming with Node.js, I had envisioned a style of
learning I had not yet seen in a book’s format: a 3-month coding boot camp
intensive course. I designed that book to bring entry-level developers to an
intermediate-level understanding of Node.js web applications. Readers of that
book were receptive to the style, encouraging its use to propel one's career in
JavaScript development and even as a standard in college-level coding
courses. While I enjoyed the writing process, Node.js and JavaScript were
still moving quickly. I knew that by the time that book was published there
would be popular new changes to the language and new leading npm packages to
recommend for projects. Fast forward to today: the Tiny Projects series
presents a unique opportunity well suited to Node.js. Tiny Node
Projects gives me the chance to dive into meaningful application
architecture concepts across multiple types of projects.
No longer limited to teaching one big web application, Tiny Node Projects
separates each chapter so that you can pick and choose what you want to
learn, when you want to learn it. The projects are designed to be built in a
single day and I’ll be adding content-specific guides so that you can choose
when you want to purely read and learn, and when you want to apply and
practice. I want these tiny projects to range from quick wins for new
developers, to challenging coding obstacles or new terrain for experienced
developers. That’s why you are an important part of shaping the book’s final
form.
As you read each chapter, pay close attention to what you’re learning and how
it can apply across many projects. Write down any questions you feel
are unanswered so I can help address them. Think about concepts you enjoy
and would like to see discussed in more depth. It’s important to me
that you enjoy working through this book and that you find it to be a helpful
and guiding resource on your journey with Node.js and server-side JavaScript
development. The goal of this book is to help you realize just how versatile
Node.js is for building your own creative solutions to life’s problems. You’ll
start with some introductory projects that can be built with Node.js’
prepackaged modules. Then you’ll move on to working with external npm
packages that deliver highly performant algorithms and functions. Because
this book is not focused on the web frontend, many of the projects can be built
for a command-line client. There will, however, be steps to support
additional surfaces like web browsers and mobile devices.
I am grateful that you’ve chosen to give Tiny Node Projects a chance, and I
hope it proves to be a consistent source of insight and encouragement in this
MEAP and beyond. Thank you, and enjoy! If you have any questions,
comments, or suggestions, please share them in Manning’s liveBook
discussion forum.
Jon Wexler
In this book
In this chapter, you’ll get a first glance at what Node can offer out of the box
and off the grid. You will learn to build a simple program that can run on
anyone’s computer and save real people real time and money. By the end of
the chapter, you’ll have Node locked and loaded on your computer, with a
development environment ready to build practically anything.
Before you get started, you’ll need to install and configure the following tools
and applications that are used in this project. Detailed instructions are
provided for you in the specified appendix. When you’ve finished, return
here and continue.
Tip
To keep your coding projects separate from other work on your computer,
you can dedicate a directory to coding. Creating a directory called src at the
root level of your computer’s user directory is a good place. That location is
/Users/<USERNAME>/src for Macs (~/src), and C:\Users\<USERNAME>\src
for Windows computers.
Next, you’ll follow steps to initialize your Node app. When complete, your
project directory structure should look like figure 1.3.
Note
These steps are also available in Appendix B: Setting up Node app essentials.
bash
package name: (csv_writer) #1
version: (1.0.0)
description: An app to write contact information to a csv file.
entry point: (index.js) #2
test command:
git repository:
keywords:
author: Jon Wexler
license: (ISC)
This process has created a new file for you: package.json (listing 1.2). This
is your application’s configuration file, where you’ll instruct your computer
on how to run your Node app.
What’s package.json?
If you gave a baker a recipe for a loaf of bread, you would also instruct
them on the type of oven, ingredients, and environment to use. The
package.json file contains those preparatory instructions and more. For
more information, visit https://nodejs.org/en/knowledge/getting-
started/npm/what-is-the-file-package-json/.
Your first step is to add "type": "module" to this file so we can use ES6
module imports. Since Node v13.2.0, ES6 modules have had stable support
and growing popularity in Node projects over CommonJS syntax for
importing code.
json
{
  "name": "contact-list",
  "version": "1.0.0",
  "type": "module", #1
  "description": "An app to write contact information to a csv file.",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Jon Wexler",
  "license": "ISC"
}
Next, you create the entry point to your application, a file titled index.js.
This is where the guts of your application will go.
For this project, you realize that Node comes prepackaged with everything
you need already. You can make use of the fs module - a library that helps
your application interact with your computer’s file system - to create the CSV
file. Within index.js you add the writeFileSync function from the fs
module by writing import { writeFileSync } from "fs".
You now have access to functions that can create files on your computer.
Now you can make use of the writeFileSync function to test that your Node
app can successfully create files and add text to them. Add the code in listing
1.3 to index.js. Here you are creating a variable called content with some
dummy text. Then, within a try-catch block, you run writeFileSync, a
synchronous, blocking function, to create a new file called test.txt within
your project’s directory. If the file is created with your added content, you
will see a message logged to your command line window with the text
Success! Otherwise, if an error occurs, you’ll see the stacktrace and contents
of that error logged to your command line window.
Note
With ES6 module import syntax you may make use of destructuring
assignment. Instead of importing an entire library, you may import the
functions or modules you need only through destructuring. For more
information, read https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment.
javascript
import { writeFileSync } from "fs"; #1

const content = "Some dummy text to test with"; #2

try { #3
  writeFileSync("./test.txt", content); #4
  console.log("Success!"); #5
} catch (err) {
  console.error(err); #6
}
In the following code, process.stdin and process.stdout are the streams your
Node app’s process uses to read input from and write output to the command
line on your computer.
javascript
import { createInterface } from "readline"; #1

const readline = createInterface({ input: process.stdin, output: process.stdout }); #2
Next, you make use of this mapping by using the question function on your
readline interface. This function takes a message, or prompt, that you’ll
display to the user on the command line and passes the user’s input to a
callback function.
Note
Because this function is asynchronous and relies on a callback, you can wrap it
in a Promise to make use of the async/await syntax. If you need a
refresher on JavaScript Promises, visit https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Reference/Global_Objects/Promise
Create a function called readLineAsync that waits for the user to reply and
press the enter key before the string value is resolved (listing 1.5). In this
way, your custom readLineAsync function will eventually resolve with a
response containing the user’s input without holding up the Node app.
javascript
const readLineAsync = message => { #1
  return new Promise(resolve => { #2
    readline.question(message, answer => { #3
      resolve(answer); #4
    });
  });
};
With these functions in place, all you need is to call readLineAsync for your
application to start retrieving user input. Because your prompt has three
specific data values to save, you can create a class to encapsulate that data for
each contact. As seen in listing 1.6, within index.js you import
appendFileSync from the fs module, which will create and append to a
given file name. Then you define a Person class which takes name, number,
and email as arguments in its constructor. Finally, you add a saveToCSV
method to the Person class to save each contact’s information in a comma-
delimited format suitable for CSV to a file called contacts.csv.
javascript
import { appendFileSync } from "fs"; #1

class Person { #2
  constructor(name = "", number = "", email = "") { #3
    this.name = name;
    this.number = number;
    this.email = email;
  }

  saveToCSV() { #4
    const content = `${this.name},${this.number},${this.email}\n`; #5
    try {
      appendFileSync("./contacts.csv", content); #6
      console.log(`${this.name} Saved!`);
    } catch (err) {
      console.error(err);
    }
  }
}
The last step is to instantiate a new Person object for each new contact you’re
manually entering into your application. To do that you create an async
startApp function that defines the new person object and assigns the name,
number, and email values in sequence. This way you wait for the user’s input
for each value before moving on to the next one. After all the
required values are collected, you call saveToCSV() on the person instance
and ask the user if they would like to continue entering more data. If so, they
can enter the letter y. Otherwise, you close the readline interface and end
your application (listing 1.7).
javascript
const startApp = async () => { #1
  const person = new Person(); #2
  person.name = await readLineAsync("Contact Name: "); #3
  person.number = await readLineAsync("Contact Number: ");
  person.email = await readLineAsync("Contact Email: ");
  person.saveToCSV(); #4
  const response = await readLineAsync("Continue? [y to continue]: "); #5
  if (response === "y") await startApp(); #6
  else readline.close(); #7
};
Then add startApp() at the bottom of index.js to start the app when the file
is run. Your final index.js file should look like listing 1.8.
javascript
import { appendFileSync } from "fs";
import { createInterface } from "readline";

const readline = createInterface({ input: process.stdin, output: process.stdout });

const readLineAsync = message => {
  return new Promise(resolve => {
    readline.question(message, answer => {
      resolve(answer);
    });
  });
};

class Person {
  constructor(name = "", number = "", email = "") {
    this.name = name;
    this.number = number;
    this.email = email;
  }

  saveToCSV() {
    const content = `${this.name},${this.number},${this.email}\n`;
    try {
      appendFileSync("./contacts.csv", content);
      console.log(`${this.name} Saved!`);
    } catch (err) {
      console.error(err);
    }
  }
}

const startApp = async () => {
  const person = new Person();
  person.name = await readLineAsync("Contact Name: ");
  person.number = await readLineAsync("Contact Number: ");
  person.email = await readLineAsync("Contact Email: ");
  person.saveToCSV();
  const response = await readLineAsync("Continue? [y to continue]: ");
  if (response === "y") await startApp();
  else readline.close();
};

startApp();
In the project folder on your command line, run node index to start seeing
text prompts as seen in figure 1.4.
Figure 1.4. Command line prompts for user input
When you are done entering all the contact’s details, you can then see that the
information has been saved to a file called contacts.csv in the same folder.
Each line of that file should be comma-delimited, looking like Jon
Wexler,2156667777,[email protected].
This should be just what the travel agency needs for now to convert their
physical contact cards into a CSV file they can use in many other ways. In
the next section, you’ll explore how third-party libraries can simplify your
code even further.
To improve the readability of your code from section 2, you can install the
prompt and csv-writer packages by running npm i prompt csv-writer in
your project folder on your command line. This command will also list these
two packages in your package.json file.
In your index.js file, import prompt and replace your readLineAsync calls
with prompt as seen in listing 1.9. You first initialize prompt by calling
prompt.start, then set the prompt message to an empty string. Now you can
delete your whole readLineAsync function, your readline interface mapping,
and the readline module import. prompt.get allows the user to respond to
multiple prompts before returning the resulting values back to the Node app.
The user’s responses are assigned to a responses object, with each response
keyed by the prompt’s name. Object.assign sets the name, number, and email
response fields on the person object. After the person’s values are saved to
the CSV, another prompt to continue follows. This time, the prompt’s response
is destructured and assigned to the again variable.
js
import prompt from "prompt"; #1
prompt.start(); #2
prompt.message = ""; #3
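The book doesn’t show the rewritten startApp at this point, but based on the description above it might look roughly like the following sketch. The prompt labels name, number, email, and again are assumptions, and prompt.get is awaited directly, which recent versions of the prompt package support.
js
// Sketch only: prompt labels are hypothetical and prompt.get is awaited.
const startApp = async () => {
  const person = new Person();
  // Collect all three fields in one call; responses are keyed by prompt name.
  const responses = await prompt.get(["name", "number", "email"]);
  Object.assign(person, responses);
  person.saveToCSV();
  // Ask whether to keep entering contacts; destructure the answer.
  const { again } = await prompt.get(["again"]);
  if (again === "y") await startApp();
};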
Similar to how the external prompt package displaced the built-in readline
module, csv-writer replaces your fs module import and defines a more
structured approach for writing to your CSV by including a header, as shown
in listing 1.10.
js
import { createObjectCsvWriter } from "csv-writer"; #1
...
const csvWriter = createObjectCsvWriter({ #2
  path: "./contacts.csv",
  append: true,
  header: [
    { id: "name", title: "NAME" },
    { id: "number", title: "NUMBER" },
    { id: "email", title: "EMAIL" },
  ],
});
Finally, you modify your saveToCSV method on the Person class to use
csvWriter.writeRecords instead (listing 1.11).
js
...
saveToCSV() {
  try {
    const { name, number, email } = this; #1
    csvWriter.writeRecords([{ name, number, email }]); #2
    console.log(`${name} Saved!`);
  } catch (err) {
    console.error(err);
  }
}
...
With these two changes in place your new index.js file should look like
listing 1.12.
javascript
import { createObjectCsvWriter } from "csv-writer";
import prompt from "prompt";

prompt.start();
prompt.message = "";

const csvWriter = createObjectCsvWriter({
  path: "./contacts.csv",
  append: true,
  header: [
    { id: "name", title: "NAME" },
    { id: "number", title: "NUMBER" },
    { id: "email", title: "EMAIL" },
  ],
});

class Person {
  constructor(name = "", number = "", email = "") {
    this.name = name;
    this.number = number;
    this.email = email;
  }

  saveToCSV() {
    try {
      const { name, number, email } = this;
      csvWriter.writeRecords([{ name, number, email }]);
      console.log(`${name} Saved!`);
    } catch (err) {
      console.error(err);
    }
  }
}

// startApp, rewritten to use prompt.get as described above, goes here

startApp();
Now when you run node index, the application’s behavior should be exactly
the same as in section 2. This time your contacts.csv file should list headers
at the top of the file. This is a great example of how you can use Node out of
the box to solve a real-world problem, then refactor and improve your code
by using external packages built by the thriving online Node community!
1.4 Summary
In this chapter you
Initialized a Node project and enabled ES6 modules in package.json
Used the built-in fs and readline modules to collect contact details and write them to a CSV file
Refactored the app with the external prompt and csv-writer packages
2 Building a Node web server
In this chapter, you’ll explore the most common use-case for Node, a web
application, and how the event loop plays a role. By the end of this chapter,
you’ll be able to use Node’s most popular application framework, Express.js,
to build both simple web servers and more extensive applications.
Before you get started, you’ll need to install and configure the following tools
and applications that are used in this project. Detailed instructions are
provided for you in the specified appendix. When you’ve finished, return
here and continue.
For that reason, packages like express offer a full web application
framework. This framework not only implements http to support web
requests and responses, but also offers an intuitive structure for organizing your
application’s files, importing other supportive packages, and building web
applications in a shorter time frame. In fact, many other web frameworks for
Node use Express as a foundation for their additional tooling. To learn more
about what Express offers, visit https://expressjs.com/.
As customers visit the restaurant’s site, your Express app will route them to
the page they’ve requested. Because Express uses Node, a single-threaded
event loop will process requests to the web server as they are received. Each
request is a customer’s attempt to visit the restaurant’s website via a URL.
Node’s single thread handles requests in the order they arrive and
processes them individually. That means only one customer is served at any
given moment. Node is fast, though, and as long as you’re not running any
expensive code (like computing the 50th number in the Fibonacci sequence),
your application should respond to its users instantaneously, without any
request blocking another. You sketch out how Node’s event loop might
handle requests.
The Node Event Loop is at the core of how every Node app operates.
Because JavaScript runs off a single thread, it’s important that your app
allows the single thread to process as many tasks as it can. Figure 2.2 shows
some of the ways in which you can block the Event Loop.
While the Event Loop runs on a single thread, Node can hand work off to a
pool of worker threads. These worker threads are assigned the
traditionally more expensive tasks, like filesystem or database operations
(I/O) or encryption tasks. While worker threads exist to keep
the single-threaded Event Loop free, they too may slow down or prevent your app
from running correctly. Meanwhile, the main thread is largely responsible for
registering asynchronous functions and processing their callbacks when
ready. If your callbacks contain nested loops, processing large quantities of
data, or CPU-intensive code, the Event Loop will not be able to respond to
requests from new clients.
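To make the risk concrete, here is a small illustrative sketch (not from the book) of an Express route that blocks the Event Loop with CPU-intensive work; while the naive Fibonacci computation runs, no other request can be served.
js
import express from "express";

const app = express();

// Naive, exponential-time recursive Fibonacci: deliberately expensive.
const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));

// While fib(45) is computed, the single-threaded Event Loop can do nothing else.
app.get("/blocking", (req, res) => {
  res.send(`fib(45) = ${fib(45)}`);
});

// Even this cheap route will appear frozen whenever /blocking is busy.
app.get("/fast", (req, res) => res.send("ok"));

app.listen(3000);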
Figure 2.3 demonstrates how web requests may be processed within your
completed web application running on Express. Like in figure 2.1, customers
visit the restaurant’s website. You don’t necessarily know how many
customers are visiting that URL, or at what rate, but each request enters your
application to be processed individually. Node’s event loop is able to quickly
queue incoming requests and assign a response to each request in the queue,
all while continuing to process new incoming requests. As long as there are
queues to temporarily store the order of requests, all the event loop needs to
do is handle each request as soon as it is able to. If a customer requests the
web page for the restaurant’s menu, Express will process that request by
routing the customer to a static web page with the menu listed.
Note
For more information about Node’s event loop see the official documentation
page at https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/
bash
package name: (restaurant_web_server) #1
version: (1.0.0)
description: A web application for a local restaurant.
entry point: (index.js) #2
test command:
git repository:
keywords:
author: Jon Wexler
license: (ISC)
After running through the prompts, your package.json file will appear in the
project’s folder. This file contains both the application’s general
configuration and the list of modules it depends on.
Next, you run npm i express to install the express package. You’ll need an
internet connection, as running this command will fetch the contents of the
express package from the npm registry at https://www.npmjs.com and add
them to your node_modules folder at the root level of your project. Unlike the
fs and http modules that are prepackaged with Node, the express module is
not offered with your initial installation. Instead, Express.js is bundled into a
package called express that can be downloaded and installed separately
through Node’s package management registry tool, npm.
Note
Both Node and Express are projects supported by the OpenJS Foundation.
For more information about open-source JavaScript projects from OpenJS
visit https://openjsf.org/
Note
There are a variety of ways to write npm commands, some shorter, and others
more explicit in their phrasing. Learn more about npm command line
shorthands and flags at https://docs.npmjs.com/cli/v8/using-npm/config
json
"dependencies": {
"express": "^4.17.2" #1
},
Note
^ in npm package versioning means your application will ensure that this
version, or any compatible versions of the package with minor or patch
updates, will be installed to your application. ~ before the version number
means only patch updates will be installed, but not minor version changes.
For more about package.json and how versioning works see
https://docs.npmjs.com/cli/v8/configuring-npm/package-json
With express installed, you create a file called index.js at the root level of
your project folder. Next you import express into your application on the first
line by adding import express from 'express'.
Note
Even as of Node v17, ES6 imports are not enabled by default. You need to add
"type": "module" to your package.json file to use the import syntax.
As shown in listing 2.3, you then type const app = express() to instantiate
a new instance of an Express application and assign it to a variable called
app. You also assign another variable called port a development port number
of 3000.
Note
You can use nearly any port number to test your code in development. With
ports like 80, 443, and 22 typically reserved for standard unencrypted web
pages, SSL, and SSH, respectively, 3000 has become a reliable standard go-
to port for software engineers.
Your app object has functions you can use to interact with incoming web
requests. You add app.get on "/", which will listen for HTTP GET requests
to your web app’s home page. From there you can process the incoming
request and reply with a response. To test the application you add
res.send("Welcome to What’s Fare is Fair!") in the app.get callback
function to reply with plain text to the requesting customer’s web browser.
Last, you add app.listen and pass in your previously defined port value
and a callback function within which you log a message to your command
line console.
Note
Express' app.get is named according to the HTTP request type. The most
common requests are GET, POST, PUT, and DELETE. To get more familiar
with these request methods read more at https://developer.mozilla.org/en-
US/docs/Web/HTTP/Methods
js
import express from "express"; #1

const app = express(); #2
const port = 3000; #3

app.get("/", (req, res) => { #4
  res.send("Welcome to What's Fare is Fair!"); #5
});

app.listen(port, () => { #6
  console.log(`Web Server is listening at localhost:${port}`); #7
});
Now, you can start your application by running node index in your project’s
command line window. You should then see a logged statement that reads:
Web Server is listening at localhost:3000. This means you can open
your favorite web browser and visit localhost:3000 to see the text in figure
2.4.
Figure 2.4. Viewing your web server’s response in your web browser
With your application’s foundation out of the way, it’s time to add some flair
to What’s Fare is Fair’s site.
2.3 Adding routes and data
With your application running, you move on to add more routes and context
to your restaurant’s site. You already added one route: a GET request to the
homepage (/). Now, you can add two more routes for the menu page and
working hours page, as depicted in listing 2.4. Each app.get provides a new
route at which your web pages are reachable.
javascript
app.get("/menu", (req, res) => { #1
res.send("TODO: Menu Page");
});
You can stop your Node server by pressing Ctrl+C in the command line of
your running application. With your new changes in place, you can start your
application again by running node index. Now when you navigate to
localhost:3000/menu and localhost:3000/hours you’ll see the text change to
your TODO messages.
This is a good start, but you’ll need to fill in some meaningful data here.
Your contact at What’s Fare is Fair provides you with pictures of their menu
(figure 2.5). This image provides insight into the structure of the data in the
restaurant’s menu. For example, each item has a title, price, and description.
With these two references for data, you can convert the menu and hours list
into JavaScript-friendly data modules. First, you create a folder called data in
your project directory, where you’ll add a menuItem.js and a
workingHours.js file. From these files you use the ES6 export default
syntax to export each file's contents for use in other modules, as shown in
listing 2.5 and listing 2.6.
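Listing 2.5, the menu data module, isn’t reproduced here; the sketch below shows what data/menuItem.js might contain, with made-up dishes. The field names name, description, and cost match what the menu template expects later in the chapter.
js
// data/menuItem.js — hypothetical entries; only the object shape matters.
export default [
  {
    name: "House Salad",
    description: "Mixed greens with a lemon vinaigrette",
    cost: 9,
  },
  {
    name: "Penne alla Vodka",
    description: "Penne in a creamy tomato sauce",
    cost: 16,
  },
];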
js
export default { #1
  default: {
    open: 11,
    closed: 22,
  },
  monday: null,
  sunday: {
    open: 12,
    closed: 20,
  },
};
To make use of this data, you import the two relative modules at the top of
index.js (listing 2.7).
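Listing 2.7 is not shown here, but the two imports would look something like the following; the ./data paths assume the folder and file names created above, and the .js extensions are required for relative ES module imports.
js
import menuItems from "./data/menuItem.js";
import workingHours from "./data/workingHours.js";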
To test that these values are being loaded properly, you replace the res.send
statements in the /menu and /hours routes with res.json(menuItems) and
res.json(workingHours), respectively. Doing so should replace the static text
you previously saw when loading your web pages with the more meaningful data
provided to you by the restaurant. This step in the process can help you
validate that the data you’re expecting is properly flowing to the web pages
you intend it to reach.
With this data displayed on the browser, the next step is to format it to be
more visually appealing.
2.4 Building your UI
You could build a user interface using a frontend framework like React.js,
Vue.js, or Angular.js. To keep this app simple, you set up a server-side
rendered (SSR) template by installing Embedded JavaScript Templates (EJS).
Note
There are a variety of templating engines that work well with Node and
Express. Check out https://ejs.co/ and https://pugjs.org/api/getting-
started.html to learn more about EJS and Pug.
On the command line, you navigate to your project folder and run npm
install ejs. This installs the ejs package, which facilitates converting
HTML content with dynamic data into static HTML pages.
Next, you make use of the ejs package by setting it as your view engine in
Express (listing 2.8). You can now use the render function to display pages
written with HTML and EJS.
js
app.set("view engine", "ejs"); #1
To be able to properly render these pages, you need only to create a folder
called views at the root level of your project and then add three new files:
index.ejs, menu.ejs, and hours.ejs. These three files will be located by the
EJS templating engine in express on each request. To complete the process
you fill these files with a mix of HTML and EJS. Listing 2.9 shows an
example of your landing page, index.ejs.
html
<!DOCTYPE html> #1
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Restaurant</title>
  </head>
  <body>
    <h1>Welcome to <%= name %></h1> #2
  </body>
</html>
Note
EJS uses <%= %> to display content within the HTML. In listing 2.9 you
use this syntax to display the business name. If you want to run JavaScript on
the page without printing anything, you leave out the =.
When you restart your project and visit http://localhost:3000 your browser
should look like figure 2.8.
html
<h1>Our Menu</h1>
<ol>
  <% for(let item of menuItems) { %> #1
    <li>
      <strong><%= item.name %></strong> #2
      <%= item.description %> #3
      <%= item.cost %>
    </li>
  <% } %>
</ol>
With this code in place, restart your node server and navigate to
http://localhost:3000/menu in your web browser to see a page that looks like
figure 2.9.
html
<h1>Our Hours</h1> #1
<% for(let day of days) { %> #2
  <% const hoursObj = workingHours[day] || workingHours['default'] %>
  <section style="display: inline-flex; flex-direction: column; padding: 5px;"> #3
    <h2><%= day.toUpperCase() %></h2> #4
    <div>
      <% if (hoursObj.open) {%> #5
        <p>Open: <%= hoursObj.open %></p> #6
        <p>Closed: <%= hoursObj.closed %></p>
      <% } else {%>
        CLOSED #7
      <% } %>
    </div>
  </section>
<% } %>
With this last page complete, you restart your node server and navigate to
http://localhost:3000/hours in your web browser to see a page that looks like
figure 2.10.
To quickly serve CSS from Express, add a public folder at the root
level of your project, then add app.use(express.static("public")) to
your index.js file. You can then put any file with a .css extension, images,
or other static content into your public folder and access those resources from
within your .ejs files. Look at figure 2.11, figure 2.12, and figure 2.13 to see
how the addition of stylesheets can improve the aesthetics of the pages you
built.
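For example, if you dropped a hypothetical styles.css file into the public folder, any of your .ejs files could reference it with a standard link tag:
html
<!-- styles.css is a hypothetical file placed in the public folder -->
<link rel="stylesheet" href="/styles.css">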
The landing page may have any layout of your choosing. Express has many
tools to support custom layouts and other templating engines besides EJS.
For more information on templating engines with Express visit
https://expressjs.com/en/guide/using-template-engines.html.
If you have the time to dedicate to improving the UI of a Node web app or if
you can work alongside a frontend developer, you may find the end result
more pleasing to your customer.
2.6 Summary
In this chapter you
Learned about how the Node event loop handles web requests
Built a web application with Express
Served HTML pages using separate data modules
3 Password Manager
This chapter covers
Data encryption concepts
Working with Bcrypt
Saving data to a MongoDB collection
In this chapter, you will build a password manager using the bcrypt
encryption package and mongodb for persistent storage. You’ll start by
understanding what happens under the hood with encryption and how you
can use this mechanism to build an effective productivity tool. Later, you’ll
introduce document storage with MongoDB to store your encrypted data for
future access.
Before you get started, you’ll need to install and configure the following tools
and applications that are used in this project. Detailed instructions are
provided for you in the specified appendix. When you’ve finished, return
here and continue.
You’ve decided that you want this Node application to run on your own
machine, only allow access via an encrypted password, and save your
personal passwords in a database. To get this application working in only a
short time, you choose an existing encryption library to hash your passwords,
and MongoDB to store those passwords. Before you start programming, you
diagram the requirements of the project and your result.
Figure 3.1 shows the flow of information for your completed application.
Your application will store both an encrypted master password and plain text
passwords. The following steps detail how the application should work:
1. To start, a master password will be typed into your command line and
sent to your Node application.
2. Your application logic encrypts that password and saves it to your
database.
3. The next time you access your application, you type your master
password, which will be validated against your encrypted password.
4. If the typed password matches your master password, you may choose
to save personal passwords or view a list of saved passwords.
Figure 3.1. Project blueprint for flow of data in password manager app
As you type new passwords to save to your database, the password text enters
into your Node app. From there, application logic encrypts your password
and saves it to your database. To retrieve that list of passwords, you must re-
type a master password that only you know.
bash
package name: (password_manager) #1
version: (1.0.0)
description: A Node app for storing passwords
entry point: (index.js) #2
test command:
git repository:
keywords:
author: Jon Wexler
license: (ISC)
Next, run npm i bcrypt to install the bcrypt package. The bcrypt.hashSync
function is one of many you can use to hash your password. The process of
encrypting your password involves two steps: hashing your password and
validating a plain text password against your hashed password. Figure 3.2
shows how the hashing function is a one-way procedure. In this way, it is
very difficult to reverse engineer the original password from the hashed
value.
First, you type the main password that you’ll use to access your other
passwords. Bcrypt’s hash function uses a salt (randomly generated text) to
jumble your password text a number of times equal to your salt rounds value.
The resulting hashed password is then stored in your database. Later, when
you type your password again to access your manager, your input text is
again encrypted and compared to the stored password hash. Bcrypt’s compare
function will use the same salt rounds to evaluate your plain text against the
hashed password. If your password matches the hashed password in your
database, you are authorized. In this way, Bcrypt does not reverse a hashed
password, but instead re-hashes a re-typed password and compares the result
with the password hash in the database.
Now, create your index.js file at the root level of your project directory.
This is where most of your application logic will live. You can test some of
bcrypt’s functions by adding the code in listing 3.2 to index.js.
js
import bcrypt from "bcrypt"; #1
const password = "test1234"; #2
const hash = bcrypt.hashSync(password, 10); #3
console.log(`My hashed password is: ${hash}`); #4
With this code in place you can run node index at the root level of your
project in your command line window. Your resulting output should look like
My hashed password is:
$2b$10$/mLyLstSX54RgR9nQwO.3etHggaCP53.eG1.tsFYmyb8OXVfre84C (with
a different hash value, of course).
Note
If you don’t see a logged statement in your command line window, check to
make sure your index.js file was saved in the same directory from which
you’re running the application.
With your test case working, you build out the functions needed to facilitate
saving a new encrypted password. Figure 3.3 demonstrates the flow of logic
according to the function names you’ll use. To start, your application runs a
prompt function to enable user interaction on the command line. Then, you
check whether there is already a master password hash stored. If a master
password hash exists, you run promptOldPassword to prompt the user to re-
type their password. Otherwise, you run promptNewPassword to prompt the
user to type a new master password for the first time. When the user types
their new password, the saveNewPassword function will save the resulting
hash to the database.
If the user types their existing master password, you compare their input to
the stored password hash through compareHashedPassword. If their password
is validated you display a menu of items to choose from through the
showMenu function. Within this menu, the user may choose to view their list
of passwords (viewPasswords), add a new password to your list
(promptManageNewPassword), re-verify your hashed password, or exit the
app.
The code for this logic can be written one function at a time. First, install the
prompt-sync package by running npm i prompt-sync at the root level of
your project in your command line. Then, add the bcrypt and prompt-sync
imports to your index.js file. Also, add a JavaScript object with a
passwords key mapped to an empty object to represent your database. As you
add new passwords to save, this object will get populated (listing 3.3).
Listing 3.3. Add module imports and mock db to the top of index.js
js
import bcrypt from "bcrypt"; #1
import promptModule from "prompt-sync";
const prompt = promptModule(); #2
const mockDB = { passwords: {} }; #3
...
With your imports in place you can create your first function,
saveNewPassword, which takes a plain text password, password, as an
argument and makes use of the Bcrypt hashSync function to convert the text
to a hashed value. That resulting value is then set in the mock database,
mockDB. You let the user know the password is saved with a log message, and
then call the showMenu function, which you’ll soon write (listing 3.4).
js
...
const saveNewPassword = (password) => {
  const hash = bcrypt.hashSync(password, 10); #1
  mockDB.hash = hash; #2
  console.log("Password has been saved!");
  showMenu(); #3
};
...
js
...
const compareHashedPassword = async (password) => {
  const { hash } = mockDB; #1
  return await bcrypt.compare(password, hash); #2
};
...
The next two functions will prompt the user to type a new password or re-
type an old password (listing 3.6). promptNewPassword logs a message to the
command line console for the user to type their main master password. The
typed password is subsequently saved in your saveNewPassword function.
Meanwhile, promptOldPassword prompts the user to re-type their old master
password. The input text is validated, determining whether the user can view
the menu by running showMenu, or if the user must re-type their master
password again, by re-running promptOldPassword.
js
...
const promptNewPassword = () => {
  const response = prompt("Enter a main password: "); #1
  saveNewPassword(response); #2
};
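Listing 3.6 only reproduces promptNewPassword; based on the description above, promptOldPassword might look like the following sketch (the prompt wording is an assumption).
js
const promptOldPassword = async () => {
  const response = prompt("Enter your main password: "); // hypothetical wording
  // Validate the typed password against the stored hash.
  if (await compareHashedPassword(response)) showMenu();
  else promptOldPassword(); // on a mismatch, ask again
};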
js
...
const showMenu = () => {
console.log(`
1. View passwords
2. Manage new password
3. Verify password
4. Exit`); #1
const response = prompt(">");
With the menu ready to display, you only need to add the functions to view
stored passwords and save new passwords to store. Add the code in listing
3.8, where viewPasswords destructures your passwords from the mockDB. With
your passwords as key/value pairs, you log both to your console for each
stored password. Then, you show the menu again, which prompts the user to
make another selection. promptManageNewPassword is the function that
prompts the user to type the source for their password; effectively an
application or website name for which they are storing their password. Then
the user is prompted for a password they want to save. The source and
password pair are saved to your mockDB and, again, you run showMenu to
prompt the menu items.
js
...
const viewPasswords = () => {
  const { passwords } = mockDB; #1
  Object.entries(passwords).forEach(([key, value], index) => {
    console.log(`${index + 1}. ${key} => ${value}`);
  }); #2
  showMenu(); #3
};

const promptManageNewPassword = () => {
  const source = prompt("Enter a source: "); #4
  const password = prompt("Enter a password: ");
  mockDB.passwords[source] = password; #5
  console.log(`Password for ${source} has been saved!`);
  showMenu(); #6
};
...
Your application is ready to run. The last piece to add is the code in listing
3.9. Here, mockDB is checked for an existing hash value. If one does not exist
the user is prompted to create one through promptNewPassword. Otherwise,
the user is prompted to re-type their master password through
promptOldPassword.
Listing 3.9. Determine the entry point for your application in index.js
js
...
if (!mockDB.hash) promptNewPassword(); #1
else promptOldPassword();
With this code in place you have most of the logic you need to run the
password manager. The only downside is that the local database stores your
managed passwords only temporarily, while the application is running. Because
the local database is just an in-memory object, its contents are lost each
time you restart your app.
To test this, go to the root level of your project folder in your command line
and run node index. You should be prompted to type a new password like in
figure 3.4. After typing your password, it will be hashed by bcrypt and you’ll
see a menu of items to choose from.
Figure 3.4. Typing your main password to access your application menu
From here you can select 2 and press Enter to add a new password to
manage. Try typing a source like manning.com and a password as seen in
figure 3.5.
Now you can safely exit the application by typing 4 and pressing Enter. This
step safely kills the Node process and exits your command line app. The next
step is to add a persistent database so you don’t have your passwords deleted
every time you run your app.
In figure 3.7 you see a diagram with an example of how your data could be
stored. This structure is similar to JavaScript Object Notation (JSON),
making it easier to continue to work with JavaScript on the backend. Notice
in this figure you store the password_hash as an encrypted value for your
main password. Then you have a list of passwords that map a source name
to a plain text password. Additionally, MongoDB will assign an ObjectId to
new data items within a collection.
For this section, you’ll need to ensure MongoDB is properly installed. Visit
appendix A.1.4 for installation steps.
Go to your project’s root level at the command prompt and run npm i
mongodb.
Once installed, the mongodb package will provide your Node application the
tools it needs to connect to your database and start adding data. For this
reason, you no longer need your temporary in-memory storage, mockDB, from
section 3.2. Instead, you use the MongoClient to set up a new connection to
your local MongoDB server. Your development server should be running at
mongodb://localhost:27017 on your computer. Last, you set up a database
name, passwordManager, to connect to.
js
import { MongoClient } from "mongodb"; #1
let hasPasswords = false; #2
const client = new MongoClient("mongodb://localhost:27017"); #3
const dbName = "passwordManager"; #4
js
...
const main = async () => { #1
  await client.connect(); #2
  console.log("Connected successfully to server");
  const db = client.db(dbName); #3
  const authCollection = db.collection("auth"); #4
  const passwordsCollection = db.collection("passwords");
  const hashedPassword = await authCollection.findOne({ type: "auth" }); #5
  hasPasswords = !!hashedPassword; #6
  return [passwordsCollection, authCollection]; #7
};
At the bottom of index.js add the code in listing 3.12 to call the main
function and begin processing your app.
js
...
const [passwordsCollection, authCollection] = await main(); #1
if (!hasPasswords) promptNewPassword(); #2
else promptOldPassword();
Now you can restart your Node application by exiting any running
application and typing node index. If your application successfully
connected to the database you should see "Connected successfully to
server" logged to your command line.
Note
After saving passwords, if you want to delete the database of passwords and
start from scratch, you can always add await
passwordsCollection.deleteMany({}) or await
authCollection.deleteMany({}) to delete your passwords or main hashed
password, respectively.
With your database connected, you need to modify some of your application
logic to handle reading and writing to your MongoDB collections. Change
saveNewPassword to become an async function. Within that function change
mockDB.hash = hash to await authCollection.insertOne({ "type":
"auth", hash }). This will save the hashed password hash to the
authCollection in your database.
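Put together, the updated saveNewPassword might look like this sketch:
js
// Sketch of saveNewPassword after switching from mockDB to MongoDB.
const saveNewPassword = async (password) => {
  const hash = bcrypt.hashSync(password, 10);
  await authCollection.insertOne({ type: "auth", hash }); // persist the master hash
  console.log("Password has been saved!");
  await showMenu();
};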
The last three functions to change are in listing 3.13. Here, viewPasswords is
modified to pull all passwords (by source and password value) from your
passwordsCollection. showMenu will remain the same, but like the other
functions will become async. In this function you add await before each
function call, as they are now performing I/O operations. Last,
promptManageNewPassword uses the findOneAndUpdate MongoDB function
to add a new password entry if it doesn’t exist, or override and update a
password entry if an old value exists. The options returnNewDocument and
upsert tell the function to override the changed value and return a copy of
the modified value when the save operation is complete.
js
const viewPasswords = async () => {
  const passwords = await passwordsCollection.find({}).toArray(); #1
  Object.entries(passwords).forEach(([key, { source, password }], index) => {
    console.log(`${index + 1}. ${source} => ${password}`);
  });
  showMenu();
};

const showMenu = async () => {
  console.log(`
1. View passwords
2. Manage new password
3. Verify password
4. Exit`);
  const response = prompt(">");
With this code in place, you have a fully functional database to support your
password manager application. Quit any previously running Node application
and restart the application by running node index. Nothing should change
about the prompts you see in the command line. Only this time the values you
type will persist even when you quit the application.
With this application complete, you can always run the application locally
and add or retrieve passwords secured behind your hashed main password.
Some next steps you could take would be to add a client with a UI to help
visualize your password data, or to set up your database in the cloud
so that your passwords persist from computer to computer.
3.4 Summary
In this chapter you
Hashed a master password with bcrypt and compared typed input against the stored hash
Built a command-line prompt flow with prompt-sync to save and view passwords
Persisted password data to MongoDB collections with the mongodb package
4 RSS Feed
In this chapter, you’ll build your own RSS feed reader to access and process
XML. Then you’ll create your own RSS aggregator, which will read from
multiple feeds and provide up-to-date and relevant content on demand. By
the end, you’ll have a fully functioning RSS feed aggregator that can list data
on your command-line or web client.
Before you get started, you’ll need to install and configure the following tools
and applications that are used in this project. Detailed instructions are
provided for you in the specified appendix. When you’ve finished, return
here and continue.
In this diagram, you see the layout of logic and flow of information. Starting
from the client (which may be any computer or device with a network
connection), a request is made to collect the top feed results from your Node
app. From there, your app processes data from multiple RSS sources and
parses the data to return a summary list of results. This diagram demonstrates
how a list may appear on your command line client.
Figure 4.1. Project blueprint for an application RSS reader and aggregator
Before you get coding, it may help to review how RSS works and what type
of data to expect. RSS works largely because there are sources for content
across the web. RSS content typically comes from news sites or blogs that
want to allow their viewers quick and easy access to the top headlines of the
day. For this to work, a news site must offer an API to access the RSS feed.
For example, you may use the Bon Appetit recipes RSS feed for your app, which
is accessible at https://www.bonappetit.com/feed/recipes-rss-feed/rss. You
can click on this link to view the feed contents in your web browser, as
shown in figure 4.2.
XML (Extensible Markup Language) is just one of many data formats you
can use across the web. The XML structure uses tags similar to those used in
HTML web pages. The nested tags help you understand which pieces of data
belong to which sections. RSS-feed XML files typically start with an rss tag
that describes the type of XML being used. After that, there’s a tag labeled
channel to detail information about the source of data. In the case of Bon
Appetit, the channel tag lists the name, link, and written language used for
their channel’s feed. Within the channel tag is where you’ll find the main
content. Each content listing is wrapped in an item tag. Normally, you’ll
find many item tags, and it’s your job to extract these items and present
them in your own way to your app’s clients.
Figure 4.2. RSS feed results for Bon Appetit recipes in the browser
The first step in building an application that can use XML data is to call the
RSS feed URL directly from within your Node app. Get started by creating a
food_feeds_rss_app folder and navigating to the project folder in your
command line. From here run npm init to initialize the Node app with the
default configurations, as shown in listing 4.1.
json
{
  "name": "food_feeds_rss_app", #1
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Jon Wexler",
  "license": "ISC"
}
Note
As of Node v18.2.0 you may use the fetch api without the need to install an
external package. If you are using an earlier version of Node, you’ll need to
run npm i node-fetch to be able to use fetch in your app.
js
const main = async () => { #1
  const url = "https://www.bonappetit.com/feed/recipes-rss-feed/rss";
  const response = await fetch(url); #2
  console.log(await response.text()); #3
};

main(); #4
Install the rss-parser package by running npm i rss-parser at your project’s
root level. Once the package is installed, you’ll notice that your package.json
file lists a new dependency, and a folder called node_modules was created at
your project’s root level. Next, import the rss-parser package into your app
by adding import Parser from 'rss-parser'; to the top of your index.js
file. On the following line, you instantiate the Parser class by adding const
parser = new Parser();. Now you have a parser object you can use in
place of your Fetch API code. Replace the contents of your main function
with the code in listing 4.3. This code implements the parser.parseURL
function, by fetching the contents of your RSS feed url and preparing them in
a structured format. You’ll then have access to the feed title and items. In the
end, you only log what you want to show from that feed. In this case, it’s the
item title and link.
js
...
const url = "https://www.bonappetit.com/feed/recipes-rss-feed/rss";
const {title, items} = await parser.parseURL(url); #1
console.log(title); #2
const results = items.map(({title, link}) => ({title, link})); #3
console.table(results); #4
...
Tip
After adding the rss-parser code, save your file, navigate to your project’s
root level on your command line and run the command node index. Your
output should look similar to that in figure 4.3.
This command line RSS reader is a great way to have the latest updates from
your favorite RSS feed endpoints running on your computer. In the next
section, you’ll add more external feeds and build your own aggregator to
show only the most relevant content.
To test fetching from multiple URLs, you can use the Specialty Food lunch
feed and the Reddit /r/Recipes subreddit feed. Both of these feeds offer
varying content at different times, making it more of a challenge to parse. To
incorporate these additional feeds, you add
https://www.specialtyfood.com/rss/featured-articles/category/lunch/ and
https://www.reddit.com/r/recipes/.rss to the list of URLs to explore at the top
of index.js (listing 4.4). The urls constant will later be used to cycle
through each URL and collect its corresponding XML response.
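Listing 4.4 isn’t reproduced here, but the urls constant simply collects the three feed addresses mentioned above:
js
const urls = [
  "https://www.bonappetit.com/feed/recipes-rss-feed/rss",
  "https://www.specialtyfood.com/rss/featured-articles/category/lunch/",
  "https://www.reddit.com/r/recipes/.rss",
];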
With this list in place, you may now modify the main function by iterating
through each URL to fetch feed content (listing 4.5). First, assign a constant
feedItems to an empty array: this is where your eventual feed items will be
stored. Next, iterate through the urls array using the map function, which will
visit each URL and run the parser.parseURL function to return a Promise in
its place. In the following line, you use Promise.all which waits for all the
requests to external URLs to return with responses before completing. Each
response will be stored in a responses array. Last, you use a custom
aggregate and print function to sift through the responses and log your
desired output, respectively.
js
const main = async () => {
  const feedItems = []; #1
  const awaitableRequests = urls.map(url => parser.parseURL(url)); #2
  const responses = await Promise.all(awaitableRequests); #3
  aggregate(responses, feedItems); #4
  print(feedItems); #5
};
Before re-running your application, you need to define the aggregate and
print functions. Add the code in listing 4.6 below your main function. In the
aggregate function, you collect all the feed data from each external source
and, for this project, only retain the items that contain recipes with
vegetables. First, loop through the array of responses and examine only the
items within each XML response. Then, an inner loop visits each item and
destructures only the title and link, because these are the only pieces of data
you care about in this project. With access to each item’s title, you check if
the title includes the string veg. If that condition passes, you add an object
with the title and link to your feedItems array.
In your print function, you accept feedItems as an argument. Next, you
clear the console of previous logs using console.clear. Print your
feedItems to your console using console.table and then log your Last
updated time by generating a new Date object and converting it to a human-
readable string.
js
...
const aggregate = (responses, feedItems) => { #1
  for (let { items } of responses) { #2
    for (let { title, link } of items) { #3
      if (title.toLowerCase().includes('veg')) { #4
        feedItems.push({ title, link });
      }
    }
  }
  return feedItems; #5
};
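The print half of listing 4.6 isn’t shown above; a sketch matching the description would be:
js
const print = (feedItems) => {
  console.clear();          // wipe the previous output
  console.table(feedItems); // show the aggregated titles and links
  console.log(`Last updated: ${new Date().toLocaleString()}`); // human-readable timestamp
};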
Now, restart your application. You’ll notice this time there are fewer results
logged to your console (figure 4.4), but the items shown are from varying
sources—all with titles indicating some vegetable or vegetarian recipe. You
can modify the aggregate condition to your liking by focusing on other key
words, or even examining data other than the title and link used in this
example.
Figure 4.4. Console output for an aggregated table of RSS feed items
Ultimately, when you share this aggregator with your colleagues, they can
add any additional RSS source URLs to increase the quantity of meaningful
results. Before you wrap this project up, you decide to add one more feature:
adding custom items to the feed.
To collect user-typed input, install the prompt-sync package by running npm
i prompt-sync at the root level of your project in your command line. Then,
in index.js add import promptModule from 'prompt-sync'; to the top of
your file, followed by const prompt = promptModule({sigint: true}); to
instantiate the prompt function with a sigint config that allows you to exit your
app. Last, add const customItems = []; to define an array for your custom
feed items. Next, you modify the print function by adding the code in listing
4.7 to the top of that function. The prompt function will show Add item:
on your console and wait for a typed response. When the Enter key
is pressed, the input is saved to a res constant. User input should
be of the format title,link. Then, the title and link are extracted
by splitting the resulting input string. The new custom item object is added to
your customItems global constant array.
Listing 4.7. Modify the print function to accept user input in index.js
js
const res = prompt('Add item: '); #1
const [title, link] = res.split(','); #2
if (![title, link].includes(undefined)) customItems.push({title, link}); #3
...
Figure 4.5. Console output for an aggregated table with custom feed items
Now you have an app you can share with others in your office. You can use
the aggregator to collect relevant recipes every two seconds (or at an interval
of your choice). You can also add custom links not found in your external
sources. When users of your app publicize their results, everyone can benefit
from your new aggregated collection of quick-access recipe links. You may
continue developing the application to make it accessible across a shared
network, or build a web framework with a database into your app to allow
web clients to access the feed data.
4.5 Summary
In this chapter you
Fetched RSS feed XML and parsed it with the rss-parser package
Aggregated multiple feeds with Promise.all and filtered the results by keyword
Added custom feed items from user input with the prompt-sync package
5 Library API
Although most people are familiar with the internet by way of their web
browser, most activity and data transfer happens behind the scenes. Some
data, like real-time train schedules, is made available not just as a standalone
web app, but as a resource others can use and implement in their own apps.
This resource is called an Application Programming Interface (API), and it
allows its users to view all or some available data belonging to a restricted
environment (like the Railroad authority). Some APIs allow the addition,
modification, and deletion of data, especially if it’s your own data or if you
are the authority over that resource itself.
In this chapter, you’ll build a RESTful API, meaning it will support access
and modification of data through a standard protocol. You’ll also connect the
API to a database—to which you can add new information and from which
you can access older records. In the end, you’ll have the translatable skills
needed to build an API for just about any type of data.
Before you get started, you’ll need to install and configure the following tools
and applications that are used in this project. Detailed instructions are
provided for you in the specified appendix. When you’ve finished, return
here and continue.
Figure 5.1. Project blueprint for a Node API using four HTTP methods
HTTP Methods
There are nearly 40 registered HTTP methods, though only a handful
make up the majority of requests made across the internet. The following
HTTP methods are the ones you’ll use in this book:
GET requests read data from a resource
POST requests create new data on a resource
PUT requests update existing data on a resource
DELETE requests remove data from a resource
While there are many other request methods used across HTTP, these four
are enough to get started with developing an API. For more information on
HTTP methods, visit https://developer.mozilla.org/en-
US/docs/Web/HTTP/Methods.
Because the library wants your API to bring visibility to popular and sought-
after books, the data served by your API should have enough information to
identify those books and the level of interest. By the time your database is set
up, you’ll want to store the title and author of the book, as well as a count of
how many requests that book received. In this way, mobile and web clients
that use your API can notify the library and its patrons of the most popular
requests.
Figure 5.2 shows the flow of data during a POST request for a new book. That
request contains the title and author of the book. If the book already exists in
the database, its request count increases. Otherwise, that book's record is
added to the database for the first time with its own serial ID.
Next, create a file called index.js within your project folder. This file acts as
the entry point for your app.
This book encourages you to build each project from scratch. For that reason,
some of the packages, configurations, and installation steps that are required
are listed in a separate appendix. To continue building your app, complete the
steps in the following document:
Listing 5.1. Instantiating your Express app and start listening for requests in index.js
js
import express from 'express'; #1
const app = express(); #2
const PORT = 3000; #3
app.listen(PORT, () => { #4
console.log(`Listening at http://localhost:${PORT}`);
});
Your app is now ready to accept any HTTP requests. For now, you can test
this by running npm start. You’ll notice the command line console prints a
statement, Listening at http://localhost:3000, to indicate that your app
is running.
Note
With the nodemon package installed, you need to run npm start only once
while developing your app. Every change you make to the project thereafter
automatically restarts your Node process and reflects the changes
immediately. Please reference appendix B for more information on installing
nodemon.
Your app is ready to process requests. To support an API, though, you need
to instruct Express on the type of data your app anticipates receiving. The
app.use function allows you to pass in middleware functions, which process
incoming requests before you get to see what's inside.
Middleware functions, as the name implies, sit between the request being
received by your app, and the request being processed by custom app logic.
Here, you add the express.json() middleware function, which parses
incoming requests with JSON data. Because this API expects to receive and
serve JSON data, it’s necessary to include this parsing function.
Add the code in listing 5.2 below your PORT definition in index.js.
Listing 5.2. Adding JSON and URL-encoded parsing middleware to your Express app in
index.js
js
...
app.use(express.json()); #1
app.use( #2
express.urlencoded({
extended: true,
})
);
...
To complete the first stage of building your API, add the code in listing 5.3
right below your middleware functions. In this block, app.get defines a route
which listens for GET requests only. The "/" indicates that your app is
listening for requests made to the default URI endpoint:
http://localhost:3000/. That means if the HTTP request uses the GET method
and targets the default URI, the provided callback function is executed.
Express provides your callback function with a request (req) and response
(res) object as parameters. The req object is used to examine the contents of
the request, while the res object lets you assign values and package data in
the response to the client.
Note
By some conventions, variables that are not used in a function, but still
defined, have an underscore applied to their name. This helps the next
engineer know not to expect that variable to have any behavior in the
function. For that reason, the request argument in this example is called _req.
Once a request is processed, you use the res.json function to reply to the
client with structured JSON containing a key called message and value of
"ok". This response indicates to the client that the API server is functioning
correctly.
In the following block of code, app.use sets up a function that handles any
errors your other routes and middleware pass along. This is called catch-all
error-handling middleware. In this function, the error object, e, is the first
argument passed in. In this way, if an error occurs, your app won't just crash;
it logs the error message and returns a status code of 500 (internal
server error) instead.
js
...
app.get("/", (_req, res) => { #1
res.json({ message: "ok" }); #2
});
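As a minimal sketch, an error handler like the one described could look like this, assuming Express's four-argument error-middleware signature and the behavior described above (log the message, respond with a 500 status):
js
...
app.use((e, _req, res, _next) => { // Express treats a 4-argument function as error-handling middleware
  console.error("Error occurred: ", e.message); // Log the error on the server
  res.status(500).json({ message: e.message }); // Reply with an internal server error status
});
...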
If you don’t see a message appear in your browser, it’s possible that the URL
or port number was entered incorrectly. Also, make sure that your server is
running. In case of a server error, it’s possible that the server will exit its
process and wait for your fix before restarting.
In the next section you’ll add the other necessary routes to support your
library API.
Before you add more code to index.js, it’s important that your project
structure is maintained and organized. So far, you have one JavaScript file in
your project. By the end of this section your project directory structure should
look like listing 5.4. Now you’ll introduce a new folder to separate your
routing logic from the rest of your app logic.
Note
bash
library-api #1
|
- index.js
- routes #2
  |
  - index.js
  - booksRouter.js
- package.json
- node_modules
Navigate to your project folder in your command line and create a new folder
called routes. Within that folder create two new files, index.js and
booksRouter.js. Within the booksRouter.js file add the code from listing
5.5.
This code introduces the Express Router. The Router class contains Express'
framework logic for handling all types of internet requests. You’ve already
used the Express app instance to create a GET route in index.js. Now, you’re
going to more explicitly define your routes with an instance of the Router
class you call booksRouter.
Because there is no database set up yet, you return the ID to the client in a
JSON structure. You wrap the response in a try-catch block so that if anything
goes wrong, you can log your errors on the API server. If something
doesn't work as expected, the next() function passes your error to the next
middleware or error-handling function in your Express app.
js
import {Router} from 'express'; #1
const booksRouter = Router(); #2
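Based on this description, a minimal sketch of the GET route and the export line at the bottom of booksRouter.js might look like this; the echoed id matches what you'll test later at /api/books/42:
js
booksRouter.get("/:id", async (req, res, next) => {
  const { id } = req.params; // Pull the dynamic :id parameter from the URL
  try {
    res.json({ id }); // No database yet, so echo the id back to the client
  } catch (e) {
    console.error("Error occurred: ", e.message);
    next(e); // Pass the error along to your error-handling middleware
  }
});

export default booksRouter;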
js
import {Router} from 'express'; #1
import booksRouter from "./booksRouter.js"; #2
const mainRouter = Router(); #3
mainRouter.use('/books', booksRouter); #4
export default mainRouter; #5
With the mainRouter set up, you import this router into index.js at your
project’s root level by adding import router from './routes/index.js';.
Then, you have your app use this new router by adding app.use("/api",
router); right above the error handling block in index.js. This new code
defines an additional namespace called /api. This will be the final
namespace change and will allow you to make GET requests to the
/api/books/:id route path.
RESTful Routes
This project uses routing to navigate incoming requests through your app. A
route is simply a way to get from a specified URI endpoint to your app logic.
You can create any types of routes you choose, with whatever names you'd
like, and with as many dynamic parameters as you need. However, the way you
design your routing structure has side effects and consequences for those using
your API. For that reason, this project uses Representational State Transfer
(REST) as a convention for structuring your routes. REST provides a standard
URI endpoint arrangement that lets its users know what type of resource they
should expect to get in return. For example, if you are looking for a particular
book in the database, your endpoint could be
/Frankenstein/database_books/return_a_book/. While this route path
includes most of the information needed to get the book's information, it may
not follow the same structure as the other resources offered by your API.
A RESTful API empowers its users to quickly and easily understand which
part of the route path refers to the resource name and which parts include the
necessary data for a database query. In this way, a route like /books can be
used for both a GET and POST request, with the server understanding that
different logic handles each request for the same resource: books.
Furthermore, a route like /books/:id adds to the resource name by
providing a dynamic parameter: id. This standard structure makes using and
designing an API straightforward and convenient for everyone involved. For
more information on RESTful routing visit https://developer.mozilla.org/en-
US/docs/Glossary/REST.
Now, restart your app if it's not already running and navigate to
http://localhost:3000/api/books/42 in your web browser. You should see
{ "id": "42" } printed in your window. With your GET route working, it's
time to add the routes for POST, PUT, and DELETE. Conveniently, your PUT and
DELETE routes look identical to your GET route. All three require an id param.
Duplicate the GET route twice, but change one of the duplicates' route
function to booksRouter.put and the other to booksRouter.delete. This
addition should be enough to test those routes. For the POST route, add the
code in listing 5.7 right above your export line at the bottom of booksRouter.js.
In this route, booksRouter.post is used to have Express listen for POST
requests, specifically. Within the action, you destructure the title and
author from the request body and return them to the client in JSON format.
The body of a request is typically where you’ll find request data when
posting to create or change information on the server.
js
...
booksRouter.post("/", async (req, res, next) => { #1
const {title, author} = req.body; #2
try {
const book = {title, author}; #3
res.json(book);
} catch (e) {
console.error("Error occurred: ", e.message);
next(e);
}
});
...
With these last changes, it's time to test your other non-GET routes. To test
these routes, you open a new command line window and run a cURL
command against your API server.
Note
Client URL (cURL) is a command line tool for transferring data across the
network. Because you are no longer only requesting to see data, you can use
this approach to send data to your server directly from your command line.
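As a sketch, a cURL command like the following exercises the POST route you just added; the book title and author here are only placeholder values:
bash
# Send JSON in the request body to the POST /api/books route
curl -X POST http://localhost:3000/api/books \
  -H "Content-Type: application/json" \
  -d '{"title": "Frankenstein", "author": "Mary Shelley"}'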
Now that all four routes are accessible, it’s time for the final piece of the
puzzle: persistent storage in a database.
Even though you haven't yet introduced any data other than book titles and
authors, you choose to save your data in relational database tables. You
figure that eventually this project might incorporate the massive amount of
information stored elsewhere in the library system, so a relational database
may be appropriate.
Note
Although you’ve narrowed your decision to a SQL database, there are many
different database management systems to choose from. You decide to
compare using a SQLite DB and a PostgreSQL DB.
Overall, although both databases are sufficient for this project, you find that
SQLite will get your app up and running the fastest.
You start to incorporate SQLite by installing its npm package with the
command npm i sqlite3. Now that you have an RDBMS installed, you could
just connect to the database and start running SQL queries to search, save,
modify, and destroy data. But what's the fun in developing a Node API if you
can't do it all purely in JavaScript?
Before you continue, take a look at your project directory. In this last section
you’ll add two more sub-folders as shown in listing 5.8. Create the first
folder, models, and within it create a file called book.js. This file contains
the code needed by Sequelize to map your book data to the database. Next,
create the db folder. Within this folder create a file called config.js, which
will contain all the configurations needed to set up your database. After
adding all the required changes, your database will live within the application
folder in a file called database.sqlite.
bash
library-api #1
|
- index.js
- routes
  |
  - index.js
  - booksRouter.js
- models #2
  |
  - book.js
- db #3
  |
  - config.js
  - database.sqlite
- package.json
- node_modules
Open your config.js file and add the code in listing 5.9. In this code, you
import Sequelize and instantiate a new database connection using SQLite,
defining the storage location within the db folder of your project. You then
authenticate the connection to the database through db.authenticate(). If
the connection is successful you’ll get a logged statement indicating so.
Otherwise you’ll log the error that occurred while trying to connect. Luckily,
there is no additional server to run with SQLite, so there should not be many
issues to troubleshoot at this step. At the end of the file you export both the
Sequelize class and db instance.
javascript
import { Sequelize } from "sequelize"; #1
const db = new Sequelize({ dialect: "sqlite", storage: "./db/database.sqlite" }); #2
db.authenticate() #3
  .then(() => console.log("Connection to the database was successful"))
  .catch((e) => console.error("Error connecting to the database: ", e.message)); #4
export default { #5
  Sequelize,
  db,
};
The database is almost ready to get fired up, but first it needs some data to
map in your app. Add the code from listing 5.10 to book.js. In this file you
import the database configs and destructure the Sequelize and db values.
Then, you use the db.define function to create a Sequelize model called
Book. This model name later maps in the SQLite database to create a
corresponding table of the same name. The fields of this model reflect the
data your library wants you to store:
a title as a string (title)
the author name as a string (author)
the number of requests made for the book as an integer (count)
These fields are all that's needed to save countless book records in your
database (though you will be counting). Book.sync will initiate a sync with
the database and set up a table called Books. At the end of the file, you
export the model for use back in your booksRouter.js file.
Tip
Passing the option {force: true} to Book.sync ensures that with each
startup of the app, the sync function drops the existing table and creates a
fresh one. This is helpful in development if you don't want to fill your
database with too many test records.
javascript
import config from '../db/config.js'; #1
const {Sequelize, db} = config; #2
Book.sync(); #7
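Putting this description together, a sketch of models/book.js might look like the following. The unique constraint on title matches the validation error mentioned later for duplicate titles; the defaultValue of 0 for count is an assumption.
javascript
import config from '../db/config.js';
const {Sequelize, db} = config;

// Define the Book model; Sequelize maps it to a Books table in SQLite
const Book = db.define("Book", {
  title: { type: Sequelize.STRING, unique: true }, // titles must be unique
  author: { type: Sequelize.STRING },
  count: { type: Sequelize.INTEGER, defaultValue: 0 }, // number of requests for this book
});

// Create the Books table in the database if needed
Book.sync();

export default Book;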
With your Sequelize model set up, you’ll need to revisit the CRUD actions
you previously built in booksRouter.js. These actions currently return the
data they receive. Now that you have access to a database, you can add the
logic needed to support actual data processing in your API.
First, import your Book model into booksRouter.js by adding import Book
from '../models/book.js'; to the top of the file. This gives access to the
Book ORM object and allows you to create, read, update, and delete Book
data. Change the values assigned to the book and books variables in each
route to use the result of your database queries (listing 5.11).
The first change makes a call to Book.findByPk, where the id from your
request params is passed in as a primary key to search within the database for
a matching book record. You use the await keyword as you’re making a
blocking call to the database, waiting for a response before you continue
executing subsequent logic. The next change is to the POST request,
booksRouter.post, where you use the Book.create function and pass in the
title and author you retrieved earlier from the request body. You wait for
the create function to complete and return the resulting created record to the
client. Similarly, Book.update also takes in the title and author as
parameters, but this time they reflect the changed title and author values.
A second parameter in this PUT request uses a where key to identify the
record to update by its primary key: id. Last, Book.destroy uses the where
key in the DELETE request to search for a record by the specified id and
remove that matching record from the database.
For more information about model query types and the Sequelize API, visit
https://sequelize.org/docs/v6/.
javascript
...
// GET /books/:id
const book = await Book.findByPk(id); #1
...
// POST /books/
const book = await Book.create({title, author}); #2
...
// PUT /books/:id
const book = await Book.update({title, author}, { #3
where: { id }
});
...
// DELETE /books/:id
const book = await Book.destroy({ #4
where: { id }
});
...
Now test your changes, only this time, there are different outcomes because
each command results in a database action. Notice the id field that is returned
in some of the responses. Also notice the updatedAt and createdAt fields
Sequelize adds automatically to keep track of when data has entered the
database or changed. Return to your command line, open a new window, and
run the following cURL commands:
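As a sketch, cURL commands along these lines exercise each of the four routes; the id value 1 and the book details are placeholders:
bash
# POST: create a new book record
curl -X POST http://localhost:3000/api/books \
  -H "Content-Type: application/json" \
  -d '{"title": "Frankenstein", "author": "Mary Shelley"}'

# GET: fetch the book by its id
curl http://localhost:3000/api/books/1

# PUT: update the book's title and author
curl -X PUT http://localhost:3000/api/books/1 \
  -H "Content-Type: application/json" \
  -d '{"title": "Frankenstein; or, The Modern Prometheus", "author": "Mary Shelley"}'

# DELETE: remove the book record
curl -X DELETE http://localhost:3000/api/books/1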
Note
If you run the POST request a second time with the same data, you get a
Validation error in your server's console. This is expected by design
because your Book model has a validation constraint that requires new books
to have unique titles.
Your API is now set up to handle new incoming requests to create, read,
update, and delete Book records. If you choose to expand your API, you can
add new models or modify the logic in your existing actions.
5.6 Summary
In this chapter, you built a fully functional API with Node and Express.
Moreover, you added a SQL database and used the Sequelize library to
persist data processed in your API logic. With the skills you’ve learned from
this chapter, you may now:
Your Book model can support updating the request count for books already
in the database. What can you change in the POST request to update the count
value before saving the new Book record?
Answer: You can search for a Book record with the same title (which should be
unique): const book = await Book.findOne({where: { title }});. If
that Book exists, you can take the book's existing count (book.count), add 1
to it, and save the record back to the database:
javascript
book.count += 1;   // Increment the request count for the existing book
await book.save(); // Persist the updated count to the database
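Putting that answer together, one possible sketch of the modified POST route looks like this; it otherwise matches the route you built earlier:
javascript
booksRouter.post("/", async (req, res, next) => {
  const {title, author} = req.body;
  try {
    // Look for an existing book with the same (unique) title
    let book = await Book.findOne({ where: { title } });
    if (book) {
      book.count += 1; // Book already exists, so bump its request count
      await book.save();
    } else {
      book = await Book.create({ title, author }); // First request for this book
    }
    res.json(book);
  } catch (e) {
    console.error("Error occurred: ", e.message);
    next(e);
  }
});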
You may want to search for all the books in the database, not just the ones
you know by ID. What might a route for an index of all books look like?
Answer: The route would look the same as the GET route for a book by id, but
you would leave the :id param out of the route path and then search for all
book records instead of just one by its primary key. Here is an example of
what that would look like:
javascript
booksRouter.get("/", async (req, res, next) => {
try {
const books = await Book.findAll();
res.json(books);
} catch (e) {
console.error("Error occurred: ", e.message);
next(e);
}
});
Appendix A. Getting set up with
installations
In this appendix you’ll
Install development tools needed to build tiny projects
Set up your development environment with Node.js
Install all relevant database management systems for projects in this
book
This chapter will walk you through the environment setup and installation
steps needed to start programming the projects in this book. You do not need
to install every tool in this chapter; you only need the required software and
libraries for the projects you plan to complete.
Note
Throughout this chapter you'll have the option to install through a Graphical
User Interface (GUI), a third-party tool, or the binary packages. Because the
binary packages are precompiled, there are often a few steps required to
extract their contents for a successful installation. I recommend using the
binary installation steps only when working on a machine without a graphical
interface, like a standalone server.
You may also choose to clone the book's code repository to your computer,
following the instructions at https://github.com/git-guides/git-clone. For more
information on downloading and working with Git on your computer visit
https://git-scm.com/downloads.
The code in this repository will change to reflect modifications and updates
in this book, as well as code security and patch updates to packages taught
here.
In this section you will find installation instructions for Mac, Windows, and
Linux computers. All GUI installation instructions can also be found at
https://code.visualstudio.com/.
Tip
You will be working largely from your command line. You can set up VS
Code to open from your command line window using the code keyword. To
set this up, open VS Code, open the Command Palette by typing
Cmd+Shift+P. You’ll see an input box appear where you can type Shell
Command: Install 'code' command in PATH. Select the matching option
from the dropdown. Now you may restart your command line and simply
type code and press Enter to open VS Code from your command line
window.
Note
Throughout this book, you’ll have the opportunity to develop for both the
backend and frontend of your Node applications. You may choose another
code editor or integrated development environment (IDE) such as Atom,
IntelliJ, or Eclipse. These alternatives will certainly allow you to develop an
application with Node, but you may find that they are not as supportive for
web development as VS Code is in 2022.
Note
Any version after v14 should be ok to use with modern ES6 syntax.
Once the PKG file downloads to your web browser's specified download
location (this could be your Downloads folder or desktop), double-click the
PKG file to launch the installer. You will see an installer window appear
(figure A.9).
Note
You may need to restart your computer to avoid any issues with your
operating system PATH identifying Node for you.
Allow your computer to start the installation by pressing the Run button. You
can then click Next through the installation wizard until you reach the main
install page. Then, simply click Install and Node will install to your
computer. When the installation completes, click Finish to close the installer
window.
If the installation steps succeeded, Node and NPM are installed on your
computer. You can test this by opening your command line and running node.
The response should be a prompt to type in JavaScript. This is your Node
REPL (Read-Eval-Print Loop) environment. Try typing
console.log("Hello Tiny Projects!"); and pressing the Enter key to run
your first line of JavaScript in Node.
Note
You may need to restart your computer to avoid any issues with your
operating system PATH identifying Node for you.
After completing these steps, Node and NPM are installed on your computer.
You can test this by opening your command line and running node. The
response should be a prompt to type in JavaScript. This is your Node REPL
(Read-Eval-Print Loop) environment. Try typing console.log("Hello Tiny
Projects!"); and pressing the Enter key to run your first line of JavaScript
in Node.
Note
You may need to restart your computer to avoid any issues with your
operating system PATH identifying Node for you.
As with a SQL, or relational, database, you may still associate data in Mongo.
For example, you may store data for a customer and associate them with
multiple purchases of several different products. The main difference from
SQL databases is in how the data is stored and retrieved.
The following sections will describe how you may install Mongo on your
Mac, Windows, or Linux computers.
Note
You may also need to install the Xcode command line tools, which don't come
preinstalled on your Mac. Unless otherwise prompted, run xcode-select --install
in any command line window to install these tools.
First, run brew tap mongodb/brew. This command adds MongoDB's official
Homebrew tap, which contains the formulae needed to install Mongo on your
computer. Then, run brew install mongodb-community, which installs
the actual Mongo database management system on your computer. This
community version allows you to work with Mongo on all of your projects
for free and without restriction.
When these installations complete you’ll be able to run mongod to start your
Mongo server.
Note
In order for Mongo to save your data properly, you'll need a data (db)
directory. On Macs with Intel processors, that location is
/usr/local/var/mongodb, and for the newer M1 models the location is
/opt/homebrew/var/mongodb. Make sure a folder exists at the location that
matches your machine.
As an alternative to the mongod command, you may also start Mongo through
Homebrew. If you have not yet started your Mongo server, run brew services
start mongodb-community to initiate the database server. Later, to stop
this service, run brew services stop mongodb-community in your
command line.
Tip
To verify that Mongo is running on your computer with Homebrew, run brew
services list in your command line window. You'll see a list of services
currently running.
Next, reload your local package index by running sudo apt-get update. You
may now install the latest stable version of Mongo by running sudo apt-get
install -y mongodb-org.
After this last command, you should have Mongo installed on your computer.
You may now start the database server by running sudo systemctl start
mongod in your command line.
Note
If you run into an error starting Mongo, try running sudo systemctl
daemon-reload and then start the service again. For more detailed instructions,
visit https://docs.mongodb.com/manual/administration/install-on-linux/.
Appendix B. Setting up Node app
essentials
In this appendix you’ll
Set up packages useful for developing Node apps
Install packages that can be used with any Node APIs
If you’re reading this appendix, it’s probably because you’ve begun the app
development process in one of the book’s chapters. By now you should have
an application folder, including package.json and index.js files. With these
two files in place, you can add additional configurations, package
dependencies, and app scripts to your project.
Follow the recommended steps in the next section to ensure your app will run
with the book’s suggested Node version and JavaScript syntax. From there,
you may also install the optional packages, which help with cleaning up and
organizing your code and development environment.
Note
Because the start script below uses nodemon, your app restarts automatically
whenever you save a change, so you only need to run npm start once while
developing.
javascript
"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "start": "nodemon index.js"
},
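For context, here is a sketch of how these scripts might sit inside a fuller package.json. The "type": "module" field is what lets you use the import/export syntax from the book's listings; the name, version, and dependency version shown are placeholders.
javascript
{
  "name": "tiny-node-project",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "nodemon index.js"
  },
  "devDependencies": {
    "nodemon": "^2.0.0"
  }
}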
Appendix C. Node under the hood
In this appendix you’ll
How the Node event loop operates
What modules come prepackaged with Node
Why Node is preferred for certain apps
When JavaScript was created
Life as a browser language
Simple operations allowed JavaScript to operate off of a single
thread
More complex server applications made use of CPU, cores, and
threads to run multiple operations at the same time
Explain difference in callout: https://www.guru99.com/cpu-core-multicore-thread.html
Understand how a computer works first. A computer's fundamental
software is the operating system. Most of what's processed in a
computer is done through the computer's Central Processing Unit
(CPU) core. A computer's CPU core is like the brain of the computer.
While humans can multitask in theory, our brains can only process a
single thought at a time. Likewise, computer CPU cores generally
process a single task at a time. Diagram of person thinking about
taking out the trash while computer garbage collects, plays a video,
and monitors a security camera.
Nowadays, computers have multiple cores, effectively allowing the
computer to use multiple brains at the same time to process more
tasks at once. Multicore processors support a computer's ability to
more rapidly handle large tasks like video rendering, or unrelated
tasks like calculating a large equation while running computationally
expensive software (the kind that gets your fans spinning). This is
called concurrent, or parallel, programming. In fact, a CPU core can
further break down a task by processing multiple parts of it in
smaller units of execution called threads. In multithreaded
programming, a CPU can execute multiple parts of a process at the
same time to reduce the latency before reaching a completed state.
So, if you are querying a database for billions of records, you may
find it faster to break the query down into four parts that each
execute on their own thread, in parallel. That's four times faster
than on one thread: what could take four hours to compute only takes
one hour. Between CPU cores, their processes, and multiple threads,
modern computers have the ability to handle large tasks in
dramatically shorter periods of time.
So, why don't we always use multiple threads? There are tradeoffs.
For one thing, the more you process in parallel, the more power is
used, which can put a strain on your computer. Also, threads may
share resources, which means they may all have access to the same
data at the same time. A common side effect of this relationship is a
condition called deadlock, where two or more threads are waiting for
each other to complete a task or release a resource. The result is a
blocked application where no operation can be completed. With more
capable and complex hardware come complex scenarios and edge cases to
account for.
Enter JavaScript. JavaScript was designed with a single call stack
that uses only a single thread to process its tasks. This
architecture is simpler to implement and avoids the downsides of a
multithreaded system.
* More on how we benefit from this architecture
Side note: While JavaScript can theoretically offload some of its
tasks to the computer’s threads, the main JavaScript process still
runs on a single thread.
Because JavaScript runs on a single thread, its architecture allows
some functions to execute when ready and others to execute a
callback eventually
SHOW event loop diagram
See how callback queue allows the loop to handle functions when
they are ready
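As a small illustration (a sketch, not a listing from any chapter), the callback queue is why the deferred function below runs only after the synchronous code finishes, even with a zero-millisecond delay:
js
console.log("first");                      // runs immediately on the call stack
setTimeout(() => console.log("third"), 0); // callback goes to the callback queue
console.log("second");                     // still synchronous, so it runs next
// The event loop only picks "third" off the queue once the call stack is empty.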
This was fine until callbacks got out of hand and created callback
hell
What came next were Promises. The Promise was incorporated into
the standard ECMAScript JavaScript versions, allowing the eventual
return of a value from an inner function. It also allowed simpler
syntax
The syntax improved even more with async await, which
effectively wraps a promise response
So, does an async await call block the event loop?
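To make the progression concrete, here is a sketch (the helper names are made up for illustration) of the same task written with a callback, then a Promise, then async/await; note that await pauses only the surrounding async function, not the entire event loop:
js
// The same "wait, then use a value" task written three ways.

// 1. Callback style: the value arrives in a callback you pass in
function getValueWithCallback(cb) {
  setTimeout(() => cb("done"), 100);
}
getValueWithCallback((value) => console.log("callback:", value));

// 2. Promise style: the function returns a Promise that resolves eventually
function getValueWithPromise() {
  return new Promise((resolve) => setTimeout(() => resolve("done"), 100));
}
getValueWithPromise().then((value) => console.log("promise:", value));

// 3. async/await style: await unwraps the promise inside an async function
async function main() {
  const value = await getValueWithPromise();
  console.log("async/await:", value);
}
main();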
Libuv
fs and readline modules, etc.
fast
JavaScript on the server
Growing community
What is Node.js
The Platform
Built-in modules
Synchronous vs Async
What’s a callback
What’s a promise
Event Loop