Node interview notes
//////////////////////////////
Retrieve the network IP of the user
const forwarded = req.headers['x-forwarded-for']; // may hold a comma-separated proxy chain
const ip = forwarded ? forwarded.split(',')[0].trim() : req.connection.remoteAddress;
const ipv4 = ip.includes('::ffff:') ? ip.split(':').pop() : ip; // strip IPv4-mapped IPv6 prefix
console.log('IP Address:', ipv4);
//////////////////////////////
Async node :
non-blocking means that while the database is finding the data, or during any other io task,
node does not make the other requests wait
Control flow : Async.js: Async.js is a popular library for managing asynchronous control flow in
Node.js. It provides a rich set of functions for handling asynchronous tasks in series, parallel, or
with specific flow control patterns such as waterfall, each, map, reduce, etc. Examples include
async.series(), async.parallel(), async.waterfall(), etc.
async.series() is used to execute asynchronous tasks one by one
async.parallel() is used to execute asynchronous tasks in parallel
async.waterfall() is used to execute asynchronous tasks one by one where the result of one is the input
of the next one
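The waterfall idea can be sketched with plain promises (a hand-rolled illustration of the pattern, not the async.js API itself, which uses Node-style (err, result) callbacks):

```javascript
// Each step receives the previous step's result, like async.waterfall
function waterfall(tasks, initial) {
  return tasks.reduce(
    (chain, task) => chain.then(task),
    Promise.resolve(initial)
  );
}

waterfall(
  [
    async (n) => n + 1,  // 1 -> 2
    async (n) => n * 10, // 2 -> 20
  ],
  1
).then((result) => console.log(result)); // 20
```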
///////////////////
Node js inbuilt modules
ASSERT
assert is used to compare data; it works like == / === with some additional features like deep
(recursive) equality checks
ASYNC_HOOKS
it is used to track asynchronous resources; AsyncLocalStorage, built on top of it, is used to
manage the context of your node js application: it works by setting up a store per request
BUFFER
a Buffer is a way to store and manipulate binary data
//////////////////////////////
Node js architecture
Event driven architecture: Node is a single threaded runtime environment, which means it only does
one action at a time
1. v8 - Node uses the v8 JavaScript engine under the hood to run its various tasks. The v8
engine is a JavaScript open-source engine that is maintained by Google and it is written in C++.
Node uses this engine with the C++ API.
2. Libuv - This is another core implementation used by Node to run its environment. It is a C
library used to abstract non-blocking I/O operations. This library maintains its disk operation
interface like the file system, DNS, child processes, signal handling, and streaming among other
concepts.
3. OpenSSL - This is used in the tls and crypto modules to provide cryptographic functions to
improve security.
/////////////////////
Advanced node js concept (codedamn)
you can use the UV_THREADPOOL_SIZE=n environment variable (it must be set before the pool is
first used, e.g. UV_THREADPOOL_SIZE=8 node app.js) to manually increase or decrease the
threads used
you should not increase UV_THREADPOOL_SIZE beyond your logical or physical core count, as it
is not useful: the extra threads cannot achieve parallelism and will have to wait for a physical or
logical core to be free
the microtask queue is used to handle promises and it has higher priority than the macrotask (task)
queue. On each iteration of the event loop the microtask queue is drained completely (every
queued promise callback runs), while only one task or callback from the macrotask queue gets
executed.
/////////
Advanced node js by software dev diaries
Multithreading with worker threads
multi threading vs multi processing :>
Multiprocessing allocates separate memory and resources for each program — or process. But
multithreading shares the same memory and resources for threads belonging to the same
process
Worker thread
you can use worker threads to offload cpu intensive tasks onto other threads so the main event
loop stays responsive
Cluster module
It is used to create copy instances of your node application; it is inbuilt in node and handles
load balancing between the instances
pm2, an npm package, can also be used to run your application in cluster mode with some additional
features like resource usage data
Worker thread vs cluster
worker threads create threads for parallel execution and can be used for cpu intensive tasks when
you are only running one instance of your node application
cluster can be used when you want to scale your application and make it highly available
each has its own cons: using cluster can increase resource consumption but is easy to set up,
while worker threads only spawn a thread when needed but may lead to more verbose code that is
harder to maintain
Streams:
There are four types of streams
Readable, Writable, Duplex, Transform
you can use pipe() to connect readable, writable, or transform streams, but it has bad syntax for
error handling and may result in memory leaks (an error in one stream does not destroy the others)
So you should use pipeline() instead, in which you pass all your streams plus a final error-first
callback instead of chaining them using pipe()
Event emitters:
events are based on the publisher - subscriber architecture
you can create an event to handle generic work that needs to run after something else happens,
like sending a mail: instead of calling the service directly we can emit an event and let listeners
handle all that
////////////////////
Types of api function
Synchronous, Asynchronous
///////////
SPAWN VS FORK
spawn creates a child process that runs any command and streams its output back to the parent
fork is a special case of spawn that creates its own instance of node (a new v8) running a js
module, with an ipc channel for message passing between parent and child
//////////
PHASES OF EVENT LOOP (tpipcc)
NOTE: Between all of the phases the process.nextTick() queue and the microtask queue get
exhausted, in that order, as the nextTick queue has higher priority than the microtask (promise) queue
timers: this phase executes callbacks scheduled by setTimeout() and setInterval().
pending callbacks: executes I/O callbacks deferred to the next loop iteration.
idle, prepare: only used internally.
check: setImmediate() callbacks are invoked here.
The main advantage to using setImmediate() over setTimeout() is setImmediate() will always be
executed before any timers if scheduled within an I/O cycle (callback of fileread or something),
independently of how many timers are present.
close callbacks: some close callbacks, e.g. socket.on('close', ...).
/////
EVENT EMITTER FLOW
first we instantiate an EventEmitter from the events module of node js
then we register listeners (handlers) for a named event and emit that event
///////
GRACEFUL SHUTDOWN
SIGINT AND SIGTERM
https://www.youtube.com/watch?v=Z82mZV2Ye38&ab_channel=MafiaCodes
These signals are used to decide what the server should do when it is closing: make sure that all
the requests already received get served, but stop accepting new requests
Databases
////
points to remember for interview: Differences (DSAQ)
Data model: one is properly defined (fixed schema), the other is semi-structured or not structured at all
SCALABILITY: SQL databases typically scale vertically (a bigger server), while NoSQL databases
are built to scale horizontally (more servers)
ACID-BASE: one follows ACID while the other follows BASE
QUERY language: sql databases use SQL and all share roughly the same language, while for
nosql every database can have a different way of querying than the others
/////
WHEN TO USE ONE OVER THE OTHER
SQL OVER NOSQL:
Data is well defined and unlikely to change
ACID compliance is necessary, meaning your application has financial transactions where
consistency is critical
Application requires complex queries
NOSQL OVER SQL:
When dealing with large volumes of data which are unstructured
Scalability and high availability are the top priority
Where the data shape can vary between records
////
Relational
DIFF BETWEEN STORED PROCEDURES, FUNCTIONS, TRIGGERS
stored procedure - used to execute sql queries and change db values
function - used to get data, does not modify any data, takes at least one parameter
triggers - fire without being called, e.g. on data insertion or update
TRIGGERS:
Triggers are used to perform an operation on the sql server after a specified event has occurred
Triggers can be attached to three things
DML (data manipulation language) triggers: run when you do INSERT, UPDATE, DELETE.
DDL (data definition language) triggers: run when you use statements like CREATE, DROP,
ALTER, DENY and REVOKE.
LOGON TRIGGERS: run in response to a logon event, after authentication finishes and before
the user session is created
ACID PROPERTIES:
Atomicity: Atomicity ensures that a transaction is treated as a single unit of work, meaning that
either all of its operations are completed successfully, or none of them are. If any part of the
transaction fails, the entire transaction is rolled back to its initial state.
Example: Consider a bank transfer where money is being moved from one account to another.
Atomicity ensures that if the money is deducted from one account, it is also successfully
credited to the other account. If either of these operations fails (e.g., due to a system error), the
entire transaction is rolled back, ensuring that neither account is left in an inconsistent state.
Consistency: Consistency ensures that a transaction brings the database from one valid state to
another valid state. In other words, the database remains consistent before and after the
transaction, adhering to all defined rules, constraints, and integrity constraints.
Example: If a rule says an account balance can never go negative, a withdrawal that would
overdraw the account is rejected, so the database never leaves a valid state.
Isolation: Isolation ensures that concurrently executing transactions do not interfere with each
other; each transaction behaves as if it were running alone.
Example: Suppose two users simultaneously attempt to update the same bank account
balance. Isolation ensures that each transaction is processed independently, and one user's
transaction does not affect the other user's transaction. This prevents issues such as lost
updates or dirty reads.
Durability: Durability guarantees that once a transaction is committed, its effects are permanent
and survive system failures (such as crashes or power outages). The changes made by
committed transactions are stored in non-volatile memory (such as disk) and remain intact even
in the event of a system failure.
Example: After a successful funds transfer in a banking system, the updated account balances
are permanently stored in the database, ensuring that even if the system crashes immediately
after the transaction, the changes will not be lost. When the system recovers, it can restore the
database to its last consistent state.
///////////////
INDEXING in SQL
Creating the Index: When you create an index on a column (or multiple columns) in a table, the
database management system (DBMS) creates a separate data structure, often a balanced tree
(such as a B-tree or a B+ tree), to store the indexed values.
Sorting and Storing Values: The DBMS sorts and stores the values of the indexed column(s) in
this separate data structure. Each value in the index is associated with a pointer or reference to
the corresponding row in the table.
Optimized Search: When you execute a query that involves the indexed column(s), the DBMS
utilizes the index to quickly locate the desired rows instead of scanning the entire table. It
performs a search operation within the index data structure, which is typically much faster than
a full-table scan.
Efficient Retrieval: Once the DBMS finds the desired values in the index, it uses the associated
pointers or references to directly access the corresponding rows in the table, thereby minimizing
disk I/O and improving query performance.
Maintaining the Index: As data in the table changes (e.g., rows are inserted, updated, or
deleted), the DBMS updates the index accordingly to reflect these changes. This maintenance
process ensures that the index remains synchronized with the underlying table data and
continues to provide efficient access to the data.
Basically, when working with indexes it is important to note that while an index may improve your
query time when retrieving data, it will also make create, update, and delete operations more
costly and time consuming, as every affected index has to be maintained and updated on each of
those operations, so over-indexing may actually slow your writes down.
////////////////
NO SQL
BASE :
Basically Available: Even if some parts of the system are down or experiencing issues, the
platform will still allow users to interact with the available features. For example, if the like
feature is temporarily unavailable due to maintenance, users can still post updates or comment
on existing posts.
Soft state: The platform might implement features like post expiration, where older posts are
automatically removed after a certain period. Additionally, likes or comments may be cached
and periodically synchronized across servers rather than being immediately updated in all
replicas.
Eventually consistent: When a user likes a post or adds a comment, the update might not
immediately reflect across all servers due to network delays or partitions. However, the platform
ensures that eventually, these updates will propagate to all replicas, maintaining consistency
across the system.
What is mongodb?
It is an open source document database written in c++
It uses json-like documents (stored as BSON) with optional schema
Datatype in mongodb:
Null, boolean, Number, String,Date, Regular expression, Array, Embedded Document, Object
ID, Binary Data, Code.
INDEXING
Indexing in mongodb works the same way as indexing in sql: a separate structure (a B-tree by
default) speeds up reads on the indexed field at the cost of extra work on writes
REPLICATION
It is used to provide high availability and data redundancy by maintaining copies of the same
data across different servers.
In replication there is a primary node, on which data gets inserted, updated, and deleted, and
there are secondary nodes, which are used to read data only. Over time, data created or updated
on the primary node is reflected on the secondary nodes as well; this is what makes mongodb
(and nosql in general) eventually consistent. This replication happens asynchronously, with
mongo maintaining an oplog (operation log), which also makes sure that data stays consistent
even when the primary node crashes. In case of failure of the primary node, one of the
secondary nodes is elected as the new primary; this ensures availability in case of network or
hardware failure as well.
Advantages:
High availability even in case of failure
Durability: protects against complete data loss in case of failure
Read scalability: since secondary nodes can also serve reads, read load can be spread across servers
Trade off:
Increased complexity
More resource used
Eventually consistent
////////
SHARDING
It is used to distribute (not replicate) data between multiple nodes, called shards in this case.
This helps in scaling both the read and write operations of your mongoDB.
Shard Key: To shard a collection, you choose a field or fields in the documents called the shard
key. MongoDB uses this key to distribute documents across shards. For example, if your
collection contains user profiles, you might choose the user_id field as the shard key.
Shard Cluster: You set up a shard cluster, which consists of multiple servers or nodes called
shards. Each shard contains a subset of the data based on the shard key. For example, you
might have three shards, each responsible for a range of user IDs (e.g., shard 1 handles user
IDs 1-1000, shard 2 handles user IDs 1001-2000, and so on).
Shard Router (mongos): To interact with the shard cluster, you use a special component called
the shard router or mongos. The mongos routes queries and write operations to the appropriate
shard based on the shard key.
Data Distribution: When you insert a new document into the collection, MongoDB uses the
shard key to determine which shard should store the document. For example, if the document's
user_id is 1500, MongoDB routes it to shard 2, which is responsible for user IDs 1001-2000.
Query Routing: When querying data, the mongos router routes the query to the appropriate
shard or shards based on the shard key. It then gathers the results from all relevant shards and
returns them to the client.
///////////
MongoDB Charts is an integrated tool in mongodb for data visualization
///////////
AGGREGATION FRAMEWORK
It is based on a pipeline, which basically means that the output of each stage of the pipeline is the
input to the next stage of the pipeline,
which usually looks like collection -> stages -> output
STAGES OF AGGREGATION
$match: This stage filters documents based on specified criteria, similar to the find() method. It
allows you to include only documents that match certain conditions.
$project: This stage reshapes documents by including, excluding, or renaming fields. It allows
you to specify which fields to include in the output documents and optionally apply expressions
to transform the data.
$group: This stage groups documents by a specified key or expression and applies accumulator
expressions to calculate aggregated values for each $group. Common accumulator expressions
include $sum, $avg, $min, $max, and $addToSet.
$sort: This stage sorts documents based on one or more fields in ascending or descending
order.
$limit: This stage limits the number of documents passed to the next stage in the pipeline.
$skip: This stage skips a specified number of documents and passes the remaining documents
to the next stage in the pipeline.
$unwind: This stage deconstructs arrays within documents, creating a separate document for
each element of the array. It's commonly used to flatten arrays before further processing.
$lookup: This stage performs a left outer join between documents from two collections. It allows
you to include related documents from another collection based on matching criteria.
$out: This stage writes the results of the aggregation pipeline to a specified collection, effectively
storing the aggregated data in a new collection.
STAGE ORDERING (where each stage usually belongs in the pipeline):
$lookup: If you're performing a join operation using $lookup, it's typically beneficial to include
this stage early in the pipeline. This allows you to combine data from multiple collections before
applying further transformations or aggregations.
$unwind: If your documents contain arrays that you need to process individually, you should use
the $unwind stage after any $match or $lookup stages. This stage deconstructs arrays, creating
separate documents for each array element, which can then be processed independently.
$group: The $group stage is commonly used for grouping documents and calculating aggregate
values. It's often used after filtering, joining, or unwinding data to aggregate results based on
specific criteria.
$project: The $project stage is typically used to reshape documents by including, excluding, or
renaming fields. It's often used towards the end of the pipeline to define the final structure of the
output documents.
$sort, $limit, $skip: These stages are typically used towards the end of the pipeline to sort, limit,
or skip documents as needed. For example, you might sort the aggregated results, limit the
number of documents returned, or implement pagination using $skip and $limit.
$out: If you're storing the results of the aggregation pipeline in a new collection using the $out
stage, it should be the last stage in the pipeline.
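Putting that ordering advice together, here is a pipeline for a hypothetical orders collection (collection and field names are invented; with the official driver the array would be passed to db.collection('orders').aggregate(aggPipeline)):

```javascript
// Hypothetical: total spend per customer on completed orders, top 5 first
const aggPipeline = [
  { $match: { status: 'completed' } }, // filter early (can use indexes)
  { $group: { _id: '$customerId', total: { $sum: '$amount' } } },
  { $sort: { total: -1 } },            // sort the aggregated results
  { $limit: 5 },                       // then cap the output
];

console.log(JSON.stringify(aggPipeline, null, 2));
```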
//////////////////
REDIS
JS interview notes
Promise.all - rejects as soon as any one of the promises rejects; otherwise it resolves with an
array of all the results
Promise.allSettled - will go through all the promises regardless of failure or success and report
each one's status
Promise.any - will return the first promise that gets fulfilled; if one fails it will keep waiting for
another to fulfill, and it rejects only if all of them reject
Promise.race - will return the promise that gets settled first (resolved or rejected)
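A small sketch of the differences (timings are arbitrary):

```javascript
const ok = (value, ms) => new Promise((res) => setTimeout(() => res(value), ms));
const fail = (msg, ms) =>
  new Promise((_, rej) => setTimeout(() => rej(new Error(msg)), ms));

// allSettled reports every outcome instead of rejecting early
Promise.allSettled([ok(1, 10), fail('boom', 5)]).then((results) => {
  console.log(results.map((r) => r.status)); // ['fulfilled', 'rejected']
});

// race settles with whichever promise finishes first
Promise.race([ok('fast', 5), ok('slow', 50)]).then((winner) => {
  console.log(winner); // 'fast'
});
```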
////
in js a function declaration is hoisted together with its body, so it can be called even if it is written
after a return statement; the same is not true for var, which is hoisted as a declaration only and
stays undefined until the assignment runs
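For example:

```javascript
function demo() {
  return inner(); // works: the declaration below is hoisted with its body
  function inner() {
    return 'hoisted';
  }
}
console.log(demo()); // 'hoisted'

// var is hoisted as a declaration only: it exists here but holds undefined
console.log(typeof hoistedVar); // 'undefined'
var hoistedVar = 'assigned now';
```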
////
Type coercion
It means that a value of one data type is converted to another data type.
There are two types of coercion:
IMPLICIT: when the engine converts a type automatically, e.g. "5" + 1 gives "51".
EXPLICIT: When we use a method or function to convert a value to another data type, e.g.
Number("1").
Explicit is preferred over implicit as it is more readable.
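A few examples of both kinds:

```javascript
// implicit: the engine converts types on its own
console.log('5' + 1);  // '51' (+ with a string concatenates)
console.log('5' - 1);  // 4    (- forces numeric conversion)
console.log(1 == '1'); // true (== coerces before comparing)

// explicit: we ask for the conversion ourselves
console.log(Number('5') + 1);   // 6
console.log(String(5) + 1);     // '51'
console.log(1 === Number('1')); // true (=== never coerces)
```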
////
FUNCTIONAL PROGRAMMING :
This is a programming paradigm (a structure or way of programming) where functions are first-class
citizens, which means they can be stored in a variable and passed to another function
as arguments.
Some of the functional programming concepts in js are:
First class functions:
// Example of first-class functions
const add = function (a, b) {
  return a + b;
};
const multiply = function (a, b) {
  return a * b;
};
PURE FUNCTION
A pure function always returns the same output for the same input and has no side effects;
addToTotal below is the counter-example.
// Example of a pure function
const addPure = function (a, b) {
  return a + b;
};
// Counter-example: an impure function mutating external state
let total = 0;
function addToTotal(amount) {
  total += amount; // Modifying external state (side effect)
  return total;
}
console.log(addToTotal(5)); // Output: 5
console.log(addToTotal(3)); // Output: 8
console.log(total); // Output: 8 (external state is modified)
Immutability:
const only prevents reassignment of the variable itself; the contents of an object declared with
const can still be modified. For (shallow) immutability of the object itself use Object.freeze().
////
Object.freeze vs seal
freeze will make an object and its own properties immutable, which means no operation other than
read can be used on that object
seal will make it so that your object cannot gain new properties or have old ones deleted, but
existing properties can still be modified
////
Cookies : They are used to store the data of the user on their own browser
////
There are 3 scopes in javascript
Global, Function, Block
Global: a variable declared at the top level is available to all child scopes or functions and can be
accessed, modified, or shadowed by them.
Function: this is for the var keyword specifically, because a var declared in a block is accessible
in the rest of the enclosing function as well, but a var inside a child function is not available in
the parent function.
Block: a variable declared with let or const is not available outside the block ({ ... }) it was
declared in.
///
map vs forEach
map returns a new array
forEach does not return anything (it is used for side effects)
////
this KEYWORD
The value of this is determined dynamically at runtime based on how a function is called, and it
can behave differently depending on the context in which it is used.
Here are some common scenarios where the behavior of this can be inconsistent:
Global Context:
In the global context (outside of any function), this refers to the global object (window in a
browser, global in Node.js).
Function Context:
Inside a function, the value of this depends on how the function is called. If the function is called
as a method of an object, this refers to that object. Otherwise, in strict mode, this is undefined,
and in non-strict mode, it refers to the global object.
Arrow Functions:
Arrow functions do not have their own this context. Instead, they inherit this from the
surrounding lexical scope. This can lead to unexpected behavior, especially when using arrow
functions as methods within objects.
Event Handlers:
In event handlers attached using addEventListener, this refers to the element that triggered the
event. However, if the handler is defined as an arrow function, this will not refer to the element.
Constructor Functions:
Inside a constructor function, this refers to the newly created instance of the object being
constructed. However, if the constructor function is called without the new keyword, this may
refer to the global object, leading to unintended consequences.
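A sketch of the method vs detached vs bound cases:

```javascript
const counter = {
  count: 0,
  increment() {
    this.count++; // `this` is whatever the function is called on
  },
};

counter.increment();        // called as a method: `this` === counter
console.log(counter.count); // 1

const detached = counter.increment;
// detached() would hit undefined (strict) or the global object: `this` is lost

const bound = counter.increment.bind(counter); // lock `this` back to counter
bound();
console.log(counter.count); // 2
```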
////
Prototype
Everything is an object in js. Even primitive data types like string, number, and boolean are
wrapped inside an object when you access a property on them, which is what gives each data
type its methods, like split for strings.
__proto__
Every object in js has an internal [[Prototype]] slot, historically exposed as __proto__, which holds
a reference to the object it is derived from. Except for the base object (Object.prototype), every
object has this link, and it can be followed to reach the parent object. Inheritance is possible in
javascript due to this prototype chain: if a property or method is not found on the object itself, the
engine checks whether it is present on the parent, and so on up the chain. You should use
Object.getPrototypeOf() to get the prototype of an object instead of __proto__, as it is more
standardised and __proto__ is deprecated.
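A short sketch of lookup through the chain:

```javascript
const animal = {
  speak() {
    return `${this.name} makes a sound`;
  },
};

const dog = Object.create(animal); // dog's [[Prototype]] is animal
dog.name = 'Rex';

// speak is not on dog itself, so the engine finds it on the prototype
console.log(dog.speak()); // 'Rex makes a sound'
console.log(Object.getPrototypeOf(dog) === animal); // true
```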
new KEYWORD
is used to create a new object from a constructor function
the snippet below defines a constructor function; note that calling a constructor function without
the new keyword behaves very differently from instantiating it with new
// Constructor function for creating Person objects
function Person(name, age) {
  this.name = name;
  this.age = age;
  this.hi = function () {
    console.log("hi");
  };
}
Object.setPrototypeOf(car, vehiclePrototype);
this is used to change the prototype of an object (car here) and make it inherit the properties and
methods of the assigned prototype object (vehiclePrototype)
// instanceof: checks whether a constructor's prototype is in an object's chain
// Constructor function for creating Car objects
function Car(make, model) {
  this.make = make;
  this.model = model;
}
////
Map
it is a data structure in js that is sort of like an object, but it keeps the order of insertion and any
data type can be a key
WeakMap
it is a data structure in js that is also like Map, but the differences are that the keys can only be
objects, entries may be garbage collected once the key object is no longer referenced elsewhere,
and unlike Map it is not iterable
////
Call apply and bind
call is used to invoke a method or function with a different this context; call takes the arguments
as a comma-separated list
apply is the same as call, just instead of a list it takes an array as the second param
bind is used to bind the this context and it returns a function that can later be invoked with the
same this context provided earlier
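All three on one function:

```javascript
function greet(greeting, punctuation) {
  return `${greeting}, ${this.name}${punctuation}`;
}

const user = { name: 'Ada' };

console.log(greet.call(user, 'Hello', '!')); // 'Hello, Ada!' (args listed)
console.log(greet.apply(user, ['Hi', '?'])); // 'Hi, Ada?'    (args as array)

const bound = greet.bind(user, 'Hey'); // returns a new function with `this`
console.log(bound('.'));               // (and 'Hey') locked in: 'Hey, Ada.'
```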
/////
Classes
Classes provide a more readable and structured way to work with objects and inheritance.
Constructors initialize object properties, and methods define behaviors.
Inheritance with extends allows for reusing code across classes.
Static methods operate on the class itself, while private fields enforce encapsulation.
Getters and setters allow controlled access to class properties.
JavaScript classes make it easier to implement object-oriented principles and are widely used in
modern JavaScript applications.
constructor: You can use the constructor to set initial properties when creating an object
from a class
method: a function declared inside a class is called a method
properties: These are defined within the constructor and can be initialized based on parameters
passed when creating an instance.
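The pieces above in one sketch (Animal/Dog are invented for illustration):

```javascript
class Animal {
  #sound; // private field: not reachable from outside the class

  constructor(name, sound) {
    this.name = name; // property initialized from a parameter
    this.#sound = sound;
  }

  speak() { // method
    return `${this.name} says ${this.#sound}`;
  }

  get label() { // getter: controlled read access
    return this.name.toUpperCase();
  }

  static kingdom() { // static: called on the class, not an instance
    return 'Animalia';
  }
}

class Dog extends Animal { // inheritance with extends
  constructor(name) {
    super(name, 'woof');
  }
}

const d = new Dog('Rex');
console.log(d.speak());        // 'Rex says woof'
console.log(d.label);          // 'REX'
console.log(Animal.kingdom()); // 'Animalia'
```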
////
REST constraints
Stateless :
This means that every request is new for the server and the server does not know what you
were doing before this.
Cacheability :
We cache response data on the client so that we can reduce server calls, improve response time,
and save bandwidth.
Uniform interface :
Every resource can be accessed and modified through a unique url
eg: https://api.example.com/products/123 : here the product with id 123 can be accessed,
updated, or deleted according to the method of the api request
Layered System :
REST allows you to use a layered system architecture where you deploy the APIs on server A,
and store data on server B and authenticate requests in Server C, for example. A client cannot
ordinarily tell whether it is connected directly to the end server or an intermediary along the way.
FINAL NOTE: Code on demand is the only optional constraint; if your application does not follow
even one of the other constraints, it is not a restful application
naming conventions
api names should be nouns, not verbs
which means it should be /users, not /getUsers