Node interview notes

//////////////////////////////
retrieve the network IP of the user
// 'x-forwarded-for' is set by proxies/load balancers; fall back to the socket address
// (req.connection is deprecated in newer Node versions in favour of req.socket)
const ip = req.headers['x-forwarded-for'] || req.socket.remoteAddress;
// strip the IPv6-mapped-IPv4 prefix, e.g. '::ffff:127.0.0.1' -> '127.0.0.1'
const ipv4 = ip.includes('::ffff:') ? ip.split(':').pop() : ip;
console.log('IP Address:', ipv4);
//////////////////////////////
Async Node:
Non-blocking means that while the database is looking up data, or any other I/O task is in
progress, Node does not make the other requests wait.

Control flow: Async.js is a popular library for managing asynchronous control flow in
Node.js. It provides a rich set of functions for handling asynchronous tasks in series, parallel, or
with specific flow-control patterns such as waterfall, each, map, reduce, etc. Examples include
async.series(), async.parallel(), async.waterfall(), etc.
async.series() executes asynchronous tasks one by one.
async.parallel() executes asynchronous tasks in parallel.
async.waterfall() executes asynchronous tasks one by one, where the result of one task is the
input of the next.
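
A minimal sketch of the waterfall and parallel helpers (assuming async.js is installed from npm):

const async = require('async'); // npm install async

// waterfall: each task's result is passed to the next task
async.waterfall([
  (cb) => cb(null, 2),
  (two, cb) => cb(null, two + 3),    // receives 2
  (five, cb) => cb(null, five * 10), // receives 5
], (err, result) => {
  if (err) return console.error(err);
  console.log(result); // 50
});

// parallel: tasks run at once; results keep the task order, not the finish order
async.parallel([
  (cb) => setTimeout(() => cb(null, 'a'), 100),
  (cb) => setTimeout(() => cb(null, 'b'), 50),
], (err, results) => console.log(results)); // ['a', 'b']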
///////////////////
Node.js inbuilt modules
ASSERT
assert is used to compare data; it works like == / === with some additional features such as
deep assertions.
ASYNC_HOOKS
It is used to manage the context of your Node.js application; it works by setting up a
per-request store via AsyncLocalStorage.
BUFFER
A way to store and manipulate binary data.
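
Short sketches of the three modules (the AsyncLocalStorage usage below is a minimal illustration):

const assert = require('assert');
const { AsyncLocalStorage } = require('async_hooks');

// assert: deep equality compares structure, not references
assert.deepStrictEqual({ a: 1 }, { a: 1 }); // passes
// assert.strictEqual({ a: 1 }, { a: 1 }); // would throw: different object references

// async_hooks: AsyncLocalStorage keeps a per-request store across async hops
const als = new AsyncLocalStorage();
als.run({ requestId: 42 }, () => {
  setTimeout(() => console.log(als.getStore().requestId), 0); // 42
});

// buffer: store and manipulate binary data
const buf = Buffer.from('hello');
console.log(buf.toString('hex')); // 68656c6c6f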
//////////////////////////////
Node.js architecture
Event-driven architecture: Node is a single-threaded runtime environment, which means it only
performs one action at a time.

LIBRARIES THAT NODE USES THAT MIGHT NOT BE JS (vlzo)


Node uses a couple of libraries to run. A few of those libraries include:

1. v8 - Node uses the v8 JavaScript engine under the hood to run its various tasks. The v8
engine is an open-source JavaScript engine that is maintained by Google and written in C++.
Node uses this engine through its C++ API.

2. Libuv - This is another core library used by Node to run its environment. It is a C
library used to abstract non-blocking I/O operations. It provides interfaces for the file
system, DNS, child processes, signal handling, and streaming, among other concepts.

3. Zlib - This is used for compression and decompression purposes. It is an industry-standard
library, popular for its use in gzip and libpng. It is also used to implement sync, async, and
streaming compression and decompression interfaces.

4. OpenSSL - This is used in the tls and crypto modules to provide cryptographic functions that
improve security.

/////////////////////
Advanced Node.js concepts (codedamn)
You can set process.env.UV_THREADPOOL_SIZE = n to manually increase or decrease the
number of threads libuv uses.
You should not increase UV_THREADPOOL_SIZE beyond your number of logical or physical
cores; it is not useful, since the extra threads cannot achieve parallelism and will have to wait
for a physical or logical core to be free.
The microtask queue is used to handle promises and has higher priority than the macrotask
(task) queue, so all the promises in the microtask queue resolve in one iteration, but the
macrotask queue does not work the same way: on each iteration of the event loop the
microtask queue gets completely exhausted, while only one task (callback) from the macrotask
queue gets executed.
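
A small demo of that priority (a sketch you can run directly with node):

setTimeout(() => console.log('macrotask 1'), 0);
setTimeout(() => console.log('macrotask 2'), 0);
Promise.resolve().then(() => console.log('microtask 1'));
Promise.resolve().then(() => console.log('microtask 2'));
console.log('sync');
// Output: sync, microtask 1, microtask 2, macrotask 1, macrotask 2
// the microtask queue is drained completely before the timer callbacks run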

/////////
Advanced Node.js by Software Dev Diaries
Multithreading with worker threads
Multithreading vs multiprocessing:
Multiprocessing allocates separate memory and resources for each program, or process, while
multithreading shares the same memory and resources among threads belonging to the same
process.

Worker threads
You can use worker threads to offload CPU-intensive tasks to other threads.
Cluster module
It is used to create copy instances of your Node application; it is built into Node and handles
load balancing between the instances.
pm2, an npm package, can also be used to run your application in cluster mode with some
additional features like resource-usage data.
Worker threads vs cluster
Worker threads create threads for parallel execution and can be used for CPU-intensive tasks
when you are only running one instance of your Node application.
Cluster can be used when you want to scale your application and make it highly available.
Each has its own cons: using cluster can increase resource consumption but is easy to set up,
while worker threads only spawn a thread when needed but may lead to more verbose code
and make it harder to maintain.
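
A minimal worker_threads sketch (the single-file pattern below is just one common illustration):

const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // main thread: spawn this same file as a worker and wait for its result
  const worker = new Worker(__filename);
  worker.on('message', (sum) => console.log('sum from worker:', sum));
} else {
  // worker thread: the CPU-intensive loop runs without blocking the main event loop
  let sum = 0;
  for (let i = 0; i < 1e8; i++) sum += i;
  parentPort.postMessage(sum);
}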

Memory management in Node.js

You can use pm2 to monitor memory as well.
Tips for memory management:
don't use global variables
always clear your timers
don't attach things to global objects like req unless needed
don't use cyclic objects
Error handling in Node.js
There are two types of error: operational and logical.
Operational means the database is down or a dependency is down.
Logical means an error in the code, i.e. a programmer error.
Extend the Error class for more personalized errors.
Use a global error handler in Express and write clean code for it.
Exit the process (close your Node server) in case of errors that are severe and risky, basically
something that was not expected.
Use a logger like winston for more structured error logging.
Use process.on with uncaughtException and unhandledRejection:
unhandledRejection handles async (promise) errors that were not caught.
uncaughtException handles all errors, even unhandled ones, if nothing else caught them.
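
A minimal sketch of a custom error class plus the two process-level handlers (names like AppError are illustrative):

// operational errors get a dedicated class so handlers can distinguish them
class AppError extends Error {
  constructor(message, statusCode) {
    super(message);
    this.statusCode = statusCode;
    this.isOperational = true;
  }
}

process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
});

process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err);
  process.exit(1); // state may be corrupted: exit and let a process manager restart us
});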

Streams:
There are four types of stream:
Readable, Writable, Duplex, Transform
You can use pipe() to connect readable, writable, or transform streams, but it has awkward
syntax for error handling and may result in memory leaks.
So you should use pipeline() instead, in which you pass all your streams instead of chaining
them with pipe().
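
A minimal pipeline() sketch (file names are placeholders):

const { pipeline } = require('stream');
const fs = require('fs');
const zlib = require('zlib');

pipeline(
  fs.createReadStream('input.txt'),     // Readable
  zlib.createGzip(),                    // Transform
  fs.createWriteStream('input.txt.gz'), // Writable
  (err) => {                            // one callback catches errors from any stage
    if (err) console.error('Pipeline failed:', err);
    else console.log('Pipeline succeeded');
  }
);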

Best architecture for Node.js

3-layer approach: you have routes with URIs, controllers to handle the routes, services to hold
the business implementation, and a model layer to handle all the database queries.
A pub/sub pattern can be used to handle events such as user creation, as in the sketch below.
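
A minimal sketch of that pub/sub idea with Node's built-in EventEmitter (event and function names are illustrative):

const EventEmitter = require('events');
const userEvents = new EventEmitter();

// subscribers: side effects are decoupled from the signup flow
userEvents.on('user:created', (user) => console.log(`send welcome mail to ${user.email}`));
userEvents.on('user:created', (user) => console.log(`track signup for user ${user.id}`));

// the service layer publishes an event instead of calling mail/analytics directly
function createUser(email) {
  const user = { id: Date.now(), email }; // pretend the model layer saved this
  userEvents.emit('user:created', user);
  return user;
}

createUser('a@example.com');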

NGINX reverse proxy

It is used as a middleman between our client and backend server: the client request goes to the
reverse proxy, which forwards it to the backend; the backend responds to the proxy, and the
proxy returns the response to the client.
Most of these things can be handled within Node, but we should not do this for a larger
application, so that it stays easier to maintain.

The benefits of this are:

it can be used for SSL encryption
it can be used for buffering: the backend streams its response to nginx, and after receiving the
whole response nginx responds to the client; this can help against the Slowloris attack
it can be used for caching as well, and it is actually good at caching

Best security practices for Node.js (RPAPJVOLH)

Rate limiting: You can use rate limiting to prevent DDoS (distributed denial of service), where an
attacker acts as a client and keeps requesting resources from the server to reduce its
availability for real clients. It can be done using an npm package called express-rate-limit, a
rate-limiting config in your nginx, or the provisions AWS offers for deployed applications.
Payload limit: limit the size of your payload so that an attacker cannot send a huge payload
(say 10 GB) and crash your server.
Auth limit: limit the number of requests a user can make to authenticate or log in to your
application // need to see how to do this
Password encryption: use a module like bcrypt to hash your passwords before saving them in
the database.
JWT blacklisting: in case our JWT gets compromised, we can store it in the database and
expire that token, but that is not ideal, as JWTs are supposed to be stateless; to tackle this
you can use access and refresh tokens.
Validation: validate JSON received from the client using a validator like express-validator or joi.
Use an ORM or ODM: like mongoose or sequelize, to add an extra layer in front of your
database and avoid possible injections.
Linter-check npm packages: use these to make sure you don't write code that could be
vulnerable.
Use a package like helmet to attach and modify security headers for your application.
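
A minimal sketch of a few of these in Express (the limits chosen are illustrative; express-rate-limit, helmet, and bcrypt come from npm):

const express = require('express');
const rateLimit = require('express-rate-limit');
const helmet = require('helmet');
const bcrypt = require('bcrypt');

const app = express();
app.use(helmet());                                          // security headers
app.use(express.json({ limit: '10kb' }));                   // payload limit
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 })); // 100 requests / 15 min per IP

// password hashing before saving (cost factor 10 is an illustrative choice)
bcrypt.hash('plain-password', 10).then((hash) => console.log(hash));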

Event emitters:
Events are based on a publisher-subscriber architecture.
You can create an event to handle generic work that needs to run after something else
happens, like sending mail: instead of calling a service directly, we can emit an event and
handle all of that in listeners.

How to improve the performance of your Node.js application (ccwp)

use caching, like redis
use cluster, or load balancers like an nginx reverse proxy
use worker threads for CPU-intensive tasks
do performance tests and make changes accordingly

////////////////////
Types of API function
Synchronous, Asynchronous
///////////
SPAWN VS FORK
spawn launches a child process running any command and streams its data back to the parent.
fork is a special case of spawn that starts a new Node.js process (its own V8 instance) with a
built-in IPC channel for message passing.
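
A small sketch of both (./worker.js is an assumed child script that listens for messages):

const { spawn, fork } = require('child_process');

// spawn: run any command; data flows back over stdio streams
const ls = spawn('ls', ['-l']);
ls.stdout.on('data', (chunk) => console.log(chunk.toString()));

// fork: start a new Node.js process (its own V8 instance) with an IPC channel
const child = fork('./worker.js');
child.on('message', (msg) => console.log('from child:', msg));
child.send({ task: 'start' });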

//////////
PHASES OF THE EVENT LOOP (tpipcc)
NOTE: Between all of the phases, the process.nextTick() queue and the microtask queue get
exhausted, in that order, as the nextTick queue has higher priority than the microtask (promise)
queue.
timers: this phase executes callbacks scheduled by setTimeout() and setInterval().
pending callbacks: executes I/O callbacks deferred to the next loop iteration.
idle, prepare: only used internally.
check: setImmediate() callbacks are invoked here.
The main advantage of using setImmediate() over setTimeout() is that setImmediate() will
always be executed before any timers if scheduled within an I/O cycle (e.g. the callback of a file
read), independently of how many timers are present.
close callbacks: some close callbacks, e.g. socket.on('close', ...).
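
A runnable sketch of that guarantee:

const fs = require('fs');

fs.readFile(__filename, () => {
  // inside an I/O callback, the check phase runs before the next timers phase,
  // so setImmediate() always fires before the 0 ms timer here
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
});
// Output: immediate, then timeout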

/////
EVENT EMITTER FLOW
First we instantiate an emitter from the events module of Node.js,
then we create an event and listen to it using handlers.
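
A minimal sketch of that flow:

const EventEmitter = require('events');

const emitter = new EventEmitter();                          // 1. instantiate
emitter.on('greet', (name) => console.log(`hello ${name}`)); // 2. listen with a handler
emitter.emit('greet', 'world');                              // 3. emit: prints "hello world"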

///////
GRACEFUL SHUTDOWN
SIGINT AND SIGTERM
https://www.youtube.com/watch?v=Z82mZV2Ye38&ab_channel=MafiaCodes
These signals are used to decide what to do when we are closing the server: make sure all the
requests already received are served, but do not accept new requests.
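
A minimal sketch of a graceful shutdown handler (the 10-second force-exit timeout is an illustrative choice):

const http = require('http');

const server = http.createServer((req, res) => res.end('ok'));
server.listen(3000);

function shutdown(signal) {
  console.log(`${signal} received, draining connections...`);
  server.close(() => process.exit(0)); // stop accepting new requests, finish in-flight ones
  setTimeout(() => process.exit(1), 10000).unref(); // force-exit safety net
}

process.on('SIGINT', () => shutdown('SIGINT'));
process.on('SIGTERM', () => shutdown('SIGTERM'));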

Databases

There are two types of database: relational and non-relational.


The advantages of relational databases:
Relational databases have a fixed schema, store data in tabular format, and follow ACID
compliance, which guarantees the reliability of database transactions, meaning in case of failure
the database will revert all changes to the state before the failure.
Normalization is also followed by relational databases, which helps in reducing duplicates,
and that leads to reduced storage cost.
Disadvantages of relational databases:
Scalability: RDBMSs are intended to run on a single machine. This means that if the machine's
resources are insufficient, due to data size or increased access frequency, you must upgrade
the machine, also known as vertical scaling. The cost of this may be so high that it outweighs
the benefit.
Flexibility: Since the database schema is fixed, you define the columns and data types for those
columns in advance. Although this may be beneficial in some cases to identify relationships
more easily, it makes changing the database very tough and complex. You have to decide how
all your data will look at the start, which is often just not possible, and when you do make
changes you may have to migrate all the data, which can take your database offline temporarily.
Performance: The performance of the database is linked to the complexity and number of
tables, and also to the amount of data in each table; as these increase, the time taken to
perform a query also increases.

////
Points to remember for interviews. Differences (DSAQ):
Data model: one is properly defined, the other is semi-structured or not structured at all.
Scalability: SQL typically scales vertically, while NoSQL is designed to scale horizontally.
ACID-BASE: one follows ACID while the other follows BASE.
Query language: all SQL databases share roughly the same language, while for NoSQL every
database can have a different way of querying.
/////
WHEN TO USE ONE OVER THE OTHER
SQL over NoSQL:
data is well defined and unlikely to change
ACID compliance is necessary, meaning your application has financial transactions where
consistency is critical
the application requires complex queries
NoSQL over SQL:
when dealing with large volumes of unstructured data
scalability and high availability are the top priority
where data types can vary

////
Relational
DIFFERENCE BETWEEN STORED PROCEDURES, FUNCTIONS, TRIGGERS
stored procedure - used to execute SQL queries and change DB values
function - used to get data, does not modify any data, takes at least one parameter
triggers - fire without being called, e.g. on data insertion or update

DIFFERENT TYPES OF JOIN IN SQL

Right join: returns all the rows from the right table and the matching rows from the left table.
Left join: same as right join, just from the left side.
Inner join: returns rows only where there is matching data between the right and the left table.
Full join: returns all the rows from both tables, whether they match or not.
Self join: joins a table to itself.
Cartesian (cross) join: returns the Cartesian product, i.e. every row of one table paired with
every row of the other.

TRIGGERS:
Triggers are used to perform an operation on the SQL server after a specified event has
occurred. Triggers can be attached to three things:
DML (data manipulation language) triggers: run when you do INSERT, UPDATE, DELETE.
DDL (data definition language) triggers: run when you use statements like CREATE, DROP,
ALTER, DENY, and REVOKE.
LOGON triggers: run in response to a logon event, after authentication finishes and before the
user session is created.

THE FIVE BASIC COMMAND GROUPS IN SQL (DQMCT)

DDL (DATA DEFINITION LANGUAGE):
It is used to create, update, and delete data structures in your database. Some examples are:
CREATE: used to create tables, views, indexes, stored procedures, functions, triggers
ALTER: used to modify the properties of an attribute (column) or add a new attribute to the
table altogether
TRUNCATE: used to remove or delete all the rows in the table, BUT the structure stays
DROP: used to remove a table with all its data and structure altogether
RENAME: used to rename your table

DQL (DATA QUERY LANGUAGE):

It is used to query the database to access the data inside.
Basically, SELECT statements are DQL.
The written order of a DQL statement is:
SELECT: to specify the fields in the output
FROM: which table you want to query
JOIN: which table to combine with your main table (FROM)
WHERE: to specify the condition the rows must meet to appear in the output
GROUP BY: to group rows with matching values in the given columns into one
HAVING: for applying a condition to an output column, or a column that does not exist in the
table but is created by some aggregation (note: you can omit the WHERE clause and filter here)
ORDER BY: to specify the order of the returned rows, ascending or descending

DML (DATA MANIPULATION LANGUAGE):

These are used to manipulate the data inside your tables:
INSERT: used to insert data into your table
UPDATE: used to update the data
DELETE: used to delete the data
LOCK: used to lock a table or rows during concurrent access
CALL: used to call a stored procedure
EXPLAIN: explains the execution plan for a SELECT query (used for optimisation)

DCL (DATA CONTROL LANGUAGE):

These are used to create or change permissions of users on your server:
GRANT: used to give access to a table, or to specific operations on it, so that the user only has
the access they need and nothing more
REVOKE: removes a given permission from the user

DTL (DATA TRANSACTION LANGUAGE):

It is used to create a transactional system: in case of failure, the database should revert to its
original state.
BEGIN: creates a transaction; all the queries inside it are treated as one single transaction
COMMIT: used to finalize the data in your database
ROLLBACK: in case of error, you can use this to revert to a previous state of the DB
SAVEPOINT: can be used between queries to allow rolling back to that savepoint
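
As a sketch of these commands from Node, assuming the pg (node-postgres) client and an illustrative accounts table:

const { Client } = require('pg'); // npm install pg

async function transfer(from, to, amount) {
  const client = new Client(); // connection settings come from environment variables
  await client.connect();
  try {
    await client.query('BEGIN'); // start the transaction
    await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from]);
    await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to]);
    await client.query('COMMIT'); // finalize both updates as one unit
  } catch (err) {
    await client.query('ROLLBACK'); // revert to the pre-transaction state
    throw err;
  } finally {
    await client.end();
  }
}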

ACID PROPERTIES:

Atomicity: Atomicity ensures that a transaction is treated as a single unit of work, meaning that
either all of its operations are completed successfully, or none of them are. If any part of the
transaction fails, the entire transaction is rolled back to its initial state.

Example: Consider a bank transfer where money is being moved from one account to another.
Atomicity ensures that if the money is deducted from one account, it is also successfully
credited to the other account. If either of these operations fails (e.g., due to a system error), the
entire transaction is rolled back, ensuring that neither account is left in an inconsistent state.

Consistency: Consistency ensures that a transaction brings the database from one valid state to
another valid state. In other words, the database remains consistent before and after the
transaction, adhering to all defined rules, constraints, and integrity constraints.

Example: In a database maintaining inventory levels, if a transaction involves subtracting five
units of a product from the inventory, the database's consistency ensures that the resulting
inventory count is accurate and doesn't violate any constraints (e.g., the inventory count cannot
be negative).

Isolation: Isolation ensures that transactions operate independently of each other, even when
they're executed concurrently. Each transaction must appear as if it is executed in isolation from
other transactions, without interference or influence.

Example: Suppose two users simultaneously attempt to update the same bank account
balance. Isolation ensures that each transaction is processed independently, and one user's
transaction does not affect the other user's transaction. This prevents issues such as lost
updates or dirty reads.

Durability: Durability guarantees that once a transaction is committed, its effects are permanent
and survive system failures (such as crashes or power outages). The changes made by
committed transactions are stored in non-volatile memory (such as disk) and remain intact even
in the event of a system failure.

Example: After a successful funds transfer in a banking system, the updated account balances
are permanently stored in the database, ensuring that even if the system crashes immediately
after the transaction, the changes will not be lost. When the system recovers, it can restore the
database to its last consistent state.

///////////////
INDEXING in SQL
Creating the Index: When you create an index on a column (or multiple columns) in a table, the
database management system (DBMS) creates a separate data structure, often a balanced tree
(such as a B-tree or a B+ tree), to store the indexed values.

Sorting and Storing Values: The DBMS sorts and stores the values of the indexed column(s) in
this separate data structure. Each value in the index is associated with a pointer or reference to
the corresponding row in the table.

Optimized Search: When you execute a query that involves the indexed column(s), the DBMS
utilizes the index to quickly locate the desired rows instead of scanning the entire table. It
performs a search operation within the index data structure, which is typically much faster than
a full-table scan.

Efficient Retrieval: Once the DBMS finds the desired values in the index, it uses the associated
pointers or references to directly access the corresponding rows in the table, thereby minimizing
disk I/O and improving query performance.

Maintaining the Index: As data in the table changes (e.g., rows are inserted, updated, or
deleted), the DBMS updates the index accordingly to reflect these changes. This maintenance
process ensures that the index remains synchronized with the underlying table data and
continues to provide efficient access to the data.
Basically, when working with indexes it is important to note that while an index may improve
your query time when retrieving data, it will also be more costly and time-consuming to maintain
during create, update, and delete operations, as every index has to be updated on each of
those operations; so it may actually slow down write-heavy workloads.
////////////////
NO SQL
BASE :
Basically Available: Even if some parts of the system are down or experiencing issues, the
platform will still allow users to interact with the available features. For example, if the like
feature is temporarily unavailable due to maintenance, users can still post updates or comment
on existing posts.

Soft state: The platform might implement features like post expiration, where older posts are
automatically removed after a certain period. Additionally, likes or comments may be cached
and periodically synchronized across servers rather than being immediately updated in all
replicas.

Eventually consistent: When a user likes a post or adds a comment, the update might not
immediately reflect across all servers due to network delays or partitions. However, the platform
ensures that eventually, these updates will propagate to all replicas, maintaining consistency
across the system.

What is MongoDB?
It is an open-source database written in C++.
It uses JSON-like documents with optional schemas.

Datatypes in MongoDB:
Null, Boolean, Number, String, Date, Regular Expression, Array, Embedded Document,
ObjectID, Binary Data, Code.

INDEXING
Indexing in MongoDB is the same idea as indexing in SQL.

REPLICATION
It is used to provide high availability and data redundancy by maintaining copies of the same
data across different servers.
In replication there is a primary node, on which data gets inserted, updated, and deleted, and
there are secondary nodes, which are used to read data only. Over time, data created or
updated on the primary node is reflected in the secondary nodes as well; this is what makes
MongoDB (and NoSQL generally) eventually consistent. This replication happens
asynchronously, with MongoDB maintaining an oplog (operation log), which ensures that data
stays consistent even when the primary node crashes. In case of failure of the primary node,
one of the secondary nodes is promoted to primary; this ensures availability in case of network
or hardware failure as well.

Advantages:
High availability, even in case of failure
Durability: protects against complete data loss in case of failure
Read scalability: since all nodes can be used to read data, it helps via parallel processing

Trade-offs:
Increased complexity
More resources used
Eventually consistent

////////
SHARDING
It is used to distribute (not replicate) data between multiple nodes, called shards in this case.
This helps in scaling the read and write operations of your MongoDB.

Shard Key: To shard a collection, you choose a field or fields in the documents called the shard
key. MongoDB uses this key to distribute documents across shards. For example, if your
collection contains user profiles, you might choose the user_id field as the shard key.

Shard Cluster: You set up a shard cluster, which consists of multiple servers or nodes called
shards. Each shard contains a subset of the data based on the shard key. For example, you
might have three shards, each responsible for a range of user IDs (e.g., shard 1 handles user
IDs 1-1000, shard 2 handles user IDs 1001-2000, and so on).

Shard Router (mongos): To interact with the shard cluster, you use a special component called
the shard router or mongos. The mongos routes queries and write operations to the appropriate
shard based on the shard key.

Data Distribution: When you insert a new document into the collection, MongoDB uses the
shard key to determine which shard should store the document. For example, if the document's
user_id is 1500, MongoDB routes it to shard 2, which is responsible for user IDs 1001-2000.

Query Routing: When querying data, the mongos router routes the query to the appropriate
shard or shards based on the shard key. It then gathers the results from all relevant shards and
returns them to the client.

In summary, sharding in MongoDB enables horizontal scaling and improves scalability,
availability, and performance for large-scale data-intensive applications. However, it comes with
trade-offs in terms of complexity, operational overhead, and data distribution considerations that
need to be carefully managed and balanced based on the specific requirements of the
application.

///////////
MongoDB CHARTS is an integrated tool in MongoDB for data visualization.
///////////
AGGREGATION FRAMEWORK
It is based on a pipeline, which basically means that the output of each stage of the pipeline is
the input of the next stage.
It usually looks like: collection -> stages -> output

STAGES OF AGGREGATION
$match: This stage filters documents based on specified criteria, similar to the find() method. It
allows you to include only documents that match certain conditions.

$project: This stage reshapes documents by including, excluding, or renaming fields. It allows
you to specify which fields to include in the output documents and optionally apply expressions
to transform the data.

$group: This stage groups documents by a specified key or expression and applies accumulator
expressions to calculate aggregated values for each group. Common accumulator expressions
include $sum, $avg, $min, $max, and $addToSet.

$sort: This stage sorts documents based on one or more fields in ascending or descending
order.

$limit: This stage limits the number of documents passed to the next stage in the pipeline.

$skip: This stage skips a specified number of documents and passes the remaining documents
to the next stage in the pipeline.

$unwind: This stage deconstructs arrays within documents, creating a separate document for
each element of the array. It's commonly used to flatten arrays before further processing.

$lookup: This stage performs a left outer join between documents from two collections. It allows
you to include related documents from another collection based on matching criteria.

$out: This stage writes the results of the aggregation pipeline to a specified collection, effectively
storing the aggregated data in a new collection.

RECOMMENDED ORDER OF PIPELINE(mlugslso)


$match: It's often best to start with the $match stage to filter out unnecessary documents early
in the pipeline. This reduces the number of documents that need to be processed in subsequent
stages, improving performance.

$lookup: If you're performing a join operation using $lookup, it's typically beneficial to include
this stage early in the pipeline. This allows you to combine data from multiple collections before
applying further transformations or aggregations.

$unwind: If your documents contain arrays that you need to process individually, you should use
the $unwind stage after any $match or $lookup stages. This stage deconstructs arrays, creating
separate documents for each array element, which can then be processed independently.

$group: The $group stage is commonly used for grouping documents and calculating aggregate
values. It's often used after filtering, joining, or unwinding data to aggregate results based on
specific criteria.

$project: The $project stage is typically used to reshape documents by including, excluding, or
renaming fields. It's often used towards the end of the pipeline to define the final structure of the
output documents.

$sort, $limit, $skip: These stages are typically used towards the end of the pipeline to sort, limit,
or skip documents as needed. For example, you might sort the aggregated results, limit the
number of documents returned, or implement pagination using $skip and $limit.
$out: If you're storing the results of the aggregation pipeline in a new collection using the $out
stage, it should be the last stage in the pipeline.
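
A sketch of a pipeline in that recommended order (assuming an illustrative orders collection with an items array):

db.orders.aggregate([
  { $match: { status: 'delivered' } },                              // filter early
  { $unwind: '$items' },                                            // one document per array element
  { $group: { _id: '$items.sku', total: { $sum: '$items.qty' } } }, // aggregate per SKU
  { $project: { sku: '$_id', total: 1, _id: 0 } },                  // reshape the output
  { $sort: { total: -1 } },                                         // order by aggregated value
  { $limit: 5 }                                                     // top five
]);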

//////////////////
REDIS

JS interview notes

Promise.all - rejects as soon as any one of the promises rejects; it fulfills only if all of them
fulfill.
Promise.allSettled - waits for all the promises, regardless of failure or success.
Promise.any - returns the first promise that gets fulfilled; if one rejects, it keeps waiting for
another to fulfill (it rejects only if all of them reject).
Promise.race - returns the promise that gets settled first (resolved or rejected).
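
A small sketch contrasting the four combinators:

const fast = new Promise((res) => setTimeout(() => res('fast'), 50));
const slow = new Promise((res) => setTimeout(() => res('slow'), 200));
const bad  = new Promise((_, rej) => setTimeout(() => rej(new Error('bad')), 100));

Promise.all([fast, slow]).then(console.log);                   // ['fast', 'slow']
Promise.all([fast, bad]).catch((e) => console.log(e.message)); // 'bad': rejects on first failure
Promise.allSettled([fast, bad]).then(console.log);             // one fulfilled entry, one rejected entry
Promise.any([bad, slow]).then(console.log);                    // 'slow': first fulfilled, rejections ignored
Promise.race([fast, bad]).then(console.log);                   // 'fast': first settled wins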

////
In JS, a function declaration is hoisted with its body, so it can be called even if it is written after
a return statement; the same is not true for var, where only the declaration (not the
initialization) is hoisted.
////

Type coercion
It means that the data type of one value is converted to another type.
There are two types of coercion:

IMPLICIT: when we do 1 + "1" and JS automatically converts the number 1 to a string.

EXPLICIT: when we use a method or function to convert a value to another data type, e.g.
Number("1").
Explicit is preferred over implicit as it is more readable.
////

JS falsy values : ezfunn


empty string, zero, false, undefined, NaN, null
////

FUNCTIONAL PROGRAMMING:
This is a programming paradigm (a structure or way of programming) where functions are
first-class citizens, which means they can be stored in a variable and passed to another function
as arguments.
Some of the functional programming concepts in JS are:
First-class functions:
// Example of first-class functions
const add = function (a, b) {
  return a + b;
};
const multiply = function (a, b) {
  return a * b;
};

const operate = function (operation, a, b) {
  return operation(a, b);
};

console.log(operate(add, 3, 4)); // Output: 7
console.log(operate(multiply, 3, 4)); // Output: 12
In this example, add and multiply are first-class functions.

HIGHER-ORDER FUNCTIONS:

// Example of a higher-order function
const multiplyBy = function (factor) {
  return function (number) {
    return number * factor;
  };
};

const double = multiplyBy(2);
console.log(double(5)); // Output: 10
When a function returns a function or takes a function as an argument, it is called a
higher-order function.

PURE FUNCTIONS
// Example of a pure function
const addPure = function (a, b) {
  return a + b;
};

console.log(addPure(3, 4)); // Output: 7

These are functions that always return the same value for the same input and do not modify
the state of the program or the application.
Below is an example of a function that is not pure:
// Impure function
let total = 0;

function addToTotal(amount) {
  total += amount; // Modifying external state (side effect)
  return total;
}

console.log(addToTotal(5)); // Output: 5
console.log(addToTotal(3)); // Output: 8
console.log(total); // Output: 8 (external state is modified)

Immutability:
The const keyword prevents a variable from being reassigned; for true immutability of an
object's contents you also need Object.freeze.

////

Object.freeze vs seal
freeze makes an object and its properties immutable, which means no operation other than
reading can be performed on that object.
seal makes it so that your object cannot gain new properties or lose old ones, but existing
properties can still be modified.
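
A quick sketch of the difference:

const frozen = Object.freeze({ a: 1 });
frozen.a = 2;        // ignored (throws in strict mode)
delete frozen.a;     // ignored
console.log(frozen); // { a: 1 }

const sealed = Object.seal({ a: 1 });
sealed.a = 2;        // allowed: existing properties stay writable
sealed.b = 3;        // ignored: cannot add new properties
delete sealed.a;     // ignored: cannot delete
console.log(sealed); // { a: 2 }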

////
Cookies : They are used to store the data of the user on their own browser
////
There are 3 scopes in JavaScript:
Local (block), Global, Functional
Local: a variable declared with let or const is not available outside of its block.
Global: a variable declared at the top level is available to all child scopes and functions, and
can be accessed, modified, or shadowed by them.
Functional: this is for the var keyword specifically, because a var declared in a block is
accessible to the enclosing function as well, but a var inside a child function is not available in
the parent function.
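
A small sketch of var's function scope vs let's block scope:

function demo() {
  if (true) {
    var x = 1; // function-scoped: visible anywhere inside demo()
    let y = 2; // block-scoped: visible only inside this if-block
  }
  console.log(x); // 1: var leaks out of the block
  // console.log(y); // ReferenceError: let does not
}
demo();
// console.log(x); // ReferenceError: var never leaks out of its function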
///
map vs forEach
map returns a new array
forEach does not return anything
////
this KEYWORD
The value of this is determined dynamically at runtime based on how a function is called, and it
can behave differently depending on the context in which it is used.

Here are some common scenarios where the behavior of this can be inconsistent:

Global Context:

In the global context (outside of any function), this refers to the global object (window in a
browser, global in Node.js).
Function Context:

Inside a function, the value of this depends on how the function is called. If the function is called
as a method of an object, this refers to that object. Otherwise, in strict mode, this is undefined,
and in non-strict mode, it refers to the global object.
Arrow Functions:

Arrow functions do not have their own this context. Instead, they inherit this from the
surrounding lexical scope. This can lead to unexpected behavior, especially when using arrow
functions as methods within objects.
Event Handlers:

In event handlers attached using addEventListener, this refers to the element that triggered the
event. However, if the handler is defined as an arrow function, this will not refer to the element.
Constructor Functions:

Inside a constructor function, this refers to the newly created instance of the object being
constructed. However, if the constructor function is called without the new keyword, this may
refer to the global object, leading to unintended consequences.
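
A few of these scenarios in a runnable sketch:

'use strict';

const obj = {
  name: 'obj',
  regular() { return this.name; }, // this = whatever is left of the dot at call time
};
console.log(obj.regular()); // 'obj'

const detached = obj.regular;
// detached(); // TypeError in strict mode: this is undefined, so this.name fails

const arrow = () => this; // arrow functions inherit this from the surrounding scope
// (at the top level of a CommonJS module, this is module.exports)

function Person(name) { this.name = name; }
const p = new Person('Alice'); // with new, this = the freshly created instance
console.log(p.name); // 'Alice'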

////
Prototype
Everything behaves like an object in JS: even primitive data types like string, number, and
boolean are wrapped inside objects when you access properties on them, which is what gives
each data type its methods, like split for strings.

__proto__
Every object in JS has [[Prototype]], exposed as __proto__, which holds a reference to the
object it is derived from; except for the base object, all objects in JS have a __proto__ property
which can be used to reach the parent object. Inheritance is possible in JavaScript due to this
prototype chain: if a property or method is not found on the object itself, the engine checks
whether it is present on the parent through this chain. You should use Object.getPrototypeOf()
to get the prototype of an object instead of __proto__, as it is more standardised and
__proto__ may get deprecated.

The prototype property is attached to a class or a constructor function.


example of prototype keyword

// Constructor function for creating Person objects
function Person(name, age) {
  this.name = name;
  this.age = age;
}

// Adding a method to the prototype of the Person constructor
Person.prototype.sayHello = function () {
  console.log(`Hello, my name is ${this.name} and I am ${this.age} years old.`);
};

// Creating instances of Person using the constructor function
const person1 = new Person('Alice', 30);
const person2 = new Person('Bob', 25);

// Calling the method defined in the prototype
person1.sayHello(); // Output: Hello, my name is Alice and I am 30 years old.
person2.sayHello(); // Output: Hello, my name is Bob and I am 25 years old.

new KEYWORD
It is used to create a new object from a constructor function.
This is an example of playing around with the new keyword: it shows what happens when we
call a constructor function without the new keyword (this then refers to the global object, so hi
and name leak into the global scope).
// Constructor function for creating Person objects
function Person(name, age) {
  this.name = name;
  this.age = age;
  this.hi = function () {
    console.log("hi");
  };
}

// Adding a method to the prototype of the Person constructor
Person.prototype.sayHello = function () {
  console.log(`Hello, my name is ${this.name} and I am ${this.age} years old.`);
};

// Creating instances of Person using the constructor function
const person1 = Person('Alice', 30); // no new: this is the global object, person1 is undefined
const person2 = new Person('Bob', 25);
hi();              // works: hi was attached to the global object
console.log(name); // 'Alice': name was attached to the global object
// Calling the method defined in the prototype
// person1.sayHello(); // would throw: person1 is undefined
person2.sayHello(); // Output: Hello, my name is Bob and I am 25 years old.

Ways to create an object

a = {};
a = new Object();
a = Object.create(proto); // creates an object with the given prototype

Object.setPrototypeOf(car, vehiclePrototype);
This is used to change the prototype of an object and make it inherit the properties and
methods of the assigned prototype object.

// instanceof
// Constructor function for creating Car objects
function Car(make, model) {
  this.make = make;
  this.model = model;
}

// Creating an instance of Car
const car = new Car('Toyota', 'Corolla');

// Checking if 'car' is an instance of the Car constructor
console.log(car instanceof Car); // Output: true

// Creating an instance of Object
const obj = {};

// Checking if 'obj' is an instance of the Object constructor
console.log(obj instanceof Object); // Output: true

// Checking if 'car' is an instance of the Object constructor
console.log(car instanceof Object); // Output: true, because Car.prototype is an instance of Object

////
Map
It is a data structure in JS that is sort of like an object, but it remembers the insertion order of
keys, and any data type can be a key.
WeakMap
It is a data structure in JS that is also like Map, but the differences are: the keys can only be
objects, an entry may be garbage-collected once its key object is no longer referenced, and
unlike Map it is not iterable.

////
Call, apply and bind
call is used to invoke a method or function with a different this context; it takes the arguments
individually after the context.
apply is the same as call, except it takes the arguments as an array in the second parameter.
bind is used to bind the this context and returns a function that can later be invoked with the
same this context provided earlier.
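
A small sketch of all three:

function introduce(greeting, punctuation) {
  return `${greeting}, I am ${this.name}${punctuation}`;
}
const user = { name: 'Alice' };

console.log(introduce.call(user, 'Hi', '!'));    // arguments passed individually
console.log(introduce.apply(user, ['Hi', '!'])); // arguments passed as an array
const bound = introduce.bind(user, 'Hey');       // returns a new function with this fixed
console.log(bound('?'));                         // 'Hey, I am Alice?'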

/////
Classes
Classes provide a more readable and structured way to work with objects and inheritance.
Constructors initialize object properties, and methods define behaviors.
Inheritance with extends allows for reusing code across classes.
Static methods operate on the class itself, while private fields enforce encapsulation.
Getters and setters allow controlled access to class properties.
JavaScript classes make it easier to implement object-oriented principles and are widely used in
modern JavaScript applications.

constructor: you can use the constructor function to set initial properties when creating an
object from a class
method: a function declared inside a class is called a method
properties: these are defined within the constructor and can be initialized based on parameters
passed when creating an instance
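
A minimal class sketch covering these pieces (names are illustrative):

class Animal {
  #id; // private field: not accessible outside the class
  constructor(name) {
    this.name = name; // property initialized in the constructor
    this.#id = Math.random();
  }
  speak() { return `${this.name} makes a sound`; } // method
  get label() { return `Animal: ${this.name}`; }   // getter: controlled read access
  static kingdom() { return 'Animalia'; }          // static: called on the class itself
}

class Dog extends Animal { // inheritance with extends
  speak() { return `${this.name} barks`; }
}

console.log(new Dog('Rex').speak()); // 'Rex barks'
console.log(new Dog('Rex').label);   // 'Animal: Rex'
console.log(Animal.kingdom());       // 'Animalia'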

RESTful Notes (csculc)

Client-server architecture:
This means that the client and server are independent of each other.
The client only handles the user experience and interface,
and the server is responsible for data storage, processing, and management.

Stateless:
This means that every request is new to the server; the server does not know what you were
doing before this request.

Cacheability:
We cache data on the client so that we can reduce server calls, improve response time, and
save bandwidth.

Uniform interface:
Every resource can be accessed and modified through a unique URL,
e.g. https://api.example.com/products/123: here the product with ID 123 can be accessed,
updated, or deleted according to the method of the API request.

Layered System :
REST allows you to use a layered system architecture where you deploy the APIs on server A,
and store data on server B and authenticate requests in Server C, for example. A client cannot
ordinarily tell whether it is connected directly to the end server or an intermediary along the way.

Code on demand (optional) :


Most of the time, you will be sending the static representations of resources in the form of XML
or JSON. But when you need to, you are free to return executable code to support a part of your
application, e.g., clients may call your API to get a UI widget rendering code. It is permitted.

FINAL NOTE: Other than code on demand, if your application does not follow even one of these
constraints, it is not a RESTful application.

Naming conventions
API names should be nouns, not verbs,
which means it should be /users, not /getUsers.
