Full Stack 250226085616 96eef0d6
Here are several strategies I've implemented to enhance performance in .NET projects:
These strategies can significantly improve the performance of a .NET project. It's important to measure and monitor the
impact of these changes using appropriate tools and metrics to ensure they are delivering the desired improvements.
Custom middleware in ASP.NET Core is a way to plug your own request-processing logic into the application's
HTTP pipeline. This pipeline consists of a sequence of middleware components that are executed for each HTTP request
and response. Custom middleware allows you to perform operations such as logging, authentication, or redirection, or to run any
custom logic before or after the next component in the pipeline.
To implement custom middleware in a .NET 6 application, you need to follow these steps:
First, define a middleware class with an InvokeAsync method that processes the HttpContext and calls the next middleware in the pipeline.
It's a common practice to create an extension method on the IApplicationBuilder interface to easily add your
middleware to the pipeline. This makes it easier to register the middleware in Program.cs (or the Startup class in older templates).
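As a rough sketch, the middleware class and its registration extension might look like this (the RequestLoggingMiddleware name and its logging behaviour are illustrative, not prescribed):
Example
// RequestLoggingMiddleware.cs
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using System.Threading.Tasks;

public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestLoggingMiddleware> _logger;

    public RequestLoggingMiddleware(RequestDelegate next, ILogger<RequestLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        _logger.LogInformation("Handling request: {Path}", context.Request.Path);
        await _next(context); // call the next middleware in the pipeline
        _logger.LogInformation("Finished request with status {StatusCode}", context.Response.StatusCode);
    }
}

// Extension method for easy registration on IApplicationBuilder
public static class RequestLoggingMiddlewareExtensions
{
    public static IApplicationBuilder UseRequestLogging(this IApplicationBuilder builder)
    {
        return builder.UseMiddleware<RequestLoggingMiddleware>();
    }
}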
Finally, you need to register your middleware in the application’s request pipeline. This is done in the Configure method
of the Startup class or the Program.cs file in .NET 6.0 and above.
For a .NET 6 application, you would modify the Program.cs file like this:
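A minimal sketch, assuming the RequestLoggingMiddleware and UseRequestLogging extension from the example above:
Example
// Program.cs
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var app = builder.Build();

// Plug the custom middleware into the pipeline (order matters)
app.UseRequestLogging();
// or equivalently: app.UseMiddleware<RequestLoggingMiddleware>();

app.MapControllers();
app.Run();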
This sequence will insert your custom middleware into the application pipeline, ensuring that your logic is executed for
every request and response.
Key Points:
Custom middleware is created by defining a class with an InvokeAsync method that processes the request and calls
the next middleware.
It's common to create an extension method for easy registration of the middleware.
Middleware is registered in the application pipeline using the UseMiddleware extension method, typically in the
Program.cs file for .NET 6 applications.
Middleware components are executed in the order they are added to the pipeline.
4. What is CORS?
Cross-origin resource sharing (CORS) is a browser mechanism that controls how client web applications loaded
in one domain can interact with resources in another domain. This is useful because complex apps frequently leverage
third-party APIs and resources in their client code. CORS lets the browser check with the third-party server whether a
cross-origin request is authorized before transferring any data.
5. How do you define and restrict domain in CORS?
You define and restrict domains in Cross-Origin Resource Sharing (CORS) by designating which origins are permitted to
access your API or resources from a different domain. This is accomplished via HTTP headers returned by the server in
response to requests from other domains.
1. Defining the allowed origin:
This specifies which websites or applications are permitted to make requests to your API or resource. There are
several approaches to defining the acceptable origin:
Specific origin: Using the Access-Control-Allow-Origin header, you can define a single origin, including the protocol,
hostname, and port number (for example, https://example.com).
Wildcard origin: To allow requests from any domain, specify a wildcard (*) as the origin value. This is not
recommended for security reasons, as it allows uncontrolled access.
Multiple origins: Access-Control-Allow-Origin accepts only a single origin (or *), so to support several origins the
server inspects the request's Origin header and echoes it back in Access-Control-Allow-Origin when it is on an allow list.
2. Restricting access:
After you've set the approved origin, you can further restrict access with additional headers:
Access-Control-Allow-Methods: This header specifies which HTTP methods the allowed origin can use to send
requests (e.g., GET, POST, PUT).
Example
Access-Control-Allow-Methods: <method>[, <method>]*
Access-Control-Allow-Headers: This header specifies which HTTP headers the approved origin may include with
its requests (for example, Authorization, Content-Type).
Example
Access-Control-Allow-Headers: <header-name>[, <header-name>]*
Access-Control-Allow-Credentials: This header specifies whether the browser should include cookies and
authorization headers in the request to your API. Set it to true if your API requires authentication and session data.
Example
Access-Control-Allow-Credentials: true
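In ASP.NET Core, these headers are usually emitted by the built-in CORS middleware rather than written by hand. A minimal sketch, assuming a policy named AllowedClients and https://example.com as the allowed origin:
Example
// Program.cs
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
{
    options.AddPolicy("AllowedClients", policy =>
        policy.WithOrigins("https://example.com")             // Access-Control-Allow-Origin
              .WithMethods("GET", "POST", "PUT")              // Access-Control-Allow-Methods
              .WithHeaders("Authorization", "Content-Type")   // Access-Control-Allow-Headers
              .AllowCredentials());                           // Access-Control-Allow-Credentials
});

var app = builder.Build();
app.UseCors("AllowedClients"); // must run before the endpoints that need CORS
app.MapControllers();
app.Run();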
6. In .NET Core dependency injection, which category does a user login service fall under:
AddSingleton, AddScoped, or AddTransient?
In ASP.NET Core's dependency injection, the decision between AddSingleton, AddScoped, and AddTransient is determined
by the scope and lifetime of the service being registered.
1. AddSingleton: The service is created once and then reused throughout the application's lifetime. If you register a
service as a singleton, the same instance will be utilized for each request and will be created just once when the
application is launched.
2. AddScoped: The service is created once per request. If you register a service as scoped, a new
instance is created for each HTTP request and reused for every resolution of that service within the same request.
3. AddTransient: The service is created each time it is requested. If you mark a service as transient, a new instance is produced every
time it is resolved.
A per-user login or session service is therefore typically registered with AddScoped, since its state is tied to a single HTTP request.
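A minimal sketch of how the three lifetimes are registered; the interface and class names are hypothetical placeholders:
Example
// Program.cs
builder.Services.AddSingleton<IClock, SystemClock>();                    // one instance for the whole application
builder.Services.AddScoped<IUserSessionService, UserSessionService>();   // one instance per HTTP request
builder.Services.AddTransient<IEmailBuilder, EmailBuilder>();            // new instance every time it is resolved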
Handling JSON Web Tokens (JWT) after creation involves several key steps to ensure secure transmission, validation, and
usage throughout your application's authentication and authorization processes. Here's a comprehensive approach to
handling JWTs after you've created them, especially in the context of a .NET application:
1. Secure Transmission
HTTPS: Always use HTTPS to transmit the JWT between the client and server to prevent man-in-the-middle
attacks.
Cookies vs. Authorization Headers: Decide whether to send the JWT in a secure, HttpOnly cookie or through
Authorization headers. Cookies can mitigate the risk of XSS attacks, while Authorization headers can prevent CSRF
attacks. Choose based on your application's security requirements and architecture.
2. Storage on the Client Side
Web Applications: If you're using Authorization headers, store the JWT in memory or in secure, HttpOnly cookies.
Avoid local storage or session storage if possible, as they are accessible through JavaScript and vulnerable to XSS
attacks.
Mobile Applications: Use secure storage solutions provided by the platform, such as Keychain on iOS or
EncryptedSharedPreferences/Keystore on Android.
3. Use in Subsequent Requests
For every subsequent request after obtaining the JWT, it must be included in the request. With Authorization
headers, it is typically sent as a Bearer token: Authorization: Bearer <token>
If using cookies, ensure they're sent with requests to domains that require authentication.
4. Validation on the Server Side
Expiration: Ensure that the JWT has not expired.
Signature: Verify the JWT's signature to ensure it was issued by your server and has not been tampered with. In .NET, this is typically done using the JwtBearer middleware, which automatically validates the token on each request.
Issuer and Audience Checks: Validate that the token's issuer (iss) and audience (aud) claims match your application's expected values.
Scope/Role Checks: Optionally, verify that the JWT contains the required scopes or roles for the requested operation.
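A minimal sketch of wiring up JwtBearer validation in .NET; the issuer, audience, and signing-key values are placeholders you would normally read from configuration:
Example
// Program.cs
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidIssuer = "https://your-issuer.example.com",   // placeholder
            ValidateAudience = true,
            ValidAudience = "your-api",                        // placeholder
            ValidateLifetime = true,                           // rejects expired tokens
            ValidateIssuerSigningKey = true,
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes("a-long-signing-key-from-configuration")) // placeholder
        };
    });

var app = builder.Build();
app.UseAuthentication(); // validates the Bearer token on every request
app.UseAuthorization();
app.MapControllers();
app.Run();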
Cross-site scripting (XSS) is an attack in which an attacker inserts malicious executable scripts into the code of a
legitimate program or website. Attackers frequently launch an XSS attack by sending a malicious link to the user
and tempting them to click it.
Lazy loading is a technique that allows specific portions of a webpage, particularly images, to be loaded only when they are
required. Instead of loading everything at once ("eager" loading), the browser does not request certain
resources until the user interacts with the page in a way that requires them.
13. What is the key difference between promise and observable?
Promises can only return one value (or an error). Observables may produce numerous values, one value, or no values at all.
For a web app, this means that observables can be utilized in numerous cases whereas promises cannot.
For .NET projects, a wide array of tools and services can be utilized to enhance development, testing, deployment, and
monitoring. These tools and services range from IDEs (Integrated Development Environments) and libraries to cloud
services and monitoring tools. Here's an overview of essential tools and services beneficial for .NET projects:
These tools and services encompass the full spectrum of .NET project development, from initial development through
deployment and maintenance. Depending on the project requirements, developers might choose a combination of these
tools to create efficient, scalable, and robust .NET applications.
Implementing data caching in .NET applications is a common approach to improve performance by reducing the time and
resources required to fetch data repeatedly from a slower data source (like a database or an external web service). .NET
provides several options for caching, including in-memory caching, distributed caching using Redis, and caching through
third-party libraries.
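As an illustration of the in-memory option, here is a minimal sketch using IMemoryCache; the Product, IProductRepository, and ProductService types are hypothetical:
Example
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public record Product(int Id, string Name);

public interface IProductRepository
{
    Task<Product> GetByIdAsync(int id); // hypothetical slow data source (database, web service, ...)
}

public class ProductService
{
    private readonly IMemoryCache _cache;
    private readonly IProductRepository _repository;

    public ProductService(IMemoryCache cache, IProductRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    public Task<Product?> GetProductAsync(int id)
    {
        // Only hit the repository when the value is not already cached
        return _cache.GetOrCreateAsync($"product:{id}", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _repository.GetByIdAsync(id);
        });
    }
}

// Registration: builder.Services.AddMemoryCache(); plus the service and repository.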
1. User Authentication: Implement secure user authentication with Angular tools or libraries such as Firebase or
Auth0. Create features for user registration, login, and password reset.
2. Define User Roles: Define user roles like "admin," "user," and "guest" to customize access permissions for your
application's needs.
3. Route Guards: Use Angular's route guards to restrict access to specific routes based on user role. Create route guards
that validate the user's role before allowing access to specific routes.
4. Apply Route Guards: To improve Angular application security, employ route guards to enforce access restrictions
based on user roles.
To ensure safe JWT storage, avoid local storage or session storage, which are vulnerable to XSS. Instead, store JWTs
in an HttpOnly cookie. Unlike conventional cookies, HttpOnly cookies cannot be read by JavaScript; they are only sent
automatically with HTTP requests. This protects tokens even if third-party scripts on your page are compromised,
increasing overall security.
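A minimal sketch in ASP.NET Core, assuming the token has already been created and is held in a jwt variable inside a login action; the cookie name access_token is illustrative:
Example
// Inside a controller action, after the user's credentials have been verified
Response.Cookies.Append("access_token", jwt, new CookieOptions
{
    HttpOnly = true,                 // not readable from document.cookie
    Secure = true,                   // only sent over HTTPS
    SameSite = SameSiteMode.Strict,  // helps mitigate CSRF
    Expires = DateTimeOffset.UtcNow.AddMinutes(30)
});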
Here are ten best practices for making your Angular application faster, more efficient, and more scalable.
1. Use trackBy in ngFor loops: Angular's ngFor directive is used to iterate over arrays and display the results on the
screen. Supplying a trackBy function lets Angular re-render only the items that actually changed instead of rebuilding the whole list.
// app.component.ts
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
})
export class AppComponent {
  items = [
    { id: 1, name: 'Item 1' },
    { id: 2, name: 'Item 2' },
    { id: 3, name: 'Item 3' }
  ];

  trackByFn(index: number, item: { id: number; name: string }) {
    return item.id;
  }
}
2. Use Lazy loading: Lazy loading modules improve the application's initial load time by only loading the components that
are required.
Example
// app-routing.module.ts
3. Use the OnPush change detection strategy: With OnPush, Angular re-checks a component only when its input references
change, which avoids unnecessary change detection cycles.
Example
// app.component.ts
import { Component, Input, ChangeDetectionStrategy } from '@angular/core';

interface Item {
  id: number;
  name: string;
}

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class AppComponent {
  @Input() items: Item[];
}
5. Use immutable data structures: Immutable data structures boost application performance by reducing needless data
manipulation.
Example
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
})
export class AppComponent {
  items = Object.freeze([
    { id: 1, name: 'Item 1' },
    { id: 2, name: 'Item 2' },
    { id: 3, name: 'Item 3' }
  ]);
}
6. Use AOT compilation: Ahead-of-Time (AOT) compilation increases application performance by compiling template
code during the build process.
Example
ng build --aot
7. Use Angular Universal for server-side rendering: Angular Universal enables the application to be rendered on the
server, resulting in improved speed for users with slow connections.
8. Use RxJS for reactive programming: RxJS enables developers to create reactive apps in which data flows smoothly
between components.
Example
import { Component, OnInit } from '@angular/core';
import { Observable } from 'rxjs';

@Component({
  selector: 'app-example',
  template: `
    <p>{{ count$ | async }}</p>
    <button (click)="increment()">Increment</button>
  `
})
export class ExampleComponent implements OnInit {
  count$: Observable<number>;
  private count = 0;

  ngOnInit() {
    // Emit the current count once per second
    this.count$ = new Observable(subscriber => {
      setInterval(() => {
        this.count++;
        subscriber.next(this.count);
      }, 1000);
    });
  }

  increment() {
    this.count++;
  }
}
9. Use NgRx for state management: NgRx is an Angular state management library that enhances application
performance by providing a single source of truth for the application's state.
10. Use Web Workers: Web Workers run scripts in the background, boosting application speed by moving CPU-intensive
tasks to a different thread.
Components are the building blocks of Angular, so we need to know how components communicate with one another.
1. Parent to child - sharing data via @Input: We use the @Input decorator to pass data from the parent component to the child
component.
Example
parent.component.ts
import { Component } from '@angular/core';

@Component({
  selector: 'app-parent',
  template: `<app-child [childMessage]="message"></app-child>`
})
export class ParentComponent {
  message = 'Hello from the parent!';
}
child.component.ts
import { Component, Input } from '@angular/core';

@Component({
  selector: 'app-child',
  template: `<p>Message from parent: {{ childMessage }}</p>`
})
export class ChildComponent {
  @Input() childMessage: string;
}
2. Child to parent - sharing data via ViewChild with AfterViewInit: ViewChild allows the child component to be injected into the
parent component, giving the parent access to the child's attributes and functions. The parent only has
access after the view has been initialized.
Example
parent.component.ts
import { Component, ViewChild, AfterViewInit } from '@angular/core';
import { ChildComponent } from './child.component';

@Component({
  selector: 'app-parent',
  template: `<app-child></app-child>`
})
export class ParentComponent implements AfterViewInit {
  @ViewChild(ChildComponent) childComponent: ChildComponent;

  ngAfterViewInit() {
    this.logChildData(); // the child is only available after the view initializes
  }

  logChildData() {
    console.log(this.childComponent.getData()); // Call the child component's function
  }
}
child.component.ts
import { Component } from '@angular/core';
@Component({
selector: 'app-child',
template: `...`
})
export class ChildComponent {
data = 'Data from the child';
getData() {
return this.data;
}
}
3. Child to parent - sharing data via @Output and EventEmitter: Use this method when you want to share data arising from
button clicks or other events. The child component uses @Output to send data to the parent by
emitting an event with an EventEmitter imported from @angular/core.
Example
child.component.ts
import { Component, Output, EventEmitter } from '@angular/core';

@Component({
  selector: 'app-child',
  template: `<button (click)="sendMessage()">Send Message</button>`
})
export class ChildComponent {
  @Output() childMessage = new EventEmitter<string>();

  sendMessage() {
    this.childMessage.emit('Message from child!');
  }
}
parent.component.ts
import { Component } from '@angular/core';

@Component({
  selector: 'app-parent',
  template: `<app-child (childMessage)="receiveMessage($event)"></app-child>`
})
export class ParentComponent {
  receiveMessage(message: string) {
    console.log('Received message from child:', message);
  }
}
1. App Service logs: Navigate to your App Service in the Azure Portal. Under the "Monitoring" section, select "App Service
logs". Here, you can enable application logging (for Windows and Linux), web server logging, detailed error messages,
and failed request tracing. Once enabled, you can download the logs directly from the portal or access them via FTP.
2. Log stream: Still within the Azure Portal, under "Monitoring", there's a "Log stream" option that provides real-time
logging information. This is useful for viewing logs without having to download them.
3. Diagnose and solve problems: The "Diagnose and solve problems" feature in the App Service menu offers a suite of
diagnostic tools and guided troubleshooting that can help identify and solve common problems, including errors.
4. Kudu Console (Advanced Tools): The Kudu Console (Advanced Tools in the Azure Portal) provides low-level details for
your App Service. Access it by navigating to your App Service in the Azure Portal, then select "Advanced Tools" and
click "Go". In Kudu, you can access log files, environment settings, and more. To view logs, navigate to the "Debug
console" and select "CMD" or "PowerShell", where you can navigate to the "LogFiles" directory.
1. Configure Application Logging
Application logging captures errors and other diagnostic information from your application code. You can enable this
in both Windows and Linux environments.
1. For Windows:
Go to the Azure Portal, navigate to your App Service, and select "App Service logs" under Monitoring.
Turn on "Application Logging (Filesystem)" for temporary logging or "Application Logging (Blob)" to store logs
in Azure Blob Storage for long-term persistence.
Choose the level of logging you need (Verbose, Information, Warning, Error).
If you choose Blob storage, configure a storage account and container.
2. For Linux:
Similar to Windows, navigate to "App Service logs".
Enable "Application Logging" and select the level of detail.
Logs can be directed to the file system or blob storage.
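On the application side, anything written through the standard ILogger abstraction is what these settings capture. A minimal sketch; the controller and log message are illustrative:
Example
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly ILogger<OrdersController> _logger;

    public OrdersController(ILogger<OrdersController> logger) => _logger = logger;

    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        // Shows up in the App Service application logs and the Log stream
        _logger.LogInformation("Fetching order {OrderId}", id);
        return Ok();
    }
}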
2. Configure Web Server Logging
Web server logs capture information at the HTTP server level, including request details, which can be useful for
diagnosing errors.
3. Enable Detailed Error Messages and Failed Request Tracing
For more granular error information, especially for troubleshooting HTTP error codes:
Still in "App Service logs", enable "Detailed error messages" to capture detailed error information when your app
returns HTTP status codes such as 5XX errors.
Enable "Failed request tracing" to capture detailed information on failed HTTP requests. This is particularly useful
for debugging request processing pipelines.
4. Use Application Insights for Advanced Monitoring and Diagnostics
Application Insights provides rich, real-time performance monitoring and failure diagnostics. To set it up:
In the Azure Portal, navigate to your App Service and select "Application Insights" under Monitoring.
Choose to create a new Application Insights resource or link to an existing one.
Follow the prompts to configure Application Insights, including instrumenting your application code as necessary,
depending on your development framework.
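For ASP.NET Core apps, the instrumentation step is usually a single registration plus the connection string in configuration. A minimal sketch, assuming the Microsoft.ApplicationInsights.AspNetCore package is installed:
Example
// Program.cs
var builder = WebApplication.CreateBuilder(args);

// Picks up the Application Insights connection string from configuration
// (e.g., the APPLICATIONINSIGHTS_CONNECTION_STRING app setting)
builder.Services.AddApplicationInsightsTelemetry();
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();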
5. Accessing and Analyzing the Logs
Once logging is enabled, you can access and analyze logs through:
The Azure Portal (e.g., via "App Service logs", "Log stream", or the Application Insights section).
Downloading logs directly or accessing them via FTP if stored on the filesystem.
Querying logs stored in Blob storage or using tools like Azure Storage Explorer.
Using Kusto Query Language (KQL) in Application Insights for advanced queries and analytics.
Azure Functions support various types of triggers, each allowing you to execute code in response to different events.
Here's an overview of the main types of Azure Function triggers:
1. HTTP Trigger
Used for building APIs or webhooks. Functions can be invoked by HTTP requests (a minimal sketch follows after this list of triggers).
2. Timer Trigger
Executes a function on a schedule. Useful for recurring tasks like daily reports or database cleanup.
3. Blob Trigger
Responds to events in Azure Blob storage. A function is executed when a file is added to or updated in a Blob storage
container.
4. Queue Trigger
Responds to messages arriving in an Azure Queue storage. Useful for processing tasks asynchronously.
5. Service Bus Trigger
Triggers from messages on a Service Bus queue or topic. Ideal for integrating different systems or microservices.
6. Event Grid Trigger
Responds to events delivered via Azure Event Grid. Supports a wide range of events (e.g., changes in Azure resources,
custom events).
7. Event Hub Trigger
Designed for high-throughput event processing from Azure Event Hubs. Useful for real-time data processing
scenarios.
8. Cosmos DB Trigger
Responds to inserts and updates in an Azure Cosmos DB container by listening to its change feed.
9. SignalR Service Trigger
Facilitates real-time communications via Azure SignalR Service. Useful for applications requiring real-time data
updates.
10. Durable Functions Triggers
An extension of Azure Functions that enables stateful functions in a serverless environment. It includes several trigger
types specific to orchestrating complex workflows, such as orchestration, activity, and entity triggers.
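As a sketch of the HTTP trigger mentioned in item 1 (in-process model; the function name Hello is illustrative):
Example
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HelloFunction
{
    // Invoked by an HTTP GET request to /api/Hello
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        return new OkObjectResult("Hello from an HTTP-triggered function");
    }
}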
Handling deadlocks in Azure Queue Storage typically involves addressing the potential for messages to become "stuck" or
unprocessable, rather than traditional database deadlocks. If a message cannot be processed successfully by a consumer
(for example, an Azure Function or a WebJob triggered by a queue message), it may repeatedly reappear in the queue and
potentially block the processing of other messages. Here are strategies to handle such scenarios:
1. Visibility Timeout
When a message is retrieved from the queue, it becomes invisible to other consumers for a specified "visibility timeout"
period. If the message processing is not completed within this period, the message becomes visible again for
processing. Adjust the visibility timeout based on the expected processing time to minimize the chance of another
consumer picking up the message too soon.
2. De-duplication
Implement logic in your application to handle duplicate messages gracefully. This can be achieved by making message
processing idempotent (i.e., processing the same message multiple times results in the same outcome) or by tracking
processed message IDs in a storage mechanism to prevent reprocessing.
A "poison message" cannot be processed successfully after several attempts. Implement logic to detect such
messages and move them to a "dead-letter" queue or a storage table for later investigation, instead of letting them
cycle endlessly. This often involves counting the number of attempts to process the message and moving it out of the
processing queue after a certain threshold is reached.
4. Error Handling and Retry Policies
Enhance error handling in the message processing logic to manage transient failures (e.g., retry policies) and
permanent failures (e.g., moving to a dead-letter queue) differently. Use try-catch blocks to catch exceptions that
occur during processing and decide on the next steps based on the error type.
5. Scalability and Partitioning
For high-volume scenarios, ensure that your queue processing is scalable and that you're partitioning workloads
efficiently. This might involve using multiple queues to distribute workloads or scaling out the number of consumers
based on queue length.
6. Monitoring and Alerting
Implement monitoring and alerting on your queues to detect anomalies such as a sudden increase in queue length,
which may indicate processing issues or deadlocks. Azure Monitor and Application Insights can be configured to track
these metrics and send alerts.
7. Message Expiration
Set an appropriate time-to-live (TTL) for messages. Messages that are not processed within the TTL are automatically
removed from the queue. This can prevent a system from being clogged by messages that cannot be processed.
8. Throttling and Concurrency Control
Control the number of concurrent message processing operations to avoid overwhelming downstream systems or the
message processing application itself. Throttling helps maintain stability and prevents deadlocks caused by resource
contention.
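As a sketch of the poison-message pattern above, using the Azure.Storage.Queues SDK; the queue names and the threshold of five attempts are illustrative:
Example
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

public static class QueueProcessor
{
    private const int MaxAttempts = 5;

    public static async Task DrainAsync(string connectionString)
    {
        var queue = new QueueClient(connectionString, "orders");
        var poisonQueue = new QueueClient(connectionString, "orders-poison");
        await poisonQueue.CreateIfNotExistsAsync();

        // Messages stay invisible to other consumers for the visibility timeout
        QueueMessage[] messages = await queue.ReceiveMessagesAsync(
            maxMessages: 10, visibilityTimeout: TimeSpan.FromMinutes(2));

        foreach (QueueMessage message in messages)
        {
            if (message.DequeueCount > MaxAttempts)
            {
                // Poison message: park it for later investigation instead of retrying forever
                await poisonQueue.SendMessageAsync(message.MessageText);
                await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
                continue;
            }

            try
            {
                Process(message.MessageText); // hypothetical, idempotent handler
                await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
            }
            catch (Exception)
            {
                // Leave the message in the queue; it becomes visible again after the timeout
            }
        }
    }

    private static void Process(string body) { /* ... */ }
}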
Handling deadlocks in Azure Queue Storage is therefore mostly about preventing a single message, or a set of messages, from
blocking the queue's processing capabilities. By implementing robust error handling, poison-message handling, and thoughtful
design around message visibility and processing, you can mitigate the impact of unprocessable messages on your
system.