Literature Review
This chapter will review the literature on the topic of cybersecurity in web development. More specifically, it will present the different types of cybersecurity threats and attacks on websites, along with ways to protect against and neutralize those threats. Later on, this chapter will focus on penetration testing techniques, security holes, vulnerabilities and the ways they can affect the development of a web page.
Malware is a term that describes different types of malicious software that compromise a system without the permission of the user. The term was created by combining the words "malicious" and "software". The number of websites that are affected by malware continues to grow, as this malicious kind of software continues to evolve and become more complex. Malware can also infect other executable code, thereby creating excessive pressure on a network and resulting in denial of service. When a user unknowingly runs malware, it loads into memory and can infect other pieces of software that are installed on the system. In case the operating system has a vulnerability, the malware can use the OS to infect other computer systems that are on the same network. Some types of malware are also notorious for slowing down a system to the point that it is no longer functional. A typical way to tackle and get rid of malware on a system is through a typical anti-virus program. The antivirus works by recognising the binary code patterns of the malware's source code. The method of checking for virus signatures stopped being efficient when the creators of those malicious pieces of software started writing polymorphic and metamorphic malware. These new kinds of malware avoided antivirus detection with the help of encryption techniques that helped trick the signature-based detection. Even though antivirus programs continue to evolve, malicious software uses new techniques to embed and hide its code in the original program in order to avoid detection (Tahir, 2018).
The first type of malware is the typical virus. Viruses are small programs with the ability to replicate themselves. In the case of metamorphic viruses, they can modify themselves into new variants. The virus operates when the user runs an executable file that has virus code appended to it. As mentioned above, the virus can spread to multiple systems through networks. Viruses usually target binary executable files, script files, documents etc. Worms are another type of malware; they use the network to replicate and send copies of themselves, without authorization by the user. Worms tend to cause problems in a network by consuming its bandwidth. The worm can act by itself and does not require a host file. It can delete and encrypt files or send junk email. Spyware, on the other hand, is a collective term describing software that collects and gathers personal information about the user, such as email addresses and frequently used websites. Its main goal is to monitor Internet users' activities. Trojan horses often look like a useful program in order to trick the user into downloading them. Their main purpose is to steal sensitive information, observe the user's activity and corrupt important system files or resources. Another important type of malware is the botnet. It is remotely controlled software that allows the attacker to control the system it has infected. Botnets usually target large networks, with the sole purpose of infecting their systems and later using them to perform DoS attacks, send spam messages or steal credentials from databases. Botnets that work on a large scale connect to each other in a hierarchical structure, in which a master bot connects to thousands of bots (Or-Meir, Nissim, Elovici and Rokach, 2019).
2.1.2 Phishing
One of the simplest and most effective cybersecurity attacks is phishing. In this kind of attack the so-called "phisher" sends a huge number of emails that impersonate a bank or a state agency. The email is usually crafted to alarm the receiver that something is wrong with their bank account details or the different passwords that they use. In this case, any action that the user conducts using the links supplied by the malicious email is sent to the phisher's fake website instead of the original one. The main reason phishing attacks are usually preferred by malicious hackers is the lack of computer knowledge and training among users. Many users do not know, nor do they understand, the importance of Uniform Resource Locators (URLs); therefore they cannot differentiate a fake malicious website that appears to be legitimate from the original website that is provided by the company. Also, this lack of knowledge stops users from using the security measures provided by web browsers. Attackers also utilize visual deception by manipulating textual or graphical forms. For example, the phisher may substitute the letter (l) with the number (1) in the URL, resulting in a website URL that looks like the original but is not, e.g. www.paypal.com vs www.paypa1.com. Another clever attack that malicious parties use is the manipulation of graphical forms. In this case they use JavaScript to show the security padlock icon in the address bar, so that the user is tricked into thinking it is a secure website and inputs their credentials such as passwords. Two security measures are usually taken in this case: the list-based approach and the heuristic-based approach. The list-based approach checks the suspicious website against a blacklist and, if there is a match, it informs the user. The heuristic approach uses a mechanism to extract some distinctive features from the suspected website in order to compare them with other websites that are known to be malicious. The list-based approach is fast and effective but it always needs to be kept up to date, while the heuristic approach uses more computational power but is more flexible in detecting a new phishing page (Chiew, Chang, Sze and Tiong, 2015).
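Since the list-based approach simply matches a suspicious URL against a known blacklist, a minimal sketch of such a check might look like the following. The blacklist entries, the list of legitimate hosts and the l/1 homoglyph normalisation are illustrative assumptions, not part of the cited work:

```typescript
// Illustrative list-based phishing check with a tiny homoglyph heuristic.
const knownPhishingHosts = new Set<string>(["www.paypa1.com"]); // assumed entries
const legitimateHosts = new Set<string>(["www.paypal.com"]);    // assumed entries

// Undo the common visual trick of substituting the letter "l" with the digit "1".
function normaliseHost(hostname: string): string {
  return hostname.toLowerCase().replace(/1/g, "l");
}

function isSuspicious(url: string): boolean {
  const host = new URL(url).hostname.toLowerCase();
  if (knownPhishingHosts.has(host)) return true; // direct blacklist match
  // Heuristic: the host is not legitimate itself, but after normalisation it
  // collides with a legitimate host, e.g. "www.paypa1.com" -> "www.paypal.com".
  return !legitimateHosts.has(host) && legitimateHosts.has(normaliseHost(host));
}

console.log(isSuspicious("https://www.paypa1.com/login")); // true
console.log(isSuspicious("https://www.paypal.com/login")); // false
```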
In the literature, most heuristic-based methods either determine the identity of a webpage through direct evaluation, using some form of phishing characteristics without knowing the identity, or they determine the identity based on textual elements. The main drawback of relying on textual elements is the textual semantic gap. The evaluation using phishing characteristics, on the other hand, is in a sense random because it is based on finding evidence without any baseline, and therefore carries many uncertainties. While it might be able to effectively detect existing phishing websites with existing and commonly used characteristics, it is very ineffective against zero-day phishing attacks. One example is that evaluating the URL for domain name obfuscation will fail when the legitimate website is injected with a phishing webpage. Another example is assessing the structure of an HTML page (its DOM) for abnormality, in which case the screening will fail because the hacker can have an exact copy of the website (Chang, Chiew, Sze and Tiong, 2013).
Web applications that use database-driven content are very common and widespread nowadays. After they receive information from the user, such as a password and username, they interface with databases that manage credit card numbers, customer names, preferences etc. SQL injection is a type of attack that maliciously exploits applications that use SQL statements with client-supplied data. The malicious parties target the structure of the SQL queries and trick them into executing unintended commands, using input sources. By doing that they manage to gain access to a database, which allows them to manipulate and view classified data. By utilizing SQL injection vulnerabilities, the hacker can delete, modify, and read the database information. The SQL attack uses code that takes advantage of the lack of validation of the input sources. When developers combine hard-coded strings with user-provided inputs in order to create queries, an SQL injection becomes possible. If the input sources are not validated and checked properly, the attacker can change the original SQL query and insert new SQL keywords using input strings. One important type of input validation is sanitization. This validation removes elements that can be malicious from the input sources. Sanitization is applied before external input parameters are used in target operations. Most security protocols and measures try to apply regular expressions or validation functions to the input source. This does not always protect against SQL injection, even when a sanitization function has been applied to some malicious values. Various tools that protect against these attacks, such as proxies and intrusion detection systems (IDS), often fail to repel the attack. That occurs because SQL attacks are implemented using ports that are meant for regular traffic. Structural matching techniques in some cases also fail to identify the attack (Jang and Choi, 2014).
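To make the mechanism described above concrete, the following sketch contrasts a query built by concatenating hard-coded strings with user input against a parameterized version. It assumes the node-postgres (pg) client and an illustrative users table, neither of which comes from the cited work:

```typescript
import { Pool } from "pg"; // assumes the node-postgres client

const pool = new Pool();

// Vulnerable: the user-supplied value becomes part of the SQL text itself.
// An input such as  ' OR '1'='1  changes the structure of the query.
async function findUserUnsafe(username: string) {
  const sql = "SELECT * FROM users WHERE username = '" + username + "'";
  return pool.query(sql);
}

// Safer: a parameterized query keeps the SQL structure fixed; the driver
// passes the value separately, so it is never interpreted as SQL keywords.
async function findUserSafe(username: string) {
  return pool.query("SELECT * FROM users WHERE username = $1", [username]);
}
```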
SQL injection attacks are classified into three main categories: in-band, out-of-band and inferential. In the in-band category, the information is extracted through the same channel that is used to launch the SQL attack; this method tends to be the easiest and most widely used. The second category is out-of-band, in which the extracted information is delivered back to the attacker through a different channel, for example an email. The last category is inferential, also known as blind injection, because the attack does not return data from the server. The main goal of this attack is to reconstruct the data stored in the database, which is done by sending different attacks and observing the behavior of the server and web application. Some of the techniques hackers use to implement these attacks are union queries, timing attacks and tautologies. In the union query technique, the main target is to trick the database into returning results from a table different from the one intended; this technique is usually used to bypass verification and extract data. In the timing attack technique, the hacker tries to collect specific information from the targeted database by spotting the timing delays generated by the database response (Thiyab, Ali and Abdulqader, 2021).
Figure 1 (Maraj, Rogova, Jakupi and Grajqevci, 2017)
One of the most common types of web attack is Cross Site Scripting (XSS). It happens when malicious code, in script form, is executed or sent from the victim's browser using the web application (Jamil et al., 2018). When the execution occurs, the hacker is able to intercept personal information or steal cookies in order to hijack the user's identity in a fraudulent session. This way a hacker can steal personal data or even take control of various devices. According to Imperva, XSS attacks accounted for the highest number of web application vulnerabilities in the year 2017 (Wibowo and Sulaksono, 2021). Major web browser providers have developed various filters that work on the client side and therefore help against an XSS attack. If a hacker takes advantage of a security flaw in a webpage, he may manage to execute malicious code on the system, which could scale across the network of the whole organization. Most web apps tend to have flaws in their source code, which may increase the chances of a successful attack and give the hacker access to a user's sessions; this can help steal their cookies and take control of their browser. Usually in this case the victim is the user and not the application (Rodríguez, Torres, Flores and Benavides, 2020).
Cross Site Scripting attacks are implemented using JavaScript, HTML, VBScript, ActiveX and other client-side languages. The attack is carried out by exploiting weak input validation on the web app, which allows data to be gathered from the hijacked account. Essentially, the attacker embeds malicious code such as HTML or JavaScript into a vulnerable dynamic web page. This can go a step further and allow the attacker to install malicious code on the end-user systems. The data that contain the malicious code are usually concealed as a hyperlink and distributed through the internet. The three main XSS attack types are persistent, non-persistent and DOM-based attacks. In the persistent XSS attack, the attacker injects the malicious code into the page, resulting in the code being stored on the target servers as HTML text. If a user engages with the page where the XSS code exists, the code will execute in the victim's browser and send the user's sensitive information to the hacker. This type of attack is also known as "stored XSS". Another type of XSS attack is the non-persistent cross-site scripting vulnerability, which is also the most popular. In this attack the code is not stored on the servers; instead it is reflected back to the user off the server, for example in the form of an error message. To do this, the hackers send an email to the user with the malicious link; when the user clicks on it, the vulnerable web app will display the web page with the information given to it in this link (Shinde and Ardhapukar, 2016). This information contains the malicious code, which becomes part of the page that is sent to the browser of the user, where it is executed. The last main XSS attack is DOM-based cross-site scripting, in which the attack is done by changing the DOM "environment" on the client side, as opposed to sending any malicious code to the server. The DOM lets scripting change the HTML document, therefore it can be modified by a hacker's script. This type of XSS attack is very different from the other two types, since it does not store or inject malicious code in the page. The hackers use the DOM environment to execute the attack payload in order to exploit the victim. The image above shows the architecture and the sequence of steps that need to be followed to exploit the reflected XSS vulnerability (Nithya, Pandian and Malarvizhi, 2015).
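As a small illustration of the DOM-based variant, the sketch below shows a client-side script that writes part of the URL into the page with innerHTML (a typical DOM XSS sink) alongside a safer alternative. The element id and the example URL fragment are illustrative assumptions:

```typescript
// DOM-based XSS: the payload never reaches the server; it lives in the URL
// fragment and is executed entirely in the victim's browser.
const output = document.getElementById("greeting")!; // illustrative element id

// Vulnerable: a URL such as  https://example.com/#<img src=x onerror=alert(1)>
// is written straight into the DOM and its markup is interpreted.
output.innerHTML = "Hello " + decodeURIComponent(location.hash.slice(1));

// Safer: textContent treats the value as plain text, so markup is not executed.
output.textContent = "Hello " + decodeURIComponent(location.hash.slice(1));
```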
The so-called DoS attack begins by sending multiple unnecessary messages to the server in order to request authentication; these messages, however, have invalid return addresses. Since the server cannot find the address for authentication, it waits for a certain time before ending the connection. Once the connection has ended, the attacker again sends new messages with invalid return addresses for authentication. This way the server gets stuck and remains engaged, which means it stops providing meaningful services to other users. These attacks can last for many days, resulting in reduced reliability and a bad image for the attacked web page (Shinde and Chatterjee, 2018). Essentially, what a denial of service attack does is flood the active server so that the service becomes unavailable, because large numbers of requests are pending and overflowing the service queue. Another variant of the DoS attack is the Distributed DoS or DDoS attack. In this version the attackers are a group of machines that target a particular service. The number of DDoS attacks tends to increase year by year, and it is estimated that at least 20% of large enterprises have had a DDoS attack on their systems. DDoS attacks usually tend to occur against cloud services; some popular ones that have drawn attention are the cloud-based gaming services of Microsoft and Sony. Amazon EC2 cloud servers have also received a massive DDoS attack. The results of those attacks were downtime in their services and websites, business losses, and a declining reputation for the company (Somani et al., 2017).
DDoS attacks are implemented using various software tools; the new tools that have recently appeared can be used on different operating systems and do not require substantial technical knowledge in order to be used. It should be noted that not all of the DDoS tools were developed with malicious intent; many of them have been created with the sole purpose of testing server capabilities. Low Orbit Ion Cannon (LOIC) is an open source piece of software for testing the capabilities of a network. It does that by generating intense traffic towards the servers, and it is intended for Windows and Mac operating systems. Slowloris is another tool that allows strong DDoS attacks on medium-level servers. Its function is relatively simple, and it has proven to be very efficient at attacking Apache 1.x and 2.x servers. It works by opening many connections to the target and keeping them active for a period of time by continuously sending incomplete HTTP requests (Papadie and Apostol, 2017).
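The defensive side of this behaviour can be illustrated with a short, hedged sketch: assuming a Node.js HTTP server (the timeout values below are purely illustrative and not taken from the cited work), enforcing strict header and request timeouts drops the half-open requests that Slowloris relies on.

```typescript
import { createServer } from "http";

const server = createServer((req, res) => {
  res.end("ok");
});

// Drop connections that do not finish sending their headers or body quickly
// enough, which is exactly the behaviour Slowloris exploits.
server.headersTimeout = 10_000;   // 10 s to receive the complete headers
server.requestTimeout = 30_000;   // 30 s to receive the whole request
server.keepAliveTimeout = 5_000;  // close idle keep-alive connections promptly

server.listen(8080);
```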
Penetration tests are done manually or automatically. In the case of a vulnerability that is difficult to detect using automated tools, that vulnerability is found by a manual scan. Manual testing of web apps relies on the tester's skills and knowledge of the system, so that the pen tester can perform the attacks against it more effectively. This method is performed by a human and not a program since it requires social engineering. Manual testing requires design, business logic and code verification. The social engineering technique is usually employed by hackers in order to hack a website or web app. In the case of automatic testing there is a big variety of tools that can be used for vulnerability assessment and penetration testing. For web apps the software of choice is usually Acunetix or Burp Suite. The automated tools are usually used just for vulnerability assessment, while the actual penetration testing is done manually. That occurs because pen testers take the next step based on the results of the previous action, meaning that each website is different and therefore the tester must use a different approach and tools for each case (Nagpure and Kurkure, 2017). The image below shows the different types of attacks that can be implemented using manual or automated testing in the case of a web app.
HTTPS (Hypertext Transfer Protocol Secure) is a version of HTTP which offers enhanced security. It uses the SSL/TLS protocol in order to encrypt and authenticate. It is specified by RFC 2818 and uses port 443. The HTTPS protocol allows sensitive data to be transmitted through the internet with increased security. The way HTTPS works is by wrapping HTTP inside the SSL/TLS protocol (tunnelling), resulting in all messages between a client and a server being encrypted. Even though an eavesdropper can potentially access some data like the IP address, port numbers and domain names, the actual data that are exchanged (requested URL, webpage content, query, headers, cookies) remain securely encrypted by SSL/TLS. Nowadays it has become a standard security protocol not only for online banking and other high-risk communication but for virtually every website on the web. HTTP was originally created as a clear-text protocol, which made it susceptible to eavesdropping and "man in the middle" attacks. Using public-key cryptography and the SSL/TLS handshake, the communication session is encrypted. A webpage's SSL/TLS certificate offers a public key that the web browser uses to confirm the documents that are being sent by the server; those documents would have been digitally signed by someone who has the private key (Kaur and Kaur, 2017). When a server's certificate is signed by a publicly trusted certificate authority, the browser accepts that the information included in the certificate has been validated by a trusted third party. Sometimes HTTPS websites are configured for mutual authentication, in which the web browser presents a client certificate identifying the user. Each document, like an image or a JavaScript file, that is sent to the browser by a server over HTTPS includes a digital signature that the web browser uses to determine whether the document has been altered by a malicious party while it was in transit. The server calculates the cryptographic hash of the document's contents, along with its digital certificate, which the browser uses to verify the document's integrity. Certificate Authorities use three basic validation methods when they issue digital certificates: Domain Validation (DV), Organization/Individual Validation (OV/IV) and Extended Validation (EV). The validation method is used to determine the information that will get included in a webpage's SSL/TLS certificate (Russell, 2020).
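As a minimal sketch of the tunnelling idea (assuming a Node.js server, and certificate and key file paths that are purely illustrative), serving HTTP over TLS only requires handing the certificate and private key to the TLS layer:

```typescript
import { createServer } from "https";
import { readFileSync } from "fs";

// The certificate/key paths are illustrative; in practice the certificate is
// issued and signed by a trusted CA, as described above.
const options = {
  key: readFileSync("server-key.pem"),
  cert: readFileSync("server-cert.pem"),
};

// Everything exchanged inside this server (URLs, headers, cookies, bodies)
// is encrypted by TLS; only the IP address, port and domain name stay visible.
createServer(options, (req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello over HTTPS\n");
}).listen(443);
```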
SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are protocols that are used for creating authenticated and encrypted links between networks. They work by binding websites to cryptographic key pairs through digital documents that are known as X.509 certificates. Each pair has a private and a public key; the public key gets distributed via the certificate while the private key is kept secure. The holder of the private key can sign webpages, while anyone who has obtained the public key can verify the signature. When the SSL/TLS certificate is signed by a publicly trusted certificate authority (CA), the certificate will be trusted by web browsers. The trusted CAs are approved by big software companies in order to validate identities that will be trusted on their platforms. The SSL/TLS protocol is comprised of four sub-protocols: Handshake, ChangeCipherSpec, Record and Alert. The Handshake sub-protocol allows the client to authenticate the communicating server using the server's public key certificate. The key exchange between client and server is achieved by using a public key algorithm followed by keying material generation. The data confidentiality for the traffic is implemented by using a symmetric key encryption algorithm. After the handshake protocol and the confirmation of the server authentication, the client and the server establish a master secret key using a key exchange algorithm, usually based on RSA. The Record sub-protocol secures the data by using the MAC and encryption keys that are computed in the Handshake sub-protocol. The Alert sub-protocol is activated when errors or warnings occur while running the other sub-protocols (Das and Samdaria, 2014).
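To make the result of the Handshake sub-protocol tangible, the following hedged sketch (a Node.js TLS client; the host name is illustrative) connects to a server and prints the cipher negotiated during the handshake and the peer certificate that the client verified:

```typescript
import { connect } from "tls";

// Illustrative client: the handshake authenticates the server via its
// certificate and negotiates the symmetric keys used by the Record protocol.
const socket = connect(
  { host: "example.com", port: 443, servername: "example.com" },
  () => {
    console.log("Handshake complete, authorised:", socket.authorized);
    console.log("Negotiated cipher:", socket.getCipher()); // e.g. an AES-GCM suite
    console.log("Server certificate subject:", socket.getPeerCertificate().subject);
    socket.end();
  }
);
```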
2.3.2 Security for user account, log-in module etc.
Often a website can have a user account with the default username "admin", which is much easier for an attacker to find and attack or query on the website. It is a bad security practice since it makes things easier for an attacker, who then already knows one of the credentials required for log-in. It is strongly recommended for users to use a strong password or a password generator that creates very strong passwords. Also, a measure to reduce the possibility of a brute force log-in attack is to use a "log-in lockdown"; this feature can lock certain users from a specific IP range out of the system for a certain period of time, based on the configuration settings. Security measures while registering a user must also be implemented. For example, the use of manual approval for all registered accounts can minimize spam and fake registrations. Captcha can also help validate the user. Security can be increased in database modules by modifying the table prefixes, in order to make it harder for the attacker to predict the table prefix. The functionality to blacklist and whitelist an IP should also be used in order to identify and block spam bots and search engine bots (Taral and Gite, 2014).
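A minimal sketch of the "log-in lockdown" idea described above might look as follows. The thresholds and the in-memory store are illustrative assumptions; a real deployment would persist this state and follow the site's own configuration:

```typescript
// Illustrative lockdown settings; real values depend on the site's configuration.
const MAX_ATTEMPTS = 5;
const LOCKOUT_MS = 15 * 60 * 1000; // 15 minutes

interface AttemptRecord { failures: number; lockedUntil: number; }
const attemptsByIp = new Map<string, AttemptRecord>();

function isLockedOut(ip: string): boolean {
  const record = attemptsByIp.get(ip);
  return !!record && Date.now() < record.lockedUntil;
}

function recordFailedLogin(ip: string): void {
  const record = attemptsByIp.get(ip) ?? { failures: 0, lockedUntil: 0 };
  record.failures += 1;
  if (record.failures >= MAX_ATTEMPTS) {
    // Lock this IP out for a fixed period after repeated failures.
    record.lockedUntil = Date.now() + LOCKOUT_MS;
    record.failures = 0;
  }
  attemptsByIp.set(ip, record);
}

function recordSuccessfulLogin(ip: string): void {
  attemptsByIp.delete(ip); // reset the counter on success
}
```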
2.3.3 Website Vulnerability scanners
Web vulnerability scanners detect threats and protect websites and web apps. They go through the pages of websites and search for malware, logical flaws, vulnerabilities etc. They achieve that by creating malicious inputs while evaluating the website's responses. These tools, which are also called "dynamic application security testing" (DAST) tools, perform functional testing only and do not go through the webpage's source code. One commonly used vulnerability scanner is Netsparker. It is a cloud-based tool that manages the entire website's security lifecycle by using an automated vulnerability assessment. The detection and verification of the security holes is done in a safe read-only environment. In order to reduce false positives, the vulnerabilities are reported only when they are reproduced in the test environment. Another highly rated DAST tool is InsightAppSec by Rapid7. This tool automatically assesses and crawls web apps in order to find any vulnerabilities such as SQL injection, XSS and CSRF. It features a universal translator for normalizing traffic by understanding the protocols and the development technologies. InsightAppSec tests websites for more than 95 different attack types and offers "attack replay", a tool that can be used by developers to reproduce and confirm the vulnerabilities (Peterson, 2020).
In case a website has a form or a URL parameter that allows outside users to supply information, it becomes susceptible to SQL injections. One common implementation required in order to protect a website from SQL injection is to use parameterized queries, as sketched earlier. This ensures the code has sufficiently specific parameters, making it harder for malicious parties to attack. Another way to enhance the security of a webpage is to make fields or functions that allow input as explicit as possible regarding what is allowed in. Content Security Policy (CSP) is a tool that helps protect websites against XSS attacks. It allows you to specify which domains a browser should consider valid sources of executable scripts when on your page; a minimal sketch is given after this paragraph. The browser will then avoid executing any malicious scripts. Regarding file permissions, a file that is assigned a permission code that allows everyone on the web to write and execute it has very low security, in contrast with a file that has been locked down with rights given only to the owner. Of course, some files have to be open to other groups of users, such as in anonymous FTP uploads for example, but they have to be closely considered in order to avoid any security risks. To change the permissions in FileZilla, one must right-click on the file or folder and select the "file permissions" option. This opens a screen with checkboxes that show the different permission statuses, allowing the owner of the file to change them (Hicks, 2020).
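Returning to the CSP mechanism mentioned above, a minimal sketch of setting such a header (assuming an Express back-end; the policy string itself is illustrative) could look like this:

```typescript
import express from "express"; // assumes an Express back-end

const app = express();

// Illustrative policy: only scripts served from this origin may execute,
// so injected inline scripts or scripts from attacker-controlled domains are blocked.
app.use((req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self'"
  );
  next();
});

app.get("/", (req, res) => res.send("CSP enabled"));
app.listen(3000);
```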
In the process of Threat Modelling (TM), which occurs during the design phase, the designers try to identify, prioritize and mitigate potential threats to the system and data. Threat modelling is considered the next step after a broader risk assessment (K Ballal, 2020). There are five basic steps to threat modelling. The first one is to define the security requirements; this is done to know what to protect and what the threats can target. The second is mapping the website's or web app's structure, which helps in understanding the data flows and the parties involved. The third step is to identify the threats and categorize them, based on defined security criteria that help identify each threat and analyse it. The fourth step is to mitigate those threats, in order to make sure that the identified threats won't occur in real-life attacks. The last step is more of a double check, to make sure everything specified in the threat model remains only a theoretical risk. A good threat model is always part of the software design and development process. An ideal threat model for a website or web app is defined and maintained from the start of development. That being said, it can sometimes also be added to an existing website (Application Threat Modeling | OWASP, 2021). The threat model should reside close to the code and be updated once the security conditions change. For instance, when a web app changes from a local database to cloud storage, this may introduce new threats and therefore require changes to be made in the threat model. A good and widely used product for threat modelling is the "Microsoft Threat Modeling Tool"; it is a free-to-use product that should be used as part of the Security Development Lifecycle approach. On the downside, this tool is for Windows OS only, which does not make it useful for every web deployment. Another free and open-source tool is the OWASP Threat Dragon, which is a modelling tool focused on development. It offers integration with GitHub and provides visual representation of data stores, threats, processes etc. This allows data flow diagrams to be created and threat models to be saved as JSON files (Banach, 2020).
In the development phase, one good security practice is to always use the latest versions of libraries and third-party code. Also, developers must keep up with the latest Open Web Application Security Project (OWASP) Top 10. This will guide them as to what issues and mistakes should be avoided during coding. Below are the top ten web app security risks, according to OWASP's top ten list (OWASP Top Ten Web Application Security Risks | OWASP, 2021).
1. Injection flaws, such as NoSQL, LDAP and SQL injection, that occur when untrusted data is sent to an interpreter as part of a query.
2. Broken Authentication, for example when application functions related to session management are implemented incorrectly, allowing hackers to gain access to passwords, keys etc.
3. Sensitive data exposure. Attackers may modify or steal weakly protected data to conduct
credit card fraud etc.
4. XML External Entities (XXE). Poorly configured XML processors that evaluate external entities can be used to disclose internal files, perform internal port scanning or achieve remote code execution.
5. Broken Access Control, in which case attackers exploit flaws to access unauthorised functionalities and data such as user accounts or sensitive files.
6. Security misconfiguration, which is also the most common issue, is a result of
misconfigurations of HTTP headers, open cloud storage and verbose error messages.
7. XSS flaws that occur when an app includes untrusted data in a webpage with no proper
validation or escaping.
8. Insecure Deserialization, which can lead to remote code execution, as well as replay, injection and privilege escalation attacks.
9. Using components with known vulnerabilities may undermine the application’s defences
and enable various attacks and impacts.
10. Insufficient Logging & Monitoring allows hackers to tamper with, extract or destroy data, since they have more time to further deploy their attacks due to the lack of sufficient monitoring.
In 2016 Liu and Gupta proposed a holistic framework for internet security, based on analysing the different cybersecurity threats and arguing that standalone application security design is the basic building block. In order to prevent front-end and back-end vulnerability issues in the web apps ecosystem, full-stack development must follow secure software guidelines. When a web server deploys HTTPS for its webpage, it mitigates "man in the middle" attacks in most cases. However, HTTPS can still be vulnerable to CSRF attacks, and cannot replace the security measures seen in the front-end and back-end. Among front-end frameworks, for example, Angular has incorporated protections against the usual web application attacks and vulnerabilities. This includes protection against XSS, XSSI and CSRF (Angular, 2021). In the image below, Angular's recommended best practices are provided alongside its built-in protections. For instance, Angular recommends the use of the offline template compiler, which prevents the attacker's inputs from entering the source code template. Regarding application-level security, Angular delegates it to the back-end. This includes issues with authorisation and authentication. The HttpClient library that is offered by Angular also supports client-side protection against CSRF, as can be seen in the image below. Despite the fact that Angular offers built-in sanitization, form validation offers additional front-door security checkpoints (Kolev, 2018). When form control objects are created, all validators can be assigned, as can also be seen in the image below.
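As an illustration of the kind of code such a figure would show, a hedged sketch of assigning validators when the form control objects are created is given below; the field names and the specific rules are illustrative assumptions:

```typescript
import { FormControl, FormGroup, Validators } from "@angular/forms";

// Illustrative registration form: validators are assigned when the form
// control objects are created, adding a front-door security checkpoint
// on top of Angular's built-in sanitization.
const registrationForm = new FormGroup({
  username: new FormControl("", [Validators.required, Validators.maxLength(32)]),
  email:    new FormControl("", [Validators.required, Validators.email]),
  password: new FormControl("", [Validators.required, Validators.minLength(12)]),
});

console.log(registrationForm.valid); // false until all rules are satisfied
```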
In the case of the back-end, one popular web app framework is ASP.NET Core. This framework can structure back-end services for the API, database and identity. ASP.NET Core also features security technologies such as SSL enforcement, anti-request-forgery protection and CORS management. The image below shows how to configure the anti-forgery protection features using "IAntiforgery". It requests the antiforgery service in the Configure method of the Startup class. It then requires antiforgery validation with the ValidateAntiForgeryToken attribute, which can be set on an individual action, on a controller, or globally (Liu and Gupta, 2019).
Figure 7 (Liu and Gupta, 2019)
As can be seen in the code provided in the image below, in order to prevent redirect attacks, the LocalRedirect helper method should be used, along with the IsLocalUrl method to test URLs before redirecting. The two methods just mentioned are shown in the image below in red colour.
In order to override the browser's default same-origin policy, the ASP.NET framework allows the use of CORS (Cross Origin Resource Sharing) for a specific cross-origin request, by applying CORS policies per controller or per action, as well as applying the policies globally. For the implementation of authentication, the ASP.NET Core Identity membership system should be used in order to add login functionality, and IdentityServer4 should be used for securing the app. IdentityServer4 is a framework that is used in ASP.NET for authentication using OpenID Connect and OAuth 2.0 (Liu and Gupta, 2019). OAuth works by using access tokens that offer proof of identity without specifying the format the tokens must take; JWT (JSON Web Token) can be the token of choice. The image below shows how to generate a JSON web token in ASP.NET Core and how to retrieve it in the Angular front-end after the validation of the user's credentials has occurred (Spasojevic, 2020).
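As the figure is not reproduced here, a hedged sketch of how the Angular front-end might retrieve and store the token after the credentials have been validated is given below; the endpoint URL, the response shape and the storage choice are illustrative assumptions:

```typescript
import { HttpClient } from "@angular/common/http";
import { Injectable } from "@angular/core";

// Illustrative response shape returned by the back-end after validating credentials.
interface LoginResponse { token: string; }

@Injectable({ providedIn: "root" })
export class AuthService {
  constructor(private http: HttpClient) {}

  login(username: string, password: string) {
    // The endpoint URL is an assumption; it would point at the ASP.NET Core API.
    return this.http.post<LoginResponse>("/api/auth/login", { username, password });
  }
}

// Usage: store the JWT once the back-end confirms the credentials.
// authService.login("alice", "secret").subscribe(res => {
//   localStorage.setItem("jwt", res.token);
// });
```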
Figure 9 (Liu and Gupta, 2019)
2.5 Conclusion
This literature review identified the different types of cybersecurity threats, such as DDoS, malware, phishing etc. They were analysed and compared as to what damage they can induce on a website. It was also shown in what manner they compromise a website or web app. Towards the end, there was a description of how the development of a website is affected, and different types of frameworks for front-end and back-end web development were shown (Angular and ASP.NET Core). Apart from that, code samples with explanations were also provided for better understanding.