Cyber Security Solutions
Prof N K GOYAL
President CMAI
Chairman Emeritus TEMA
www.cmai.asia
www.tematelecom.in
Contents
Acknowledgement
Digital India
Cashless India
Introduction
Major Cyber Attacks Past One Year
Cyber-Security Solutions
Antivirus & Mobile App Security
Authentication
Biometrics
Cryptography
Data Breach
Data Loss Prevention (DLP)
DDoS Attack Protection
Embedded System Security
Firewall
Fraud Detection and Prevention
IAM - Identity & Access Management
Incident Response
Intrusion Detection
Log Analysis & Management
Mainframe Security
Machine Learning Security - Adversarial Learning
Network Security Monitoring
Next Generation Firewall
Password Management
Patch Management
Penetration Testing
Privileged Access Management (PAM)
Public Key Infrastructure (PKI)
Risk Analysis
SAP ERP Security
Software Development Security
Unified Threat Management
Web App & Website Security
WAF - Web Application Firewall
Wireless/Wi-Fi Security
Conclusion
Recommendations
References
Acknowledgement
“It is impossible to prepare a project report without the assistance & encouragement of
other people. This one is certainly no exception.”
Cyber security is the need of the hour in India and this report is dedicated to the citizens
of India.
Digital India
Digital India was launched on 1 July 2015 by Honorable Prime Minister Narendra Modi. The
initiative includes plans to connect rural areas with high-speed internet networks. Digital
India rests on three core components: digital infrastructure as a utility to every citizen,
governance and services on demand, and the digital empowerment of citizens.
Digital technologies, which include cloud computing and mobile applications, have
emerged as catalysts for rapid economic growth and citizen empowerment across the
globe. They are increasingly used in everyday life, from retail stores to government
offices. They help us connect with each other and share information on the issues and
concerns we face, and in some cases they also enable resolution of those issues in near
real time.
The objective of the Digital India Group is to come out with innovative ideas and
practical solutions to realise Prime Minister Modi’s vision of a digital India. Prime Minister
Modi envisions transforming our nation and creating opportunities for all citizens by
harnessing digital technologies. His vision is to empower every citizen with access to
digital services, knowledge and information. This Group will come up with policies and
best practices from around the world to make this vision of a digital India a reality.
Cashless India
The Digital India program is a flagship program of the Government of India with a vision
to transform India into a digitally empowered society and knowledge economy.
“Faceless, Paperless, Cashless” is one of the professed roles of Digital India.
As part of promoting cashless transactions and converting India into less-cash society,
various modes of digital payments are available.
USSD
The innovative payment service *99# works on the Unstructured Supplementary Service
Data (USSD) channel. This service allows mobile banking transactions from a basic
feature phone; there is no need for a mobile internet data connection to use
USSD-based mobile banking. It is envisioned to deepen financial inclusion by bringing the
underbanked sections of society into mainstream banking services.
*99# service has been launched to take the banking services to every common man
across the country. Banking customers can avail this service by dialling *99#, a
“Common number across all Telecom Service Providers (TSPs)” on their mobile phone
and transact through an interactive menu displayed on the mobile screen. Key services
offered under *99# include interbank account-to-account fund transfer, balance enquiry
and mini statement, besides a host of other services. As on 30.11.2016, *99# is offered by
51 leading banks and all GSM service providers and can be accessed in 12 different
languages, including Hindi and English (Source: NPCI). *99# is a unique interoperable
direct-to-consumer service that brings together diverse ecosystem partners such as
banks and Telecom Service Providers (TSPs).
Aadhaar Enabled Payment System (AEPS)
AEPS is a bank-led model which allows online interoperable financial transactions at a PoS
(Point of Sale / Micro ATM) through the Business Correspondent (BC)/Bank Mitra of any
bank, using Aadhaar authentication.
UPI
Unified Payments Interface (UPI) is a system that powers multiple bank accounts through a
single mobile application (of any participating bank), merging several banking
features, seamless fund routing and merchant payments under one hood. It also caters to
"Peer to Peer" collect requests, which can be scheduled and paid as per
requirement and convenience. Each bank provides its own UPI app for the Android,
Windows and iOS mobile platforms.
Mobile Wallets
A mobile wallet is a way to carry cash in digital format. You can link your credit card or
debit card information on your mobile device to the mobile wallet application, or you can
transfer money online to the mobile wallet. Instead of using your physical plastic card to
make purchases, you can pay with your smartphone, tablet or smart watch. An
individual's account is required to be linked to the digital wallet to load money into it.
Most banks offer their own e-wallets, as do some private companies, e.g. Paytm, Freecharge,
Mobikwik, Oxigen, mRuppee, Airtel Money, Jio Money, SBI Buddy, Itz Cash, Citrus Pay,
Vodafone M-Pesa, Axis Bank Lime, ICICI Pockets, SpeedPay, etc.
Point of sale
A point of sale (PoS) is the place where sales are made. On a macro level, a PoS may
be a mall, a market or a city. On a micro level, retailers consider a PoS to be the area
where a customer completes a transaction, such as a checkout counter. It is also
known as a point of purchase.
Internet Banking
Internet banking, also known as online banking, e-banking or virtual banking, is an
electronic payment system that enables customers of a bank or other financial
institution to conduct a range of financial transactions through the financial institution's
website.
Mobile Banking
Mobile banking is a service provided by a bank or other financial institution that allows
its customers to conduct different types of financial transactions remotely using a
mobile device such as a mobile phone or tablet. It uses software, usually called an app,
provided by the bank or financial institution for the purpose. Each bank provides its
own mobile banking app for the Android, Windows and iOS mobile platforms.
Micro ATMs
A micro ATM is meant to be a device used by a million Business Correspondents (BCs)
to deliver basic banking services. The platform will enable Business Correspondents
(who could be local kirana shop owners acting as "micro ATMs") to conduct
instant transactions.
The platform will enable these functions through low-cost devices (micro ATMs) that will
be connected to banks across the country. This would enable a person to instantly
deposit or withdraw funds regardless of the bank associated with a particular BC. The
device will be based on a mobile phone connection and would be made available at
every BC. Customers would just have to get their identity authenticated and withdraw
or put money into their bank accounts. This money will come from the cash drawer of
the BC. Essentially, BCs will act as a bank for the customers; all they need to do is
verify the authenticity of the customer using the customer's UID. The basic transaction
types to be supported by a micro ATM are deposit, withdrawal, fund transfer and balance
enquiry.
Introduction
This report is a continuation of the report titled "Cyber Business Security - Threats and
Solutions", published in November 2015 and available at
http://cmai.asia/cybersecurity/docs/CyberBusinessSecurityTheatsSolutions.pdf.
Many high-profile security breaches have highlighted the issue of cyber-attacks. These
attacks leave companies struggling to respond, but an even bigger problem is regaining
the trust of customers and reassuring them that their sites and accounts are safe from
any further attacks.
Cyber-crime today has become a business, and the hackers are looking for real dollars;
this business is expanding day by day, and businesses big and small fall into the trap
every day.
This part of the report covers the solutions currently available in the market to
safeguard against cyber-attacks: the solutions individuals as well as organisations can
use before, during and after an attack to curb its effects, since eliminating
cyber-attacks altogether is not possible.
Spread of cyber-crime across nations
The popularity of the internet has been growing day by day, and today it is no longer a
luxury but a necessity. At the same time, the internet brings with it consequences, such
as cyber-crime, that affect everyone across the globe. The 20 countries worst affected
by cyber-attacks are:
Cyber-attack No. 1: Socially engineered Trojans
Socially engineered Trojans provide the No. 1 method of attack (not an exploit or a
misconfiguration or a buffer overflow). An end-user browses to a website he or she
usually trusts, which prompts him or her to run a Trojan. Most of the time the website is a
legitimate, innocent victim that has been temporarily compromised by hackers.
Usually, the website tells users they are infected by viruses and need to run fake antivirus
software, or that they are nearly out of free disk space and need a fake disk defragmenter,
or that they must install an otherwise unnecessary program, often a fake Adobe Reader
or an equally well-known program. The user executes the malware, clicking past
browser warnings that the program could possibly be harmful. Voilà, exploit
accomplished! Socially engineered Trojans are responsible for hundreds of millions of
successful hacks each year. Against those numbers, all other hacking types are just
noise.
Countermeasure: Stop what you're doing right now and make sure your patching is
perfect. If you can't, make sure it's perfect for the most exploited products,
including Java, Adobe products, browser add-ins and operating system patches. Everyone knows that
better patching is a great way to decrease risk. Become one of the few organizations
that actually does it.
I think of an effective phishing email as a corrupted work of art: Everything looks great; it
even warns the reader not to fall for fraudulent emails. The only thing that gives them
away is the rogue link asking for confidential information.
Countermeasure: Decreasing risk from phishing attacks is mostly accomplished through
better end-user education -- and with better antiphishing tools. Make sure your browser
has antiphishing capabilities. I also love browsers that highlight the domain name of a
host in a URL string.
A very popular method is for APT attackers to send a very specific phishing campaign --
known as spearphishing -- to multiple employee email addresses. The phishing email
contains a Trojan attachment, which at least one employee is tricked into running. After
the initial execution and first computer takeover, APT attackers can compromise an
entire enterprise in a matter of hours. It's easy to accomplish, but a royal pain to clean
up.
Types of attackers
PROFILE: The Hacker Apprentice is likely to be young, perhaps mid to late teens and
male, perhaps an introvert. I am sure more females will enter this field as we see more
females enter programming in general.
MOTIVATION: They will be interested in programming; probably learning to write code
since their early childhood. Being a hacker seems glamourous and a way of ‘showing
off’ their skills. Invariably, they aren’t too technically savvy (yet) and their hacking
expertise is low grade, only being able to hack weakly guarded systems. They’ll use
YouTube hacking videos to learn their trade. But don’t be complacent. They can work
their way up the cybercriminal ladder as they get older and more experienced if they
are that way inclined. Most however, will mature out of this stage and move into
working in computer or network focused professions.
MOTIVATION: This cybercriminal is after information and often also to create havoc,
even potentially, warfare. Information on your business, such as company account
details, manufacturing information, intellectual property, schematics and so on is all
fair game for Mr. Bond. But this cybercriminal becomes most sinister when they are state
sponsored and attack critical infrastructure, which can affect not only digital
resources but real-world ones too. Probably the most famous cyberespionage attack is
Stuxnet, in which Iranian nuclear facilities were targeted. The cost of cyberespionage to
the U.S. is massive. McAfee, in their 2013 report on The Economic Impact of Cybercrime
and Cyber Espionage, placed estimates of intellectual property losses at up to $140
billion, so this must be one of the most
successful and profitable cybercriminal personas.
PROFILE: An individual or group that wants to make a stand against something they
think is wrong or for something they believe in. They have taken activism in the real
world and placed it online. There are a number of groups that carry out attacks against
targets that they have a grievance with. For example, the international group,
Anonymous, carries out Distributed Denial of Service (DDoS) attacks against, mainly, government
and religious websites.
MOTIVATION: To carry out political acts of defiance. Just as real-world activists take on
issues that they believe need to be addressed, such as climate change, animal rights
and so on, hacktivists do the same thing, but use digital methods to spread their word,
and that often comes in the form of a cyber-attack. Motivation can be for good or
bad. Sometimes hacktivism is used to attack foreign government policy; for example,
Chinese hackers attacked U.S. government sites to protest against perceived U.S.
Government wrongdoing against China. Other times it is used to make a stand.
Anonymous have recently targeted IS
sympathisers by hijacking their Twitter accounts and either shutting them down, or
flooding them with images of Japanese anime characters to alter search engine results
for the word IS. Sometimes hacktivism is used as an excuse for hacking certain types of
websites. For example, the recent attack on the customer accounts of the adultery
website, Ashley Madison, was said to have been carried out to shame the users of the
site (rather than sell on their user account details – we shall wait and see how that pans
out).
Major Cyber Attacks Past One Year
Ashley Madison Data Breach
The company received attention on July 15, 2015, after hackers calling themselves "The
Impact Team" stole all of its customer data and threatened to post it online if
Ashley Madison and fellow Avid Life Media site EstablishedMen.com were not
permanently closed.
The Ashley Madison breach included usernames, first and last names and hashed
passwords for 33 million accounts, as well as partial credit card data, street names and
phone numbers for a huge number of users. There were also records documenting 9.6
million transactions and 36 million email addresses.
The leak included PayPal accounts used by Ashley Madison executives, Windows
domain credentials for employees and numerous proprietary internal documents.
Passwords were protected by the bcrypt hashing algorithm and were considered
secure — but were they?
Lessons to be learnt:
Storage is cheap and data is very valuable. Just because we have virtually unlimited
storage in the cloud does not mean all of it is secure, even when it is encrypted. And if
there is no privacy, there is no business.
Putting all the data in one place is not a good idea, yet that is exactly what happened
here. Had the data collected by the site been split across separate stores, the hackers
would not have been able to reach every storage point and expose such a large amount
of data.
As soon as a security-related problem is found, it should be fixed without delay. The
passwords of 11 million users were compromised within days of the breach. The company
did change its password encryption, but only for users who were newly signing up; the
encryption of the old passwords was left as it was, and those passwords were
compromised.
Knowing that we live in a not-so-secure cyber world, it is our prime duty to stay alert
and aware of new developments in the field so that we can safeguard ourselves.
“Hacking Team” Hacked
Phineas Fisher, the hacker who claimed responsibility for breaching Hacking Team last
year, published an explainer guide detailing his process in executing the attack. In July
2015, the hacker exfiltrated 400GB of Hacking Team's confidential documents, emails
and source code, which exposed the company's client list, including the FBI and
the U.S. Drug Enforcement Administration.
The leaked documents also demonstrated that the company sold its surveillance tools
to several countries that have been cited for human rights abuses, including Egypt,
Bahrain, Morocco, Russia and Uganda, among others.
The hacker was also linked to the hacking of Gamma International, a U.K. company that
sold a spyware product functionally similar to the tools used by Hacking Team.
Lessons to be learnt:
Given the opportunity, the right amount of offence and the lack of adequate
defence, anybody can be vulnerable. The hacker used a zero-day exploit
developed for the embedded systems of the server to get at the information;
un-installed updates for MongoDB were a further advantage for the attacker.
All software updates should be installed in a timely manner to avoid unnecessary risks.
Security shouldn't be taken lightly. When a company deals with high-profile
clients whose identities are, for whatever reason, kept secret, the company should be
very careful about the privacy it provides to its customers.
One should stay aware of new developments, and activity on servers should be
regularly monitored for any unauthorized or illegal access.
Some of the data was stored in unencrypted form, which is not an acceptable way to
store sensitive data and poses a great threat. Data stored unencrypted, or
encrypted using weak techniques, is equally bad for such an organisation.
John Brennan’s Email ID Hacked
John Brennan became the Director of the Central Intelligence Agency in March 2013,
replacing General David Petraeus who was forced to step down after becoming
embroiled in a classified information mishandling scandal.
The teen, working with a group called “Crackas With Attitude,” said he fooled Verizon
into providing him with Brennan’s personal data. The hacker said he used a reverse
phone-number lookup to determine that Brennan has a Verizon Wireless account. He
then called the company, posing as a technician whose “tools were down” to get
details about the account, including Brennan’s AOL email address. With that
information, the teen called AOL and convinced a representative to reset the
password, using Brennan’s personal details provided by Verizon.
Lessons to be learnt:
First, if one company is a weak link in the security chain, it can bring down other
companies with it. In this case, it was Verizon failing to authenticate the attackers
properly that eventually led to them being able to access the AOL email
account.
Do not send any sensitive information out over email if at all possible. With each
document that John Brennan sent to his personal account, he effectively
increased the odds that the document would be compromised. A corporate
environment is comparatively safe, secured with firewalls and the latest technologies to
prevent data breaches; sending sensitive data to a personal email ID takes that
information outside the safe environment.
When you set up an account and a company asks you to supply answers to
those annoying security questions, take an extra moment to make life hard for a hacker.
Answers to easy questions can usually be discovered one way or another, so the
answers should be tricky, perhaps even deliberately false.
TalkTalk Hacked
TalkTalk came under a Distributed Denial of Service (DDoS) attack, in which hackers
flooded the company's site with internet traffic in an effort to overload its digital systems
and take them offline. Because customer information was taken, security analysts said
that a second attack appears to have been occurring at the same time, with intruders
going after TalkTalk's customer database; using a DDoS attack as a distraction for a
more targeted data breach is a common tactic.
Lessons to be learnt:
One should periodically investigate and identify potential threats and eliminate
them from time to time, so that they do not become the cause of a big disaster
in future. TalkTalk did not have the correct security measures in place, such as
firewalls or filters that could detect the basic SQL injection vulnerability that led to
the attack. Companies need to ensure their web applications are coded in a secure
manner and that they are regularly tested for potential vulnerabilities.
TalkTalk did not have proper data storage practices in place either. Credit card
details were not tokenized with the leading six digits removed, and customer and
bank account information was not encrypted but stored in plaintext, which is among
the worst practices when you hold the personally identifiable information of a large
number of customers. Such mistakes often leave a company short of its goal of
remaining safe.
More laws will not prevent criminals from attacking websites and systems. Nor will
more laws make companies necessarily more secure, particularly if the focus in
those companies is on being compliant with laws and regulations. What is
required is a cultural change by consumers, regulators, and governments to
ensure companies take a risk-based approach to security.
Some companies fail to understand that data breaches can be expensive. A
company cannot run on its name alone; every customer today demands
security if they are to share their sensitive information with you.
Vodafone Hacked
The criminals obtained the user data from external sources; Vodafone's internal
systems were not compromised. After the incident, the company said that the
banks of the affected customers had been notified. The company also contacted the
affected customers and helped them change their account details to regain control.
Vodafone on its part was very alert and immediately came to the rescue, preventing a
huge data breach. The users were warned well in time before any big incident could
take place and all precautionary steps were taken.
Russian Hackers Spy in Germany
Germany's domestic secret service declared that it had evidence that Russia was
behind a series of cyber-attacks, including one that targeted the German parliament in
2015.
The operations cited by the BfV intelligence agency ranged from an aggressive attack
called Sofacy, or APT 28, that hit NATO members and knocked French TV station
TV5Monde off air, to a hacking campaign called Sandworm that brought down part of
Ukraine's power grid last year.
Cyberspace is a place for hybrid warfare. The campaigns the BfV monitored were
generally about obtaining information, i.e. spying; however, Russian secret services have
also shown a readiness to carry out sabotage.
The 2015 Sofacy attack on the German lower house of parliament was when Germany
itself fell victim to one of these rogue operations.
Chancellor Angela Merkel's CDU party confirmed it had been targeted in April, adding
that "we have adapted our IT infrastructure as a result".
The BfV said that the cyber-attacks carried out by Russian secret services are part of
multi-year international operations aimed at obtaining strategic information.
Air India’s Loyalty Scheme Hacked
A gang generated 20 email ids and diverted reward points earned by passengers, with
possible help from some airline employees. The months-long investigation revealed that
about 170 tickets were purchased by unfair means using driving licenses as ID, while
many of them had the same signature, said Dhananjay Kumar, a senior manager with
the national carrier. He said that as boarding passes were issued directly in these
instances and driving licenses are not considered valid proof, the likelihood of insider
involvement is strong.
Tickets worth almost Rs. 16 were sold on the basis of the stolen miles, say sources,
adding that the probe may have merely scratched the surface as almost 20 lakh
passengers are beneficiaries of AI’s flying-returns program. The loot was first noticed in
June’16 during the verification of “know your customer” documents uploaded by a
member. The passenger submitted a driving license as identity proof, which is not
legitimate, but the account was still approved.
On further investigation it was found that these suspect user IDs had hacked various
membership accounts and redeemed the miles of genuine Flying Returns members. The
details of the number of miles redeemed from each such account, as well as the tickets
issued, along with ticket numbers and names, have been retrieved.
Lessons to be learnt:
Disgruntled employees can be a great threat to any company, and for many
years Air India has been unable to keep its employees happy. As claimed, this
incident would not have been possible without insider involvement; it clearly
indicates that a company needs to take proper care of its employees. Keeping
employees happy is one thing, but making sure they abide by company
regulations is another, equally important, matter.
Moreover, regular audits are very important for the timely reporting of any unwanted
activity. The loss would not have been this large had it been detected earlier. This had
also been going on for a long time, indicating that Air India was never aware that such
a thing could happen.
Learning from the past and being alert for the future is very important. Air India and
other airlines have faced such hacks before. This time the detection happened only by
chance, during a Know Your Customer verification; had that not happened, the
damage could have been even bigger. Had Air India been more alert, this incident
could have been avoided.
OurMine’s Unique way of selling its Security Products
Whatever a company is, and whichever product it sells, it needs to market that product.
Today every company is looking for an innovative way to showcase its products, and
OurMine has outpaced them all.
OurMine, as Wired calls them, are hackers "whose black hats are covered in the
thinnest coat of white paint, or so patchwork that even they don't seem to remember
which color they're wearing."
OurMine told Wired, “We don’t need money, but we are selling security services
because there is a lot [of] people [who] want to check their security…We are not black
hat hackers, we are just a security group…we are just trying to tell people that nobody
is safe.”
The group has its own social networking accounts on which they are quite active,
showcasing their work and polling for the next hack, though not many people follow
them.
Also, many have come forward in protest against the group, describing its activities
as unethical. Why? Because they hacked your icon's social networking account?
OurMine is exploiting the database stolen in the 2012 LinkedIn data breach, which was
later sold on the dark web.
With all that said and done, why do we not also blame the victims of such attacks? The
tech heads of today, whom people look up to, have not been able to implement enough
security. That four-year-old data can be used to hack their accounts today says enough
about how seriously cyber-crime is being taken. Being famous puts you at greater risk of
being attacked, because people want to overshadow you.
Cyber-crimes are not a joke, and it is high time we pay close attention to the
matter and stay safe on the internet, which is no longer a luxury.
But not to forget, the marketing strategy developed by OurMine was undoubtedly
eye-opening.
Tata Asset Management CEO's email account hacked
On June 14, the finance head of the firm received a mail from the 'CEO', asking him to
transfer money to an account number. They had supposedly tried to reach the CEO to
confirm, but he wasn't reachable as he was in the US, the company said.
Due to the pressure created by the hacker through subsequent mails, the finance
official transferred Rs 7 lakh into the account without being aware that the sender was
a hacker. The fraud came to the fore when the firm sent the bank's settlement report of
the wired money to the CEO's email ID and he claimed ignorance of having made any
such request or of receiving any money.
Then on June 15, another email reached the finance head from the same ID, this time
demanding Rs 20 lakh be wired to another account with ICICI Bank, Allahabad. It was
clear that the company had been conned.
The fact that the mails first came in when the CEO was abroad indicates that the
hacker was aware that he could be out of reach and the firm might wire the money if
there was any distress communication.
Cyber-Security Solutions
Cyberspace is expanding every day, and the dark web is growing along with it, posing an
ever greater threat to its development. To protect it, various technologies have been
developed to detect threats and safeguard systems against evolving cyber-crime. The
technologies are described in the sections that follow.
Antivirus & Mobile App Security
Types of Malware:
Virus –
Software that can replicate itself and spread to other computers, or that is
programmed to damage a computer by deleting files, reformatting the hard
disk, or using up computer memory.
Adware –
Software that is financially supported (or financially supports another program)
by displaying ads when you're connected to the Internet.
Spyware –
Spyware is software that surreptitiously gathers information and transmits it to
interested parties. The information gathered typically includes the websites
visited, browser and system information, and your computer's IP address.
Browser hijacking software –
Advertising software that modifies the browser settings (e.g., default home page,
search bars, toolbars), creates desktop shortcuts, and displays intermittent
advertising pop-ups comes under this. Once a browser is hijacked, the software
may also redirect links to other sites that advertise, or sites that collect Web
usage information.
Trojan Horses-
A Trojan horse or Trojan is a type of malware that is often disguised as legitimate
software. Trojans can be employed by cyber-thieves and hackers trying to gain
access to users' systems. Users are typically tricked by some form of social
engineering into loading and executing Trojans on their systems. Once
activated, Trojans can enable cyber-criminals to spy on you, steal your sensitive
data, and gain backdoor access to your system. These actions can include
deleting data, blocking data, modifying data, copying data, disrupting the
performance of computers or computer networks.
Rootkits-
Rootkits are designed to conceal certain objects or activities in your
system. Often their main purpose is to prevent malicious programs being
detected – in order to extend the period in which programs can run on an
infected computer.
Backdoors-
A backdoor Trojan gives malicious users remote control over the infected
computer. They enable the author to do anything they wish on the infected
computer – including sending, receiving, launching, and deleting files, displaying
data, and rebooting the computer. Backdoor Trojans are often used to unite a
group of victim computers to form a botnet or zombie network that can be used
for criminal purposes.
Ransomware-
This type of Trojan can modify data on your computer – so that your computer
doesn’t run correctly or you can no longer use specific data. The criminal will
only restore your computer’s performance or unblock your data, after you have
paid them the ransom money that they demand.
Once malware makes its way into a system, it can begin to damage the system's boot
sector, data files, installed software and even the system BIOS. This further corrupts
your files, and your system might shut down as well. The main problem is that these
malicious programs are designed to spread within a system.
There is no end to the channels through which malware can attack your computer;
once inside your system, it spreads automatically and can disrupt internet traffic as well.
Some malware even gives attackers access to your computer. Malware such as Trojan
horses does not replicate itself, but it can damage a system badly, and it generally
arrives in the form of screensavers or free games. Fortunately, there are ways to protect
your system from these malware attacks; you just need to be a little vigilant to avoid
them.
What is Antivirus?
Anti-virus software is a program or set of programs that are designed to prevent, search
for, detect, and remove software viruses, and other malicious software like worms,
trojans, adware, and more.
These tools are critical for users to have installed and kept up to date, because a
computer without anti-virus software can be infected within minutes of connecting to the
internet. The bombardment is constant, and anti-virus companies must update their
detection tools continually to deal with the more than 60,000 new pieces of malware
created daily.
There are several different companies that build and offer anti-virus software and what
each offers can vary but all perform some basic functions:
Scan specific files or directories for any malware or known malicious patterns
Allow you to schedule scans to automatically run for you
Allow you to initiate a scan of a specific file or of your computer, or of a CD or
flash drive at any time.
Remove any malicious code detected – sometimes you will be notified of an
infection and asked if you want to clean the file; other programs will do this
automatically behind the scenes.
Show you the ‘health’ of your computer
Always be sure you have the best, up-to-date security software installed to protect your
computers, laptops, tablets and smartphones.
Features of Antivirus Software
Background Scanning
Full System Scans
Virus Definitions
Background Scanning
Antivirus software scans, in the background, every file that you open; this is also termed
on-access scanning. It provides real-time protection, safeguarding the computer from
threats and other malicious attacks.
Virus Definitions
Antivirus software depends on virus definitions to identify malware, which is why it keeps
updating itself with new virus definitions. Malware definitions contain signatures for new
viruses and other malware classified as being in the wild. If, while scanning an
application or file, the antivirus software finds the file infected by malware similar to an
entry in the malware definitions, it stops the file from executing and pushes it into
quarantine. The malware is then processed according to the type of antivirus software.
It is essential for antivirus companies to update their definitions with the latest
malware to ensure PCs stay protected even against the newest forms of malicious threat.
Methods used to identify Malware
Signature-based detection
Heuristic-based detection
Behavioral-based detection
Sandbox detection
Data mining techniques
Signature-based detection
This is the most common approach. Traditional antivirus software checks all .EXE files and
validates them against a known list of viruses and other types of malware, and also
checks whether unknown executable files show any misbehaviour as a sign of an unknown
virus. Files, programs and applications are basically scanned while they are in use; once
an executable file is downloaded, it is scanned for malware instantly. Antivirus software
can also be used without background on-access scanning, but it is always advisable to
keep on-access scanning enabled, because it is hard to remove malware once it has
infected your system.
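To make the idea concrete, here is a minimal sketch (not how any particular antivirus engine is implemented) of signature-based scanning: file contents are hashed and the digests are compared against a database of known-bad signatures. The digest in the database below is only a placeholder.

import hashlib
from pathlib import Path

# Hypothetical signature database. Real products ship millions of signatures
# (and richer patterns than whole-file hashes) and update them continuously.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder digest standing in for a known-malware signature
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root: str) -> list:
    """Flag files whose digest matches a known signature."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            hits.append(path)
    return hits

if __name__ == "__main__":
    for infected in scan_directory("."):
        print("Signature match (quarantine candidate):", infected)

Heuristic and behavioural methods, described next, exist precisely because exact matching like this misses malware that has been modified or has never been seen before.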
Heuristic-based detection
This type of detection is most commonly used in combination with signature-based
detection. Heuristic technology is deployed in most antivirus programs. It helps the
antivirus software detect new malware, or a variant or altered version of known malware,
even in the absence of the latest virus definitions.
Behavioral-based detection
This type of detection is used in intrusion detection mechanisms. It concentrates on
detecting the characteristics of the malware during execution; this mechanism detects
malware only while the malware is performing malicious actions.
Sandbox detection
It functions much like the behavioural-based detection method: it executes applications
in a virtual environment to track what kinds of actions they perform. By examining the
actions that the program logs, the antivirus software can determine whether the program
is malicious or not.
Data mining techniques
This is one of the latest trends in malware detection. Given a set of program features,
data mining and machine learning techniques help determine whether the program is
malicious or not.
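As a rough sketch of the idea (the feature set and training rows below are invented purely for illustration, and a real detector would use far more samples and features), a classifier can be trained on numeric program features and then asked to label an unknown sample:

# Requires scikit-learn. Features and data are illustrative only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [imported API count, max section entropy (0-8),
#            writes autorun registry key (0/1), opens network sockets (0/1)]
X_train = [
    [12, 4.1, 0, 0],   # benign samples
    [18, 3.5, 0, 1],
    [45, 7.8, 1, 1],   # malicious samples
    [38, 7.5, 1, 0],
]
y_train = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

unknown_program = [[33, 7.6, 1, 1]]  # features extracted from a new sample
label = model.predict(unknown_program)[0]
print("malicious" if label == 1 else "benign")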
Features of Antiviruses:
PROTECTION FROM VIRUSES
Malware Blocker
An industry first, the Malware Blocker feature blocks threats on Google Play before
they can be installed and damage your device or data
Unlimited Updates
Automatically updates virus protection files
Cloud Scanner
Features unlimited cloud scanning connections to ensure continuous protection
Malware Cleaner
Downloads a dedicated removal tool in accordance with the type of malware threat
detected. Removes and restores the smartphone back to its normal settings
Privacy Scanner
Detects spyware by scanning all apps with Trend Micro Mobile App Reputation to
identify ones that collect and potentially steal private information
SAFE SURFING
Parental Controls
Filters inappropriate websites with age-based restrictions
Remote Locate
Helps you find your device on a Google map using GPS, Cell Towers, or Wi-Fi
Remote Scream
Enables you to trigger an alarm on your device - even if it is on silent
Remote Lock
Enables you to remotely lock your device (Accessing the phone again will require
that you insert your Trend Micro password or a unique unlock code)
Remote Wipe
Allows you to perform a factory reset of the device from the web portal to wipe all
your personal data
Last Known Location
Automatically locates your device when the following actions take place: SIM
removal, SIM replacement, Phone Restart
ONLINE STORAGE
Scan Facebook
Protects your privacy on Facebook by checking settings and recommending
enhancements
SYSTEM OPTIMIZATION
Just-a-Phone
Turns off power draining features not required for phone and text message use,
including 3G/4G, WiFi, Bluetooth, and running apps
Auto Just-a-Phone
Automatically turns on Just-a-Phone feature guided by a set schedule or a
percentage of battery power remaining
Uninstall Protection
Prevents unauthorized removal of the app (Uninstalling Mobile Security will require
that you insert your Trend Micro password)
No Advertising
Does not allow third-party advertising to be displayed in the app
Authentication
Access permissions, however, work only if you are able to verify the identity of the user
who is attempting to access the resources. That is where authentication comes in. In this
section, we will look at the role played by authentication in a network security
plan, popular types of authentication, how authentication works, and the most
commonly used authentication methods and protocols.
For example, when a user who belongs to a Windows domain logs onto the network, his
or her identity is verified via one of several authentication types. Then the user is issued
an access token, which contains information about the security groups to which the
user belongs. When the user tries to access a network resource (open a file, print to a
printer, etc.), the access control list (ACL) associated with that resource is checked
against the access token. If the ACL shows that members of the Managers group have
permission to access the resource, and the user’s access token shows that he or she is a
member of the Managers group, that user will be granted access (unless the user’s
account, or a group to which the user belongs, has been explicitly denied access to
the resource).
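The token-versus-ACL check described above can be pictured with a short sketch (the group names and the deny-overrides-allow rule follow the description in the text; everything else is invented for illustration):

from dataclasses import dataclass, field

@dataclass
class AccessToken:
    """Issued at logon; lists the security groups the user belongs to."""
    user: str
    groups: set = field(default_factory=set)

@dataclass
class ACL:
    """Access control list attached to a resource."""
    allowed_groups: set = field(default_factory=set)
    denied_groups: set = field(default_factory=set)

def is_access_granted(token: AccessToken, acl: ACL) -> bool:
    # An explicit deny on any of the user's groups overrides any allow entry.
    if token.groups & acl.denied_groups:
        return False
    return bool(token.groups & acl.allowed_groups)

token = AccessToken(user="alice", groups={"Managers", "Domain Users"})
printer_acl = ACL(allowed_groups={"Managers"}, denied_groups={"Contractors"})
print(is_access_granted(token, printer_acl))  # True: the token carries "Managers"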
Logon authentication
Most network operating systems require that a user be authenticated in order to log
onto the network. This can be done by entering a password, inserting a smart card and
entering the associated PIN, providing a fingerprint, voice pattern sample, or retinal
scan, or using some other means to prove to the system that you are who you claim to
be.
IPSec authentication
IP Security (IPSec) provides a means for users to encrypt and/or sign messages that are
sent across the network to guarantee confidentiality, integrity, and authenticity. IPSec
transmissions can use a variety of authentication methods, including the Kerberos
protocol, public key certificates issued by a trusted certificate authority (CA), or a
simple pre-shared secret key (a string of characters known to both the sender and the
recipient).
An important consideration is that both the sending and receiving computers must be
configured to use a common authentication method or they will not be able to
engage in secured communications.
IPSec configuration
If IPSec policies have been configured to require that communications be secured, the
sending and receiving computers will not be able to communicate at all if they do not
support a common authentication method.
Remote authentication
There are a number of authentication methods that can be used to confirm the identity
of users who connect to the network via a remote connection such as dial-up or VPN.
These include:
Remote users can be authenticated via a Remote Authentication Dial-In User Service
(RADIUS) or the Internet Authentication Service (IAS). Each of these will be discussed in
more detail in the section titled Authentication Methods and Protocols.
There are also a number of single sign-on (SSO) products on the market that allow for
single sign-on in a mixed (hybrid) environment that incorporates, for example, Microsoft
Windows servers,
Novell NetWare, and UNIX.
Authentication types
There are several physical means by which you can provide your authentication
credentials to the system. The most common—but not the most secure—is password
authentication. Today’s competitive business environment demands options that offer
more protection when network resources include highly sensitive data. Smart cards and
biometric authentication types provide this extra protection.
Password authentication
Most of us are familiar with password authentication. To log onto a computer or
network, you enter a user account name and the password assigned to that account.
This password is checked against a database that contains all authorized users and
their passwords. In a Windows 2000 network, for example, this information is contained
in Active Directory.
To preserve the security of the network, passwords must be “strong,” that is, they should
contain a combination of alpha and numeric characters and symbols, they should not
be words that are found in a dictionary, and they should be relatively long (eight
characters or more). In short, they should not be easily guessed.
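As a minimal sketch of how a system can check a password without ever storing it in clear text (real directory services such as Active Directory use their own storage formats, so this only illustrates the general idea of salted, iterated hashing):

import hashlib
import hmac
import secrets

def hash_password(password, salt=None):
    """Derive a salted hash; only the salt and hash are stored, never the password."""
    salt = salt if salt is not None else secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time compare

salt, stored = hash_password("C0rrect-H0rse-Battery!")          # at account creation
print(verify_password("C0rrect-H0rse-Battery!", salt, stored))  # True
print(verify_password("password123", salt, stored))             # False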
Smart card authentication
Smart cards use cryptography-based authentication and provide stronger security than
a password because in order to gain access, the user must be in physical possession of
the card and must know the PIN.
Biometric authentication
An even more secure type of authentication than smart cards, biometric
authentication involves the use of biological statistics that show that the probability of
two people having identical biological characteristics such as fingerprints is
infinitesimally small; thus, these biological traits can be used to positively identify a
person.
In addition to fingerprints, voice, retinal, and iris patterns are virtually unique to each
individual and can be used for authentication purposes. This method of proving one’s
identity is very difficult to falsify, although it requires expensive equipment to input the
fingerprint, voice sample, or eye scan. Another advantage over smart cards is that the
user does not have to remember to carry a device; his or her biological credentials are
never left at home.
When the user wants to log on, he or she provides the credentials and the system
checks the database for the original entry and makes the comparison. If the credentials
provided by the user match those in the database, access is granted.
Authentication methods and protocols
Commonly used authentication methods and protocols include:
Kerberos
SSL
Microsoft NTLM
PAP and SPAP
CHAP and MS-CHAP
EAP
RADIUS
Certificate services
These are by no means the only authentication methods in existence, but they are
some of the most common.
Kerberos
Kerberos was developed at MIT to provide secure authentication for UNIX networks. It
has become an Internet standard and is supported by Microsoft’s latest network
operating system, Windows 2000. Kerberos uses temporary certificates called tickets,
which contain the credentials that identify the user to the servers on the network. In the
current version of Kerberos, v5, the data contained in the tickets is encrypted, including
the user’s password.
A Key Distribution Center (KDC) is a service that runs on a network server and issues a
ticket called a Ticket Granting Ticket (TGT) to clients, which they use to authenticate to
the Ticket Granting Service (TGS). The client uses this TGT to access the TGS (which can
run on the same computer as the KDC). The TGS then issues a service or session ticket,
which is used to access a network service or resource.
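The ticket flow can be illustrated with a toy walk-through. This is only a conceptual sketch with invented keys, not Kerberos v5: real tickets are encrypted structures carrying session keys, lifetimes and realm information.

import hashlib
import hmac
import json
import time

KDC_KEY = b"kdc-long-term-key"            # invented keys for the sketch
SERVICE_KEY = b"fileserver-long-term-key"

def seal(ticket, key):
    """Stand-in for ticket encryption: serialize and authenticate the ticket."""
    blob = json.dumps(ticket, sort_keys=True).encode()
    return blob, hmac.new(key, blob, hashlib.sha256).digest()

def open_ticket(blob, tag, key):
    """Reject the ticket unless it was sealed with the expected key."""
    if not hmac.compare_digest(tag, hmac.new(key, blob, hashlib.sha256).digest()):
        raise ValueError("ticket rejected")
    return json.loads(blob)

# 1. The KDC issues a Ticket Granting Ticket (TGT) to the authenticated client.
tgt_blob, tgt_tag = seal({"user": "alice", "expires": time.time() + 600}, KDC_KEY)

# 2. The client presents the TGT to the TGS, which issues a service ticket.
tgt = open_ticket(tgt_blob, tgt_tag, KDC_KEY)
svc_blob, svc_tag = seal({"user": tgt["user"], "service": "fileserver"}, SERVICE_KEY)

# 3. The client presents the service ticket to the file server, which validates
#    it with its own long-term key without contacting the KDC again.
print(open_ticket(svc_blob, svc_tag, SERVICE_KEY))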
SSL
SSL operates at the application layer of the DoD networking model. This means
applications must be written to use it, unlike other security protocols (such as IPSec) that
operate at lower layers. The Transport Layer Security (TLS) Internet standard is based on
SSL.
SSL authentication is based on digital certificates that allow Web servers and clients to
verify each other’s identities before they establish a connection. (This is called mutual
authentication.) Thus, two types of certificates are used: client certificates and server
certificates.
Microsoft NTLM
Native mode
If you convert your Windows 2000 domain’s status to native mode, NTLM support will be
disabled.
NTLM uses a method called challenge/response, using the credentials that were
provided when the user logged on each time that user tries to access a resource. This
means the user’s credentials do not get transferred across the network when resources
are accessed, which increases security. The client and server must reside in the same
domain or there must be a trust relationship established between their domains in order
for authentication to succeed.
PAP
PAP is used for authenticating a user over a remote access connection. An important
characteristic of PAP is that it sends user passwords across the network to the
authenticating server in plain text. This poses a significant security risk, as an
unauthorized user could capture the data packets using a protocol analyzer (sniffer)
and obtain the password.
The advantage of PAP is that it is compatible with many server types running different
operating systems. PAP should be used only when necessary for compatibility purposes.
SPAP
SPAP is an improvement over PAP in terms of the security level, as it uses an encryption
method (used by Shiva remote access servers, thus the name).
The client sends the user name along with the encrypted password, and the remote
server decrypts the password. If the username and password match the information in
the server’s database, the remote server sends an Acknowledgment (ACK) message
and allows the connection. If not, a Negative Acknowledgment (NAK) is sent, and the
connection is refused.
CHAP and MS-CHAP
CHAP is another authentication protocol used for remote access security. It is an
Internet standard that uses MD5, a one-way hashing algorithm, which performs a hash
operation on the password and transmits the hash result—instead of the password
itself—over the network.
This has obvious security advantages over PAP/SPAP, as the password does not go
across the network and cannot be captured.
The hash algorithm ensures that the operation cannot be reverse engineered to obtain
the original password from the hash results. CHAP is, however, vulnerable to remote
server impersonation.
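A toy sketch of the challenge/response idea follows. It mirrors the MD5-over-(identifier, secret, challenge) shape used by CHAP, but it is an illustration rather than a protocol implementation, and the shared secret is invented.

import hashlib
import os

SHARED_SECRET = b"example-shared-secret"   # known to client and server, never sent

def chap_response(identifier, secret, challenge):
    # Hash the identifier, the shared secret and the random challenge together.
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Server side: issue a fresh random challenge for this authentication attempt.
identifier = 1
challenge = os.urandom(16)

# Client side: prove knowledge of the secret without transmitting it.
response = chap_response(identifier, SHARED_SECRET, challenge)

# Server side: recompute the expected response and compare.
expected = chap_response(identifier, SHARED_SECRET, challenge)
print("authenticated" if response == expected else "rejected")

Because only the hash of the challenge crosses the network, a sniffer never sees the password itself, which is the advantage over PAP described above.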
EAP
EAP is a means of authenticating a Point-to-Point Protocol (PPP) connection that allows
the communicating computers to negotiate a specific authentication scheme (called
an EAP type).
A key characteristic of EAP is its extensibility, indicated by its name. Plug-in modules can
be added at both client and server sides to support new EAP types.
EAP can be used with TLS (called EAP-TLS) to provide mutual authentication via the
exchange of user and machine certificates.
EAP can also be used with RADIUS (see below).
RADIUS
RADIUS is often used by Internet service providers (ISPs) to authenticate and authorize
dial-up or VPN users. The standards for RADIUS are defined in RFCs 2138 and 2139. A
RADIUS server receives user credentials and connection information from dial-up clients
and authenticates them to the network.
RADIUS can also perform accounting services, and EAP messages can be passed to a
RADIUS server for authentication. EAP only needs to be installed on the RADIUS server;
it’s not required on the client machine.
Windows 2000 Server includes a RADIUS server service called Internet Authentication
Services (IAS), which implements the RADIUS standards and allows the use of PAP,
CHAP, or MS-CHAP, as well as EAP.
Certificate services
Digital certificates consist of data that is used for authentication and securing of
communications, especially on unsecured networks (for example, the Internet).
Certificates associate a public key to a user or other entity (a computer or service) that
has the corresponding private key.
Certificates are issued by certification authorities (CAs), which are trusted entities that
“vouch for” the identity of the user or computer. The CA digitally signs the certificates it
issues, using its private key. The certificates are only valid for a specified time period;
when a certificate expires, a new one must be issued. The issuing authority can also
revoke certificates.
Certificate services are part of a network’s Public Key Infrastructure (PKI). Standards for
the most commonly used certificates are based on the X.509 specifications.
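A small sketch of certificate-based server authentication from the client's side, using Python's standard ssl module (the host name is just an example): the default context loads the system's trusted CAs, verifies the server's certificate chain and checks the host name before the connection is treated as established.

import socket
import ssl

def fetch_validated_certificate(host, port=443):
    """Connect over TLS; the default context verifies the certificate chain
    against trusted CAs and checks that the certificate matches the host."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()  # details of the validated certificate

cert = fetch_validated_certificate("example.org")   # example host
print(cert.get("subject"))
print("valid until:", cert.get("notAfter"))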
Biometrics
Biometric identifiers fall broadly into two categories: physiological characteristics, which
relate to the shape and composition of the body (for example fingerprint, face, iris and
retina), and behavioral characteristics, which relate to the pattern of a person's behavior,
such as typing rhythm, gait, gestures and voice. Certain biometric identifiers, such as
monitoring keystrokes or gait in real time, can be used to provide continuous
authentication instead of a single one-off authentication check.
Characteristics of Biometrics
A number of biometric characteristics may be captured in the first phase of processing.
However, automated capture and automated comparison with previously stored
data require that the biometric characteristics satisfy the following requirements:
Universal
Every person must possess the characteristic/attribute. The attribute must be one that is
universal and seldom lost to accident or disease.
Invariance of properties
They should be constant over a long period of time. The attribute should not be subject
to significant differences based on age, or on episodic or chronic disease.
Measurability
The properties should be suitable for capture without waiting time, and it must be easy to
gather the attribute data passively.
Singularity
Each expression of the attribute must be unique to the individual. The characteristics
should have sufficient unique properties to distinguish one person from any other.
Height, weight, hair and eye color are all attributes that are unique assuming a
particularly precise measure, but do not offer enough points of differentiation to be
useful for more than categorizing.
Acceptance
The capturing should be possible in a way acceptable to a large percentage of the
population. Excluded are particularly invasive technologies, i.e. technologies which
require a part of the human body to be taken or which (apparently) impair the human
body.
Reducibility
The captured data should be capable of being reduced to a file which is easy to
handle.
Privacy
The process should not violate the privacy of the person.
Comparable
Should be able to reduce the attribute to a state that makes it digitally comparable to
others. The less probabilistic the matching involved, the more authoritative the
identification.
Inimitable
The attribute must be irreproducible by other means. The less reproducible the attribute,
the more likely it will be authoritative.
Among the various biometric technologies being considered, the attributes which
satisfy the above requirements are fingerprint, facial features, hand geometry, voice,
iris, retina, vein patterns, palm print, DNA, keystroke dynamics, ear shape, odor,
signature etc.
Biometric System Modules
A sensor collects the raw biometric data and converts the information to a
digital format. The quality of the data captured typically depends on the
intuitiveness of the interface and the characteristics of the sensor itself.
A matching algorithm compares the new biometric template (the query) to one
or more templates kept in data storage and creates a “match score.” A large
match score indicates a greater similarity between the query and the stored
template. In some cases, the goal is to measure the dissimilarity in which case
the score is referred to as a “distance score.”
Lastly, a decision process uses the results from the matching component to
make a system-level decision. This can either be automated or human-assisted.
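A minimal sketch of the matching and decision steps, assuming templates are plain feature vectors and using an illustrative threshold; real systems use far richer features and tuned decision logic.

```python
import math

def distance_score(query, template):
    """Euclidean distance between two feature vectors: smaller means more similar."""
    return math.sqrt(sum((q - t) ** 2 for q, t in zip(query, template)))

def match_score(query, template):
    """Convert the distance into a similarity score in (0, 1]: larger means more similar."""
    return 1.0 / (1.0 + distance_score(query, template))

THRESHOLD = 0.8   # tuned per system to balance false accepts against false rejects

stored_template = [0.12, 0.80, 0.43, 0.91]
query = [0.10, 0.82, 0.45, 0.88]
score = match_score(query, stored_template)
print(f"match score = {score:.3f}", "-> accept" if score >= THRESHOLD else "-> reject")
```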
Multimodal Biometric Systems
Multimodal biometric systems are those that utilize more than one physiological or behavioral characteristic for enrollment, verification, or identification. In applications such as border entry/exit, access control, civil identification, and network security, multimodal biometric systems are looked to as a means of improving reliability and accuracy. The outputs of the individual modalities can be fused at several levels (a short sketch of two of these fusion rules follows the list):
Abstract level
The output from each module is only a set of possible labels without any
confidence value associated with the labels; in this case a simple majority rule
may be used to reach a more reliable decision.
Rank level
The output from each module is a set of possible labels ranked by decreasing
confidence values, but the confidence values themselves are not specified.
Measurement level
The output from each module is a set of possible labels with associated
confidence values; in this case, more accurate decisions can be made by
integrating different confidence values.
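A small sketch of two of these fusion rules, majority voting at the abstract level and confidence summation at the measurement level, using made-up labels and scores.

```python
from collections import Counter

# Abstract level: each modality outputs only a label; take the majority vote.
def abstract_fusion(labels):
    return Counter(labels).most_common(1)[0][0]

# Measurement level: each modality outputs {label: confidence}; sum the
# confidences per label and pick the best-supported one.
def measurement_fusion(score_sets):
    totals = Counter()
    for scores in score_sets:
        totals.update(scores)
    return totals.most_common(1)[0][0]

print(abstract_fusion(["alice", "alice", "bob"]))              # alice
print(measurement_fusion([{"alice": 0.6, "bob": 0.4},
                          {"alice": 0.3, "bob": 0.7},
                          {"alice": 0.9, "bob": 0.1}]))        # alice
```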
Biometric Authentication Systems
Looking at biometric systems in a more general way will reveal certain things all
biometric-based authentication systems have in common. In general such systems work
in two modes:
• Enrollment mode
In this mode biometric user data is acquired. This is mostly done with some type of
biometric reader. Afterwards the gathered information is stored in a database where it
is labeled with a user identity (e.g. name, identification number) to facilitate
authentication.
• Authentication mode
Again, biometric user data is acquired first and used by the system either to verify the user's claimed identity or to identify who the user is. While identification involves comparing the user's biometric data against all users in the database, verification compares the biometric data only against those entries in the database which correspond to the user's claimed identity (a small sketch of both modes follows).
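A toy sketch of the two modes, assuming a simple in-memory template database and a basic similarity score; it only illustrates the 1:1 (verification) versus 1:N (identification) distinction.

```python
import math

def match_score(query, template):
    """Similarity in (0, 1]; same idea as the matching sketch above."""
    d = math.sqrt(sum((q - t) ** 2 for q, t in zip(query, template)))
    return 1.0 / (1.0 + d)

# Toy enrollment database: user identity -> stored biometric template.
database = {
    "alice": [0.12, 0.80, 0.43, 0.91],
    "bob":   [0.55, 0.10, 0.77, 0.30],
}

def verify(claimed_identity, query, threshold=0.8):
    """1:1 comparison against only the claimed identity's template."""
    template = database.get(claimed_identity)
    return template is not None and match_score(query, template) >= threshold

def identify(query, threshold=0.8):
    """1:N comparison against every enrolled template; return the best match."""
    best_user, best_score = None, 0.0
    for user, template in database.items():
        score = match_score(query, template)
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= threshold else None

sample = [0.10, 0.82, 0.45, 0.88]
print(verify("alice", sample))   # True  (claim checked against one template)
print(identify(sample))          # 'alice' (search across all templates)
```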
Biometric verification becoming common
Accuracy of biometrics
The accuracy and cost of readers has until recently been a limiting factor in the
adoption of biometric authentication solutions but the presence of high quality
cameras, microphones, and fingerprint readers in many of today’s mobile devices
means biometrics is likely to become a considerably more common method of
authenticating users, particularly as the new FIDO specification means that two-factor
authentication using biometrics is finally becoming cost effective and in a position to
be rolled out to the consumer market.
The quality of biometric readers is improving all the time, but they can still produce false
negatives and false positives. One problem with fingerprints is that people inadvertently
leave their fingerprints on many surfaces they touch, and it’s fairly easy to copy them
and create a replica in silicone. People also leave DNA everywhere they go and
someone’s voice is also easily captured. Dynamic biometrics like gestures and facial
expressions can change, but they can be captured by HD cameras and copied. Also,
whatever biometric is being measured, if the measurement data is exposed at any
point during the authentication process, there is always the possibility it can be
intercepted. This is a big problem, as people can’t change their physical attributes as
they can a password. While limitations in biometric authentication schemes are real,
biometrics is a great improvement over passwords as a means of authenticating an
individual.
Cryptography
Modern cryptography concerns itself with the following four objectives:
1) Confidentiality (the information cannot be understood by anyone for whom it was not intended)
2) Integrity (the information cannot be altered in storage or transit between sender and intended receiver without the alteration being detected)
3) Non-repudiation (the creator/sender of the information cannot deny at a later stage their intentions in the creation or transmission of the information)
4) Authentication (the sender and receiver can confirm each other's identity and the origin/destination of the information)
Procedures and protocols that meet some or all of the above criteria are known as
cryptosystems. Cryptosystems are often thought to refer only to mathematical
procedures and computer programs; however, they also include the regulation of
human behavior, such as choosing hard-to-guess passwords, logging off unused
systems, and not discussing sensitive procedures with outsiders.
Types of Cryptosystems
Fundamentally, there are two types of cryptosystems based on the manner in which encryption-decryption is carried out in the system − symmetric key encryption and asymmetric key encryption.
The main difference between these cryptosystems is the relationship between the
encryption and the decryption key. Logically, in any cryptosystem, both the keys are
closely associated. It is practically impossible to decrypt the cipher text with the key
that is unrelated to the encryption key.
The encryption process where same keys are used for encrypting and decrypting the
information is known as Symmetric Key Encryption.
Prior to 1970, all cryptosystems employed symmetric key encryption. Even today, its
relevance is very high and it is being used extensively in many cryptosystems. It is very
unlikely that this encryption will fade away, as it has certain advantages over
asymmetric key encryption.
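As a minimal illustration of symmetric encryption, the sketch below uses the Fernet recipe from the third-party `cryptography` package (AES-128 in CBC mode with an HMAC, per the Fernet specification); the message text is arbitrary.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # must be shared securely with the receiver
cipher = Fernet(key)

token = cipher.encrypt(b"order #1234: ship 10 units")   # sender side
plaintext = Fernet(key).decrypt(token)                   # receiver side, same key
assert plaintext == b"order #1234: ship 10 units"
```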
The salient features of a cryptosystem based on symmetric key encryption are −
• Persons using symmetric key encryption must share a common key prior to the exchange of information.
• Keys are recommended to be changed regularly to prevent any attack on the system.
• A robust mechanism needs to exist to exchange the key between the communicating parties. As keys are required to be changed regularly, this mechanism becomes expensive and cumbersome.
• In a group of n people, to enable two-party communication between any two persons, the number of keys required for the group is n × (n – 1)/2 (a quick check of this formula follows the list).
• The length of the key (number of bits) in this encryption is smaller, and hence the process of encryption-decryption is faster than in asymmetric key encryption.
• The processing power required to run a symmetric algorithm is lower.
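A quick check of the key-count formula for a few group sizes:

```python
# Pairwise symmetric keys needed for a group of n people: n * (n - 1) / 2.
def keys_needed(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 100, 1000):
    print(n, "people ->", keys_needed(n), "keys")
# 5 -> 10, 10 -> 45, 100 -> 4950, 1000 -> 499500
```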
KEY ESTABLISHMENT
Before any communication, both the sender and the receiver need to agree on a
secret symmetric key. It requires a secure key establishment mechanism in place.
TRUST ISSUE
Since the sender and the receiver use the same symmetric key, there is an implicit
requirement that the sender and the receiver ‘trust’ each other. For example, it may
happen that the receiver has lost the key to an attacker and the sender is not informed.
These two challenges are highly restraining for modern day communication. Today,
people need to exchange information with non-familiar and non-trusted parties. For
example, a communication between online seller and customer. These limitations of
symmetric key encryption gave rise to asymmetric key encryption schemes.
Asymmetric key encryption was invented in the 20th century to overcome the necessity of a pre-shared secret key between communicating parties. The salient features of this encryption scheme are as follows −
Every user in this system needs to have a pair of dissimilar keys, private key and public
key. These keys are mathematically related − when one key is used for encryption, the
other can decrypt the cipher text back to the original plaintext.
It requires the public key to be placed in a public repository and the private key to be kept as a well-guarded secret. Hence, this scheme of encryption is also called Public Key Encryption.
Though public and private keys of the user are related, it is computationally not feasible
to find one from another. This is a strength of this scheme.
When Host1 needs to send data to Host2, it obtains the public key of Host2 from the repository, encrypts the data, and transmits it.
Length of Keys (number of bits) in this encryption is large and hence, the process of
encryption-decryption is slower than symmetric key encryption.
Symmetric cryptosystems are a natural concept. In contrast, public-key cryptosystems
are quite difficult to comprehend.
You may wonder how the encryption key and the decryption key can be ‘related’, and yet it is impossible to determine the decryption key from the encryption key. The
answer lies in the mathematical concepts. It is possible to design a cryptosystem whose
keys have this property. The concept of public-key cryptography is relatively new. There
are fewer public-key algorithms known than symmetric algorithms.
Assurance of the authenticity of public keys is usually accomplished through a Public Key Infrastructure (PKI) consisting of a trusted
third party. The third party securely manages and attests to the authenticity of public
keys. When the third party is requested to provide the public key for any
communicating person X, they are trusted to provide the correct public key.
The third party satisfies itself about user identity by the process of attestation,
notarization, or some other process − that X is the one and only, or globally unique, X.
The most common method of making the verified public keys available is to embed
them in a certificate which is digitally signed by the trusted third party.
Due to the advantages and disadvantage of both the systems, symmetric key and
public-key cryptosystems are often used together in the practical information security
systems.
Now, we get to the basic types of cryptography. While reading about these types of
cryptography, it may be helpful to think of a key as a key to a door.
One Time Pad
A one time pad is considered the only perfect encryption in the world. The sender and
receiver must each have a copy of the same pad (a bunch of completely random
numbers), which must be transmitted over a secure line. The pad is used as a symmetric
key; however, once the pad is used, it is destroyed. This makes it perfect for extremely
high security situations (for example, national secrets), but virtually unusable for
everyday use (such as email).
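The mechanics are simple enough to sketch: XOR the message with a pad of truly random bytes of the same length. The hard part in practice is distributing the pad securely and using it only once.

```python
import os

def otp_encrypt(plaintext: bytes):
    """XOR the message with a random pad of the same length.
    The pad must be shared securely, used once, then destroyed."""
    pad = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

ct, pad = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ct, pad) == b"attack at dawn"
```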
Steganography
Steganography is actually the science of hiding information from people who would
snoop on you. The difference between this and encryption is that the would-be
snoopers may not be able to tell there's any hidden information in the first place. As an
example, picture files typically have a lot of unused space in them. This space could be
used to send hidden messages. If you do research on encryption, you may see the term
steganography used on occasion. It is not, however, true encryption (though it can still
be quite effective), and as such, we only mention it here for completeness.
Data Breach
A data breach is an incident where information is stolen or taken from a system without
the knowledge or authorization of the system’s owner. Victims of data breaches are
usually large companies or organizations, and the data stolen may typically be
sensitive, proprietary or confidential in nature (such as credit card numbers, customer
data, trade secrets or matters of national security). Damage created by such incidents
often presents itself as loss to the target company’s reputation with their customer, due
to a perceived ‘betrayal of trust’. The damage may also involve the company’s
finances as well as that of their customers’ should financial records be part of the
information stolen.
Background
Data breaches may be the result of cybercriminal activity (targeted attacks) or of simple accident and human error (a misplaced business laptop or smartphone).
Research. The cybercriminal, having picked his target, looks for weaknesses that he
can exploit: the target’s employees, its systems, or its networks. This entails long hours
of research on the cybercriminal’s part, and may involve stalking employees’ social
networking profiles and finding out what sort of infrastructure the company has.
Attack. Having scoped out his target’s weaknesses, the cybercriminal makes initial contact through either a network-based attack or a social attack.
In a network attack, the cybercriminal uses the weaknesses in the target’s
infrastructure to get into its network. These weaknesses may include (but are not
limited to) SQL injection, vulnerability exploitation, and/or session hijacking.
In a social attack, the cybercriminal uses social engineering in order to infiltrate the
target’s network. This may involve a maliciously-crafted email to one of the
employees, tailor-made to catch that specific employee’s attention. The mail could
be a phishing mail, where the reader is fooled into supplying personal information to
the sender, or one that comes with attached malware set to execute once
accessed.
Either attack, if successful, allows the cybercriminal to:
Exfiltrate. Once inside the network, the cybercriminal is free to extract the data he
needs from the company’s infrastructure and transmit it back to himself. This data
may be used for either blackmail or black propaganda. It may also result in the
cybercriminal having enough data for a more damaging attack on the
infrastructure as well.
Other Causes of Data Breaches
Disgruntled employees. Employees who mean to do harm to their employers by willingly
stealing information from the company.
Lost or stolen devices. Company devices that may be lost or stolen by employees who
bring them home.
Record Data Breaches
[Table: year, organization, industry, and number of records stolen in record-setting data breaches]
Best practices
For enterprises
Create contingencies.
Put an effective disaster recovery plan in place. In the event of a data breach,
minimize confusion by being ready with contact persons, disclosure strategies, actual
mitigation steps, and the like. Make sure that your employees are made aware of this
plan for proper mobilization once a breach is discovered.
For employees
The 8 Most Common Causes of Data Breach
It seems as though not a day goes by without a headline screaming that some
organization has experienced a data breach, putting the business — and its customers
and partners — at risk. To keep your own organization out of the news, it’s important to
understand the most common causes of data breaches and what you can do to
mitigate the threats they present.
Application Vulnerabilities
Simple Solution: Keep all software and hardware solutions fully patched and up to date.
Malware
The use of both direct and indirect malware is on the rise. Malware is, by definition, malicious software: software loaded without the user's intention that opens up access for a hacker to exploit a system and, potentially, other connected systems.
Simple Solution: Be wary of accessing web sites which are not what they seem or
opening emails where you are suspicious of their origin, both of which are popular
methods of spreading malware!
Social Engineering
As a hacker, why go to the hassle of creating your own access point to exploit when
you can persuade others with a more legitimate claim to the much sought after data,
to create it for you?
Simple Solution: If it looks too good to be true, then it probably is. Recognising which mail is genuine and which is not is very important.
Too Many Permissions
Overly complex access permissions are a gift to a hacker. Businesses that don’t keep a
tight rein on who has access to what within their organisation are likely to have either
given the wrong permissions to the wrong people or have left out of date permissions
around for a smiling hacker to exploit!
Insider Threats
The phrase “Keep your friends close and your enemies closer” could not be any more
relevant. The rouge employee, the disgruntled contractor or simply those not bright
enough to know better have already been given permission to access your data,
what’s stopping them copying, altering or stealing it?
Simple Solution: Know who you are dealing with, act swiftly when there is a hint of a
problem and cover everything with process and procedure backed up with training.
Physical Attacks
Is your building safe and secure? Hackers don’t just sit in back bedrooms in far off lands,
they have high visibility jackets and a strong line in plausible patter to enable them to
work their way into your building and onto your computer systems.
Simple Solution: Be vigilant, look out for anything suspicious and report it.
Improper Configuration and User Error
Simple Solution: With the correct professionals in charge of securing your data, and relevant and robust processes and procedures in place to prevent user error, mistakes can be kept to a minimum and confined to areas where they are less likely to lead to a major data breach.
Data Loss Prevention (DLP)
Data loss prevention (DLP) is a strategy for making sure that end users do not send
sensitive or critical information outside the corporate network. The term is also used to
describe software products that help a network administrator control what data end
users can transfer.
DLP software products use business rules to classify and protect confidential and critical
information so that unauthorized end users cannot accidentally or maliciously share
data whose disclosure could put the organization at risk. For example, if an employee
tried to forward a business email outside the corporate domain or upload a corporate
file to a consumer cloud storage service like Dropbox, the employee would be denied
permission.
DLP products may also be referred to as data leak prevention, information loss
prevention or extrusion prevention products.
Overview
Every organization fears losing its critical, confidential, highly restricted or restricted
data. This fear is amplified when critical data is hosted outside the organization's premises, say in a cloud model. To address this issue, a security concept known as “Data Loss Prevention” has evolved, and it is available in a number of commercial products, the best known coming from vendors such as Symantec, McAfee and Websense. Each DLP product is designed to detect and prevent data from being leaked, and the products cover all the channels through which data can leak.
Data is classified into the categories of in-store, in-use and in-transit; we will learn about these classifications below. Keep in mind that the focus here is information leaking from within the organization.
Types of Data to Protect
First of all, we need to understand what type of data needs to be protected. In DLP, data is classified into three categories:
Data in motion: Data that needs to be protected when in transit i.e. data on the
wire. This includes channels like HTTP/S, FTP, IM, P2P, SMTP.
Data in use: Data that resides on the end user workstation and needs to be protected from being leaked through removable media devices like USB drives, DVDs and CDs will fall under this category.
Data at rest: Data that resides on file servers and databases and needs to be monitored so that it is not leaked will fall under this category.
DLP Strategy
DLP products come with inbuilt policies that are already aligned with compliance standards like PCI DSS, HIPAA, SOX, etc. Organizations just need to tune these policies with
their organizational footprint. But the most important thing in DLP strategy is to identify
the data to protect, because if an organization simply puts DLP across the whole
organization, then a large number of false positives will result. The below section covers
the data classification exercise.
The first thing every organization should do is to identify all the confidential, restricted,
and highly restricted data across the whole organization and across the three channels,
i.e. for data in-transit, in-store and in-use. DLP products work with signatures to identify
any restricted data when it is crossing boundaries. To identify the critical data and
develop its signatures, DLP products use a technique known as fingerprinting. Data is stored in various forms at various locations in an organization, and it all requires identifying and fingerprinting. Various products come with a discovery engine that crawls all the data, indexes it and makes it accessible through an intuitive interface, allowing quick searches to determine a data item's sensitivity and ownership details.
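A toy illustration of the fingerprinting idea: hash normalized content and look the digest up in an index built by the discovery crawl. Real DLP engines use partial and rolling fingerprints so that excerpts of a document are also detected; the file path and sample text here are hypothetical.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalize the content and hash it; the digest acts as the document's signature."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Index built by a (hypothetical) discovery crawl over file shares.
index = {fingerprint("Q3 2017 price list - internal and confidential"): "finance/pricelist.xlsx"}

# Content observed leaving the network: different case and spacing, same document.
outbound = "q3 2017  price list - INTERNAL and confidential"
if fingerprint(outbound) in index:
    print("sensitive document detected:", index[fingerprint(outbound)])
```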
Defining Policies
Once the sensitive data is discovered, an organization should build policies to protect
the sensitive data. Every policy must consist of some rules, such as to protect credit card
numbers, PII, and social security numbers. If there is a requirement for an organization to
protect sensitive information and the DLP product does not support it out of the box,
then organizations should create rules using regular expressions (regex). It should be
noted that DLP policies at this stage should only be defined and not applied.
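For example, custom rules might be expressed as regular expressions along the following lines (illustrative patterns only; production rules are more precise and usually combined with validation such as the Luhn check for card numbers).

```python
import re

RULES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # 13-16 digits with optional separators
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # 123-45-6789 style
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan(text: str):
    """Return the names of the rules that match the given text."""
    return [name for name, pattern in RULES.items() if pattern.search(text)]

print(scan("Card 4111 1111 1111 1111 expires 09/27"))   # ['credit_card']
print(scan("SSN 078-05-1120 on file"))                   # ['us_ssn']
```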
Deployment Scenarios
As discussed earlier, sensitive data falls under three categories, i.e. data in motion, data
at rest and data in use. After identifying the sensitive data and defining policies, the stage is then set for the deployment of the DLP product. The below section covers
the DLP deployment scenario of all three types:
Data in motion: Data that needs to be protected when in transit, i.e. data on the
wire. This includes channels like HTTP/S, FTP, IM, P2P, SMTP, etc. The below diagram
shows the common implementation of DLP.
As shown, DLP is typically not placed in inline mode but rather on a span port. It is very important not to put the DLP appliance or software directly inline with the traffic: every organization should start with a minimal footprint, and an inline deployment would produce a huge number of false positives. In addition, if the DLP appliance is placed inline, there is always a risk of a network outage if the device fails. The best approach is therefore to deploy the DLP appliance on a span port first and, once the DLP strategy is mature, move it into inline mode.
To mitigate the second risk, there are two options. First, deploy DLP in High
Availability mode, and second, configure the inline DLP product in bypass mode,
which will enable the traffic to bypass the inline DLP product in case the DLP
product is down.
Data in Use: Data that resides on the end user workstation and needs to be
protected from being leaked through removable media devices like USB, DVD,
CDs, etc. will fall under this category. For data in use, an agent is installed on every endpoint device (laptop, desktop, etc.); the agent is loaded with policies and managed by the centralized DLP management server. Agents can be distributed to the endpoints via push mechanisms such as SMS or GPO. Since a
DLP agent on the endpoint needs to interact with the centralized DLP
management server in order to report incidents and get refreshed policies, the
communication port must be added as an exception in the local firewall list.
Data in store: Data that resides on file servers and databases and needs to be monitored so that it is not leaked will fall under this category. All the data that resides on storage servers or devices is crawled using a DLP crawling agent. After crawling, the data is fingerprinted to determine whether any sensitive unstructured data is present.
DLP Operations
Triaging phase: In this phase, the security operations team will monitor the alerts fired or triggered by the policies set up in the DLP product. As mentioned earlier,
DLP first should be put in observation mode to see and remove all the false
positives. So when the security team receives the alert, the team will triage that
event against various conditions like what type of data has been leaked, who
has leaked it, through which channel it got leaked, any policy mis-configuration,
etc. After performing this triaging, the team will declare the alert as an incident
and start the incident classification phase where the team will process the
incident with a risk profile. A risk profile is a text-based sheet which includes
important information about the incident like type of policy, data type, channel
type, severity type (low, medium, and high), etc. After processing and updating
the risk profile, the security team will assign the incident to the respective team.
Incident Reporting and Escalation phase: In this phase, the security team will
assign the incident to the respective team. First, the security team will consult
with the respective team to check whether the loss is a business acceptable risk
or not. This can be due to reasons like change in policies at the backend, etc. If
yes, the incident will be considered a false positive and moved to the tuning
phase. If not, then the security team will escalate the incident along with proofs
to the respective team. After escalating, the security team will prepare a report as part of the monthly deliverables or for audit; after this, the security team will close and archive the incident. Archiving is important, as some compliance regimes require it during a forensic investigation.
Tuning phase: In this phase, all the incidents which are considered to be false
positive are passed here. The security team’s responsibility is to fine tune the
policies as a result of some mis-configurations earlier or due to some business
change and apply the changes to the DLP product as a draft version. To check
whether the applied changes are fine, the incident is replicated and then
checked whether the alert is generated or not. If not, then the changes are
made final and applied, but if yes then fine tuning is required in the policies
which are set up in the DLP product.
It should be noted that in DLP, there is no incident resolution phase, since any
reported incident is a data loss (if it is not a false positive) and is thus escalated and
then corresponding action is taken.
Best Practices for a Successful DLP Implementation
Below are some of the best practices that should be adopted in order to have a
successful pre and post DLP deployment.
Before choosing a DLP product, organizations should identify the business need
for DLP.
Organizations should identify sensitive data prior to DLP deployment.
While choosing a DLP product, organizations should check whether the DLP
product supports the data formats in which data is stored in their environment.
After choosing a DLP product, DLP implementation should start with a minimal base to keep false positives manageable, and the base should be expanded as more critical or sensitive data is identified.
DLP operations should be effective in triaging to eliminate false positives and fine
tuning of DLP policies.
A RACI matrix should be set up to map out responsibilities for DLP policies, implementation and operations.
Risk profiles should be updated regularly, and DLP incidents should be thoroughly documented.
Data Loss Prevention can provide some powerful protection for your sensitive
information. It can be used to discover Personal Information (PI) within your
environment, identify various forms of PI from names and phone numbers to
government identifiers and credit card numbers, assemble multiple subsets of PI to
accurately identify a whole record, and even do all of this in multiple languages.
It can also discover and identify Intellectual Property (IP), and even be trained to learn
the difference between your IP and the IP of your business partners. It can alert you
when someone tries to copy or share PI or IP. It can block or encrypt attempts to email,
IM, blog, copy, or print this sensitive data. DLP can also "fingerprint" certain documents
that you specifically want to protect or ignore.
DLP provides a strong set of capabilities, but it is primarily used to protect against
unauthorized movements of sensitive data (e.g., the various ways you may transmit,
copy or print sensitive data from one location to another). And, it is intended to provide
this protection in one direction (inside-out). It is not intended to protect you from
receiving sensitive data, but rather it is intended to protect the data you already have.
Pre-Installation Research
By implementing DLP you are about to invest a substantial amount of the company's
money, time and resources. As a first step, research is important. Consult with research
analysts such as Forrester or Gartner and gain a basic to intermediate understanding of
the industry, the vendors and solutions available, and their particular strengths and
weaknesses. Some DLP solutions offer robust features and support while others offer
much less (i.e. "DLP Lite"). Understand your company’s environment and the ways in which sensitive data moves about before undertaking DLP.
Also, leverage your professional network. Ask what your peers are doing with DLP and
what success or pains they've had. Talk to several vendors and narrow the field to a
few. After narrowing the field, request preliminary pricing estimates — you will need this
information for budgetary planning.
Note that far and away, most companies buy too much DLP. Plan to start small, pilot
test in key areas, and grow into it. You will find that it will take you far longer to install,
configure, optimize and find a way to effectively manage than you could have
imagined. It does neither you nor your company any good to spend money on product or subscription licenses that go unused or are poorly deployed.
Give some thought to where DLP will be needed, and what it must accomplish to be
successful.
Don't apply a shotgun approach unless it makes sense for your organization. Installing
DLP on everything, everywhere can be very expensive and difficult to maintain. Think
about the key applications and teams within your business that really need DLP
technology due to the sensitivity of the data they have access to. You may find that
you are able to apply an envelope of DLP protection around just your high-risk teams.
One way to think about this is to consider Pareto's "Law of the Vital Few" (or 80/20 rule).
This principle states that 80% of your risks come from 20% of your sources. By focusing
your DLP protections in your high-risk areas, you will make a significant positive impact
on your company's risk profile and be able to share attractive ROI figures with senior
management at the same time.
Identifying business requirements
Before diving into the technology and available vendor solutions, you should first build a
good understanding of what your business requirements for DLP will be. Be sure that
your business requirements include the following:
Define security requirements
After identifying your business requirements, next sketch out a set of security
requirements to support them. You may decide you need to encrypt any PI when
someone attempts to copy it to USB, or whenever someone attempts to move it off disk
in any way. Perhaps you only care about large quantities of PI, so above a certain
threshold you choose to block it from being moved. Or maybe you simply want DLP to
alert support staff without blocking or encrypting anything. Each business has a different
set of requirements. Define a set of security requirements that fit your specific business
needs.
Communications
If you are pitching DLP to leadership, think "safety net" rather than "big brother." DLP
should be considered a collaborative solution. Sell it in a positive light explaining how it
can protect your sensitive data, keep your business out of the media (for the wrong
reasons), and afford you a competitive advantage. Plan to involve key stakeholders
from across the company early on. These key groups typically include IT, HR, Finance,
Legal and Internal Audit. Later when you are ready to implement DLP, you will want
and need support from these business leaders.
When you are ready to implement DLP, ensure that you apply good communications
practices. Keep business leaders, stakeholders and users appropriately informed of your
plans and timelines. The rule of thumb I follow for communications is: tell them what you are going to tell them, tell them, and then tell them what you told them.
It seems redundant, but you will find this approach is highly effective in getting your
message across. You will want to develop different communications for each segment
of your business community; one for executive leadership; one for team leadership; and
one for the end user population. Don't surprise anyone with DLP. Surprise in this case
can quickly appear like "big brother" just moved in, and that is likely not the image you
want.
Software-based DLP solutions include perpetual or subscription based licenses for
endpoint clients and the management server. You will need to separately provide for
the underlying computer hardware, operating system and virtualization software (if
appropriate), a database server and management server.
Hardware based solutions include one or more DLP appliances. Minimally you will need
to separately provide one or more Mail Transfer Agents (if you intend to encrypt or
block emails), a database server and management server.
Cloud based DLP solutions typically represent a zero footprint subscription solution.
Endpoint users are directed to your DLP cloud provider via either Web Cache
Communication Protocol (WCCP) configurations on your routers, or a PAC file that is
installed on each endpoint to redirect their outbound traffic to the DLP provider's cloud.
Each RACI entry is important; however, there are two particular items that you should
include. First, ensure that you build in a segregation of duties to help prevent misuse. Do
this by assigning rights to the security team allowing them to create DLP policies but not
the ability to implement them. Then, assign rights to your support team (IT for example)
allowing them to implement the DLP policies developed by the security team but not
the ability to create policies. By applying this check and balance, we prevent a single team from subverting the solution or from causing harm by implementing something that should not have been implemented.
Secondly, it is very important to note that DLP will collect and report on the most
sensitive information traversing your systems or networks. Think of all of the sensitive
email discussions and documents shared between business leaders and board
members, and HR for example. Allowing your support teams to be able to see this data
is clearly inappropriate. You will therefore want to restrict access to the content of the
DLP event (i.e., John Smith copied 1,000 names and social security numbers to a USB
thumb drive and here are all of the social security numbers and names he copied).
On the other hand, the context of the DLP event should be available to support teams
so they can address the event (i.e., John Smith copied 1,000 names and social security
numbers to a USB thumb drive). Many DLP solutions provide for these distinctions. In fact,
it should be a showstopper if this capability does not exist in the solution you are
considering.
Begin by enabling monitoring only. Don't start out with blocking or auto-encrypting
data until you are truly ready and understand the implications of getting any of this
wrong. Expect help desk calls, and prepare your support teams so they are able to
respond to them effectively. Determine what you will do when you learn of a given
policy violation and gain alignment with stakeholders (Legal, HR, IT) for each scenario
that is likely to occur.
Ensure that you document everything related to the architecture and deployment of
DLP. If you were to burn it all to the ground, your documentation should be able to
guide you through full re-deployment. If it cannot, then your documentation is
insufficient. Lastly, share reports and metrics with leadership that illustrate the positive
impact DLP is having on your ability to protect sensitive information. They will want to
know how effectively their organization's money and resources have been spent.
DDOS Attack Protection
1. Application layer attacks (a.k.a., layer 7 attacks) can be either DoS or DDoS threats
that seek to overload a server by sending a large number of requests requiring
resource-intensive handling and processing. Among other attack vectors, this category
includes HTTP floods, slow attacks (e.g.,Slowloris or RUDY) and DNS query flood attacks.
[Figure: gaming website hit with a massive DNS flood, peaking at over 25 million packets per second]
The size of application layer attacks is typically measured in requests per second (RPS),
with no more than 50 to 100 RPS being required to cripple most mid-sized websites.
2. Network layer attacks (a.k.a., layer 3–4 attacks) are almost always DDoS assaults set
up to clog the “pipelines” connecting your network. Attack vectors in this category
include UDP flood, SYN flood, NTP amplification and DNS amplification attacks, and
more.
Any of these can be used to prevent access to your servers, while also causing severe
operational damages, such as account suspension and massive overage charges.
DDoS attacks are almost always high-traffic events, commonly measured in gigabits per
second (Gbps) or packets per second (PPS). The largest network layer assaults can
exceed 200 Gbps; however, 20 to 40 Gbps are enough to completely shut down most
network infrastructures.
Attacker Motivations
DoS attacks are launched by individuals, businesses and even nation-states, each with
their own particular motivation:
Hacktivism – Less technically savvy than other types of attackers, hacktivists tend to use premade tools to wage assaults against their targets. Anonymous is perhaps one of the best known hacktivist groups. They’re responsible for the cyberattack in February 2015 against ISIS, following the terrorist attack against the Paris offices of Charlie Hebdo, as well as the attack against the Brazilian government and World Cup sponsors in June 2014.
Cyber vandalism – Cyber vandals are often referred to as “script kiddies”—for their
reliance on premade scripts and tools to cause grief to their fellow Internet citizens.
These vandals are often bored teenagers looking for an adrenaline rush, or seeking to
vent their anger or frustration against an institution (e.g., school) or person they feel has
wronged them. Some are, of course, just looking for attention and the respect of their
peers.
Alongside premade tools and scripts, cyber vandals will also resort to using DDoS-for-hire services (a.k.a. booters or stressers), which can be purchased online for as little as $19 a pop.
[Figure: example of booter advertised prices and capacities]
Extortion – DDoS attacks are also used for extortion, with targets threatened with an assault unless they pay a ransom. Similar to cyber vandalism, this type of attack is enabled by the existence of stresser and booter services.
Personal rivalry – DoS attacks can be used to settle personal scores or to disrupt online
competitions. Such assaults often occur in the context of multiplayer online games,
where players launch DDoS barrages against one another, and even against gaming
servers, to gain an edge or to avoid imminent defeat by “flipping the table.”
Attacks against players are often DoS assaults, executed with widely available
malicious software. Conversely, attacks against gaming servers are likely to be DDoS assaults, launched from stressers and booters.
Business competition – DDoS attacks are increasingly being used as a competitive
business tool. Some of these assaults are designed to keep a competitor from
participating in a significant event (e.g., Cyber Monday), while others are launched
with a goal of completely shutting down online businesses for months.
One way or another, the idea is to cause disruption that will encourage your customers
to flock to the competitor while also causing financial and reputational damage. The average cost of a DDoS attack to an organization can run to $40,000 per hour.
Business-feud attacks are often well-funded and executed by professional "hired guns,"
who conduct early reconnaissance and use proprietary tools and resources to sustain
extremely aggressive and persistent DDoS attacks.
[Figure: four common categories of DDoS attacks]
[Figure: two ways attacks can multiply the traffic they send]
Apache2
This attack is mounted against an Apache Web server where the client asks for a
service by sending a request with many HTTP headers. However, when an Apache
Web server receives many such requests, it cannot cope with the load and crashes.
ARP Poison
Poison attacks require the attacker to have access to the victim's LAN. The attacker deludes the hosts of a specific LAN by providing them with wrong MAC addresses for hosts with already-known IP addresses. This is achieved through the following process: the network is monitored for "arp who-has" requests; as soon as such a request is received, the attacker tries to respond as quickly as possible to the questioning host in order to mislead it about the requested address.
Back
This attack is launched against an Apache Web server, which is flooded with requests
containing a large number of front-slash ( / ) characters in the URL description. As the
server tries to process all these requests, it becomes unable to process other
legitimate requests and hence it denies service to its customers.
CrashIIS
The victim of a CrashIIS attack is commonly a Microsoft Windows NT IIS Web server.
The attacker sends the victim a malformed GET request, which can crash the Web
server.
DoSNuke
In this kind of attack, the Microsoft Windows NT victim is inundated with "out-of-band"
data (MSG_OOB). The packets being sent by the attacking machines are flagged
"urg" because of the MSG_OOB flag. As a result, the target is weighed down, and the
victim's machine could display a "blue screen of death."
Land
In Land attacks, the attacker sends the victim a TCP SYN packet that contains the
same IP address as the source and destination addresses. Such a packet completely
locks the victim's system.
Mailbomb
In a Mailbomb attack, the victim's mail server or a user's mailbox is flooded with a huge volume of messages, filling the mail queue or disk space so that legitimate mail can no longer be handled.
SYN Flood
A SYN flood attack occurs during the three-way handshake that marks the onset of a
TCP connection. In the three-way handshake, a client requests a new connection by
sending a TCP SYN packet to a server. After that, the server sends a SYN/ACK packet
back to the client and places the connection request in a queue. Finally, the client
acknowledges the SYN/ACK packet. If an attack occurs, however, the attacker sends
an abundance of TCP SYN packets to the victim, obliging it both to open a lot of TCP
connections and to respond to them. Then the attacker does not execute the third
step of the three-way handshake that follows, rendering the victim unable to accept
any new incoming connections, because its queue is full of half-open TCP
connections.
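A monitoring tool can spot this condition by tracking connections that saw a SYN but never completed the handshake. The sketch below assumes packets are already being captured elsewhere (for example on a span port) and uses a made-up threshold and a documentation IP address.

```python
from collections import defaultdict

HALF_OPEN_LIMIT = 100
half_open = defaultdict(set)     # source IP -> destination ports awaiting the final ACK

def on_packet(src, dst_port, flags):
    if flags == "SYN":
        half_open[src].add(dst_port)
    elif flags == "ACK":
        half_open[src].discard(dst_port)      # handshake completed normally

def suspicious_sources():
    """Sources with an abnormal number of half-open connections."""
    return [src for src, ports in half_open.items() if len(ports) > HALF_OPEN_LIMIT]

# Simulated traffic: one source sends SYNs and never completes the handshake.
for port in range(150):
    on_packet("203.0.113.9", port, "SYN")
print(suspicious_sources())                   # ['203.0.113.9']
```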
Ping of Death
In Ping of Death attacks, the attacker creates a packet that contains more than 65,535 bytes, the maximum packet size the IP protocol allows. This packet can cause
different kinds of damage to the machine that receives it, such as crashing and
rebooting.
Process Table
This attack exploits the feature of some network services to generate a new process
each time a new TCP/IP connection is set up. The attacker tries to make as many
uncompleted connections to the victim as possible in order to force the victim's
system to generate an abundance of processes. Hence, because the number of
processes that are running on the system cannot be boundlessly large, the attack
renders the victim unable to serve any other request.
Smurf Attack
In a Smurf attack, the attacker sends a stream of ICMP echo (ping) requests to a network's broadcast address with the victim's address spoofed as the source. Every host on that network replies to the victim, amplifying the traffic and overwhelming the victim's connection.
Syslogd
The Syslogd attack crashes the syslogd program on a Solaris 2.5 server by sending it a
message with an invalid source IP address.
TCP Reset
In TCP Reset attacks, the network is monitored for TCP connection requests to the victim. As soon as such a request is found, the attacker sends a spoofed TCP RESET packet to the victim, obliging it to terminate the TCP connection.
Teardrop
While a packet is traveling from the source machine to the destination machine, it
may be broken up into smaller fragments, through the process of fragmentation. A
Teardrop attack creates a stream of IP fragments with overlapping offset fields.
The destination host that tries to reassemble these malformed fragments eventually
crashes or reboots.
UDP Storm
In a UDP Storm attack, the attacker connects two UDP services that generate traffic automatically (for example, the chargen service on one host and the echo service on another) by spoofing a packet between them; the two services then flood each other, and the network, with an endless stream of packets.
Some of the best tools to help protect against DDoS attacks are:
1. Cloudflare
Cloudflare's layer 3 and 4 protection absorbs an attack before it reaches a server, which load balancers, firewalls, and routers do not.
Its layer 7 protection differentiates between beneficial and harmful traffic. Cloudflare clients include Cisco, Nasdaq, MIT and even the Eurovision song contest.
2. F5 Networks
F5 Networks Silverline has a huge traffic scrubbing capacity, and offers protection either
onsite, in the cloud, or a combination of the two.
It offers protection across layers 3 to 7. Silverline can stop high-volume attacks from reaching a company's network. 24/7 support is available.
3. Black Lotus
The firm's Protection for Networks service was designed with a focus on the hosting
industry, and can be white labelled for their use.
Its Protection for Services tool filters and proxies traffic at layer 4 and mitigates requests at layer 7. It also has a patent pending on Human Behaviour Analysis technology, to improve its service.
4. Arbor networks
From the security division of Netscout, Arbor Cloud offers both on-site and cloud protection for state-exhaustion attacks against security infrastructure.
It also offers a multi-terabit, on-demand traffic scrubbing service, and 24/7 DDoS
support via its Security Operations Center.
5. Incapsula
The Top Ten Reviews listing site gave Incapsula a gold award for its DDoS protection
service this year. It has a global network of data centres, so can provide more
scrubbing centres than many other providers.
Embedded System Security
In the past, the large number of embedded operating systems and the fact that these
systems did not typically have direct Internet communication provided some degree of
security, both through obscurity and the fact that they were not convenient targets.
The similarities between embedded OSes and live firmware updating in conjunction
with the increased number of communication points create a large increase in
the attack surface: Each communication point is a potential point of entry for hackers.
A device’s firmware may be hacked to spy on and take control of everything from
Internet and wireless access points, USB accessories, IP cameras and security systems to
pacemakers, drones and industrial control systems.
1. Processing Gap
Existing embedded system architectures are not capable of keeping up with the
computational demands of security processing, due to increasing data rates and
complexity of security protocols. These shortcomings are most felt in systems that need
to process very high data rates or a large number of transactions (e.g., network routers,
firewalls, and web servers), and in systems with modest processing and memory resources.
2. Battery Gap
3. Flexibility
4. Tamper Resistance
Attacks due to malicious software such as viruses and trojan horses are the most
common threats to any embedded system that is capable of executing downloaded
applications [Howard and LeBlanc 2002; Hoglund and McGraw 2004; Ravi et al. 2004].
These attacks can exploit vulnerabilities in the operating system (OS) or application
software, procure access to system internals, and disrupt its normal functioning.
Because these attacks manipulate sensitive data or processes (integrity attacks),
disclose confidential information (privacy attacks), and/or deny access to system
resources (availability attacks), it is necessary to develop and deploy various HW/SW
countermeasures against these attacks. In many embedded systems such as
smartcards, new and sophisticated attack techniques, such as bus probing, timing
analysis, fault induction, power analysis, electromagnetic analysis, and so on, have
been demonstrated to be successful in easily breaking their security [Ravi et al. 2004;
Anderson and Kuhn 1996, 1997; Kommerling and Kuhn 1999; Rankl and Effing; Hess et al.
2000; Quisquater and Samyde 2002; Kelsey et al. 1998]. Tamper resistance measures must, therefore, secure the system implementation when it is subject to various physical and side-channel attacks.
5. Assurance Gap
It is well known that truly reliable systems are much more difficult to build than those that
merely work most of the time. Reliable systems must be able to handle
the wide range of situations that may occur by chance. Secure systems face an even
greater challenge: they must continue to operate reliably despite attacks from
intelligent adversaries who intentionally seek out undesirable failure modes. As systems
become more complicated, there are inevitably more possible failure modes that need
to be addressed. Increases in embedded system complexity are making it more and
more difficult for embedded system designers to be confident that they have not
overlooked a serious weakness.
6. Cost
It is generally possible to provide increasing levels of security using increasingly advanced measures, albeit at
higher system costs, design effort, and design time. It is the designer’s responsibility to
balance the security requirements of an embedded system against the cost of
implementing the corresponding security measures.
Firewall
Firewalls are essential since they provide a single block point, where security and
auditing can be imposed. Firewalls provide an important logging and auditing function;
often, they provide summaries to the administrator about the type and volume of traffic that has passed through them. This is an important benefit: providing this block
point can serve the same purpose on your network as an armed guard does for your
physical premises.
Types of firewalls
The National Institute of Standards and Technology (NIST) 800-10 divides firewalls into
three basic types:
Packet filters
Stateful inspection
Proxies
These three categories, however, are not mutually exclusive, as most modern firewalls
have a mix of abilities that may place them in more than one of the three.
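As a rough sketch of how a simple packet filter evaluates traffic, the snippet below checks rules in order, takes the first match, and denies by default; the networks and ports are made up for the example.

```python
import ipaddress

# Each rule: (action, protocol, source network, destination port).
RULES = [
    ("allow", "tcp", "10.0.0.0/8",     443),   # internal clients to HTTPS
    ("allow", "tcp", "203.0.113.0/24",  25),   # partner mail relay to SMTP
    ("deny",  "any", "0.0.0.0/0",      None),  # explicit default deny
]

def filter_packet(protocol, src_ip, dst_port):
    for action, proto, network, port in RULES:
        if proto not in ("any", protocol):
            continue
        if ipaddress.ip_address(src_ip) not in ipaddress.ip_network(network):
            continue
        if port is not None and port != dst_port:
            continue
        return action          # first matching rule wins
    return "deny"

print(filter_packet("tcp", "10.1.2.3", 443))      # allow
print(filter_packet("tcp", "198.51.100.7", 443))  # deny
```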
Firewall implementation
The firewall remains a vital component in any network security architecture, and today's
organizations have several types to choose from. It's essential that IT
professionals identify the type of firewall that best suits the organization's network
security needs.
Once selected, one of the key questions that shapes a protection strategy is "Where
should the firewall be placed?" There are three common firewall topologies: the bastion
host, screened subnet and dual-firewall architectures. Enterprise security depends on
choosing the right firewall topology.
The next decision to be made, after the topology is chosen, is where to place individual firewall systems within it. At this point, there are several types to consider, such as bastion host, screened subnet and multi-homed firewalls.
One important distinction many network layer firewalls possess is that they route traffic
directly through them, which means in order to use one, you either need to have a
validly assigned IP address block or a private Internet address block. Network layer
firewalls tend to be very fast and almost transparent to their users.
In some cases, having an application in the way may impact performance and make
the firewall less transparent. Older application layer firewalls that are still in use are not
particularly transparent to end users and may require some user training. However,
more modern application layer firewalls are often totally transparent. Application layer
firewalls tend to provide more detailed audit reports and tend to enforce more
conservative security models than network layer firewalls.
Proxy firewalls
Proxy firewalls offer more security than other types of firewalls, but at the expense of
speed and functionality, as they can limit which applications the network supports.
Unlike stateful firewalls or application layer firewalls, which allow or block network
packets from passing to and from a protected network, traffic does not flow through a
proxy. Instead, computers establish a connection to the proxy, which serves as an
intermediary and initiates a new network connection on behalf of the request. This
prevents direct connections between systems on either side of the firewall and makes it
harder for an attacker to discover where the network is, because they don't receive
packets created directly by their target system.
Proxy firewalls also provide comprehensive, protocol-aware security analysis for the
protocols they support. This allows them to make better security decisions than products
that focus purely on packet header information.
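The "no direct connection" property can be sketched with a tiny TCP relay: the client connects to the proxy, and the proxy opens its own, separate connection to the protected server. The listening port and upstream host below are placeholders, and a real proxy firewall would add protocol inspection, policy enforcement and logging.

```python
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8080)
UPSTREAM_ADDR = ("internal-web.example.com", 80)   # hypothetical protected server

def pump(src, dst):
    """Copy bytes one way until the connection closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()

def handle(client):
    # The proxy, not the client, opens the connection to the upstream server.
    upstream = socket.create_connection(UPSTREAM_ADDR)
    # Two one-way pumps make the relay bidirectional.
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

def serve():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen()
    while True:
        conn, _ = listener.accept()
        handle(conn)

if __name__ == "__main__":
    serve()
```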
Placement of a firewall
When developing a perimeter protection strategy for an organization, one of the most
common questions is "Where should I place firewalls for maximum effectiveness?"
Security expert Mike Chapple breaks up firewall placement into three basic topology
options: bastion host, screened subnet and dual firewalls.
The first, bastion host topology, is the most basic option, and is well suited for relatively
simple networks. This topology would work well if you're merely using the firewall to
protect a corporate network that is used mainly for surfing the Internet, but it is
probably not sufficient if you host a website or email server.
The screened subnet option provides a solution that allows organizations to offer
services securely to Internet users. Any servers that host public services are placed in the
demilitarized zone (DMZ), which is separated from both the Internet and the trusted
network by the firewall. Therefore, if a malicious user does manage to compromise the
firewall, he or she does not have access to the Intranet (providing that the firewall is
properly configured).
The most secure (and most expensive) option is to implement a screened subnet using
two firewalls. The use of two firewalls still allows the organization to offer services to
Internet users through the use of a DMZ, but provides an added layer of protection.
Are two firewalls better than one?
Most enterprises use a combination of firewalls, virtual private networks (VPNs) and
intrusion detection/prevention systems (IDS/IPS) to limit access to internal networks.
Generally speaking, there isn't much work to do in these areas; it's about maintaining
these controls and adapting them as dynamic infrastructures change. The maturity of
the technology offers the opportunity to focus limited financial and human resources
on more challenging problems, such as endpoint/server management and application
security.
Two firewalls from different vendors may not cause processing delays, but if not used
and arranged correctly, the devices can become a hassle for IT teams. If you're
experiencing network latency by adding an additional firewall, consider the placement
of the firewalls. Are they both directly connected to each other with nothing else in
between? If that's the case, consider using a different firewall topology that will get the
most out of the two firewalls.
Many people think that as long as their SAN or NAS is behind a firewall then everything
is protected. This is a myth of network security. Most storage environments span across
multiple networks, both private and public.
Storage devices are serving up multiple network segments and creating a virtual bridge
that basically negates any sort of firewall put in place. This can provide a conduit into
the storage environment, especially when a system in the DMZ or public segment is attacked and taken over. The storage back end can then be fully accessible to the attacker because a path for the attack exists.
We can only dream that once you've made it through the challenging phases of
firewall selection and architecture design, you're finished setting up a DMZ. In the real
world of firewall management, we're faced with balancing a continuous stream of
change requests and vendor patches against the operational management of our
firewalls. Configurations change quickly and often, making it difficult to keep on top of
routine maintenance tasks.
According to network security expert Mike Chapple, four practical areas where basic log analysis can provide valuable firewall management data are:
Monitor rule activity
System administrators tend to be quick on the trigger to ask for new rules, but not quite
so eager to let you know when a rule is no longer necessary. Monitoring rule activity
can provide some valuable insight to assist you with managing the rulebase. If a rule
that was once heavily used suddenly goes quiet, you should investigate whether the
rule is still needed. If it's no longer necessary, trim it from your rulebase. Legacy rules
have a way of piling up and adding unnecessary complexity.
Over the years, Chapple had a chance to analyze the rulebases of many production
firewalls, and estimates that at least 20% of the average firewall's rulebase is
unnecessary. There are systems where this ratio is as high as 60%.
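As a small illustration, the Python sketch below assumes a hypothetical log export named firewall_log.csv with 'timestamp' and 'rule_id' columns (real firewall logs will differ) and counts hits per rule, flagging rules that have gone quiet.

import csv
from collections import Counter
from datetime import datetime, timedelta

hits = Counter()
last_seen = {}

with open("firewall_log.csv", newline="") as f:        # assumed file name and layout
    for row in csv.DictReader(f):
        rule = row["rule_id"]
        ts = datetime.fromisoformat(row["timestamp"])
        hits[rule] += 1
        last_seen[rule] = max(last_seen.get(rule, ts), ts)

cutoff = datetime.now() - timedelta(days=90)
for rule, count in sorted(hits.items(), key=lambda kv: kv[1]):
    stale = " (no hits in 90 days - candidate for removal)" if last_seen[rule] < cutoff else ""
    print(f"rule {rule}: {count} hits{stale}")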
Traffic flows
Monitor logs for abnormal traffic patterns. If servers that normally receive a low volume
of traffic are suddenly responsible for a significant portion of traffic passing through the
firewall (either in total connections or bytes passed), then you have a situation worthy of
further investigation. While flash crowds are to be expected in some situations (such as
a Web server during a period of unusual interest), they are also often signs of
misconfigured systems or attacks in progress.
Rule violations
Looking at traffic denied by your firewall may lead to interesting findings. This is
especially true for traffic that originates from inside your network. The most common
cause of this activity is a misconfigured system or a user who isn't aware of traffic
restrictions, but analysis of rule violations may also uncover attempts at passing
malicious traffic through the device.
Denied probes
If you've ever analyzed the log of a firewall that's connected to the Internet, you know
that it's futile to investigate probes directed at your network from the Internet. They're
far too frequent and often represent dead ends. However, you may not have
considered analyzing logs for probes originating from inside the trusted network. These
are extremely interesting, as they most likely represent either a compromised internal
system seeking to scan Internet hosts or an internal user running a scanning tool -- both
scenarios that merit attention.
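A minimal sketch of such an analysis in Python, using made-up deny records and an assumed internal address range, might look like this:

from collections import defaultdict
import ipaddress

INTERNAL = ipaddress.ip_network("10.0.0.0/8")    # assumed trusted address space

# Hypothetical pre-parsed deny records: (source_ip, destination_ip, destination_port)
deny_records = [
    ("10.1.4.23", "203.0.113.10", 445),
    ("10.1.4.23", "203.0.113.11", 445),
    ("10.1.9.7",  "198.51.100.2", 25),
]

targets = defaultdict(set)
for src, dst, port in deny_records:
    if ipaddress.ip_address(src) in INTERNAL:     # keep only probes that originate inside
        targets[src].add((dst, port))

for src, probed in sorted(targets.items(), key=lambda kv: -len(kv[1])):
    if len(probed) >= 2:                          # assumed threshold; tune for your network
        print(f"{src} was denied to {len(probed)} distinct destinations - possible scan or infection")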
Firewall audit logs are a veritable goldmine of network security intelligence that organizations should take full advantage of.
Fraud Detection and Prevention
Risk and Materiality are two concepts that are well known and understood by auditors.
In the area of fraud these concepts apply to the risk of experiencing a fraud and the
materiality of the losses to fraud. The assessment of the importance of these factors will,
to some degree, determine how seriously the company treats the prevention and detection of fraud. It will also affect the resources devoted to fraud-related tasks by audit, so it is important for all auditors to give proper consideration to the risk and materiality of fraud in their organization.
What is Fraud?
There are many definitions for fraud and a number of possible criminal charges,
including fraud, theft, embezzlement, and larceny. The legal definition usually refers to a situation in which one party intentionally deceives another in order to obtain an unfair or unlawful gain.
It should be noted that persons inside the organization or external to it could commit
fraud. Further, it can be to the benefit of an individual; to part of an organization; or to
the whole organization itself.
However, the most expensive and most difficult fraud for auditors to deal with is one
that is committed by senior management - particularly if it is ‘for’ the benefit of the
organization.
Why Does Fraud Happen?
Interviews with persons who committed fraud have shown that most people do not
originally set out to commit fraud. Often they simply took advantage of an opportunity;
many times the first fraudulent act was an accident – perhaps they mistakenly
processed the same invoice twice. But when they realized that it wasn’t noticed, the
fraudulent acts became deliberate and more frequent. Fraud investigators talk about
the 10 - 80 - 10 law which states that 10% of people will never commit fraud; 80% of
people will commit fraud under the right circumstances; and 10% actively seek out
opportunities for fraud. So we need to be vigilant for the 10% who are out to get us and
we should try to protect the 80% from making a mistake that could ruin their lives.
Opportunity
An opportunity is likely to occur when there are weaknesses in the internal control
framework or when a person abuses a position of trust. For example:
• Organizational expediency – ‘it was a high profile rush project and we had to cut
corners’
• Downsizing meant that there were fewer people and separation of duties no longer
existed
• Business re-engineering brought in new application systems that changed the control
framework, removing some of the key checks and balances
Pressure
The pressures are usually financial in nature, but this is not always true. For example,
unrealistic corporate targets can encourage a salesperson or production manager to
commit fraud. The desire for revenge – to get back at the organization for some
perceived wrong; or poor self-esteem - the need to be seen as the top salesman, at
any cost; are also examples of non-financial pressures that can lead to fraud.
Rationalization
In the criminal’s mind rationalization usually includes the belief that the activity is not
criminal. They often feel that everyone else is doing it; that no one will get hurt; or that it is just a temporary loan they will pay back, and so on.
Interestingly, studies have shown that the removal of the pressure is not sufficient to stop
an ongoing fraud. Also, the first act of fraud requires more rationalization than the
second act, and so on. But, as it becomes easier to justify, the acts occur more often
and the amounts involved increase in value. This means that, left alone, fraud will
continue and the losses will only increase. I have heard it said that ‘There is no such
thing as a fraud that has reached maturity’. Fraud, ultimately, is fed by greed, and
greed is never satisfied.
There are two main views on responsibility for fraud: one states that management has the responsibility for the prevention and detection of fraud, while the other places that responsibility with audit.
Management
• is responsible for the day to day business operations
Audit
• has expertise in the evaluation and design of controls
The reality is that both management and audit have roles to play in the prevention and
detection of fraud. The best scenario is one where management, employees, and
internal and external auditors work together to combat fraud. Furthermore, internal controls alone are not sufficient; the corporate culture and the attitudes of senior management and all employees must be such that the company is fraud resistant. Unfortunately, many auditors feel that corporate culture is beyond their sphere of influence. However, audit can take steps to ensure that senior management is aware of the risk and materiality of fraud and that all instances of fraud are made known to all employees. Audit can also encourage management to develop Fraud Awareness Training and a Fraud Policy to help combat fraud. Finally, audit can review and comment on
organizational goals and objectives to reduce the existence of unrealistic performance
measures. So, there are a number of things auditors can do to help create a fraud
resistant corporate culture.
Types of Fraud
Fraud comes in many forms but can be broken down into three categories: asset
misappropriation, corruption and financial statement fraud. Asset misappropriation,
although least costly, made up 90% of all fraud cases studied. These are schemes in
which an employee steals or exploits his or her organization’s resources. Examples of asset
misappropriation are stealing cash before or after it’s been recorded, making a
fictitious expense reimbursement claim and/or stealing non-cash assets of the
organization.
Financial statement fraud comprised less than five percent of cases but caused the
most median loss. These are schemes that involve omitting or intentionally misstating
information in the company’s financial reports. This can be in the form of fictitious
revenues, hidden liabilities or inflated assets.
Corruption fell in the middle and made up less than one-third of cases. Corruption
schemes happen when employees use their influence in business transactions for their
own benefit while violating their duty to the employer. Examples of corruption are
bribery, extortion and conflict of interest.
Fraud Prevention
It is vital to an organization, large or small, to have a fraud prevention plan in place. The
fraud cases studied in the ACFE 2014 Report revealed that the fraudulent activities lasted an average of 18 months before being detected. Imagine the type of
loss your company could suffer with an employee committing fraud for a year and a
half. Luckily, there are ways you can minimize fraud occurrences by implementing
different procedures and controls.
Segregation of duties is one of the most effective internal controls. Even a small business can divide key duties among, for example, one cash register employee, one salesperson, and one manager. The cash and check register receipts should be tallied by one employee while another prepares the deposit slip and a third brings the deposit to the bank. This can help reveal any discrepancies in the collections.
Documentation is another internal control that can help reduce fraud. Consider the
example above; if sales receipts and preparation of the bank deposit are documented
in the books, the business owner can look at the documentation daily or weekly to
verify that the receipts were deposited into the bank. In addition, make sure all checks,
purchase orders and invoices are numbered consecutively. Also, be alert to new vendors, as billing-scheme embezzlers set up and make payments to fictitious vendors, with payments usually mailed to a P.O. Box.
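As a small illustration of this kind of review, the Python sketch below runs over a made-up payment register and flags duplicate invoice payments and vendors paid only at a P.O. Box; the data, field names and thresholds are purely hypothetical.

from collections import defaultdict

# Hypothetical payment register: (vendor, invoice_no, amount, mailing_address)
payments = [
    ("Acme Supplies", "INV-1001", 1250.00, "12 Mill Road"),
    ("Acme Supplies", "INV-1001", 1250.00, "12 Mill Road"),     # same invoice paid twice
    ("QuickFix Ltd",  "QF-77",     980.00, "P.O. Box 4411"),    # new vendor, P.O. Box only
]

seen = defaultdict(list)
for vendor, invoice, amount, address in payments:
    key = (vendor, invoice, round(amount, 2))
    seen[key].append(address)
    if len(seen[key]) > 1:
        print(f"Possible duplicate payment: {vendor} {invoice} for {amount:.2f}")
    if "p.o. box" in address.lower():
        print(f"Review vendor '{vendor}': payments mailed to a P.O. Box ({address})")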
Hire Experts
Certified Fraud Examiners (CFE), Certified Public Accountants (CPA) and CPAs who are
certified in Financial Forensics (CFF) can help you in establishing antifraud policies and
procedures. These professionals can provide a wide range of services from complete
internal control audits and forensic analysis to general and basic consultations.
Fraud Detection
In addition to prevention strategies, you should also have detection methods in place
and make them visible to the employees. According to Managing the Business Risk of
Fraud: A Practical Guide, published by Association of Certified Fraud Examiners (ACFE),
the visibility of these controls acts as one of the best deterrents to fraudulent behavior. It
is important to continuously monitor and update your fraud detection strategies to
ensure they are effective. Detection plans usually occur during the regularly scheduled
business day. These plans take external information into consideration to link with
internal data. The results of your fraud detection plans should enhance your prevention
controls. It is important to document your fraud detection strategies including the
individuals or teams responsible for each task. Once the final fraud detection plan has
been finalized, all employees should be made aware of the plan and how it will be
implemented. Communicating this to employees is a prevention method in itself.
Knowing the company is watching and will take disciplinary action can hinder
employees’ plans to commit fraud.
IAM- Identity & Access Management
IAM technology can be used to initiate, capture, record and manage user identities
and their related access permissions in an automated fashion. This ensures that access
privileges are granted according to one interpretation of policy and all individuals and
services are properly authenticated, authorized and audited.
Poorly controlled IAM processes may lead to regulatory non-compliance because if the
organization is audited, management will not be able to prove that company data is
not at risk for being misused.
The list of technologies that fall under this category includes password-management
tools, provisioning software, security-policy enforcement applications, reporting and
monitoring apps, and identity repositories. Nowadays, these technologies tend to be
grouped into software suites with assortments of additional capabilities, from enterprise-
wide credential administration to automated smart-card and digital-certificates
management.
The ID management buzz phrase of the moment is "identity lifecycle management." The
concept encompasses the processes and technologies required for provisioning, de-
provisioning, managing and synchronizing digital IDs, as well as features that support
compliance with government regulations. Technologies that fall under the ID lifecycle-
management rubric include tools for security principal creation, attribute management,
identity synchronization, aggregation and deletion.
Identity Management Concepts
Identity Lifecycle
Like the real-world entities they represent, identities have a lifecycle. Their connection to
the University will change over time and the accounts and authorizations they have will
also change accordingly. However, the identity itself does not go away. When a user
leaves the University (e.g. graduation, separation) their identity persists and they will
continue to be able to authenticate using their UT EID. This allows individuals to later
come back and apply for jobs, request transcripts, etc. Systems must take into account
the current status of a user in their authorization schemes and change account
authorizations when that status changes. So, for example, if a student or employee
leaves the university, the wireless network will note the change in affiliation and remove
authorizations for wireless access.
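A minimal sketch of that reconciliation in Python is shown below; the affiliations, service names and the revoke/grant callbacks are hypothetical stand-ins for whatever the identity store and downstream systems actually expose.

# Hypothetical affiliations and the authorizations they carry; a real deployment would
# read these from the identity store and push changes to each dependent service.
AUTHORIZATIONS_BY_STATUS = {
    "student":  {"wireless", "library", "email"},
    "employee": {"wireless", "email", "payroll_portal"},
    "alumni":   {"transcript_request"},           # identity persists, access shrinks
}

def reconcile(user, old_status, new_status, revoke, grant):
    """Adjust a user's authorizations when their affiliation changes."""
    before = AUTHORIZATIONS_BY_STATUS.get(old_status, set())
    after = AUTHORIZATIONS_BY_STATUS.get(new_status, set())
    for service in before - after:
        revoke(user, service)          # e.g. drop wireless access at separation
    for service in after - before:
        grant(user, service)

# Example: a graduating student keeps their identity but loses most access.
reconcile("jdoe", "student", "alumni",
          revoke=lambda u, s: print(f"revoke {s} for {u}"),
          grant=lambda u, s: print(f"grant {s} for {u}"))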
Need for IAM
It can be difficult to get funding for IAM projects because they don’t directly increase
either profitability or functionality. However, a lack of effective identity and access
management poses significant risks not only to compliance but also to an organization’s overall security. These mismanagement issues increase the risk of greater damage from both external and insider threats.
Keeping the required flow of business data going while simultaneously managing its
access has always required administrative attention. The business IT environment is ever
evolving and the difficulties have only become greater with recent disruptive trends like
bring-your-own-device (BYOD), cloud computing, mobile apps and an increasingly
mobile workforce. There are more devices and services to be managed than ever
before, with diverse requirements for associated access privileges.
With so much more to keep track of as employees migrate through different roles in an
organization, it becomes more difficult to manage identity and access. A common
problem is that privileges are granted as needed when employee duties change but
the access level escalation is not revoked when it is no longer required.
This situation, combined with access requests framed as "give me the same access as that other employee" rather than as specific access needs, leads to an accumulation of privileges known as privilege creep.
Privilege creep creates security risk in two different ways. An employee with privileges
beyond what is warranted may access applications and data in an unauthorized and
potentially unsafe manner. Furthermore, if an intruder gains access to the account of a
user with excessive privileges, he may automatically be able to do more harm. Data
loss or theft can result from either scenario.
Typically, this accumulation of privilege is of little real use to the employee or the
organization. At best, it might be a convenience in situations when the employee is
asked to do unexpected tasks. On the other hand, it might make things much easier for
an attacker who manages to compromise an over-privileged employee identity. Poor
identity access management also often leads to individuals retaining privileges after
they are no longer employees.
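As a small illustration, the Python sketch below compares each user's actual entitlements against a hypothetical role baseline and reports anything beyond it as a privilege-creep candidate for recertification; the roles and entitlement names are invented for the example.

# Hypothetical role baselines and actual entitlements pulled from an access review.
ROLE_BASELINE = {
    "accounts_clerk": {"erp_invoices", "email"},
    "sales_rep":      {"crm", "email"},
}

actual_entitlements = {
    "asmith": ("accounts_clerk", {"erp_invoices", "email", "crm", "hr_records"}),
    "bjones": ("sales_rep",      {"crm", "email"}),
}

for user, (role, granted) in actual_entitlements.items():
    excess = granted - ROLE_BASELINE.get(role, set())
    if excess:
        # Anything beyond the role baseline is a privilege-creep candidate for review.
        print(f"{user} ({role}) holds entitlements beyond the baseline: {sorted(excess)}")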
IAM Features
Application shaping is a new technology that gives IT complete control over what each
employee or groups of employees can see and do within web applications. For
example, you could redact certain data fields within these web applications for certain
types of employees, disable certain features or even make web applications entirely
read-only.
By removing high-risk features (e.g., exporting files, ability to mass delete, etc.), a
company can increase its security, without limiting its workforce's flexibility.
See the Whole Picture: Capture Visuals for the Audit Trail
With compliance an ongoing concern for most businesses, any IAM solution should
maintain an audit trail. However, just knowing who logged in and out and when they
did it is no longer adequate. Advanced IAM solutions allow for IT teams to monitor the
use of specific features within web applications, send alerts for unusual activity and
even provide the option to capture screen shots when certain online behaviors occur.
This provides visual evidence of exactly what the user was doing.
For instance, if an attacker were to attempt to log in with a user's identity from a
different country, the user would be presented with a security notification in the browser
or via an SMS text message, rather than an operations team being alerted, since the team may not be aware of the individual's location. The user can then respond to disable
the account or immediately change a password. This gives companies a higher level of
assurance that their data and user accounts are protected.
The IAM Strategy
Here are some general strategies auditors can recommend for IT departments to
consider when aligning the organization's IAM program to existing business strategies
and regulatory compliance requirements:
Integrate the components and processes above, but realize that not all
components might be needed at first based on the organization's strategic plan,
business needs, and IAM project scope.
IAM Challenges
A chain is only as strong as its weakest link, and when it comes to IT security, IAM is the
weakest link in many organizations. For example, many IT departments store identity
credentials as data objects in different data repositories. Because these organizations
can have hundreds of discrete identity stores containing overlapping and conflicting
data, synchronizing this information among multiple data repositories turns into a
challenging, time consuming, and expensive ordeal, especially if the data is managed
through the use of manual processes or custom scripts.
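A minimal sketch of such a reconciliation check in Python appears below, comparing hypothetical HR and directory exports keyed by employee ID and reporting conflicting attributes; the record layout and attribute names are assumptions for the example.

# Hypothetical exports from two identity stores, keyed by employee ID.
hr_records = {
    "e100": {"name": "A. Smith", "email": "a.smith@example.com", "status": "active"},
    "e101": {"name": "B. Jones", "email": "b.jones@example.com", "status": "terminated"},
}
directory_records = {
    "e100": {"name": "A. Smith", "email": "asmith@example.com", "status": "active"},
    "e101": {"name": "B. Jones", "email": "b.jones@example.com", "status": "active"},
}

for emp_id, hr in hr_records.items():
    ldap = directory_records.get(emp_id)
    if ldap is None:
        print(f"{emp_id}: present in HR but missing from the directory")
        continue
    for attr in ("email", "status"):
        if hr[attr] != ldap[attr]:
            # Conflicting attributes are exactly what manual synchronization tends to miss,
            # e.g. an account left active after HR marks the person terminated.
            print(f"{emp_id}: {attr} differs (HR='{hr[attr]}', directory='{ldap[attr]}')")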
Another key challenge is related to cost. As a general rule, the costs of managing user
identities should be as low as possible to ensure a reasonable return on investment in
the IAM project. Too often, identity management projects become too large or
cumbersome to finish on schedule; after all, there will always be more applications to integrate into the system. Costs can be kept in check by scaling identity life cycle management activities efficiently across various applications and network resources and by employing as few staff as possible to manage IT applications.
Besides the challenges stemming from the use of manual processes to manage multiple
data repositories, other identity synchronization issues include:
• Reducing the costs associated with managing large numbers of identity stores.
• Providing the ability to expand the organization's people and IT resources without a corresponding increase in IT staff.
• Increasing employee productivity by being able to find the right information about other users.
• Meeting regulatory requirements associated with privacy and access controls.
• Remembering to use more than one user ID.
Total Cost of Ownership of Identity and Access
Management
IAM is an expensive investment. Besides the recommendations above, auditors can
share the following tips with their IT department to help reduce the total cost of
ownership of IAM activities:
Support costs are usually the largest portion of total ownership costs, followed by
software and hardware costs.
Incident Response
Incident response is an organized approach to addressing and managing the aftermath of a
security breach or attack (also known as an incident). The goal is to handle the situation in a
way that limits damage and reduces recovery time and costs. An incident response plan
includes a policy that defines, in specific terms, what constitutes an incident and provides a
step-by-step process that should be followed when an incident occurs.
According to the SANS Institute, there are six steps to handling an incident most effectively:
1. Preparation: The organization educates users and IT staff of the importance of updated
security measures and trains them to respond to computer and network security incidents
quickly and correctly.
2. Identification: The response team is activated to decide whether a particular event is, in fact,
a security incident. The team may contact the CERT Coordination Center, which tracks
Internet security activity and has the most current information on viruses and worms.
3. Containment: The team determines how far the problem has spread and contains the
problem by disconnecting all affected systems and devices to prevent further damage.
4. Eradication: The team investigates to discover the origin of the incident. The root cause of
the problem and all traces of malicious code are removed.
5. Recovery: Data and software are restored from clean backup files, ensuring that no
vulnerabilities remain. Systems are monitored for any sign of weakness or recurrence.
6. Lessons learned: The team analyzes the incident and how it was handled, making
recommendations for better future response and for preventing a recurrence.
Incident Response Plan
An incident response plan (IRP) is a set of written instructions for detecting, responding
to and limiting the effects of an information security event.
An incident response plan can benefit an enterprise by outlining how to minimize the
duration of and damage from a security incident, identifying participating stakeholders,
streamlining forensic analysis, hastening recovery time, reducing negative publicity and
ultimately increasing the confidence of corporate executives, owners and
shareholders. The plan should identify and describe the roles/responsibilities of the
incident response team members who are responsible for testing the plan and putting it
into action. The plan should also specify the tools, technologies and physical resources
that must be in place to recover breached information.
Cyber Security Incident Response Team
There are various types of CSIRTs. An internal CSIRT is assembled as part of a parent organization, such as a government, a corporation, a university or a research network. National CSIRTs (one type of internal CSIRT), for example, oversee incident handling for an entire country. Typically, internal CSIRTs gather periodically throughout the year for
proactive tasks such as DR testing, and on an as-needed basis in the event of a security
breach. External CSIRTs provide paid services on either an on-going or as-needed basis.
CERT (Computer Emergency Readiness Team) lists the following among the roles of
CSIRT members:
As teams increased their capability and scope, they began to expand their activities to
include more proactive efforts. These efforts included looking for ways to
• prevent incidents and attacks from happening in the first place by securing and
hardening their infrastructure
• train and educate staff and users on security issues and response strategies
• actively monitor and test their infrastructure for weaknesses and vulnerabilities
• share data where and when appropriate with other teams
As organizations become more complex and incident management capabilities such
as CSIRTs become more integrated into organizational business functions, it is clear that
incident management is not just the application of technology to resolve computer
security events. It is also the development of a plan of action, a set of processes that
are consistent, repeatable, of high quality, measurable, and understood within the
constituency. To be successful this plan should
The OODA Loop
Developed by US Air Force military strategist John Boyd, the OODA loop stands for
Observe, Orient, Decide, and Act.
Observe
Tools and Tactics – Vulnerability Analysis; SIEM Alerts; Application Performance
Monitoring; IDS Alerts; Netflow Tools; Traffic Analysis; Log Analysis
Questions to Ask – What does normal activity look like on my network? How can I find and categorize events or user activity that aren’t normal, and which of them require my attention now? Finally, how can I fine-tune my security monitoring infrastructure?
Orient
Tools and Tactics – Security Research; Incident Triage; Situational Awareness
Questions to Ask – Is your company preparing for a new software package or planning
layoffs? Have you or anyone else in the wild seen attacks from this particular IP address
before? Do you know what the root cause is? How large is the scope and impact?
Key Takeaways – In this phase of incident response methodology, it’s important to try
and think like the attacker so that you can orient your defense strategies against the
latest attack tools and tactics. These are always changing so make sure you have the
latest threat intelligence for your security monitoring tools. This will ensure that your tools
are capturing the right information and providing accurate context.
Decide
Tools and Tactics – Hard copy documentation (pen, notebook and clock), your
company’s corporate security policy
Questions to Ask – Once you have all the facts, then it’s time to ask yourself and your
team how to act.
Key Takeaways – In this phase of incident response methodology, catalog all areas of
your incident response process. Perhaps one of the most important areas to document here is communication around data collection and the decision-making process.
Act
Tools and Tactics – System backup and recovery tools; data capture and forensics
analysis tools; patch management and other systems management, security awareness
training tools and programs
Questions to Ask – How can I quickly remedy the affected systems and get them back
online? How can this be prevented in the future? What are ways that we can educate
users so these things don’t happen again? Should we fine-tune our business process
based on these lessons?
Intrusion Detection
Intrusion detection (ID) is a type of security management system for computers and
networks. An ID system gathers and analyzes information from various areas within a
computer or a network to identify possible security breaches, which include both
intrusions (attacks from outside the organization) and misuse (attacks from within the
organization). ID uses vulnerability assessment (sometimes referred to as scanning),
which is a technology developed to assess the security of a computer system or
network.
Types of Intrusion Detection
Physical IDS
Physical intrusion detection is the act of identifying threats to physical systems. Physical
intrusion detection is most often seen as physical controls put in place to ensure CIA. In
many cases physical intrusion detection systems act as prevention systems as well.
Examples of Physical intrusion detections are Security Guards; Security Cameras; Access
Control Systems (Card, Biometric); Firewalls; Man Traps; Motion Sensors
Signature Based IDS
A signature-based IDS monitors packets on the network and compares them against a database of signatures, or attributes, of known malicious threats. This is similar to the way most antivirus software detects malware. The issue is that there
will be a lag between a new threat being discovered in the wild and the signature for
detecting that threat being applied to your IDS. During that lag time your IDS would be
unable to detect the new threat.
Anomaly Based IDS
An IDS which is anomaly based will monitor network traffic and compare it against an
established baseline. The baseline will identify what is “normal” for that network- what
sort of bandwidth is generally used, what protocols are used, what ports and devices
generally connect to each other- and alert the administrator or user when traffic is
detected which is anomalous, or significantly different, than the baseline.
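A minimal sketch of this idea in Python, using made-up bytes-per-minute figures for the baseline and a simple standard-deviation threshold, might look like the following:

import statistics

# Hypothetical bytes-per-minute observations used to establish the baseline.
baseline = [12000, 11500, 13000, 12500, 11800, 12200, 12800, 11900]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(bytes_this_minute, threshold=3.0):
    """Flag traffic that deviates from the learned baseline by more than N standard deviations."""
    return abs(bytes_this_minute - mean) > threshold * stdev

for observed in (12100, 95000):      # the second value simulates a sudden traffic spike
    print(observed, "anomalous" if is_anomalous(observed) else "normal")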
Passive IDS
A passive IDS simply detects and alerts. When suspicious or malicious traffic is detected
an alert is generated and sent to the administrator or user and it is up to them to take
action to block the activity or respond in some way.
Reactive IDS
A reactive IDS will not only detect suspicious or malicious traffic and alert the
administrator, but will take pre-defined proactive actions to respond to the threat.
Typically this means blocking any further network traffic from the source IP address or
user.
Intrusion Prevention System
Intrusion Prevention Systems (IPS) extended IDS solutions by adding the ability to block
threats in addition to detecting them, and have become the dominant deployment option for IDS/IPS technologies.
Unlike an IDS, an IPS performs two functions: first it tries to prevent an intrusion, and, if that fails, it then detects the intrusion:
Prevention
The IPS often sits directly behind the firewall and it provides a complementary layer of
analysis that negatively selects for dangerous content. Unlike its predecessor
the Intrusion Detection System (IDS)—which is a passive system that scans traffic and
reports back on threats—the IPS is placed inline (in the direct communication path
between source and destination), actively analyzing and taking automated actions on
all traffic flows that enter the network. Specifically, these actions include:
As an inline security component, the IPS must work efficiently to avoid degrading
network performance. It must also work fast because exploits can happen in near real-
time. The IPS must also detect and respond accurately, so as to eliminate threats and
false positives (legitimate packets misread as threats).
Detection
The IPS has a number of detection methods for finding exploits, but signature-based
detection and statistical anomaly-based detection are the two dominant mechanisms.
The IPS was originally built and released as a standalone device in the mid-2000s. This, however, predated today's implementations, which are now commonly integrated into Unified Threat Management (UTM) solutions (for small and medium-sized companies) and next-generation firewalls (at the enterprise level).
How IDS works
IDS systems can use different methods for detecting suspected intrusions. The two most
common broad categories are by pattern matching and detection of statistical
anomalies.
Pattern matching
Pattern matching is used to detect known attacks by their "signatures," or the specific
actions that they perform. It is also known as signature-based IDS or misuse detection.
The IDS looks for traffic and behavior that matches the patterns of known attacks. The
effectiveness is dependent on the signature database, which must be kept up to date.
The biggest problem with pattern matching is that it fails to catch new attacks for which
the software doesn't have a defined signature in its database.
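A minimal sketch of the matching idea in Python appears below; the two signatures are invented toy patterns rather than entries from any real signature database, and real signatures also consider protocol fields, offsets and flow state.

import re

# Toy signature database: name -> pattern, invented for this example.
SIGNATURES = {
    "directory-traversal": re.compile(rb"\.\./\.\./"),
    "sql-injection":       re.compile(rb"union\s+select", re.IGNORECASE),
}

def match_signatures(payload: bytes):
    # Return the names of all signatures that match the payload.
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

print(match_signatures(b"GET /index.php?id=1 UNION SELECT password FROM users"))
print(match_signatures(b"GET /images/logo.png"))   # no match: unknown attacks slip through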
Statistical anomaly
Anomaly-based detection watches for deviations from normal usage patterns. This
requires first establishing a baseline profile to determine what the norm is, then
monitoring for actions that are outside of those normal parameters. This allows you to
catch new intrusions or attacks that don't yet have a known signature.
• Metric model
• Neural network
• Machine learning classification
A problem with anomaly-based IDS is the higher incidence of false positives, because
behavior that is unusual will be flagged as a possible attack even if it's not.
Where the IDS fits in your security plan
Edge or Front-end firewall is the first line of defense in protecting the network against
intruders, and it will likely have its own intrusion detection capability, although it may
detect and prevent only a limited number of known attacks/intrusions. A network-
based IDS is often placed between the edge firewall and a back-end firewall that
protects the internal network from the publicly accessible network in between.
Placing the IDS in this location allows it to do its job on all traffic that gets through the
edge firewall and provides an extra layer of protection for the DMZ, which is the most
vulnerable part of your network since it contains your public servers such as Internet-
accessible Web servers, DNS servers, front-end mail servers, etc.
Putting the IDS in front of the edge firewall would result in a greater load on the IDS,
since it would respond to many scans, probes and attack attempts that could
otherwise be filtered out by the firewall. Also, the huge number of alerts might lead to
an "IDS who cried wolf" situation in which administrators would start ignoring the alerts
when many of them don't lead to real attacks.
IDS can also be placed behind the back-end firewall to detect intrusions on the internal
LAN.
A multi-layered approach
The best security is afforded by using more than one IDS (for example, an IDS in the DMZ
and another on the internal network) and by using both network and host-based IDS.
Host-based IDS can be installed on critical servers for multi-layered protection.
Incident response
The detection of intrusions is only the first step in making an organization more secure
and protecting against intruders. The real key is what happens after the intrusion is
detected: your incident response plan.
To be effective, response must be as immediate as possible. That's why your IDS needs
to include notification features and you need to set them up so that the alerts get to
the proper people as quickly as possible after an intrusion is detected.
The incident response team should practice the following incident response procedures:
Log Analysis & Management
Hackers are inventing new and increasingly sophisticated ways to break into corporate
information systems, and companies must respond with more effective ways to protect
their vital corporate information systems, networks, and data. Among the most reliable,
accurate, and proactive tools in the security arsenal are the event and audit logs
created by network devices.
Advantages of Logging
• The logs provide vital inputs for managing computer security incidents, both for incident prevention and for incident response
Types of Event Logs
Application log – Any event logged by an application. These events are determined by the developers while developing the application. E.g., an error while starting an application gets recorded in the Application log.
System log – Any event logged by the operating system. E.g., failure to start a drive during start-up is logged under the System log.
Security log – Any event that concerns the security of the system. E.g., valid and invalid logins and logoffs, file deletions, etc. are logged under this category.
Directory Service log – Records events of Active Directory. This log is available only on domain controllers.
DNS Server log – Records events for DNS servers and name resolutions. This log is available only on DNS servers.
File Replication Service log – Records events of domain controller replication. This log is available only on domain controllers.
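As a small illustration, the Python sketch below assumes a hypothetical CSV export named events.csv with 'log', 'event_id' and 'level' columns (real exports will differ) and tallies entries by log and severity so the busiest categories surface first.

import csv
from collections import Counter

counts = Counter()
with open("events.csv", newline="") as f:        # assumed export file and layout
    for row in csv.DictReader(f):
        counts[(row["log"], row["level"])] += 1

# Print the noisiest log/severity combinations first.
for (log_name, level), total in counts.most_common():
    print(f"{log_name:12} {level:8} {total}")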
Key to a successful Log Analysis
Findings from project research revealed that effective logging can save time and
money in case of a cyber security incident – and that it can also be very helpful as part
of a defense (or prosecution) in a court case. The keys to successful log analysis and maintenance are therefore:
Logging Challenges
Mainframe Security
Mainframes typically have multiple processors, and they can be connected in a cluster and operate in a distributed computing system. However, the distinguishing feature of a
mainframe is that it can run independently as a “centralized cluster” by dividing itself
internally to work on problems in a parallel or multi-tasking way for extended periods of
time, even years.
An important benefit offered by this design is that expensive reliability features are
needed in only one server (as compared to being built in to many smaller servers). Also,
the physical “footprint” of a mainframe is much smaller than that of a distributed server
farm, and therefore is less expensive from an environmental perspective (that is, the
amount of power, cooling, and floor space needed is much less). Mainframes can
therefore be more cost-effective in solving the same business problems over the long
term.
Mainframes are usually larger than most servers because of the necessary redundancy
of design and components that allow the computer to deliver high availability as well
as vertical and horizontal scalability (the ability to increase the capacity of the
computer without replacing the entire unit). Also, mainframe components such as hot-
pluggable processors, disks, interface adapters such as network cards or cryptographic
engines, and even the power supply, can all be replaced or upgraded without taking
the server offline.
Why mainframe security?
Barry Schrager, the founder of Mainframe Data Security, has written numerous pieces
on the subject. He cites these statistics in a recent LinkedIn article:
• 71% of all Fortune 500 companies have their core business on the mainframe.
• 23 of the world’s top 25 retailers use a mainframe.
• 92% of the top 100 banks use a mainframe.
• 10 out of 10 of the top insurers use a mainframe.
• More than 225 state and local governments worldwide rely on a mainframe.
• 9 of the top 10 global life and health insurance providers process their high-volume transactions on mainframes.
With the widespread use of mainframes today, it is absolutely necessary that they have
excellent security. Every day, millions of transactions pass through mainframes; with poor
security, this can lead to the loss of massive amounts of money and data. Mainframe
security is a must for business continuity and has continuously evolved over the years to
where it is today. When the mainframe became more networked with other devices
and connected to end users on computers other than the original “dumb terminal,” its
security really broadened as the traditional physical security was no longer enough.
One rarely hears about a mainframe being involved in a major data security breach,
but there was the infamous TJX Companies Inc. hacking case, the largest data security
breach to date. In 2007, the retailer announced the discovery of a computer system's
breach and the possible loss of millions of credit card records. As the world would learn
later, the breach involved more than 45 million customer records and had gone
undetected for a number of years.
Mainframes are often perceived as so impenetrable that no one knows for sure what goes on inside them.
Since the advent of mainframes, security paradigms have changed dramatically. With
the inclusion of privacy considerations in the information security discipline, the new
paradigm forces us to deal with risks that apply to any and all computing platforms
including mainframes.
Machine Learning Security- Adversarial Learning
Adversarial learning is a novel research field that lies at the intersection of machine
learning and computer security. It aims at enabling the safe adoption of machine
learning techniques in adversarial settings like spam filtering, computer security,
and biometric recognition.
The problem is motivated by the fact that machine learning techniques have not been
originally designed to cope with intelligent and adaptive adversaries, and, thus, in
principle, the whole system security may be compromised by exploiting specific
vulnerabilities of learning algorithms through a careful manipulation of the input data.
Types of attacks:
Evasion attacks
Evasion attacks are the most common kind of attack encountered in adversarial settings during system operation. For instance, spammers and hackers
often attempt to evade detection by obfuscating the content of spam emails and
malware code. In the evasion setting, malicious samples are modified at test time to
evade detection, that is, to be misclassified as legitimate. No influence over the
training data is possible.
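A minimal, purely illustrative sketch of the idea in Python is shown below; the "classifier" is a toy keyword-weight spam score invented for this example, not any real product, but it shows how an attacker who can modify samples at test time slips below the decision threshold without touching the training data.

# Toy linear spam filter: a weighted keyword score with a fixed threshold.
WEIGHTS = {"free": 2.0, "winner": 3.0, "click": 1.5, "meeting": -1.0, "report": -1.0}
THRESHOLD = 3.0

def spam_score(text):
    words = text.lower().split()
    return sum(WEIGHTS.get(w, 0.0) for w in words)

original = "winner click free"                          # scores 6.5 -> flagged as spam
evasive = "winner click fr ee meeting report report"    # obfuscated token plus benign padding

for msg in (original, evasive):
    label = "spam" if spam_score(msg) >= THRESHOLD else "legitimate"
    print(f"{spam_score(msg):>5}  {label}: {msg}")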
Poisoning
Machine learning algorithms are often re-trained on data collected during
operation to adapt to changes in the underlying data distribution. For instance, an
Intrusion Detection System (IDS) may be re-trained on a set of samples (TR) collected
during network operation. Within this scenario, an attacker may poison the training
data by injecting carefully designed samples to eventually compromise the whole
learning process. Poisoning may thus be regarded as an adversarial contamination
of the training data.
Machine Learning In Cyber Security
According to Matt Wolff, Chief Data Scientist at Cylance, when it comes to cyber
security, there are two reasons why Machine Learning is a growing trend in this field:
The collection and storage of large amounts of useful data points is already well
underway in cyber security. It would be difficult to find a security analyst who is not
currently overwhelmed by the vast amount of raw data that is collected every day
in mature environments. There even exist a plethora of tools designed to help sort,
slice, and mine this data in a somewhat automated fashion to help the analyst
along in their day-to-day activities.
Advantages of machine learning
With a machine learning approach, many of these tasks can be automated, and
even deployed in real time to catch these activities before any damage is done. For
example, a well-trained machine learning model will be able to identify unusual
traffic on the network, and shut down these connections as they occur. A well-
trained model would also be able to identify new samples of malware that can
evade human generated signatures, and perhaps quarantine these samples before
they can even execute. In addition, a machine learning model trained on the
standard operating procedure of a given endpoint may be able to identify when
the endpoint itself is engaging in odd behavior, perhaps at the request of a
malicious insider attempting to steal or destroy sensitive information.
In the past, security products attempted to ‘correlate’ data to discern patterns and
meaning. Instead, today we perform link analysis to evaluate relationships or
connections between data nodes. Key relationships can be identified among various
types of data nodes or objects, things we might think of as organizations, people,
transactions, and so on.
Machine learning is what enables us to bring together huge volumes of data that is
generated by normal user activity from disparate, even obscure, sets of data -- to
identify relationships that span time, place and actions. Since machine learning can be
simultaneously applied to hundreds of thousands of discrete events from multiple data
sets, “meaning” can be derived from behaviors and used as an early warning detection
or prevention system.
The ultimate test for a machine-learning model is validation error on new data. In other
words, machine learning is looking to match new data with what it’s seen before, and
not to test it to disprove, reject or nullify an expected outcome. Since machine learning
uses an iterative, automated approach, it can reprocess data until a robust pattern is
found. This allows it to go beyond looking for “known” or “common” patterns.
Machine learning’s ability to automatically detect changes over time that inform
network behavioural profiles of what is and isn’t normal traffic also makes it well-
suited to helping the enterprise adapt to new forms of attacks without requiring
human intervention. In conjunction with neural network machine learning models
and their evolutionary programming adaptation process it is possible to iteratively
create networks that become stronger at adapting to new problems, including
aggressive automated invasions.
There is more value in using multistage machine-learning analysis and actual data in an
effort to determine which machine learning model will work best for detecting real
security events on any one particular network. Processing data streams from various
subsystems (data transmission frequency measurements over time, for instance, or
protocols in a network stream that identify affiliated applications and infrastructure
devices) using a variety of machine learning models, and then comparing the learned
data to the original raw data, lets an enterprise grade each data stream to reveal
which models provide the highest predictability of anomaly detection for that distinct
network. Machine learning models may run the gamut from association rule learning,
to sparse dictionary learning, to Bayesian fields and artificial neural networks.
Ideally, a data stream can be mastered using unsupervised learning techniques. This
approach learns the features of a data set, and classifies it into a “cluster” of similar
data–either normal or abnormal. This is in contrast to supervised learning, which requires
that sample data for which the outcome already is known be used for training.
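As a minimal sketch of that idea (Python with scikit-learn, using synthetic made-up flow features rather than real network data), the example below clusters flows into two groups and treats the small cluster as the abnormal one for triage; the feature choice and cluster count are assumptions for the illustration.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical flow features: (bytes transferred, connection duration in seconds).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[2000, 5], scale=[300, 1], size=(200, 2))
abnormal = rng.normal(loc=[90000, 0.5], scale=[5000, 0.1], size=(5, 2))   # bulk, exfil-like flows
flows = np.vstack([normal, abnormal])

# Two clusters: the smaller one is treated as the "abnormal" group for review.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(flows)
labels = km.labels_
minority = np.argmin(np.bincount(labels))
print("flows flagged for review:", np.where(labels == minority)[0])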
The industry really is just at the start of applying machine learning to the growing cyber-
security challenges of detecting and analysing increasingly sophisticated and targeted
threats. The future will see neural networks trained in one data set become the input to
others, thereby creating deep networks by extending the knowledge of high-level
networks. The industry also will increase its use of hard AI–the simulation of biologic
thinking in computers–in detection engines.
Network Security Monitoring
NSM is not an IDS, although it relies on IDS-like products as part of an integrated data collection and analysis suite. NSM involves collecting the full spectrum of data types (event, session, full content and statistical) needed to identify and validate intrusions.
The NSM model tries to give more control to the analyst by providing enough
background to make independent decisions.
NSM is more concerned with network auditing than with real-time identification of
intrusions. Although encryption denies the analyst the ability to see packet contents, it
doesn't deny analysts the ability to see traffic patterns. Simply knowing who talked to whom, and when, is still valuable information, and that is how NSM copes with encryption.
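A minimal sketch in Python of such a "who talked to whom" summary, using made-up flow records rather than a real collector feed, could look like this:

from collections import defaultdict

# Hypothetical flow records: (timestamp, source, destination, bytes). Even when the
# payload is encrypted, these fields remain visible to the analyst.
flows = [
    ("2017-03-01T09:00", "10.1.2.3", "198.51.100.7", 4200),
    ("2017-03-01T09:05", "10.1.2.3", "198.51.100.7", 380000),
    ("2017-03-01T09:07", "10.1.9.9", "203.0.113.4", 900),
]

talkers = defaultdict(lambda: {"sessions": 0, "bytes": 0, "first": None, "last": None})
for ts, src, dst, size in flows:
    entry = talkers[(src, dst)]
    entry["sessions"] += 1
    entry["bytes"] += size
    entry["first"] = entry["first"] or ts
    entry["last"] = ts

for (src, dst), e in talkers.items():
    print(f"{src} -> {dst}: {e['sessions']} sessions, {e['bytes']} bytes, {e['first']} .. {e['last']}")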
NSM Cycle
While the NSM cycle flows from collection to detection and then analysis, this is not the order in which the industry's emphasis on these items has evolved. Looking back,
the industry began its foray into what is now known as network security monitoring with
a focus on detection. In this era came the rise of intrusion detection systems such as
Snort that are still in use today. Organizations began to recognize that the ability to
detect the presence of intruders on their network, and to quickly respond to the
intrusions, was just as important as trying to prevent the intruder from breaching the
network perimeter in the first place. These organizations believed that you should
attempt to collect all of the data you can so that you could perform robust detection
across the network. Thus, detection went forth and prospered, for a while.
As the size, speed, and function of computer networks grew, organizations on the
leading edge began to recognize that it was no longer feasible to collect 100% of
network data. Rather, effective detection relies on selectively gathering data relevant
to your detection mission. This ushered in the era of collection, where organizations
began to really assess the value received from ingesting certain types of data. For
instance, while organizations had previously attempted to perform detection against
full packet capture data for every network egress point, now these same organizations
began to selectively filter out traffic to and from specific protocols, ports, and services. In
addition, these organizations are now assessing the value of data types that come with
a decreased resource requirement, such as network flow data. This all worked towards
performing more efficient detection through smarter collection. This brings us up to
speed on where we stand in the modern day.
Best practices for successful NSM
Types of monitoring
• Network tap – physical device which relays a copy of packets to an NSM server
• Host NIC – configured to watch all network traffic flowing on its segment
• Serial port tap – physical device which relays serial traffic to another port, usually
requires additional software to interpret data
• Alert/log data – triggers from IDS tools, tracking user logins, etc.
Challenges in Network Monitoring
• For starters, each of the seven layers of the OSI networking model has its own
responsibilities, which call for separate methods of monitoring and security for
each layer. Network monitoring is seemingly simple — but in reality, it’s a very
complex process. Mixing traditional network monitoring with security monitoring
further complicates things from the design perspective, for network architects,
network operations teams, and the systems administrators who manage it.
• The most important challenge in network monitoring is the vast amount of data gathered
by the monitoring tool, and the amount of time required to assimilate the
information and apply intelligence to it, in order to achieve actionable decisions.
• Another challenge is caused by the unprecedented growth of a network, a
result of the organisation’s growth due to business expansion or company
mergers. The bigger the network, the tougher it is to visualise the scale of network
infrastructure. This can result in performance bottlenecks as well as security
vulnerabilities. Finally, failure to incorporate proper monitoring tools is also a
challenge to be addressed by senior IT management staff. It has been observed
that relying purely on commercial products actually limits a firm’s ability to bring
diversification into the network monitoring process.
Challenges in Network Security
Next Generation Firewall
NGFWs are integrated network security platforms that consist of in-line deep packet
inspection (DPI) firewalls, IPS, application inspection and
control, SSL/SSH inspection, website filtering and quality of service (QoS)/bandwidth
management to protect networks against the latest in sophisticated network attacks
and intrusion.
NGFWs are not traditional firewalls
Enterprises need to make an NGFW purchase decision based on need, risk and future
growth. Don't buy a Cadillac if a Chevy pickup truck will do the job.
Although firewalls are placed between the Internet and an internal network inside
the DMZ, attackers have found ways to circumvent these controls and cause
considerable damage before detection. Meanwhile, traditional firewalls often
necessitate having to install separate IPS, Web application firewalls (WAFs), secure
coding standards based on the Open Web Application Security Project's (OWASP) Top
10 vulnerabilities, strong encryption at the Web layer (SSL/TLS), and antivirus and
malware prevention.
Having to deploy, manage and monitor this unwieldy number of network security
products to mitigate multiple heterogeneous attack vectors is challenging, to say the
least. In addition, this diverse array of security products can compromise each other's
functionality at the expense of broadband resource usage, response times, and
monitoring and maintenance requirements.
NGFWs are not UTMs
Unified threat management systems (UTMs) are all-in-one network security platforms
that are meant to provide simplicity, streamlined installation and use, as well as the
ability to concurrently update all security functions. These systems, like NGFWs, clearly
have a major advantage over acquiring a variety of network security technologies, as
there's no need to maintain disparate security products and figure out how they all
work together.
UTMs were originally designed for small to medium-sized businesses (SMBs), not large
organizations, however. NGFWs, on the other hand, are generally more expansive and
work to secure the networks of businesses from the size of an SMB to large enterprise
environments. Unlike UTMs, most NGFWs, for example, offer threat intelligence, a degree
of mobile device security, data loss prevention and an open architecture that allows
clients to use regular expressions (regex) to tailor application control and even some
firewall rule definitions.
Optimizing NGFW functionality
First, NGFWs must be comprehensive, consolidating the full range of network security functions described above. Second, NGFWs must be flexible, which also means scalable, so that features can be modularized and activated based on need.
And third, NGFWs must be easy to use, with a fairly intuitive management interface that
provides a clean and easy-to-read dashboard, feature activations, rule set definitions,
configuration analysis, vulnerability assessments, activity reports and alerts.
Today's NGFWs make up a cadre of network security products that purport to offer
these three characteristics. Although NGFW services are listed with commonly named
features (e.g., DLP, application control and threat intelligence), a close look shows
some variation between NGFW vendor products. For example, those NGFWs that offer
mobile device security will admit this is not a mobile device management (MDM)
product. They can identify mobile devices and operating systems, provide policy
enforcement based on apps, users and content, and even extend a VPN tunnel to
prevent malware, but they do not provide total device management as offered by
MDM products.
Meanwhile, some NGFW features are more robust and advanced than others. So it is
incumbent upon customers to carefully vet the features of individual NGFW products to
determine the best fit for them. For example, not all NGFWs provide two-factor
authentication or mobile device security, but then, not every customer needs those
features. And while there are those NGFWs that say they support such features, some
might require additional modules or products to make them work.
How NGFWs are sold
Most NGFWs are appliance-based, but some are available as virtual products (software) -- which enterprises can install on their own servers -- and some are delivered from the cloud as software as a service. Most are modular, such that an
enterprise can choose to purchase and activate features commensurate with their
specific needs and risks.
Another important point about NGFWs: never pay retail price. NGFW vendors want the business, and their job is to demonstrate the differentiators that set them apart from
competitors.
Enterprises should also never buy the best or most technologically advanced product.
They need to make an NGFW purchase decision based on need, risk and future growth.
Don't buy a Cadillac if a Chevy pickup truck will do the job. Just make sure to know
how long that pickup truck is needed, and ensure it'll be sufficient to maintain the
organization's anticipated pace of growth.
We live in exciting times. Conversations with top NGFW vendors reveal features under
development that will make the IT department's life easier while further strengthening
network security. These companies are also resolved to develop NGFW products that
are better tailored to the network security requirements of SMBs, large enterprises and
everything in between.
NGFW vendors are also spending a considerable amount of time and expense in R&D to keep
pace with today's sophisticated attacks and meet the comprehensive, flexible and easy-to-use
requirements outlined above. One of the major differentiators that, ironically, all of these major
NGFW companies purport to be working on is threat intelligence that is current, open,
continuous, adaptive and automatic.
159 | P a g e
Password Management
The majority of people use very weak passwords and reuse them on different websites.
Passwords -- especially those not supported by two-step verification -- are the last line of
defence against prying eyes.
160 | P a g e
Common ways of stealing passwords
Applying passwords at various points of data access is no longer sufficient to safeguard
the data. Keeping default or easily guessed passwords creates a threat to the data.
Many people realize that strong passwords are a must to avoid being hacked, but they
fail to understand that hackers and crackers are becoming more sophisticated by the
day.
Through social engineering or password guessing, they can easily break into systems.
One should therefore change passwords frequently and stay up to date with the latest
attack techniques in use.
Guessing-
Various programs have been developed to guess a person's password using whatever
personal information can be gathered about him or her: names, date of birth, a pet's
name, a license number and so on. These programs can even test words spelled
backwards. That is why it is advised to keep any personal information out of one's
password.
Other programs run each and every word of the dictionary against a username in the
hope of finding a match (a dictionary attack). It is therefore advisable to stay away
from dictionary words, even in the remotest language.
Brute-Force attack-
A brute-force attack tries huge numbers of character combinations by exhaustive trial
and error until valid credentials are found; long, complex passwords and account
lockout make this approach impractical.
Phishing-
Phishing scams aim to trick a person, through instant messages or e-mails, into providing
personal information, often by creating a sense of urgency or excitement so that the
recipient responds. The best way to avoid being fooled is not to click on any such
suspicious links.
161 | P a g e
Shoulder surfing
Passwords are not always stolen online. The hacker may simply be standing behind you,
watching as you type your password. One should be careful and develop the habit of
typing the password quickly and without looking at the keyboard.
Cyber security is only as strong as its "weakest link", and the password is usually the part
of the chain that is most easily broken. Creating and maintaining strong passwords is
therefore essential. The following guidelines help:
• Passwords are case sensitive, so use a mixture of upper- and lower-case letters.
• The password should contain numerals and special characters placed randomly to
make it strong. Spread digits, symbols and capital letters throughout the middle of
your password, not just at the beginning or end.
• A longer password is usually better than a more random one, as long as the
password is at least 12-15 characters long.
• Avoid common sports and pop-culture references regardless of length. The more
common a password is, the less secure it will be, so something no one else would
choose is a better option.
• Passwords are only as secure as the sites to which they are entrusted. Limit the
potential fallout by using a unique password everywhere, or use a password
manager.
• Admins who set password policies are better off requiring longer passwords and
letting users keep them for longer, rather than requiring them to change
passwords every one or two months. This encourages users to have stronger
162 | P a g e
passwords and avoids simple schemes like incrementing a number at the end of
the password each time they have to reset it.
• Never give passwords to friends, even if they're really good friends. A friend can,
perhaps even accidentally, pass your password along to others or even
become an ex-friend and abuse it.
• If a password is in the dictionary, there is a chance someone will guess it. There is
even software that criminals use to guess words found in dictionaries (a small
strength-check sketch follows this list).
• Password-manager programs or web services let you create a different, very strong
password for each of your sites; you then only have to remember the one password
that unlocks the program or secure site that stores your passwords for you.
• The best password in the world might not do you any good if someone is looking
over your shoulder while you type or if you forget to log out on a cybercafé
computer. Malicious software, including "keyboard loggers" that record all of
your keystrokes, has been used to steal passwords and other information. To
increase security, make sure you're using up-to-date anti-malware software and
that your operating system is up-to-date.
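Much of this advice can be checked programmatically. Below is a minimal, illustrative Python sketch (not a complete strength meter); the small wordlist is only a stand-in for a real dictionary file.

    import string

    wordlist = {"password", "welcome", "letmein", "qwerty"}   # stand-in dictionary

    def is_strong(pw):
        # length, mixed case, digits, symbols, and not a plain dictionary word
        return (len(pw) >= 12
                and any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)
                and pw.lower() not in wordlist)

    print(is_strong("Summer2016"))       # False: too short and no symbol
    print(is_strong("Tr0ub4dor&3x12!"))  # True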
In his guide to “mastering the art of passwords”, Dennis O'Reilly suggests creating a
system that both allows you to create complex passwords and remember them.
For example, create a phrase like "I hope the Giants will win the World Series in 2016!"
Then, take the initials of each word and all numbers and symbols to create your
password. So, that phrase would result in this: IhtGwwtWSi2016!
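As a rough illustration of that system, the sketch below (one possible implementation, not O'Reilly's own tool) keeps the first letter of each word plus any digits and symbols:

    def phrase_to_password(phrase):
        out = []
        for word in phrase.split():
            out.append(word[0])                                   # initial of each word
            out.extend(ch for ch in word[1:] if not ch.isalpha()) # keep digits and symbols
        return "".join(out)

    print(phrase_to_password("I hope the Giants will win the World Series in 2016!"))
    # prints IhtGwwtWSi2016!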
163 | P a g e
Patch Management
The rise of widespread worms and malicious code targeting known vulnerabilities on
unpatched systems, and the resultant downtime and expense they bring, is probably
the biggest reason so many organizations are focusing on patch management. Along
with these threats, increasing concern around governance and regulatory compliance
has pushed enterprises to gain better control and oversight of their information assets.
It's obvious that patch management is a critical issue. What is also clear is the main
objective of a patch management program: to create a consistently configured
environment that is secure against known vulnerabilities in operating system and
application software.
The process used for patch management differs depending on the IT infrastructure of a
company. A large company with a large infrastructure typically automates patch
management, which reduces the need for manual implementation.
Small to mid-sized companies often choose to outsource their patch management to a
managed IT services provider. A managed service provider can perform patches
remotely.
There are a number of vulnerabilities that can endanger your network at any time.
Patch management is a form of preventative maintenance that helps ensure your
infrastructure's security. In some cases, a vulnerability exists for which a patch has not
yet been released. A patch management system monitors your network and alerts
technicians to exploits so that they can take action to prevent an attack even while a
patch is still being created.
164 | P a g e
Automated Patch Management
Patch management is critical to business operations; however, it also tends to be
considered solely a responsibility of the IT department. While this is partially true, patch
management within an organization's infrastructure cannot succeed without the
understanding and support of senior management.
Instead of waiting to address the issue when a problem occurs, it is important to plan for
and implement patch management in advance. The key concerns for many companies
are the number of patches and the manpower needed to deploy them. However, new
technologies, along with vendors that offer patch management services, have made
patch implementation and distribution easier and more cost-effective.
Patch management services can help to keep your network secure while reducing
costs.
Patch testing
Validate a given patch in a test environment to provide assurance that all necessary
packages, prerequisites, co-requisites and conflicts have been identified before deploying
to production.
Patches can be deployed in a test environment to troubleshoot problems before
patches are deployed in the enterprise.
165 | P a g e
Patch approval
Maintain strict control over what is being changed, which vulnerability the fix addresses,
which services and applications are affected, and the priority. This requires an approval
process.
In a Windows environment, for example, the WSUS interface can be used to approve
patches so that the automated patch management solution creates software packages
only for patches that have been approved.
Patch deployment
Prioritize the urgency of the patch deployment, schedule the deployment, build the
installable unit, and deploy the patch.
An automated process generates software packages and activity plans, and then
notifies the Administrator when they are ready to be submitted. The process relies on
IBM Tivoli Configuration Manager components and services, such as Software
Distribution and Activity Planner.
Patch verification
Validate that the patch was successfully applied on all eligible endpoints.
The automated patch management command line can be used to retrieve patch
status information. Patch installations can also be monitored from the Activity Plan
Monitor graphical user interface where activity plans are submitted.
Compliance management
Update the configuration baseline definitions to include the new patches, regularly
analyse to assure that all endpoints remain in compliance, identify improvements and
customize the patch management process accordingly.
Automated patch management is a dynamic process designed to identify any missing
patches in your environment and to automatically deploy the patches that cover the
current vulnerabilities.
166 | P a g e
6 steps to an effective Patch Management System
1. Develop an up-to-date inventory of all production systems, including OS types (and
versions), IP addresses, physical location, custodian and function. Commercial tools
ranging from general network scanners to automated discovery products can
expedite the process (see Resources, below). You should inventory your network
periodically.
2. Devise a plan for standardizing production systems to the same version of OS and
application software. The smaller the number of versions you have running, the
easier your job will be later.
3. Make a list of all the security controls you have in place--routers, firewalls, IDSes, AV,
etc.--as well as their configurations. Don't forget to include system hardening or
nonstandard configurations in your list of controls. This list will help you decide how
to respond to a vulnerability alert (if at all).
4. Compare reported vulnerabilities against your inventory/control list. There are two
key components to this. First, you need a reliable system for collecting vulnerability
alerts. And second, you need to separate the vulnerabilities that affect your systems
from those that don't (a small sketch of this comparison follows the list).
5. Classify the risk. Assess the vulnerability and likelihood of an attack in your
environment.
6. Apply the patch! You've determined which patches you need to install. Now comes
the hard part: deploying them without disrupting uptime or production.
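To illustrate step 4, the sketch below compares a hypothetical, made-up inventory of installed software versions against a list of advisories; a real environment would draw both from scanners and vulnerability feeds.

    # Hypothetical data for illustration only
    inventory = {
        "web01": {"os": "Ubuntu 20.04", "openssl": "1.1.1f"},
        "db01":  {"os": "Ubuntu 22.04", "openssl": "3.0.2"},
    }
    advisories = [
        {"package": "openssl", "affected_versions": {"1.1.1f"}, "patch": "1.1.1g"},
    ]

    for host, software in inventory.items():
        for adv in advisories:
            installed = software.get(adv["package"])
            if installed in adv["affected_versions"]:
                print(f"{host}: {adv['package']} {installed} is vulnerable -> apply {adv['patch']}")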
167 | P a g e
Penetration testing
Penetration testing is a type of security testing used to probe the insecure areas of a
system or application. The goal of this testing is to find all the security vulnerabilities that
are present in the system being tested. A vulnerability is the risk that an attacker can
disrupt the system, or gain unauthorized access to it or to any data contained within it.
Vulnerabilities are usually introduced by accident during the software development and
implementation phases. Common vulnerabilities include design errors, configuration
errors, software bugs and so on.
168 | P a g e
Role and Responsibilities of Penetration Testers:
A penetration tester's job is to find and safely exploit security weaknesses before real
attackers do. Penetration testing matters because:
• Financial sectors such as banks, investment banking and stock trading exchanges
want their data to be secured, and penetration testing is essential to ensure security.
• If a software system has already been hacked, the organization wants to determine
whether any threats are still present in the system, so as to avoid future hacks.
• Proactive penetration testing is the best safeguard against hackers.
169 | P a g e
Types of Penetration testing:
The type of penetration test selected usually depends on the scope and on whether the
organization wants to simulate an attack by an employee or network administrator
(internal sources) or by external sources. There are three types of penetration testing:
In black box penetration testing, the tester has no knowledge of the systems to be
tested and is responsible for collecting information about the target network or system.
In white box penetration testing, the tester is given full knowledge of the systems and
network to be tested.
In grey box penetration testing, the tester is provided with partial knowledge of the
system. It can be considered an attack by an external hacker who has gained
illegitimate access to an organization's network infrastructure documents.
170 | P a g e
Planning phase
The scope and strategy of the assignment are determined, and existing security policies
and standards are used to define the scope.
Discovery phase
• Collect as much information as possible about the system, including data in the
system, user names and even passwords. This is also called FINGERPRINTING.
• Scan and probe the ports.
• Check for vulnerabilities of the system.
Attack Phase
• Find exploits for the various vulnerabilities. The necessary security privileges are
required to exploit the system.
Reporting Phase
The findings, the risks they pose and recommendations for remediation are
documented and reported.
The prime task in penetration testing is to gather system information. There are two ways
to gather information -
• 'One to one' or 'one to many' model with respect to host: the tester performs
techniques in a linear way against either one target host or a logical grouping of
target hosts (e.g. a subnet).
• 'Many to one' or 'many to many' model: the tester utilizes multiple hosts to execute
information gathering techniques in a random, rate-limited and non-linear way.
171 | P a g e
Manual Penetration vs. automated penetration testing
• Manual testing requires expert professionals to run the tests, whereas automated test
tools provide clear reports even with less experienced professionals.
• Manual testing requires Excel and other tools to track progress, whereas automated
testing uses centralized, standard tools.
• In manual testing, results can vary from test to test; this is not the case with
automated tests.
172 | P a g e
Pen test strategies include the following.
External testing targets a company's externally visible servers or devices, such as domain
name servers, e-mail servers, web servers or firewalls, to determine whether an outside
attacker can get in and how far they can get once access is gained.
Internal testing is performed from within the organization's technology environment. This
test mimics an attack on the internal network by a disgruntled employee or an
authorized visitor having standard access privileges. The focus is to understand what
could happen if the network perimeter were successfully penetrated or what an
authorized user could do to penetrate specific information resources within the
organization's network. The techniques employed are similar in both types of testing
although the results can vary greatly.
A blind testing strategy aims at simulating the actions and procedures of a real hacker.
Just like a real hacking attempt, the testing team is provided with only limited or no
information concerning the organization, prior to conducting the test. The penetration
testing team uses publicly available information (such as corporate Web site, domain
name registry, Internet discussion board, USENET and other places of information) to
gather information about the target and conduct its penetration tests. Blind testing can
provide a lot of information about the organization (so-called inside information) that
may otherwise have been unknown -- for example, a blind penetration test may
uncover issues such as additional Internet access points, directly connected networks
and publicly available confidential or proprietary information. However, it is more time-
consuming and expensive because of the effort required by the testing team to
research the target.
173 | P a g e
Double blind testing strategy
A double-blind test is an extension of the blind testing strategy. In this exercise, the
organization's IT and security staff are not notified or informed beforehand and are
"blind" to the planned testing activities. Double-blind testing is an important component
of testing, as it can test the organization's security monitoring and incident
identification, escalation and response procedures. As clear from the objective of this
test, only a few people within the organization are made aware of the testing. Normally
it's only the project manager who carefully watches the whole exercise to ensure that
the testing procedures and the organization's incident response procedures can be
terminated when the objectives of the test have been achieved.
Targeted testing or the lights-turned-on approach as it is often referred to, involves both
the organization's IT team and the penetration testing team to carry out the test. There
is a clear understanding of the testing activities and information concerning the target
and the network design. A targeted testing approach may be more efficient and cost-
effective when the objective of the test is focused more on the technical setting, or on
the design of the network, than on the organization's incident response and other
operational procedures. Unlike blind testing, a targeted test can be executed with less
time and effort; the trade-off is that it may not provide as complete a picture
of an organization's security vulnerabilities and response capabilities.
174 | P a g e
Methods used in a penetration test
• Passive research
• USENET (newsgroups)
175 | P a g e
Port scanning can reveal information such as the function of a computer (whether it is a
web server, mail server, etc.) as well as ports that may pose serious security risks, such as
telnet. Port scans should include a number of individual tests, including:
• Connect scan (see the sketch after this list)
• UDP (User Datagram Protocol) and ICMP (Internet Control Message Protocol) scans.
Tools such as nmap can perform this type of scan.
• Dynamic ports used by RPC (Remote Procedure Call), which should be scanned using
a tool such as RPCinfo.
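As a toy illustration of the connect scan listed above, the following sketch attempts a plain TCP handshake against a few ports of a host you are authorized to test (192.0.2.10 is a documentation-only address used here as a placeholder); purpose-built tools such as nmap are far more capable.

    import socket

    target = "192.0.2.10"          # placeholder address -- scan only systems you are authorized to test
    for port in (22, 23, 80, 443):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1.0)
        result = s.connect_ex((target, port))   # 0 means the TCP handshake succeeded
        print(f"port {port}: {'open' if result == 0 else 'closed/filtered'}")
        s.close()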
Spoofing
Spoofing involves creation of TCP/IP packets using somebody else's Internet addresses
and then sending the same to the targeted computer making it believe that it came
from a trusted source. It is the act of using one machine to impersonate another.
Routers use the "destination IP" address in order to forward packets through the Internet,
but ignore the "source IP" address. The destination machine only uses that source IP
address when it responds back to the source. This technique is used in internal and
external penetration testing to access computers that have been instructed to only
reply to specific computers. This can result in sensitive information being released to
unauthorised systems. IP spoofing is also an integral part of many network attacks that
do not need to see responses (blind spoofing).
Network sniffing
Sniffing is extensively used in internal testing where the sniffer or the computer in
promiscuous mode is directly attached to the network enabling capturing of a great
deal of information. Sniffing can be performed by a number of commercial tools such
as Ethereal, Network Associates SnifferPro and Network Instruments Observer.
176 | P a g e
Trojan attack
Trojans are malicious programs that are typically sent into a network as e-mail
attachments or transferred via IM chat rooms. These programs run in stealth mode and
are installed on the client computer without the user's knowledge. Once installed, they
can open remote control channels to attackers or capture information. A penetration
test attempts to send specially prepared Trojans into a network.
Brute force attack
A brute force attack involves trying a huge number of alphanumeric combinations and
exhaustive trial and error methods in order to find legitimate authentication credentials.
The objective behind this time consuming exercise is to gain access to the target
system. Brute force attacks can overload a system and can possibly stop it from
responding to legitimate requests. Additionally, if account lockout is being used, brute
force attacks may close the account to legitimate users.
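The size of the search space explains why this exercise is so time consuming. A quick back-of-the-envelope calculation, assuming passwords drawn only from the 62 alphanumeric characters (26 lower case, 26 upper case, 10 digits):

    charset = 62
    for length in (6, 8, 10, 12):
        print(f"{length} characters: {charset ** length:,} combinations")
    # 8 characters already gives roughly 2.18e14 combinations; adding symbols and
    # length grows the space exponentially, which is why long, complex passwords
    # and account lockout make brute force impractical.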
Vulnerability scanning/analysis
Vulnerability scanning uses automated tools to check the target systems for known
weaknesses in operating systems, services and applications.
Scenario analysis
Once vulnerability scanning has been done and weaknesses identified, the next step
is to perform scenario testing. This testing aims at exploiting identified security
weaknesses to perform a system penetration that will produce a measurable result,
such as stolen information, stolen usernames and passwords or system alteration. This
level of testing assures that no false positives are reported and makes risk assessment of
vulnerabilities much more accurate. Many tools exist to assist exploit testing, although
the process is often highly manual. Exploit testing tends to be the final stage of
penetration testing.
177 | P a g e
178 | P a g e
Privileged Access Management (PAM)
A privileged user is someone who has administrative access to critical systems. For
instance, the individual who can set up and delete email accounts on Microsoft
Exchange Server is a privileged user. The word is not accidental. Like any privilege, it
should only be extended to trusted people. Only those seen as responsible can be
trusted with “root” privileges like the ability to change system configurations, install
software, change user accounts or access secure data. Of course, from a security
perspective, it never makes sense to unconditionally trust anyone. That’s why even
trusted access needs to be controlled and monitored. And, of course, privileges can be
revoked at any time.
PAM makes it harder for attackers to penetrate a network and obtain privileged
account access. PAM adds protection to privileged groups that control access across
a range of domain-joined computers and applications on those computers. It also adds
more monitoring, more visibility, and more fine-grained controls so that organizations
can see who their privileged administrators are and what they are doing. PAM gives
organizations more insight into how administrative accounts are used in the
environment.
A PAM solution offers a secure, streamlined way to authorize and monitor all privileged
users for all relevant systems. PAM lets you:
• Grant privileges to users only for systems on which they are authorized.
• Grant access only when it’s needed and revoke access when the need expires.
• Avoid the need for privileged users to have or need local/direct system
passwords.
• Centrally and quickly manage access over a disparate set of heterogeneous
systems.
• Create an unalterable audit trail for any privileged operation.
179 | P a g e
Types of Privileged Accounts
180 | P a g e
Types of PAM Tools
• Super user privilege management (SUPM) tools: Allow users granular, context-
driven and/or time-limited use of super user privileges
181 | P a g e
Components of PAM solutions
Privileged Access Management solutions vary in their architectures, but most offer the
following components working in concert:
Access Manager
This PAM module governs access to privileged accounts. It is a single point of policy
definition and policy enforcement for privileged access management. A privileged user
requests access to a system through the Access Manager. The Access Manager knows
which systems the user can access and at what level of privilege. A super admin can
add/modify/delete privileged user accounts on the Access Manager. This approach
reduces the risk that a former employee will retain access to a critical system.
Password Vault
The best PAM systems prevent privileged users from knowing the actual passwords to
critical systems. This prevents a manual override on a physical device, for example.
Instead, the PAM system keeps these passwords in a secure vault and opens access to a
system for the privileged user once he has cleared the Access Manager.
Session Manager
Access control is not enough. You need to know what a privileged user actually did
during an administrative session. A Session Manager tracks actions taken during a
privileged account session.
182 | P a g e
Challenges faced with Privileged Accounts
183 | P a g e
PAM Setup
Prepare:
Identify which groups in your existing forest have significant privileges. Recreate these
groups without members in the bastion forest.
Protect:
Set up lifecycle and authentication protection, such as Multi-Factor Authentication
(MFA), for when users request just-in-time administration. MFA helps prevent
programmatic attacks from malicious software or following credential theft.
Operate:
After authentication requirements are met and a request is approved, a user account
gets added temporarily to a privileged group in the bastion forest. For a pre-set amount
of time, the administrator has all privileges and access permissions that are assigned to
that group. After that time, the account is removed from the group.
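The just-in-time pattern described above can be sketched in a few lines. This is only a conceptual illustration of temporary, expiring group membership, not the API of any real PAM product or of MIM.

    import time

    privileged_group = {}          # user -> expiry timestamp

    def grant_temporary_admin(user, duration_seconds=3600):
        # add the account to the privileged group for a pre-set amount of time
        privileged_group[user] = time.time() + duration_seconds

    def is_privileged(user):
        expiry = privileged_group.get(user)
        if expiry is None:
            return False
        if time.time() > expiry:           # the time window has elapsed
            del privileged_group[user]     # remove the account from the group
            return False
        return True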
Monitor:
PAM adds auditing, alerts, and reports of privileged access requests. You can review
the history of privileged access, and see who performed an activity. You can decide
whether the activity is valid or not and easily identify unauthorized activity, such as an
attempt to add a user directly to a privileged group in the original forest. This step is
important not only to identify malicious software but also for tracking "inside" attackers.
184 | P a g e
Advantages of PAM
• Additional logging: Along with the built-in MIM workflows, there is additional
logging for PAM that identifies the request, how it was authorized, and any
events that occur after approval.
185 | P a g e
PAM Best Practices
• Identify all privileged accounts and their owners in your IT infrastructure. Review
business, operational and regulatory requirements to classify these accounts
based on the level of risk they present in your environment.
• Grant only the minimum level of privileges required to carry out a task, and limit
the time when they can be used whenever possible.
186 | P a g e
Public Key Infrastructure (PKI)
A public key infrastructure (PKI) supports the distribution and identification of public
encryption keys, enabling users and computers to both securely exchange data
over networks such as the Internet and verify the identity of the other party.
Without PKI, sensitive information can still be encrypted (ensuring confidentiality) and
exchanged, but there would be no assurance of the identity (authentication) of the
other party. Any form of sensitive data exchanged over the Internet is reliant on PKI for
security.
Elements of PKI
A typical PKI consists of hardware, software, policies and standards to manage the
creation, administration, distribution and revocation of keys and digital certificates.
Digital certificates are at the heart of PKI as they affirm the identity of the certificate
subject and bind that identity to the public key contained in the certificate.
• A trusted party, called a certificate authority (CA), acts as the root of trust and
provides services that authenticate the identity of individuals, computers and
other entities
• A registration authority, often called a subordinate CA, certified by a root CA to
issue certificates for specific uses permitted by the root
• A certificate database, which stores certificate requests and issues and revokes
certificates
• A certificate store, which resides on a local computer as a place to store issued
certificates and private keys
187 | P a g e
Certificates and Certification Authorities
For public-key cryptography to be valuable, users must be assured that the other
parties with whom they communicate are “safe”—that is, their identities and keys are
valid and trustworthy. To provide this assurance, all users of a PKI must have a registered
identity. These identities are stored in a digital format known as a public key certificate.
Certification Authorities (CAs) represent the people, processes, and tools to create
digital certificates that securely bind the names of users to their public keys. In creating
certificates, CAs act as agents of trust in a PKI. As long as users trust a CA and its
business policies for issuing and managing certificates, they can trust certificates issued
by the CA. This is known as third-party trust. CAs create certificates for users by digitally
signing a set of data that includes the following information (and additional items):
• The user’s name in the format of a distinguished name (DN). The DN specifies the
user’s name and any additional attributes required to uniquely identify the user
(for example, the DN could contain the user’s employee number).
• A public key of the user. The public key is required so that others can encrypt for
the user or verify the user’s digital signature.
• The validity period (or lifetime) of the certificate (a start date and an end date).
• The specific operations for which the public key is to be used (whether for
encrypting data, verifying digital signatures, or both).
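The fields just listed map directly onto the structure of an X.509 certificate. As a rough sketch, the snippet below builds and signs a certificate with the third-party Python cryptography package (a reasonably recent version is assumed; names such as "Example Root CA" and "Alice Example" are placeholders):

    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # CA and user key pairs (a real CA key would be long-lived and heavily protected)
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Root CA")])
    user_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Alice Example")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(user_name)                      # the user's distinguished name
        .issuer_name(ca_name)                         # the issuing CA's name
        .public_key(user_key.public_key())            # the user's public key
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())          # validity period: start
        .not_valid_after(datetime.utcnow() + timedelta(days=365))   # validity period: end
        .sign(ca_key, hashes.SHA256())                # the CA's signature
    )
    # Key-usage extensions (encryption, signature verification or both) would be
    # added with add_extension() before signing.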
The CA's signature on a certificate allows any tampering with the contents of the
certificate to be easily detected; it acts like a tamper-detection seal on a bottle of pills.
As long as the CA's signature on a certificate can be verified, the
certificate has integrity. Since the integrity of a certificate can be determined by
verifying the CA’s signature, certificates are inherently secure and can be distributed in
a completely public manner (for example, through publicly-accessible directory
systems).
Users retrieving a public key from a certificate can be assured that the public key is
valid. That is, users can trust that the certificate and its associated public key belong to
the entity specified by the distinguished name. Users also trust that the public key is still
within its defined validity period. In addition, users are assured that the public key may
be used safely in the manner for which it was certified by the CA.
188 | P a g e
Digital Certificates
By analogy, a certificate can be considered the ID card issued to a person. People use
ID cards such as a driver's license or passport to prove their identity. A digital certificate
does the same basic thing in the electronic world, but with one difference: digital
certificates are not only issued to people; they can be issued to computers, software
packages or anything else that needs to prove its identity in the electronic world.
• Digital certificates are based on the ITU standard X.509 which defines a standard
certificate format for public key certificates and certification validation. Hence
digital certificates are sometimes also referred to as X.509 certificates.
• The public key pertaining to the user or client is stored in the digital certificate by the
Certification Authority (CA), along with other relevant information such as client
details, expiration date, permitted usage, issuer, etc.
• The CA digitally signs all of this information and includes the digital signature in the
certificate.
Anyone who needs assurance about the public key and associated information of a
client carries out the signature validation process using the CA's public key. Successful
validation assures them that the public key given in the certificate belongs to the entity
whose details are given in the certificate.
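Continuing the sketch above (same cryptography package and RSA keys assumed), signature validation on a certificate object looks roughly like this; it checks only the CA's signature, not the validity period or revocation status:

    from cryptography.hazmat.primitives.asymmetric import padding

    # ca_cert and user_cert are x509.Certificate objects, e.g. loaded with
    # x509.load_pem_x509_certificate(); verify() raises InvalidSignature on tampering.
    ca_cert.public_key().verify(
        user_cert.signature,                  # the CA's signature on the certificate
        user_cert.tbs_certificate_bytes,      # the signed portion of the certificate
        padding.PKCS1v15(),
        user_cert.signature_hash_algorithm,   # e.g. SHA-256
    )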
189 | P a g e
Certificate repositories and certificate distribution
A PKI typically publishes certificates to a repository, most commonly a directory, from
which users and applications can retrieve them. In addition, the directories that support
certificate distribution can store other
organizational information. As discussed in the next section, the PKI can also use the
directory to distribute certificate revocation information.
190 | P a g e
Support for Non-Repudiation
191 | P a g e
Client-side software
When discussing requirements for PKIs, businesses often neglect the requirement for
client-side software. (For instance, many people only focus on the CA component
when discussing PKIs). Ultimately, however, the value of a PKI is tied to the ability of users
to use encryption and digital signatures. For this reason, the PKI must include client-side
software that operates consistently and transparently across applications on the
desktop (for example, email, Web browsing, e-forms, file/folder encryption). A
consistent, easy-to-use PKI implementation within client-side software lowers PKI
operating costs. In addition, client-side software must be technologically enabled to
support all of the elements of a PKI discussed earlier in this section. The following list
summarizes the requirements client-side software must meet to ensure that users in a
business receive a usable, transparent (and thus, acceptable) PKI.
To ensure users are protected against loss of data, the PKI must support a system for
backup and recovery of decryption keys. With respect to administrative costs, it is
unacceptable for each application to provide its own key backup and recovery.
Instead, all PKI-enabled client applications should interact with a single key backup
and recovery system. The interactions between the client-side software and the key
backup and recovery system must be secure, and the interaction method must be
consistent across all PKI-enabled applications.
To provide basic support for non-repudiation, the client-side software must generate
the key pairs used for digital signature. In addition, the client-side software must ensure
that the signing keys are never backed up and remain under the users’ control at all
times. This type of support must be consistent across all PKI-enabled applications.
192 | P a g e
Key pairs must be updated automatically and transparently across the organization. It is
unacceptable for users to have to know that their key pairs require
updating. To meet this requirement across all PKI-enabled applications, the client-side
software must update key pairs transparently and consistently.
To enable users to easily access all data encrypted for them (regardless of when it was
encrypted), PKI-enabled applications must have access to users' key histories. The client-
side software must be able to securely recover users' key histories.
To minimize the costs of distributing certificates, all PKI-enabled applications must use a
common, scalable certificate repository.
193 | P a g e
Risk Analysis
Risk analysis is the process of defining and analyzing the dangers to individuals,
businesses and government agencies posed by potential natural and human-caused
adverse events. In IT, a risk analysis report can be used to align technology-related
objectives with a company's business objectives. A risk analysis report can be either
quantitative or qualitative.
Qualitative risk analysis, which is used more often, does not involve numerical
probabilities or predictions of loss. Instead, the qualitative method involves defining the
various threats, determining the extent of vulnerabilities and devising countermeasures
should an attack occur.
The process of conducting a risk analysis is very similar to identifying an acceptable risk
level. Essentially, you do a risk analysis on the organization as a whole to determine the
acceptable risk level. This is then your baseline to compare all other identified risks to
determine whether the risk is too high or if it is under the established acceptable risk
level.
194 | P a g e
Why Risk Analytics?
Today, risk analytics techniques make it possible to measure, quantify, and even predict
risk with more certainty than ever before. That’s a big deal for organizations that have
relied heavily on the opinions of leaders at the business unit level to monitor, assess, and
report risk. Even for executives with sound intuition, it was virtually impossible to construct
an enterprise level view of risk spanning many different parts of the business.
This is where analytics excels. It helps establish a baseline for measuring risk across the
organization by pulling together many strands of risk into one unified system and
offering executives clarity in identifying, viewing, understanding, and managing risk.
195 | P a g e
Steps to conduct Risk Analysis
Risk analysis provides a cost/benefit comparison, which compares the annualized cost
of safeguards to protect against threats with the potential cost of loss. A safeguard, in
most cases, should not be implemented unless the annualized cost of loss exceeds the
annualized cost of the safeguard itself. This means that if a facility is worth $100,000, it
does not make sense to spend $150,000 trying to protect it.
Step one: Assign value to assets
The value placed on assets (including information) is relative to the parties involved,
what work was required to develop it, how much it costs to maintain, what damage
would result if it were lost or destroyed, and what benefit another party would gain if it
were to obtain it. If a company does not know the value of the information and the
other assets it is trying to protect, it does not know how much money and time it should
spend on protecting them.
The value of an asset should reflect all identifiable costs that would arise if there were
an actual impairment of the asset. If a server costs $4,000 to purchase, this value should
not be input as the value of the asset in a business risk assessment. Rather, the cost of
replacing or repairing it, the loss of productivity and the value of any data that may be
corrupted or lost, need to be accounted for to properly capture the amount the
company would lose if the server were to fail for one reason or another.
Understanding the value of an asset is the first step to understanding what security
mechanisms should be put in place and what funds should go toward protecting it. A
very important question is how much it could cost the company to not protect the
asset.
196 | P a g e
Step two: Identify vulnerabilities and threats
Once the assets have been identified and assigned values, all of the vulnerabilities and
associated threats need to be identified for each asset or group of assets. The IRM team
needs to identify the vulnerabilities that could affect each asset's integrity, availability or
confidentiality requirements. All of the relevant vulnerabilities need to be identified and
documented so that the necessary countermeasures can be implemented.
Since there is a large number of vulnerabilities and threats that can affect the different
assets, it is important to be able to properly categorize them. The goal is to determine
which threats and vulnerabilities could cause the most damage so that the most critical
items can be taken care of first.
What physical damage could the threat cause, and how much would that cost?
How much productivity loss could the threat cause, and how much would that cost?
What is the single loss expectancy (SLE) for each asset and each threat?
This is just a small list of questions that should be answered. The specific questions will
depend upon the types of threats the team uncovers.
The team then needs to calculate the probability and frequency of the identified
vulnerabilities being exploited. The team will need to gather information about the
likelihood of each threat taking place from people in each department, past records
and official security resources. If the team is using a quantitative approach, then they
will calculate the annualized rate of occurrence (ARO), which is how many times the
threat can take place in a 12-month period.
197 | P a g e
Step four: Identify countermeasures and determine cost/benefit
The team then needs to identify countermeasures and solutions to reduce the potential
damages from the identified threats.
A security countermeasure must make good business sense, meaning that it is cost-
effective and that its benefit outweighs its cost. This requires another type of analysis:
a cost/benefit analysis.
For example, if the ALE of the threat of a hacker bringing down a Web server is $12,000
prior to implementing the suggested safeguard, $3,000 after implementing the
safeguard, and the annual cost of maintenance and operation of the safeguard is
$650, then the value of this safeguard to the company is $8,350 each year.
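As a quick check of those figures, and of how ALE itself is derived (ALE = SLE x ARO, the single loss expectancy multiplied by the annualized rate of occurrence discussed earlier):

    # ALE = SLE x ARO; the SLE and ARO values here are illustrative assumptions
    # chosen to reproduce the $12,000 figure from the text.
    sle = 6000                      # single loss expectancy per incident (assumed)
    aro = 2                         # expected incidents per year (assumed)
    ale_before = sle * aro          # 12000, ALE before the safeguard
    ale_after = 3000                # ALE after the safeguard (from the text)
    annual_cost = 650               # yearly maintenance and operation of the safeguard

    value = (ale_before - ale_after) - annual_cost
    print(value)                    # 8350, matching the $8,350 per year above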
The cost of a countermeasure is more than just the amount that is filled out on the
purchase order. The following items need to be considered and evaluated when
deriving the full cost of a countermeasure:
• Product costs
• Design/planning costs
• Implementation costs
• Environment modifications
• Compatibility with other countermeasures
• Maintenance requirements
• Testing requirements
• Repair, replacement or update costs
• Operating and support costs
• Effects on productivity
198 | P a g e
Once all of these items are added up, the real cost of a countermeasure can be far higher
than its purchase price; if, for example, it comes to $18,000 while the total potential loss
was calculated at $9,000, we went over budget by 100% when applying this
countermeasure to the identified risk. Some of these costs may be hard or impossible to
identify before the countermeasure is acquired, but an experienced risk analyst would
account for many of these possibilities.
It is important that the team knows how to calculate the actual cost of a
countermeasure to properly weigh it against the benefit and savings the
countermeasure is supposed to provide.
The risk analysis team should have clearly defined goals that it is seeking. The following is
a short list of what generally is expected from the results of a risk analysis:
Although this list looks short, there is usually an incredible amount of detail under each
bullet item. This report is presented to senior management, which will be concerned
with possible monetary losses and the necessary costs to mitigate these risks. Although
the reports should be as detailed as possible, there should be executive abstracts so
that senior management may quickly understand the overall findings of the analysis.
199 | P a g e
Benefits of Risk Analytics
200 | P a g e
SAP ERP Security
SAP Enterprise Central Component (also known as SAP ERP and, earlier, as SAP R/3) is at
the heart of enterprise resource management. It is undoubtedly one of the major elements
of any business, as it enables effective management, storage and processing of such
critical information as personal data of employees, financial and tax reports, information
about material resources and more, depending on the modules enabled. Unauthorized
access to this system can result in disruption of key business processes and data
corruption.
Enterprise resource planning (ERP) systems are the backbone of many large
organizations and are critical to successfully running business operations.
However, many ERP systems are very complex with a diverse set of stakeholders
throughout the enterprise. They have also been in place for decades in some
enterprises and may have accumulated many years of technical debt -- making ERP
security difficult and costly to maintain.
201 | P a g e
SAP ERP Security Risks
There are multiple risks related to SAP ERP systems. Some of them are:
202 | P a g e
SAP vulnerabilities
In its report, Onapsis researchers found more than 95% of SAP systems are exposed to
vulnerabilities that could lead to a detrimental compromise of enterprise data and
processes.
These issues were identified through hundreds of security assessments of SAP systems.
According to the Onapsis report, the top three most common attack vectors on SAP
systems that threaten ERP security are:
• pivoting from lower-security systems, such as customer or supplier portals, to critical
SAP systems;
• creating backdoor users on SAP portals and platforms; and
• exploiting insecure database protocol configurations.
All three of these issues contribute to the technical debt in securing an SAP system.
In the first vector, for example, a lower-security customer Web portal that is exposed to
the Internet could be set up to allow customers to connect from anywhere to place
orders. However, this customer Web portal can be used as part of an attack, with the
attacker pivoting from the lower-security system to other more critical systems, and
eventually the entire SAP system.
In the second attack vector, customer and supplier portals could potentially be
infiltrated; attackers could create backdoor users on the SAP portals and other platforms
and use them to pivot onward and attack the internal network.
In the third attack vector, an attacker can exploit insecure database protocol
configurations that would allow them to execute commands on the operating system.
At this point, the attacker has complete access to the operating system and can
potentially modify or disrupt any information stored in the database.
Note that these are all common attack methods and should not be surprising to any
information security professional.
203 | P a g e
SAP Security audit checklist
While enterprises need to include all systems in an information security program, the
specific resources devoted to securing a particular asset should correspond to the
system's value to the organization. These asset values should be established through
a business impact analysis.
Given the critical nature of SAP systems, one major concern for ongoing security
controls has been the potential for downtime caused by security changes. If an SAP
system can't be "down" for business reasons, plans should be in place for how to apply
patches or make other security changes without disrupting operations. This might include
ensuring a high-
availability system is in place, such as a backup system that automatically takes over
when the primary system is being patched or is having changes made.
204 | P a g e
Another consideration to keep in mind is that other security technologies that should
already be in place -- such as an intrusion detection system and monitoring tools -- can
be specifically tuned to monitor an SAP system.
Again, enterprises need to ensure all systems are part of their information security
program -- including SAP systems. Excluding SAP systems in the past is what has allowed
for these basic security vulnerabilities to still be present in SAP systems today.
Some of these vulnerabilities have been well known in the information security
community for decades, so applying the processes and fixes found outside SAP systems
can significantly improve SAP security and prevent more severe incidents from affecting
critical business operations.
205 | P a g e
Software Development Security
The software development life cycle, or SDLC, encompasses all of the steps that an
organization follows when it develops software tools or applications. Organizations that
incorporate security in the SDLC benefit from products and applications that are secure
by design.
In an organization that's been around for several years or more, the SDLC is well-
documented and usually includes the steps that are followed and in what order, the
business functions and/or individuals responsible for carrying out the steps and
information about where records are kept.
206 | P a g e
A typical SDLC model contains the following main functions:
207 | P a g e
Getting the right security information to the right people
Many people in the entire development process need all kinds of information, including
security information, in a form that is useful to them. Here is the type of information that
is required during each phase of the SDLC.
If you are wondering why maintenance is omitted from the life cycle example here, it is
because maintenance is just an iteration of the life cycle: when a change is needed,
the entire process starts all over again. All of the validations that are present the first
time through the life cycle are needed every time thereafter.
Finally, one may say that these changes represent a lot of extra work in a development
project. This is not the case: these additions do not take much extra time. They are but
small additions that reap large benefits later on.
208 | P a g e
Secure SDLC
A Secure SDLC process ensures that security assurance activities such as penetration
testing, code review, and architecture analysis are an integral part of the development
effort. The primary advantages of pursuing a Secure SDLC approach are:
209 | P a g e
Many Secure SDLC models have been proposed, for example:
• MS Security Development Lifecycle (MS SDL): One of the first of its kind,
the MS SDL was proposed by Microsoft in association with the phases of a classic
SDLC.
• NIST 800-64: Provides security considerations within the SDLC. Standards were
developed by the National Institute of Standards and Technology to be
observed by US federal agencies.
• OWASP CLASP (Comprehensive, Lightweight Application Security
Process): Simple to implement and based on the MS SDL. It also maps the
security activities to roles in an organization.
The idea is to have security built in rather than bolted on, maintaining the security
paradigm during every phase, to ensure a secure SDLC.
Phase 1: Requirements
During requirements gathering for a secure SDLC, the first step is to identify applicable
policies and standards and the mandates that the software will need to follow;
compliance is an important factor to incorporate a standard framework, as well as to
ensure audit requirements are met. Next, the compliance requirements can be
mapped to the security controls.
210 | P a g e
Phase 2: Design
An architectural blueprint is now created, taking all the security requirements into
consideration. This defines the entry and exit points in addition to defining how the
business logic would interact with the different layers of the software.
In keeping with the secure SDLC paradigm, threat modeling is performed, which puts
the software through various scenarios of misuse to assess the security robustness. In the
process, various avenues to tackle potential problems emerge. One must keep in mind
that the application operates and communicates in a distributed environment rather
than on just a single system.
Phase 3: Coding
The best practices in the coding phase of a secure SDLC revolve around educating the
developers. Instead of focusing only on language- or platform-specific problems,
developers need an insight into how security vulnerabilities are created. These include
not just technical vulnerabilities, but also problems from a business logic perspective.
Phase 4: Testing
For a secure SDLC, outsourcing of software testing can be a good idea, not only for cost
savings but, more importantly, to leverage the specialized testing knowledge, skills and
experience of the experts at the company to which the testing is outsourced.
When outsourcing, legalities like data sensitivity must be considered, and access to
production databases should be avoided. Data should be masked or sanitized and the
scope of the testing pre-defined.
211 | P a g e
Phase 5: Deployment
In the final deployment phase of a secure SDLC, the different components of the
platform interact with each other. Platform security cannot be ignored, for while the
application itself might be secure, the platform it operates on might have exploitable
flaws. Platforms thus need to be made secure by turning off unwanted services, running
the machines on the least privilege principle, and making sure there are security
safeguards such as IDS, firewalls, and so on.
212 | P a g e
Unified Threat Management
Security expert Karen Scarfone defines UTM products as firewall appliances that not
only guard against intrusion but also perform content filtering, spam filtering,
application control, Web content filtering, intrusion detection and antivirus duties; in
other words, a UTM device combines functions traditionally handled by multiple
systems. These devices are designed to combat all levels of malicious activity on the
computer network.
An effective UTM solution delivers a network security platform comprised of robust and
fully integrated security and networking functions along with other features, such as
security management and policy management by a group or user. It is designed to
protect against next generation application layer threats and offers a centralized
management through a single console, all without impairing the performance of the
network.
213 | P a g e
Advantages of using UTM
Convenience and ease of installation are the two key advantages of unified threat
management security appliances. There is also much less human intervention required
to install and configure them appliances. Other advantages of UTM are listed below:
Reduced complexity
The integrated all-in-one approach simplifies not only product selection but
also product integration, and ongoing support as well.
Ease of deployment
Since there is much less human intervention required, either vendors or the customers
themselves can easily install and maintain these products.
Integration capabilities
UTM appliances can easily be deployed at remote locations without the on-site help of
any security professional. In this scenario a plug-and-play appliance can be installed
and managed remotely. This kind of management is synergistic with large, centralized
software-based firewalls.
Troubleshooting ease
When a box fails, it is easier to swap out than troubleshoot. This process gets the node
back online quicker, and a non-technical person can do it, too. This feature is especially
important for remote offices without dedicated technical staff on site.
Some of the leading UTM solution providers are Check Point, Cisco, Dell,
Fortinet, HP, IBM and Juniper Networks.
214 | P a g e
Challenges of using UTM
UTM products are not the right solution for every environment. Many organizations
already have a set of point solutions installed that, combined, provide network security
capabilities similar to what UTMs offer, and there can be substantial costs involved in
ripping and replacing the existing technology to install a UTM replacement. There are also
advantages to using the individual products together, rather than a UTM. For instance,
when individual point products are combined, the IT staff is able to select the best
product available for each network security capability; a UTM can mean having to
compromise and acquire a single product that has stronger capabilities in some areas
and weaker ones in others.
Another important consideration when evaluating UTM solutions is the size of the organization
in which it would be installed. The smallest organizations might not need all the network security
features of a UTM. There is no need for a smaller firm to tax its budget with a UTM if many of its
functions aren't needed. On the other hand, a UTM may not be right for larger, more cyber-
dependent organizations either, since these often need a level of scalability and reliability in
their network security that UTM products might not support (or at least not support as well as a
set of point solutions). Also a UTM system creates a single point of failure for most or all
network security capabilities; UTM failure could conceivably shut down an enterprise, with a
catastrophic effect on company security. How much an enterprise is willing to rely on a UTM is a
question that must be asked, and answered.
215 | P a g e
Web App & Website Security
As most businesses rely on web sites to deliver content to their customers, interact with
customers, and sell products, certain technologies are often deployed to handle the
different tasks of a web site. A content management system like Joomla! or Drupal may
be the solution used to build a robust web site filled with product, or service, related
content. Businesses often turn to blogs using applications like WordPress or forums
running on phpBB that rely on user generated content from the community to give
customers a voice through comments and discussions. ZenCart and Magento are often
the solutions to the e-commerce needs of both small and large businesses who sell
directly on the web. Add in the thousands of proprietary applications that web sites rely
on, and it is clear why securing web applications should be a top priority for any web site
owner, no matter how big or small.
216 | P a g e
The Foundations of Security
AUTHENTICATION
Authentication addresses the question: who are you? It is the process of uniquely
identifying the clients of your applications and services. These might be end users, other
services, processes, or computers. In security parlance, authenticated clients are
referred to as principals.
AUTHORIZATION
Authorization addresses the question: what can you do? It is the process that governs
the resources and operations that the authenticated client is permitted to access.
Resources include files, databases, tables, rows, and so on, together with system-level
resources such as registry keys and configuration data. Operations include performing
transactions such as purchasing a product, transferring money from one account to
another, or increasing a customer's credit rating.
AUDITING
Effective auditing and logging is the key to non-repudiation. Non-repudiation
guarantees that a user cannot deny performing an operation or initiating a transaction.
For example, in an e-commerce system, non-repudiation mechanisms are required to
make sure that a consumer cannot deny ordering 100 copies of a particular book.
CONFIDENTIALITY
Confidentiality, also referred to as privacy, is the process of making sure that data
remains private and confidential, and that it cannot be viewed by unauthorized users
or eavesdroppers who monitor the flow of traffic across a network. Encryption is
frequently used to enforce confidentiality. Access control lists (ACLs) are another means
of enforcing confidentiality.
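As a small illustration of encryption enforcing confidentiality, the sketch below uses symmetric encryption. It assumes the third-party Python package cryptography is installed (pip install cryptography), and the message is a made-up placeholder rather than anything from the report:

# Confidentiality sketch: only holders of the key can read the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store securely; share only with authorized parties
cipher = Fernet(key)

secret = b"customer card ending 4242"
token = cipher.encrypt(secret)       # safe to store or transmit; unreadable without the key
print(token)
print(cipher.decrypt(token))         # recovers the original bytes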
INTEGRITY
Integrity is the guarantee that data is protected from accidental or deliberate
(malicious) modification. Like privacy, integrity is a key concern, particularly for data
passed across networks. Integrity for data in transit is typically provided by using hashing
techniques and message authentication codes.
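A minimal sketch of a message authentication code protecting data in transit is shown below; the shared key and messages are illustrative placeholders:

import hashlib
import hmac

key = b"shared-secret-key"                      # agreed out of band by sender and receiver
message = b"transfer 100 from account A to B"

# The sender computes a tag over the message and sends both.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag; any modification in transit changes it.
print("intact:", hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest()))

tampered = b"transfer 9999 from account A to B"
print("tampered:", hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).hexdigest()))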
AVAILABILITY
From a security perspective, availability means that systems remain available for
legitimate users. The goal for many attackers with denial of service attacks is to crash
an application or to make sure that it is sufficiently overwhelmed so that other users
cannot access the application.
Risks Associated with Web Applications
Web applications allow visitors access to the most critical resources of a web site: the
web server and the database server. As with any software, developers of web applications
spend a great deal of time on features and functionality and dedicate very little time to
security. It’s not that developers don’t care about security; nothing could be further from
the truth. The reason so little time is spent on security is often a lack of
understanding of security on the part of the developer or a lack of time dedicated to
security on the part of the project manager.
For whatever reason, applications are often riddled with vulnerabilities that attackers use
to gain access to either the web server or the database server. From there, any number
of things can happen, as the attacks described in the next section illustrate.
Attacks on Web Applications
• Code injection: hackers find ways to insert malicious executable code into
legitimate traffic sent to an endpoint (a mitigation sketch follows this list)
• Broken authentication and session management: compromising user identities in
a variety of ways
• Cross-site scripting: similar to code injection, but involving scripts instead, drawn
from inappropriate sources
• Insecure direct object references: obtaining file access when it’s not actually
authorized
• Security misconfiguration: a failure of the admin, sometimes as simple as leaving
passwords as defaults
• Sensitive data exposure: failure to shield data in proportion to its business value or
customer sensitivity
• Missing function level access control: failure to verify functions are actually
limited by access rights
• Cross-site request forgery: compromising an unexpected web application by
leveraging validated authentication information
• Components with known vulnerabilities: a vulnerable element, such as a Java
class, hasn’t been patched
• Unvalidated redirects and forwards: sending web users to unexpected sites that
serve hacker interests
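As a concrete illustration of the injection risk at the top of this list, the hedged sketch below shows the difference between pasting user input into a SQL string and passing it as a bound parameter. It uses an in-memory SQLite table, and the table, data, and attacker string are invented for the example:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

attacker_input = "alice' OR '1'='1"

# Vulnerable pattern: attacker-controlled text becomes part of the SQL itself.
unsafe = "SELECT name FROM users WHERE name = '%s'" % attacker_input
print("unsafe:", conn.execute(unsafe).fetchall())                 # returns every row

# Safer pattern: the driver treats the input strictly as data, never as SQL.
safe = "SELECT name FROM users WHERE name = ?"
print("safe:", conn.execute(safe, (attacker_input,)).fetchall())  # returns nothing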
Web application security testing
There are also many commercial solutions designed to automate some of the testing.
“Black box” solutions don’t try to assess application code per se, but instead just treat
the application in a monolithic way. These are typically known as “web application
security scanners,” “vulnerability scanners,” “penetration testing tools,” etc., and work
by simulating a running, active environment. Once installed, they then stress-test an
application for flaws in ways that real-world users presumably would. These flaws, once
exposed in the reports the solution generates, can then be addressed by the
development team.
“White box” solutions, on the other hand, do look into the structure and code of the
application itself, evaluating to some extent how well the engineers who built the
application implemented secure coding best practices. For instance, static analysis (as
described above) can be performed to automatically trace process execution and
predict what should happen in an up-and-running application (without the application
actually being up and running), thus spotting clear application security issues.
Another good testing idea is “fuzzing,” which basically just means hammering an
application with many different kinds of data. That includes data of a completely
inappropriate format for which the application was never designed, as well as random
data that doesn’t make sense because it hasn’t got a format. This is a good way of
revealing web application security flaws in an application via input that a normal
human being (whether a quality assurance tester or a typical user) might never
even imagine, let alone carry out — but a hacker might.
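A toy illustration of the fuzzing idea is sketched below; the parse_quantity() target is a made-up stand-in for a real application entry point, and a real fuzzer would be far more sophisticated about generating inputs and triaging crashes:

import random
import string

def parse_quantity(raw):
    """Pretend application code: expects a small positive integer."""
    value = int(raw)          # raises on non-numeric garbage
    return 100 // value       # raises on zero

def random_input(max_len=12):
    return "".join(random.choice(string.printable) for _ in range(random.randint(0, max_len)))

crashes = []
for _ in range(1000):
    sample = random_input()
    try:
        parse_quantity(sample)
    except Exception as exc:   # record anything unexpected for later triage
        crashes.append((sample, type(exc).__name__))

print(len(crashes), "crashing inputs, for example:", crashes[:3])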
In the case of applications that require a secure log-in process, let’s not forget web
application security basics - it’s wise to try password crackers. These can train a spotlight
on predictable issues, such as the strength of the password the application requires,
whether it’s possible to break the authentication code in any of several commonplace
ways, the minimum time interval between password entry attempts, or how many failed
passwords can be entered before a user is locked out.
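The two controls mentioned above, a minimum interval between attempts and a lockout after repeated failures, can be sketched as follows; the thresholds and the in-memory store are illustrative choices, not prescriptions:

import time

MAX_FAILURES = 5               # lock the account after this many bad passwords
MIN_INTERVAL_SECONDS = 2.0     # minimum time between attempts
_state = {}                    # username -> {"failures": int, "last_attempt": float}

def attempt_login(username, password_ok):
    now = time.monotonic()
    record = _state.setdefault(username, {"failures": 0, "last_attempt": 0.0})

    if record["failures"] >= MAX_FAILURES:
        return "locked out"
    if now - record["last_attempt"] < MIN_INTERVAL_SECONDS:
        return "too fast, slow down"
    record["last_attempt"] = now

    if password_ok:
        record["failures"] = 0
        return "ok"
    record["failures"] += 1
    return "wrong password"

for _ in range(7):
    print(attempt_login("alice", password_ok=False))
    time.sleep(MIN_INTERVAL_SECONDS)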
The Need to Avoid Attacks
With so many web sites running applications, attackers have taken to creating
automated tools that can launch well-coordinated attacks against a number of
vulnerable web sites at once. With this capability, the targets of these malicious hackers
are no longer limited to large corporate web sites. Smaller web sites are just as easily
caught up in the net cast by these automated attacks.
The repercussion of having your web site compromised can be devastating to any
business, no matter what the industry or size of the company. The after-effects of these
attacks include:
• Stolen data
• Compromised user accounts
• Loss of trust with customers and/or visitors
• Damaged brand reputation
• Lost sales revenue
• Your site labeled as a malicious site
• Loss of search engine rankings
Ways to Strengthen Web App Security
Web Services
This section deals with the common issues facing web developers as they work to build
secure web apps, whether that includes Java, PHP, AJAX or other web languages
and/or technologies.
Authentication
This section deals with authentication issues associated with secure web apps, such as
basic/digest authentication, form-based authentication, integrated (SSO)
authentication, etc.
Authorization
This section addresses authorization issues, ensuring a user has the appropriate
privileges to view a resource. Topics such as the principle of least privilege, client-side
authorization tokens, etc. are addressed here.
Session Management
This section addresses topics such as authenticated users having a robust and
cryptographically secure association with their session, applications enforcing
authorization checks and applications avoiding or preventing common web attacks,
such as replay, request forging and man-in-the-middle.
Data Validation
This section deals with applications being robust against all forms of input data, whether
obtained from the user, infrastructure, external entities or databases.
Interpreter Injection
This section addresses application issues so they are secure from well-known parameter
manipulation attacks against common interpreters.
Error Handling, Auditing and Logging
This section deals with designing well-written applications that have dual-purpose logs
and activity traces for audit and monitoring. This makes it easy to track a transaction
without excessive effort or access to the system. They should possess the ability to easily
track or identify potential fraud or anomalies end-to-end.
Distributed Computing
This section deals with synchronization and remote services used by web applications,
and with hardening applications against the attacks they can introduce.
Buffer Overflow
This section addresses buffer overflow issues and how to guard against them.
Administrative Interfaces
This section addresses the secure design and exposure of administrative interfaces.
Cryptography
This section helps to ensure that cryptography is safely used to protect the
confidentiality and integrity of sensitive user data.
Configuration
This section is focused on creating secure web applications which are as well-built and
secure out-of-the-box as possible.
Ideally, threat modeling scenarios are established up front, so the developers and QA
engineers know what to expect and what to work towards.
Deployment
This section deals with the issues surrounding secure deployment of web applications.
Maintenance
This section addresses the ongoing maintenance and patching of deployed web applications.
WAF- Web Application Firewall
Over the past few years, a clear trend has emerged within the information security
landscape; web applications are under attack. “Web applications continue to be a
prime vector of attack for criminals, and the trend shows no sign of abating; attackers
increasingly shun network attacks for cross-site scripting, SQL injection, and many other
infiltration techniques aimed at the application layer.” (Sarwate, 2008) Web
application vulnerabilities can be attributed to many things including poor input
validation, insecure session management, improperly configured system settings and
flaws in operating systems and web server software. Certainly writing secure code is the
most effective method for minimizing web application vulnerabilities. However, writing
secure code is much easier said than done and involves several key issues. First of all,
many organizations do not have the staff or budget required to do full code reviews in
order to catch errors. Second, pressure to deliver web applications quickly can cause
errors and encourage less secure development practices. Third, while products used to
analyze web applications are getting better, there is still a large portion of the job that
must be done manually and is susceptible to human error. Securing an organization’s
web infrastructure takes a defense in depth approach and must include input from
various areas of IT including the web development, operations, infrastructure, and
security teams.
One technology that can help in the security of a web application infrastructure is a
web application firewall. A web application firewall (WAF) is an appliance or server
application that watches HTTP/HTTPS conversations between a client browser and web
server at layer 7. The WAF then has the ability to enforce security policies based upon a
variety of criteria including signatures of known attacks, protocol standards and
anomalous application traffic.
WAF Placement
Appliance-based WAF deployments typically sit directly behind an enterprise firewall
and in front of organizational web servers. Deployments are often done in-line with all
traffic flowing through the web application firewall. However, some solutions can be
“out of band” with the use of a network monitoring port. If network based deployments
are not preferred, organizations have another option. Host or server based WAF
applications are installed directly onto corporate web servers and provide similar
feature sets by processing traffic before it reaches the web server or application.
Security Model
A WAF typically follows either a positive or negative security model when it comes to
developing security policies for your applications. A positive security model only allows
traffic to pass which is known to be good; all other traffic is blocked. A negative
security model allows all traffic and attempts to block that which is malicious. Some
WAF implementations attempt to use both models, but generally products use one or
the other. “A WAF using a positive security model typically requires more configuration
and tuning, while a WAF with a negative security model will rely more on behavioral
learning capabilities.” (Young, 2008)
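As a rough illustration of the negative model, the sketch below allows everything except requests matching known-bad signatures; the three regular expressions are a tiny invented sample, nothing like the rule sets a production WAF ships with:

import re

SIGNATURES = [
    re.compile(r"(?i)<script\b"),            # naive cross-site scripting probe
    re.compile(r"(?i)\bunion\s+select\b"),   # classic SQL injection pattern
    re.compile(r"\.\./"),                    # path traversal attempt
]

def inspect(request_path, request_body=""):
    payload = request_path + " " + request_body
    for signature in SIGNATURES:
        if signature.search(payload):
            return "block"
    return "allow"

print(inspect("/search?q=widgets"))                          # allow
print(inspect("/search?q=1 UNION SELECT password FROM t"))   # block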
Operating Modes
Web Application Firewalls can operate in several distinct modes. Vendor names and
support for different modes vary, so check each product for specific details if a
particular mode is desired. Each mode offers various pros and cons which require
organizations to evaluate the correct fit for their organization.
Reverse Proxy
The full reverse proxy mode is the most common and feature rich deployment in the
web application firewall space. While in reverse proxy mode a device sits in line and all
network traffic passes through the WAF. The WAF has published IP addresses and all
incoming connections terminate at these addresses. The WAF then makes requests to
back end web servers on behalf of the originating browser. This mode is often required
for many of the additional features that a WAF may provide due to the requirement for
connection termination. The downside of a reverse proxy mode is that it can increase
latency which could create problems for less forgiving applications.
Transparent Proxy
When used as a transparent proxy, the WAF sits in line between the firewall and web
server and acts similarly to a reverse proxy, but does not have an IP address. This mode
does not require any changes to the existing infrastructure, but it cannot provide some of
the additional services a reverse proxy can.
Layer 2 Bridge
The WAF sits in line between the firewall and web servers and acts just like a layer 2
switch. This mode provides high performance and requires no significant network changes;
however, it does not provide the advanced services other WAF modes may provide.
Host/Server Based
Host or server based WAFs are software applications which are installed on web servers
themselves. Host based WAFs do not provide the additional features which their
network based counterparts may provide. They do, however, have the advantage of
removing a possible point of failure which network based WAFs introduce. Host based
WAFs do increase load on web servers so organizations should be careful when
introducing these applications on heavily used servers.
WAF Features
WAF appliances are often either add-on components of existing application delivery
controllers or include additional features to improve the reliability and performance of
web applications. These additional features can help make the case for implementing
a WAF for organizations not already taking advantage of such features. Not all WAF
solutions have these features and many are dependent upon the deployment mode
chosen. Typically a reverse-proxy deployment will support each of these features.
Caching
Reducing load on web servers and increasing performance by caching copies of
regularly requested web content on the WAF, thus reducing repeated requests to back
end servers.
Compression
In order to provide for more efficient network transport, certain web content can be
automatically compressed by the WAF and then decompressed by the browser.
SSL Acceleration
Use of hardware based SSL decryption in a WAF to speed SSL processing and reduce
the burden on back-end web servers.
Load Balancing
Spreading incoming web requests across multiple back end web servers to improve
performance and reliability.
Connection Pooling
Reduces back end server TCP overhead by allowing multiple requests to use the same
back end connection.
Implementation, Tuning and Maintenance
Web application firewalls are certainly not a plug and play solution. They require
rigorous testing prior to implementation and regular tuning thereafter.
During the implementation phase, most vendors will have either a learning or passive
mode so that the WAF can be properly tuned before blocking any traffic. A solution
based upon a positive security model will need to learn what “normal” traffic looks like
for your applications. Negative security model solutions will typically be deployed in a
non-blocking mode so that any false positives can be tuned out prior to turning on blocking
capabilities. As with intrusion prevention systems, a WAF requires regular monitoring
of log files to detect attacks and tune false positives.
Organizations also need to consider how to incorporate WAF testing and tuning into
their standard development practices so that the impact of new applications can be
evaluated prior to deployment.
PCI Compliance
One of the major reasons organizations have an interest in web application firewalls
is PCI DSS version 1.1. Requirement 6.6 states that organizations need to protect web
applications by either reviewing all custom code for vulnerabilities or installing a web
application firewall. This choice sparked a bit of controversy in the industry over which
was the best practice. There are a myriad of arguments on both sides, but most agree
that the best approach is to implement both methods rather than choosing one over
the other. This requirement, however, has certainly shone a bright spotlight on WAF
technology and, if anything, given vendors fuel to sell their products.
Wireless/Wi-Fi Security
Wireless networks are forcing organizations to completely rethink how they secure their
networks and devices to prevent attacks and misuse that expose critical assets and
confidential data. By their very nature, wireless networks are difficult to roll out, secure
and manage, even for the most savvy network administrators.
Wireless networks offer great potential for exploitation for two reasons; they use the
airwaves for communication, and wireless-enabled laptops are ubiquitous. To make the
most of their security planning, enterprises need to focus on threats that pose the
greatest risk. Wireless networks are vulnerable in a myriad of ways, some of the most
likely problems being rogue access points (APs) and employee use of mobile
devices without appropriate security precautions, but malicious hacking attempts and
denial-of-service (DoS) attacks are certainly possible as well.
Additional wireless access security challenges come through the use of wireless-
enabled devices by employees, the growing amount of confidential data residing on
those devices, and the ease with which end users can engage in risky wireless behavior.
The value of connectivity typically outweighs concerns about security, as users need to
get work done while at home or while traveling. Survey data from the leading research
group, Gartner, shows that at least 25 percent of business travelers connect to hotspots,
many of which are unsecured, while traveling. Furthermore, about two-thirds of those
who use hotspots connect to online services via Wi-Fi at least once a day, highlighting
the need for extending wireless security outside of the enterprise.
Whether a company has authorized the use of wireless or has a 'no wireless' policy, its networks,
data, devices and users are exposed and at risk.
Wi-Fi Standards
• 802.11a
o Frequency: 5.0 GHz
o Typical Maximum Speed: 54 Mbps
• 802.11b
o Frequency: 2.4 GHz
o Typical Maximum Speed: 11 Mbps
• 802.11g
o Frequency: 2.4 GHz
o Typical Maximum Speed: 54 Mbps
• 802.11n
o Frequency: 2.4 GHz or 5.0 GHz
o Typical Maximum Speed: 600 Mbps
• 802.11ac
o Frequency: 5.0 GHz
o Typical Maximum Speed: 6 Gbps
Most Wi-Fi devices including computers, routers, and phones support several security
standards. The available security types and even their names vary depending on a
device's capabilities.
WEP: WEP stands for Wired Equivalent Privacy. It is the original wireless security standard
for Wi-Fi and is still commonly used on home computer networks. Some devices support
multiple versions of WEP security
and allow an administrator to choose one, while other devices only support a single
WEP option. WEP should not be used except as a last resort, as it provides very limited
security protection.
WPA: WPA stands for Wi-Fi Protected Access. This standard was developed to replace
WEP. Wi-Fi devices typically support multiple variations of WPA technology. Traditional
WPA, also known as WPA-Personal and sometimes also called WPA-PSK (for pre-shared
key), is designed for home networking, while another version, WPA-Enterprise, is
designed for corporate networks.
WPA2: WPA2 is an improved version of Wi-Fi Protected Access supported by all newer
Wi-Fi equipment. Like WPA, WPA2 also exists in Personal/PSK and Enterprise forms.
802.1X: 802.1X provides network authentication to both Wi-Fi and other types of
networks. It tends to be used by larger businesses as this technology requires additional
expertise to set up and maintain.
802.1X works with both Wi-Fi and other types of networks. In a Wi-Fi configuration,
administrators normally configure 802.1X authentication to work together with
WPA/WPA2-Enterprise encryption. Because it usually relies on a RADIUS server, 802.1X is
often loosely referred to as RADIUS.
Wi-Fi Attacks
War Driving
This is the act of driving around neighborhoods and areas to enumerate what wireless
networks exist, what type of encryption (if any) is used, the password (if known), and any
other pertinent information. This information may be chalked or painted on the street or
sidewalk, or posted to various websites. Some websites, like SkyHook, ask their users for this
information. Be cautious when you see various cars sitting outside your house for long periods
of time (unless you live near a Pokemon Gym or a Pokestop).
Cracking Attacks
Just like anything else that uses passwords, there is both the desire and the means to crack those
passwords to gain access. Without password attacks, there would be no Have I Been
Pwned and other similar sites. Much like other password attacks, there are
simplistic attacks (brute force) and complex attacks. While brute force will
eventually work, there are methods to minimize the impact if compromised; these
mitigating factors are mentioned below in the Wi-Fi security tips. One tool, or rather a
suite of tools, used to crack Wi-Fi (WEP, WPA1, and WPA2) passwords is Aircrack-ng, the
replacement for AirSnort. You will also need the airmon-ng, airodump-ng, and
aireplay-ng tools (hence the suite), as well as a wireless card set to "Monitor Mode"
(similar to promiscuous mode), to capture the handshake and replay it so you can obtain
the file to crack. Once you have the file, you can use your favorite password list (mine is a
custom list with rockyou.txt as a base) to attempt to crack the key.
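To show why these dictionary attacks work, and why a long random passphrase defeats them, the sketch below reproduces the WPA/WPA2-PSK key derivation (PBKDF2-HMAC-SHA1 over the passphrase with the SSID as salt). The SSID, wordlist, and "captured" key are invented for the illustration; a real attack would compare against material recovered from a captured handshake rather than a precomputed key:

import hashlib

def wpa_pmk(passphrase, ssid):
    # WPA/WPA2-PSK pairwise master key: PBKDF2-HMAC-SHA1, 4096 rounds, 32 bytes.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

ssid = "HomeNetwork"
captured_pmk = wpa_pmk("sunshine1", ssid)   # stands in for a key recovered from a handshake

wordlist = ["password", "letmein", "sunshine1", "dragon"]   # real lists run to millions of entries
for candidate in wordlist:
    if wpa_pmk(candidate, ssid) == captured_pmk:
        print("cracked:", candidate)
        break
else:
    print("not in the wordlist; a long random passphrase usually lands here")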
Denial of Service
A Denial of Service (DoS) attack is more of a nuisance than a true technical attack.
Think of it as an extreme brute force attack that overwhelms something, in this case a
Wi-Fi network or the assets/nodes on it. Calling it a nuisance rather than a technical attack
is a broad overgeneralization; sometimes the vectors of attack for a DoS are very
technical. Many technologies, namely web servers and websites, have DoS protective
measures, since anyone on the internet can connect to them if they are public facing.
Karma Attacks
Karma was a tool used to sniff, probe, and attack Wi-Fi networks using Man-in-
the-Middle (MITM) methods. It has since fallen out of support as Karma but now exists as
several other products. This section focuses on the current incarnation known as
Karmetasploit, a portmanteau of Karma and Metasploit. Once the
run control file is obtained and everything is properly configured, the attacker uses
airmon-ng and airbase-ng (relatives of the other airX-ng tools) to establish the machine as a
wireless access point (AP). This is what perpetrates the Wi-Fi version of the Evil Twin
attack. In carrying out the actual attack, the attacker opens Metasploit, loads
the Karma run control file and waits for users to connect. Once they connect, the
attacker has visibility into what the victim is doing and browsing, as well as the capability
to interrogate the victim machine and extract cookies, passwords, and hashes. This
could be combined with credential-theft tools such as Mimikatz or with replay attacks. The
attacker can also establish a Meterpreter session with the victim for further exploitation.
Ways to secure Wi-Fi Network
The service set identifier (SSID) is the name that's broadcast from your Wi-Fi to the
outside world so people can find the network. While you probably want to make the
SSID public, using the generic network name/SSID generally gives it away. For example,
routers from Linksys usually say "Linksys" in the name; some list the make and model
number ("NetgearR6700"). That makes it easier for others to ID your router type. Give
your network a more personalized moniker.
It's annoying, but rotating the SSID(s) on the network means that even if someone had
previous access—like a nosy neighbor—you can boot them off with regular changes.
It's usually a moot point if you have encryption in place, but just because you're
paranoid doesn't mean they're not out to use your bandwidth. (Just remember, if you
change the SSID and don't broadcast it, it's on you to remember the new name
all the time and reconnect ALL your devices—computers, phones, tablets, game
consoles, talking robots, cameras, smart home devices, etc.)
Activate Encryption
This is the ultimate Wi-Fi no-brainer; no router in the last 10 years has come without
encryption. It's the single most important thing you must do to lock down your wireless
network. Navigate to your router's settings and look for security options. Each router
brand will likely differ; if you're stumped, head to your router maker's support site.
Once there, turn on WPA2 Personal (it may show as WPA2-PSK); if that's not an option
use WPA Personal (but if you can't get WPA2, be smart: go get a modern router). Set
the encryption type to AES (avoid TKIP if that's an option). You'll need to enter a
password, also known as a network key, for the encrypted Wi-Fi.
This is NOT the same password you used for the router—this is what you enter on every
single device when you connect via Wi-Fi. So make it a long nonsense word or phrase
no one can guess, yet something easy enough to type into every weird device you've
got that uses wireless. Use a mix of upper- and lowercase letters, numbers, and special
characters to make it truly strong, but balance that with ease and
memorability.
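One way to get a passphrase that is both strong and typeable is to string together random words and digits, as in the hedged sketch below; the short word list is only a placeholder, and in practice you would draw from a much larger one:

import secrets
import string

WORDS = ["maple", "orbit", "copper", "lantern", "breeze", "quartz", "falcon", "meadow"]

def wifi_passphrase(num_words=4):
    words = [secrets.choice(WORDS) for _ in range(num_words)]   # cryptographically random picks
    digits = "".join(secrets.choice(string.digits) for _ in range(4))
    return "-".join(words) + "-" + digits

print(wifi_passphrase())   # e.g. falcon-maple-breeze-orbit-7319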
Double Up on Firewalls
The router has a firewall built in that should protect your internal network against outside
attacks. Activate it if it's not automatic. It might say SPI (stateful packet inspection) or
NAT (network address translation), but either way, turn it on as an extra layer of
protection.
For full-bore protection—like making sure your own software doesn't send stuff out over
the network or Internet without your permission—install firewall software on your PC as
well.
It's nice and convenient to provide guests with a network that doesn't have an
encryption password, but what if you can't trust them? Or the neighbors? Or the people
parked out front? If they're close enough to be on your Wi-Fi, they should be close
enough to you that you'd give them the password. (Remember—you can always
change your Wi-Fi encryption password later.)
Use a VPN
A virtual private network (VPN) connection makes a tunnel between your device and
the Internet through a third-party server—it can help mask your identity or make it look
like you're in another country, preventing snoops from seeing your Internet traffic. Some
even block ads. A VPN is a smart bet for all Internet users, even if you're not on Wi-Fi.
Just like with your operating system and browsers and other software, people find
security holes in routers all the time to exploit. When the router manufacturers know
about these exploits, they plug the holes by issuing new software for the router, called
firmware. Go into your router settings every month or so and do a quick check to see if
you need an update, then run their upgrade. New firmware may also come with new
features for the router, so it's a win-win.
Turn Off WPS
Wi-Fi Protected Setup, or WPS, is the function by which devices can be easily paired
with the router even when encryption is turned on: you push a button on the
router and on the device in question and, voila, they're talking. It's not that hard to crack,
however, and means anyone with quick physical access to your router can instantly
pair their equipment with it. Unless your router is locked away tight, this is a potential
opening to the network you may not have considered.
Not broadcasting the SSID makes it harder, but not impossible, for friends and family to get
on the Wi-Fi; that means it makes it a lot harder for non-friends to get online. In the router
settings for the SSID, check for a "visibility status" or "enable SSID broadcast" and turn it off.
In the future, when someone wants to get on the Wi-Fi, you'll have to tell them the SSID to
type in—so make that network name something simple enough to remember and type. (Anyone
with a wireless sniffer, however, can pick the SSID out of the air in very little time. The
SSID is not so much invisible as camouflaged.)
Disable DHCP
The Dynamic Host Configuration Protocol (DHCP) server in your router is what assigns an IP
address to each device on the network. For example, if the router has an IP of
192.168.0.1, your router may have a DHCP range of 192.168.0.100 to 192.168.0.125—
that's 26 possible IP addresses it would allow on the network. You can limit the range so
(in theory) the DHCP server wouldn't allow more than a certain number of devices—but with
everything from appliances to watches using Wi-Fi, that's hard to justify.
For security you could also just disable DHCP entirely. That means you have to go into
each device—even the appliances and watches—and assign it an IP address that fits
with your router. (And all this on top of just signing into the encrypted Wi-Fi as it is.) If that
sounds daunting, it can be for the layman. Again, keep in mind, anyone with the
right Wi-Fi hacking tools and a good guess at your router's IP address range can
probably get on the network even if you do disable the DHCP server.
Filter on MAC Addresses
Every single device that connects to a network has a media access control (MAC)
address that serves as a unique ID. Some with multiple network options—say 2.4GHz Wi-
Fi, and 5GHz Wi-Fi, and Ethernet—will have a MAC address for each type. You can go
into your router settings and physically type in the MAC address of only the devices you
want to allow on the network. You can also find the "Access Control" section of your
router to see a list of devices already connected, then select only those you want to
allow or block. If you see items without a name, check their listed MAC addresses against
your known products—MAC addresses are typically printed right on the device.
Anything that doesn't match up may be an interloper. Or it might just be something you
forgot about—there is a lot of Wi-Fi out there.
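The idea behind MAC filtering can be sketched in a few lines; the real enforcement happens in the router's firmware, and the addresses below are invented examples:

# Allowlist check in the spirit of router MAC filtering (addresses are made up).
ALLOWED_MACS = {"a4:5e:60:c2:10:9f", "3c:22:fb:7d:01:2a"}

def admit(mac):
    return mac.strip().lower() in ALLOWED_MACS

print(admit("A4:5E:60:C2:10:9F"))   # True: a known device
print(admit("de:ad:be:ef:00:01"))   # False: unknown (though an attacker can spoof a known MAC)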
Never allow an untrusted or unfamiliar person to have access to your private Wi-Fi
network. If you want to offer visitors or guests wireless Internet access, make sure that
such access is segregated from your company’s main network so they can’t possibly
get into your computers and files, or eavesdrop on your traffic.
When configuring guest access, you could even enable separate encryption so you
can still try to control who connects and uses your Internet access. With a wireless
router, you should use the guest access settings.
Physically Secure Your Network Gear
Besides enabling encryption to secure your private wireless network, you need to think
about the physical security of your network. Make sure that your wireless router or APs
are all secured from visitors. An intruder could easily plug into the network if it’s in reach
or reset it to factory defaults to clear the security. To prevent this, you could, for
instance, mount the hardware high on walls or above a false ceiling. Also, if your office
has Ethernet network ports on the walls, make sure that they aren’t within the reach of
visitors, or disconnect them from the network. If you have a larger network with a wiring
closet, make sure it stays locked and secure.
If you don’t use a VPN connection to secure all your traffic when out of the office, at
least ensure that any websites you log in to are encrypted. Highly sensitive websites,
such as banks, use encryption by default, but others, such as social networking sites and
email providers, don’t always do so.
To ensure that a website is using encryption, access it via a Web browser and try to use
SSL/HTTPS encryption. You can see if the site supports SSL encryption by adding the
letter s to its address: https:// instead of http://. If it’s encrypted, you’ll also see some sort
of notification in the browser about the security, such as a padlock or green-colored
address bar. If you don’t see any notification or it shows an error, it may not be secure;
you should therefore consider waiting to access the site until you’re on a private
network at home or in the office.
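For scripted checks rather than eyeballing the browser, the small sketch below attempts a TLS handshake with certificate and hostname verification; the hostname is an example and network access is assumed:

import socket
import ssl

def https_check(hostname, port=443, timeout=5.0):
    context = ssl.create_default_context()   # verifies the certificate chain and hostname
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                print(hostname, "negotiated", tls.version())
                return True
    except (ssl.SSLError, OSError) as exc:
        print(hostname, "HTTPS check failed:", exc)
        return False

https_check("www.example.com")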
If you check your email with a client program such as Microsoft Outlook, you should try
enabling SSL encryption for your email server in your account settings.
However, many email providers don’t support encrypted connections via
client programs. If that’s the case, check your email via the Web browser--using
SSL/HTTPS--if possible.
Shop for Secure Wi-Fi Gear
When shopping for a Wi-Fi router or access points, keep security in mind. As mentioned,
some consumer-level wireless routers, such as the D-Link Xtreme N Gigabit Router, offer
a wireless guest feature, so you can keep visitors off your private network. And business-
class routers and APs usually offer VLAN and multiple SSID support, which you can
configure to do the same.
Additionally, some business-level routers offer integrated VPN servers. You can use VPN
connections to secure your Wi-Fi hotspot sessions, remotely access your network, or link
multiple offices together. Some, such as the ZyXEL 802.11a/b/g/n Business Access Point,
even have an embedded RADIUS server, so you can use the Enterprise mode of
WPA2 security.
When shopping the big-box stores, you’ll find mostly consumer-level wireless routers. You
can check the box for features, but I suggest investigating online before purchasing.
Check the manufacturer’s site and read through the model’s product description
pages to get a better idea of what features it supports.
When shopping online for consumer or business gear, some Web stores include a
lengthy description, but again, check the manufacturer’s site for a full feature list.
Conclusion
A few years ago cyber-attacks were on the margins of news stories. But after a series of
high-profile attacks against major financial institutions, retailers and health care
providers, people realize that cyber-attacks aren’t going away.
The need to address increasingly sophisticated threats has rapidly gone from an IT issue
to a top priority, and a laid-back attitude towards cyber security will make an
organization pay not only in terms of cash and kind but also in terms of reputation.
There are thousands of cyber security products in the market today, and each
day hundreds of new products are released. So it is the responsibility of individuals
and organizations to employ the solutions that best suit their needs and stay safe in
a world of never-ending cyber attacks.
Recommendations
The Government of India is focused on Digital India and Make in India, which are being used to
empower the citizens of India and are expected to result in a one-trillion-dollar digital economy in
the next seven years. Digital Payments is another focus area. It is felt that cyber security needs to
be given priority to secure the digital payments and IT infrastructure of India.
Digital India is the growth engine that has the potential to transform India into
knowledge led economy and society. The digital revolution now stands at the cusp of a
transformation, with the government having laid out its vision of a digitally enabled
India.
The transformation from a cash to a cashless society is being held back by the public's
attachment to cash and currency. We believe that massive efforts are needed to bring about
that change. It is also a fact that the people of India, despite the handicap of limited cyber
education, are quick to adopt technology when it affects their living. This was amply
proven by the speed with which society took to the mobile phone and its applications.
CMAI is Asia’s largest ICT Association, with 48,500 members and 54 MOU Partners
worldwide. CMAI is actively engaged in the promotion of Digital India and Digital
Payments.
CMAI is dealing with more than one lakh educational institutions and academic
professionals, consisting of universities and technical/engineering colleges/schools etc.
CMAI has initiated free online training programs and large-scale education for the use
of e-transactions and the transformation of India to a digital economy.
Cyber security is the need of the hour to protect the digital payments and ICT infrastructure of
India. The report is an attempt to put together various aspects of cyber security
solutions. The report, inter alia, suggests:
References
www.google.co.in
https://www.mygov.in/group/digital-india/
https://securityintelligence.com/two-important-lessons-from-the-ashley-madison-breach/
http://www.cio.com/article/2987830/online-security/ashley-madison-breach-shows-hackers-may-be-getting-personal.html
http://www.tripwire.com/state-of-security/security-data-protection/cyber-security/the-ashley-madison-hack-a-timeline/
http://surveillance.rsf.org/en/hacking-team/
https://en.wikipedia.org/wiki/Hacking_Team
https://nakedsecurity.sophos.com/2016/04/19/how-hacking-team-got-hacked/
http://www.scmagazine.com/hacker-behind-hacking-team-breach-publishes-how-to-guide/article/490541/
http://malwarejake.blogspot.in/2016/04/lessons-learned-from-hacking-team.html
http://edition.cnn.com/2015/10/27/politics/john-brennan-email-hack-outrage/
http://nypost.com/2015/10/27/cia-director-outraged-with-teenager-who-hacked-him/
http://www.mirror.co.uk/news/uk-news/vodafone-hacked-cyber-thieves-steal-6742472
http://www.ibtimes.co.uk/vodpahone-hack-almost-2000-customer-accounts-have-been-accessed-by-hackers-1526638
https://next.ft.com/content/9bfb4e72-7965-11e5-a95a-27d368e1ddf7
http://www.itpro.co.uk/security/24136/talktalk-hack-what-to-do-if-hackers-have-your-data-20
https://pdfs.semanticscholar.org/3ba9/52ee1b042b224109d6a586a18830cef1068a.pdf
http://www.bbc.com/news/business-34743185
http://www.bankinfosecurity.in/blogs/5-lessons-from-talktalk-hack-p-1967
http://www.cyberwar.news/2016-05-16-germany-says-russian-government-was-behind-aggressive-hack-of-its-government-systems.html
http://www.wsj.com/articles/germany-points-finger-at-russia-over-parliament-hacking-attack-1463151250
http://www.securityweek.com/evidence-russia-behind-cyber-attacks-germany-secret-service
https://ist.mit.edu/security/malware
https://antivirus.comodo.com/how-antivirus-software-works.php
https://www.lookout.com/know-your-mobile/what-is-a-mobile-threat
http://www.webroot.com/in/en/home/resources/tips/pc-security/security-what-is-anti-virus-software
https://pralab.diee.unica.it/en/AdversarialMachineLearning
https://www.datanami.com/2016/04/21/machine-learning-can-applied-cyber-security/
http://www.csoonline.com/article/3046543/security/machine-learning-is-reshaping-security.html
https://www.cnet.com/how-to/the-guide-to-password-security-and-why-you-should-care/
http://searchsecurity.techtarget.com/Six-steps-for-security-patch-management-best-practices
http://securityaffairs.co/wordpress/6370/security/5-reasons-why-you-need-good-patch-management.html
http://www.itpro.co.uk/security/27713/the-importance-and-benefits-of-effective-patch-management
http://www.ibm.com/support/knowledgecenter/SSTFWG_4.3.1/com.ibm.tivoli.itcm.doc/CMPMmst20.htm
http://www.infoworld.com/article/2616316/security/the-5-cyber-attacks-you-re-most-likely-to-face.html?page=2
http://www.redbooks.ibm.com/redbooks/pdfs/sg246776.pdf
http://searchsecurity.techtarget.com/definition/biometrics
http://www.itbusinessedge.com/slideshows/top-iam-features-to-help-protect-your-vital-enterprise-data-07.html
http://searchmidmarketsecurity.techtarget.com/definition/intrusion-detection
https://www.sans.org/security-resources/idfaq/what-is-intrusion-detection/1/1
https://www.sagedatasecurity.com/blog/cyber-threat-detection-5-keys-to-log-analysis-success-infographic
http://opensourceforu.com/2011/06/best-practices-network-security-monitoring/
https://www.slideshare.net/dgpeters/ics-network-security-monitoring-nsm
https://www.beyondtrust.com/products/powerbroker/
https://docs.microsoft.com/en-us/microsoft-identity-manager/pam/privileged-identity-management-for-active-directory-domain-services
http://blog.wallix.com/what-is-privileged-access-management-pam
http://searchsecurity.techtarget.com/definition/PKI
http://searchsecurity.techtarget.com/tip/ERP-security-How-to-defend-against-SAP-vulnerabilities
http://searchsecurity.techtarget.com/tip/Security-in-the-software-development-life-cycle
https://www.synopsys.com/blogs/software-security/secure-sdlc/
http://resources.infosecinstitute.com/intro-secure-software-development-life-cycle/#gref
http://in.pcmag.com/networking/81330/feature/12-ways-to-secure-your-wi-fi-network
https://www.alienvault.com/blogs/security-essentials/security-issues-of-wifi-how-it-works