HOW ARTIFICIAL INTELLIGENCE TRANSFORMS CYBERSECURITY

by Lina Yao, Scientia Associate Professor at UNSW

As cyberattacks grow in volume and complexity, artificial intelligence (AI) is helping under-resourced security operations analysts stay ahead of threats. By curating threat intelligence from millions of research papers, blogs and news stories, AI can provide instant insights that cut through the noise of thousands of daily alerts, drastically reducing response times, and can help counter mis- and disinformation on the internet. The latest advancements in AI can take cybersecurity to a new level and boost relevant research and application development.

Governments and businesses are making every effort to protect themselves, but the volume of attacks can be overwhelming for security analysts and professionals. And there will always be new and unforeseen attacks and threats, such as the notorious ransomware attacks of the past two years that paralysed countless computers and even IoT devices.

According to the Australian Cyber Security Centre's (ACSC) Annual Cyber Threat Report July 2019 to June 2020, in Australia alone there are, on average, more than six cyberattack incidents every single day, and most of them have moderate or substantial impacts. The ACSC says it received 59,806 cybercrime reports in the 12 months to June 2020, almost one every 10 minutes. It says the true figure is probably much larger, because cybercrime in Australia is underreported. Notably, the attacks were mostly targeted at large organisations.

A security paradigm that is purely responsive will fail to provide adequate protection. It can resolve issues only after they have been discovered, by which time damage is likely to have already been done.[1] Without long-term vision, only identified and confirmed threats can be dealt with; new ones will not be addressed.

MACHINE LEARNING IS HOT

Machine learning is a hot topic in artificial intelligence, and is capable of extracting valuable insights from existing knowledge, such as recordings of experiences and identified threats or attacks.

Machine learning has proved to be very effective in detecting variants of existing malware, attacks and

102 WOMEN IN SECURITY MAGAZINE


TECHNOLOGY PERSPECTIVES

threats, no matter how deep the malicious code or attack patterns are hidden.

Data-driven machine learning powered by deep neural networks can learn the activity patterns or tendencies of individuals in an organisation. Given sufficient time or sufficient data, it can develop an understanding of patterns and tendencies that may be too complicated or subtle for human cognition.

This enables machine learning to respond rapidly to threats, such as a link in a phishing email, a malware payload, or attacking network traffic. A system powered by machine learning is able to continuously monitor an entire system and provide a real-time threat response.

Some of the most successful applications of AI to cybersecurity have been to provide predictive protection. For example, modern malware may be hard to detect solely by examining its code and its behaviour.[2]

In recent years, few-shot and lifelong machine learning have attracted increasing attention. They equip AI with a human-like ability to "learn to learn", enabling AI systems to quickly learn and generalise to new tasks from very limited data.

ANTICIPATING ATTACKS WITH AI

An AI-based malware detection system [3] has been able to detect malware while it is downloading, and so prevent it from being installed and executed on the target system.

Another example is data breach prediction with AI. Liu et al [4] modelled this as a binary prediction problem, based on historical data and observations, to determine whether a system is likely to face such an attack in the near future. What's fascinating is that this was done with no access to the client's internal networks: data were only collected externally. Furthermore, there are also applications that make fine-grained predictions to identify the risk associated with specific business information. This would enable a business to adjust resource allocation and prioritise protection so as to minimise the impact of an attack.

However, many solutions assume the input data fed into their algorithms are clean, with no noise or errors. Such assumptions can be exploited by attackers, who may poison the input by providing counterfeit malicious incident reports, or by creating a fake honeypot network for the algorithm, which can mislead its predictors and sabotage its learning. This is referred to as adversarial machine learning. It is critical that it be addressed.

Work is also underway to develop adversarial machine learning that will provide security to the machine learning itself.[5][8][9] This is key to successfully applying machine learning to cybersecurity.

MACHINE LEARNING UNDER ATTACK

In general, there are two common types of attacks on machine learning: poisoning attacks, which attack the learning during training, and evasion attacks, which attack the inferencing stage of the machine learning process.

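The second type, an evasion attack, leaves the training data alone and instead manipulates the input at inference time. The sketch below (detector, weights and features all invented for illustration) shows a fixed linear detector being walked below its own threshold by small feature perturbations.

```python
# A toy evasion-attack sketch (all numbers invented): a fixed linear detector
# is fooled at inference time, with no tampering of the training data.

# Detector: score = w . x; flag as malicious when the score reaches 0.5.
w = (0.6, 0.5)  # weights for (payload entropy, suspicious API calls)

def score(x):
    return w[0] * x[0] + w[1] * x[1]

def is_malicious(x):
    return score(x) >= 0.5

malware = (0.8, 0.9)  # the original sample is clearly flagged

# Evasion: the attacker nudges the features the detector relies on (e.g.
# padding the payload to lower its entropy, obfuscating API calls) in small
# steps until the score dips below the threshold, while the malicious
# behaviour itself is assumed unchanged.
evasive = list(malware)
while is_malicious(evasive):
    evasive[0] -= 0.05  # entropy-lowering padding
    evasive[1] -= 0.05  # API-call obfuscation
```

The point of the sketch is that the attacker never needs to see the training pipeline; observing the detector's decisions on crafted inputs is enough.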


There is another kind of attack called model stealing. This either tries to figure out the internal structure of the machine learning model or to extract the sensitive data the model has been trained on.

Another major research project we are conducting aims to develop robust predictive machine learning models that will detect and defend against false information and misinformation spread over the web via social media. Such techniques are initially being developed against disinformation such as fake news, fake reviews and clickbait, which can be used for cyberattacks, nefarious business operations and political subversion, creating social tension.[4][5][10]

Also, AI-powered false information can be even harder to distinguish from legitimate information than false information created by humans. Researchers need to develop methods to alleviate and address such misuse of AI technologies.

Much of the current work on proactive AI for cybersecurity is producing results that are too ambiguous, so few developments are finding practical application. More detailed security recommendations with specific actions are needed for practical applications, and this could be the subject of future research.

This may also lead to another research topic. A standalone security recommendation powered by comprehensive recommender systems, especially for critical services, can sometimes be hard to trust.[8][12] An explainable system, which differs from explainable AI, is preferable. It should provide explanations and visualised reasoning processes for intermediate risks, and explain why the actions it suggests can minimise such risks, and at what costs.

Also, some reports suggest that, just as developments in AI technology can be applied for security, they can also be weaponised for malware and attacks, making these harder or even impossible to detect. It may not be possible to prevent AI being used for nefarious activities, but it should be possible to limit its impacts.

www.linayao.com/
insdata.org/beta/
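As a closing illustration, the model-stealing attack mentioned earlier can be surprisingly cheap against simple models. The sketch below is an invented toy, not any production system: it recovers every parameter of a black-box linear risk scorer using only n + 1 queries.

```python
# A toy model-stealing sketch (the "victim" scorer and its parameters are
# invented): query access alone is enough to clone a simple linear model.

# The victim model: its weights and bias are hidden from the attacker.
_SECRET_W = (0.7, -0.2, 0.4)
_SECRET_B = 0.1

def query(x):
    # The only interface the attacker has: submit an input, read the score.
    return sum(wi * xi for wi, xi in zip(_SECRET_W, x)) + _SECRET_B

# Extraction: for an n-feature linear model, n + 1 chosen queries suffice.
n = 3
bias = query((0.0,) * n)  # f(0) reveals the bias
weights = []
for i in range(n):
    basis = tuple(1.0 if j == i else 0.0 for j in range(n))
    weights.append(query(basis) - bias)  # f(e_i) - f(0) reveals weight i

# "weights" and "bias" now form a functional copy of the victim model.
```

Deployed models are non-linear and rate-limited, so extraction takes many more queries in practice, but the principle is the same: every answered query leaks information about the model.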

References
[1] B. Morel, "Artificial intelligence and the future of cybersecurity," in The 4th ACM Workshop on Security and Artificial Intelligence (AISec '11), Chicago, Illinois, USA, 2011.
[2] N. Sun, J. Zhang, P. Rimba, S. Gao, L. Y. Zhang and Y. Xiang, "Data-driven cybersecurity incident prediction: A survey," IEEE Communications Surveys & Tutorials, vol. 21, no. 2, pp. 1744-1772, 2018.
[3] B. J. Kwon, J. Mondal, J. Jang, L. Bilge and T. Dumitras, "The Dropper Effect: Insights into Malware Distribution with Downloader Graph Analytics," in The 22nd ACM Conference on Computer and Communications Security (CCS '15), Denver, Colorado, USA, 2015.
[4] Y. Liu, A. Sarabi, J. Zhang, P. Naghizadeh, M. Karir, M. Bailey and M. Liu, "Cloudy with a Chance of Breach: Forecasting Cyber Security Incidents," in The 24th USENIX Security Symposium (USENIX Security '15), Washington, D.C., USA, 2015.
[5] T. Abraham, O. de Vel and P. Montague, "Adversarial Machine Learning for Cyber-Security: NGTF Project Scoping Study," Defence Science and Technology Group, Australia, 2018.
[6] X. Wang, Q. Z. Sheng, L. Yao, X. Li, X. S. Fang, X. Xu and B. Benatallah, "Truth Discovery via Exploiting Implications from Multi-Source Data," in The 25th ACM Conference on Information and Knowledge Management (CIKM 2016), Indianapolis, USA, 2016.
[7] M. Dong, L. Yao, X. Wang, B. Benatallah, C. Huang and X. Ning, "Opinion fraud detection via neural autoencoder decision forest," Pattern Recognition Letters, vol. 132, pp. 21-29, 2020.
[8] Y. Cao, X. Chen, L. Yao, X. Wang and W. E. Zhang, "Adversarial Attack and Detection on Reinforcement Learning based Recommendation System," in The 43rd ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2020), Xi'an, China, 2020.
[9] Z. Liu, L. Yao, L. Bai, X. Wang and C. Wang, "Spectrum-Guided Adversarial Disparity Learning," in The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2020), San Diego, CA, USA, 2020.
[10] Z. Liu, L. Yao, X. Wang, L. Bai and J. An, "Are You a Risk Taker? Adversarial Learning of Asymmetric Cross-Domain Alignment for Risk Tolerance Prediction," in International Joint Conference on Neural Networks (IJCNN 2020), Glasgow, UK, 2020.
[11] B. Guo, Y. Ding, L. Yao, Y. Liang and Z. Yu, "The Future of Misinformation Detection: New Perspectives and Trends," ACM Computing Surveys (CSUR), 2020.
[12] S. Zhang, L. Yao, A. Sun and Y. Tay, "Deep Learning based Recommender System: A Survey and New Perspectives," ACM Computing Surveys (CSUR), 2019.
