Lecture 2 Cyber Security Fundamentals

Cybersecurity is essential for securing AI systems, focusing on the CIA triad of Confidentiality, Integrity, and Availability. Common threats include malware, phishing, and DDoS attacks, which require structured frameworks like NIST and ISO/IEC 27001 for effective protection. Best practices for securing AI systems involve data security, model hardening, access control, and continuous monitoring to mitigate risks.


AI Security Issues: Cybersecurity Fundamentals (Spring 2025)

Cybersecurity Fundamentals
• Cybersecurity forms the backbone of secure AI systems.
• As AI technologies become integral to industries like
healthcare, finance, and autonomous systems,
understanding core cybersecurity principles is critical to
safeguarding these systems from exploitation.
• This chapter explores the foundational concepts of
cybersecurity, common threats, and best practices
tailored to AI environments.
2.1 Key Concepts: The CIA Triad

The CIA triad—Confidentiality, Integrity, and Availability—is the
cornerstone of cybersecurity. Below is how each principle applies to AI
systems:
1. Confidentiality
• Definition: Ensuring that sensitive data and systems are accessible
only to authorized users.
• AI Relevance:
• Protects training data, proprietary algorithms, and user information.
• Example: Encrypting datasets used to train AI models to prevent unauthorized
access.
• Threats in AI: Data breaches, model inversion attacks, or adversarial
attempts to extract sensitive information from AI outputs.
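The dataset-encryption example above can be sketched in a few lines. This is a minimal illustration, assuming the third-party `cryptography` package; the dataset contents and key handling are illustrative only (in practice the key would live in a secrets manager, not in the script).

```python
# Sketch: encrypting a training dataset at rest with symmetric encryption.
# Assumes the `cryptography` package; data and key handling are illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in production: store in a secrets manager
cipher = Fernet(key)

training_data = b"age,income,label\n34,52000,1\n29,48000,0\n"
encrypted = cipher.encrypt(training_data)

# Only holders of the key can recover the plaintext.
assert cipher.decrypt(encrypted) == training_data
```

Anyone who obtains the encrypted blob without the key sees only ciphertext, which addresses the confidentiality threat of raw dataset exposure.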

2. Integrity
• Definition: Maintaining the accuracy and trustworthiness of
data and systems.
• AI Relevance:
• Ensures AI models and datasets are not tampered with.
• Example: Detecting and mitigating data poisoning attacks that
manipulate training data.
• Threats in AI: Model tampering, adversarial inputs, or corrupted
datasets.
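Tamper detection of the kind described above can be done with a cryptographic digest recorded when the dataset is published. A stdlib-only sketch, with illustrative data:

```python
# Sketch: verifying dataset integrity with a SHA-256 digest (stdlib only).
import hashlib

def digest(data: bytes) -> str:
    """Return a hex SHA-256 fingerprint of a dataset blob."""
    return hashlib.sha256(data).hexdigest()

dataset = b"34,52000,1\n29,48000,0\n"
expected = digest(dataset)           # recorded when the dataset was published

# Later, before training: recompute and compare.
tampered = dataset + b"99,1,1\n"     # e.g. a poisoned row appended by an attacker
assert digest(dataset) == expected
assert digest(tampered) != expected  # any modification changes the digest
```

A digest mismatch before training signals that the dataset was altered in transit or at rest and should not be trusted.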

3. Availability
• Definition: Ensuring systems and data are accessible to
authorized users when needed.
• AI Relevance:
• Prevents disruptions to AI services (e.g., autonomous vehicles,
chatbots).
• Example: Mitigating DDoS attacks targeting cloud-based AI inference
services.
• Threats in AI: DDoS attacks, ransomware locking AI
infrastructure.
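One common availability safeguard for an inference endpoint is rate limiting. A minimal token-bucket sketch, with illustrative limits (real deployments would rate-limit per client at the gateway):

```python
# Sketch: a token-bucket rate limiter guarding an AI inference endpoint,
# one mitigation for request floods. Rates and limits are illustrative.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate                  # tokens refilled per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # over budget: reject the request

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
assert results[:10] == [True] * 10        # the burst is served
assert False in results[10:]              # excess requests are throttled
```

Legitimate clients within their budget are unaffected, while a flood exhausts its tokens and is rejected before reaching the model.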
2.2 Common Threats to AI Systems
1. Malware
• Definition: Malicious software designed to disrupt, damage, or
gain unauthorized access.
• AI-Specific Risks:
• Data Poisoning: Malware alters training data to corrupt AI model
behavior.
• Model Hijacking: Malware injects backdoors into AI systems for remote
control.
• Case Study: Stuxnet disrupted industrial systems; similar logic
could target AI-driven manufacturing.
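One coarse defense against the data-poisoning risk above is screening training data for statistical outliers before use. A stdlib sketch with an illustrative feature column and threshold; real pipelines use more robust detectors:

```python
# Sketch: flagging suspicious training rows with a z-score outlier check,
# a coarse screen for injected poisoned values. Threshold is illustrative.
import statistics

def flag_outliers(values, threshold=2.0):
    """Return indices of values more than `threshold` std-devs from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# A mostly clean salary column with one injected extreme value.
salaries = [48_000, 52_000, 50_500, 49_200, 51_300, 9_000_000]
assert flag_outliers(salaries) == [5]    # only the injected row is flagged
```

Flagged rows would be quarantined for review rather than silently fed into training.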
2. Phishing
• Definition: Deceptive attempts to steal sensitive information via
fraudulent communication.
• AI-Specific Risks:
• Phishing emails targeting AI researchers to steal credentials for model
repositories.
• Social engineering to gain access to confidential datasets.
• Example: A phishing attack on a healthcare AI team could
expose patient data.
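Phishing filters often start from simple red-flag heuristics like the ones below. This is a toy sketch: the phrases, the `@company.example` domain, and the scoring are all illustrative, and production filters use trained classifiers rather than keyword counts.

```python
# Sketch: coarse heuristics for scoring a phishing-style message.
# Phrases, domain, and scoring are illustrative, not a real filter.
import re

SUSPICIOUS_PHRASES = ["verify your account", "urgent action required",
                      "reset your password"]

def phishing_score(sender: str, body: str) -> int:
    """Count simple red flags; a higher score means more suspicious."""
    flags = 0
    if re.search(r"https?://\d+\.\d+\.\d+\.\d+", body):  # raw-IP links
        flags += 1
    if not sender.endswith("@company.example"):          # outside the org
        flags += 1
    flags += sum(p in body.lower() for p in SUSPICIOUS_PHRASES)
    return flags

legit = phishing_score("it@company.example", "Team meeting moved to 3pm.")
scam = phishing_score(
    "admin@c0mpany-support.biz",
    "Urgent action required: verify your account at http://203.0.113.9/login")
assert scam > legit
```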
3. DDoS Attacks
• Definition: Overwhelming a system with traffic to disrupt
service.
• AI-Specific Risks:
• Targeting AI APIs or cloud services to disable real-time decision-making
(e.g., fraud detection).
• Financial loss from downtime in AI-driven platforms.
• Case Study: 2016 Dyn DDoS attack disrupted major websites;
similar tactics could cripple AI-as-a-Service (AIaaS) providers.
2.3 Security Frameworks for AI Systems
Adopting structured frameworks ensures systematic protection of
AI systems.
1. NIST Cybersecurity Framework
Key Components: Identify, Protect, Detect, Respond, Recover.
AI Adaptation:
• Identify: Inventory AI assets (models, datasets, APIs).
• Protect: Encrypt data, enforce access controls.
• Detect: Monitor for adversarial attacks or model drift.
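The Identify and Protect steps above can be sketched as a minimal asset inventory with a gap check. The schema, asset names, and controls are illustrative, not a prescribed NIST format:

```python
# Sketch: a minimal AI asset inventory supporting the NIST "Identify" step.
# Field names, assets, and controls are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str
    kind: str                         # "model", "dataset", or "api"
    owner: str
    controls: list = field(default_factory=list)  # "Protect" measures applied

inventory = [
    AIAsset("fraud-model-v3", "model", "ml-team",
            ["signed artifacts", "RBAC"]),
    AIAsset("claims-2024.csv", "dataset", "data-team",
            ["encryption at rest"]),
    AIAsset("inference-api", "api", "platform", []),
]

# Gap analysis: assets with no protective controls need attention first.
unprotected = [a.name for a in inventory if not a.controls]
assert unprotected == ["inference-api"]
```

Even a simple inventory like this makes the Protect and Detect steps concrete: you cannot encrypt or monitor an asset you have not listed.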
2. ISO/IEC 27001
Focus: Information security management systems (ISMS).
AI Application:
• Secure the AI development lifecycle (data collection to deployment).
• Conduct risk assessments for AI-specific vulnerabilities.
3. MITRE ATT&CK® for AI
Purpose: Framework for understanding adversarial tactics against
AI.
Key Tactics:
• Data Poisoning: Manipulating training data.
• Model Evasion: Crafting inputs to bypass AI defenses.
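Model evasion can be shown in miniature with an FGSM-style perturbation against a toy linear classifier: moving each feature against the sign of its weight pushes the input across the decision boundary. The weights, input, and step size below are illustrative.

```python
# Sketch: model evasion in miniature. An FGSM-style step nudges an input
# across a toy linear classifier's decision boundary. Values are illustrative.
import math

w = [0.9, -0.4, 0.6]            # weights of a toy "malicious content" classifier
b = -0.2

def score(x):
    """Sigmoid probability that x is malicious."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

x = [1.0, 0.5, 0.8]
assert score(x) > 0.5            # the original input is flagged

# Evasion step: for a linear model, the score's gradient is proportional
# to w, so moving against sign(w) decreases the score fastest.
eps = 1.5
x_adv = [xi - eps * math.copysign(1, wi) for xi, wi in zip(x, w)]
assert score(x_adv) < 0.5        # perturbed input now evades the model
```

Against deep models the same idea applies, but the gradient is computed by backpropagation and the perturbation is kept small enough to be imperceptible.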
2.4 Best Practices for Securing AI Systems
1. Data Security
• Encrypt datasets at rest and in transit.
• Validate data sources to prevent poisoning.
2. Model Hardening
• Implement adversarial training to improve robustness.
• Regularly audit models for unexpected behavior.
3. Access Control
• Use role-based access control (RBAC) for AI tools and datasets.
• Enforce multi-factor authentication (MFA).
4. Monitoring and Response
• Deploy anomaly detection systems to identify suspicious activity.
• Establish incident response plans for AI-specific breaches.
5. Collaboration

6. Use strong, unique passwords.
• Impose password complexity constraints.
• Require users to change passwords frequently.
7. Keep software and systems updated.
• Regularly check and install updates.
8. Be cautious of suspicious emails and links.
• Educate staff and provide training.
9. Regularly back up important data.
• Keep a copy of your backup in a safe place.
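The password constraints from item 6 above can be enforced with a simple validator. The specific rules (length 12, mixed case, digit, symbol) are an illustrative policy, not a standard:

```python
# Sketch: enforcing password complexity constraints. Rules are illustrative.
import re

def password_ok(pw: str) -> bool:
    """Require minimum length, mixed case, a digit, and a symbol."""
    return (len(pw) >= 12
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"\d", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

assert not password_ok("password123")       # no uppercase, no symbol
assert password_ok("Tr4in!ng-Data-2025")    # satisfies every rule
```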
Network Security Measures
1. Firewalls: Block unauthorized access.
2. Intrusion Detection/Prevention Systems (IDS/IPS):
Monitor network traffic.
3. Virtual Private Networks (VPNs): Secure remote access.
4. Secure Wi-Fi practices: Use WPA3 encryption.
Endpoint Security
• Install antivirus and anti-malware software.
• Keep operating systems and applications updated.
• Use device encryption and secure boot settings.
• Implement security policies and employee training.
• Use role-based access controls (RBAC).
• Regularly conduct security audits and penetration testing.
• Stay informed and proactive.
• Follow best practices to secure personal and business data.
• Continuous learning and adapting to new threats.
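The role-based access control mentioned above reduces to checking an action against a role's permission set. A minimal sketch; the roles and permission names are illustrative:

```python
# Sketch: role-based access control (RBAC) for AI tools and datasets.
# Roles and permission names are illustrative.
ROLE_PERMISSIONS = {
    "data-scientist": {"read:dataset", "train:model"},
    "ml-engineer":    {"read:dataset", "train:model", "deploy:model"},
    "auditor":        {"read:logs"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions are rejected."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("ml-engineer", "deploy:model")
assert not is_allowed("data-scientist", "deploy:model")
assert not is_allowed("intern", "read:dataset")   # unknown role gets nothing
```

The deny-by-default lookup is the key design choice: a missing role or permission fails closed rather than open.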
Emerging Cybersecurity Trends
• AI and Machine Learning for threat detection.
• Zero Trust Security model.
• Cloud security advancements.
• Cybersecurity regulations and compliance (GDPR, CCPA, ISO
27001).
2.5 Case Study
AI Security Failure in Facial Recognition

• Scenario: A facial recognition system used by law enforcement
was found to have racial bias due to unsecured, biased training
data.
• Impact: Misidentification led to wrongful arrests and public
distrust.
• Lessons Learned:
• Integrity of training data is critical.
• Regular audits and diverse datasets mitigate bias.
Summary
• Cybersecurity fundamentals are non-negotiable in AI systems.
• By applying the CIA triad, understanding threats like malware and
DDoS, and adopting frameworks like NIST, organizations can
build resilient AI infrastructures.
• The next chapter explores how AI itself can both enhance and
complicate cybersecurity efforts.
Discussion Questions
1. How might a breach of confidentiality in an AI system differ from
a traditional IT system?
2. Design a security plan for an AI-powered medical diagnosis tool
using the NIST framework.
3. What unique challenges do DDoS attacks pose for real-time AI
applications?
