CCNA DevNet
1st Edition
Ratnesh K
CCIE x3 #61380
© 2024 by Ratnesh. All rights reserved.
This material is protected under international and domestic copyright laws and treaties.
Any unauthorized reproduction, distribution, or use of this material is prohibited without
the express written consent of Ratnesh. For permissions, contact Ratnesh at +91 8970816983
Python Code:
import yaml

# YAML data (note the indentation: year is nested under book)
yaml_data = '''
book:
  year: 2005
'''

# Parse YAML into a Python dictionary
book_dict = yaml.safe_load(yaml_data)
print(book_dict['book']['year'])
Intelligent Features: These devices are equipped with intelligent features such as
facial recognition, voice commands, whiteboarding, content sharing, and integration
with productivity tools like Microsoft Office 365 and Google Workspace.
XML (eXtensible Markup Language), JSON (JavaScript Object Notation), and YAML (YAML Ain't Markup Language)
are all popular data formats used for storing and transmitting structured data. Here's a comparison of these formats:
YAML is a human-readable data serialization format that focuses on simplicity and readability.
It uses indentation and whitespace for structuring data, making it visually appealing and easy to understand.
YAML supports complex data structures, including lists and dictionaries, similar to JSON.
It is often used in configuration files, automation scripts, and data serialization.
1.2 Describe parsing of common data format (XML, JSON, and YAML) to Python data structures
Python Code:
import json

# JSON data
json_data = '{"book": {"title": "Harry Potter", "author": "J.K. Rowling", "year": 2005}}'

# Parse JSON into a Python dictionary
book_dict = json.loads(json_data)
print(book_dict['book']['title'], book_dict['book']['author'], book_dict['book']['year'])
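For XML, the standard library's xml.etree.ElementTree module can be used in a similar way (a minimal sketch using the same book data):

import xml.etree.ElementTree as ET

# XML data
xml_data = '''
<book>
  <title>Harry Potter</title>
  <author>J.K. Rowling</author>
  <year>2005</year>
</book>
'''

# Parse XML into an element tree
root = ET.fromstring(xml_data)
print(root.find('title').text, root.find('author').text, root.find('year').text)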
1.3 Describe the concepts of test-driven development
Test-driven development (TDD) is a software development approach where tests are written before writing the
actual code. This methodology follows a cycle of writing tests, implementing code to pass those tests, and then
refactoring the code as needed. Here's a description of the TDD process along with an example and a simple
network topology analogy:
Start by writing a test that defines the desired behavior of the code.
Tests are typically written using testing frameworks like unittest (for Python), JUnit (for Java), or other relevant tools.
The test should fail initially since the code to implement the functionality doesn't exist yet.
Implement Code:
Write the minimum amount of code necessary to pass the test.
The goal is to make the failing test pass without introducing unnecessary complexity.
Run the Test:
Run the test to check if the newly written code passes the test.
If the test fails, refine the code until it passes the test.
Refactor Code:
Refactor the code to improve its structure, readability, and performance while ensuring that all tests still pass.
The code should maintain its functionality and correctness after refactoring.
Let's consider an example of developing a simple Factorial application using TDD in Python:
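A minimal sketch of that cycle (the file names factorial.py and test_factorial.py are assumptions for this example):

Step 1 - write the failing test first (test_factorial.py):

import unittest
from factorial import factorial

class TestFactorial(unittest.TestCase):
    def test_factorial_of_zero(self):
        self.assertEqual(factorial(0), 1)

    def test_factorial_of_positive_number(self):
        self.assertEqual(factorial(5), 120)

if __name__ == "__main__":
    unittest.main()

Step 2 - write just enough code to pass the tests (factorial.py):

def factorial(n):
    # Iterative factorial; refactoring can follow once the tests pass
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result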
1.4 Compare software development methods (agile, lean, and waterfall)
Waterfall Methodology:
Sequential Process: Waterfall follows a linear and sequential approach, where each
phase of the project must be completed before moving to the next phase.
Agile Methodology:
Flexibility: Embraces change and welcomes customer feedback, allowing for continuous
improvement and adaptation to changing requirements.
Key Practices: Scrum, Kanban, and Extreme Programming (XP) are popular frameworks
under Agile, each with its own set of practices and principles.
Suitability: Ideal for projects with dynamic and evolving requirements, where
frequent feedback and rapid delivery are essential.
Lean Methodology:
Value Stream Mapping: Identifies and optimizes the value stream, from customer
request to product delivery, to streamline processes and eliminate bottlenecks.
Model (Data and Business Logic): 📊 Think of the model as the central database
or data store in your topology. It stores all the data and business logic of
your system, similar to how a centralized server stores and manages information.
View (User Interface): 🖥️ The view represents the user interface or frontend
of your application. It's like the end devices (computers, smartphones) that
users interact with to access data and services from the central database (model).
Loose Coupling: The Observer pattern promotes loose coupling between objects.
Subjects (publishers) and Observers (subscribers) are decoupled, allowing changes
in one object to notify and update multiple dependent objects without directly
depending on them.
Rollback and Revert: ⏪ Allows for rollback and revert operations, undoing
changes that introduce errors.
1.8 Utilize common version control operations with Git
1.8.a Clone
1.8.b Add/remove
1.8.c Commit
1.8.d Push / pull
1.8.e Branch
1.8.f Merge and handling conflicts
1.8.g diff
Clone (1.8.a):
To clone a repository, use the git clone command followed by the repository URL:
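git clone https://github.com/example/repo.git
(The URL above is a placeholder; substitute your repository's URL.)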
Add/Remove (1.8.b):
To add files to the staging area for commit, use git add followed by the
file names or . to add all changes:
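git add filename.txt
git add .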
To remove files from the staging area, use git rm followed by the file names:
git rm filename.txt
Commit (1.8.c):
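To record the staged changes in the repository history, use git commit with a descriptive message:
git commit -m "Describe your change"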
Push/Pull (1.8.d):
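To upload local commits to a remote repository, use git push; to fetch and merge changes from the remote, use git pull (origin and main are illustrative remote and branch names):
git push origin main
git pull origin main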
Branch (1.8.e):
To create a new branch, use git branch followed by the branch name:
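git branch feature-branch
To switch to the new branch, use git checkout (or git switch):
git checkout feature-branch
Merge and handling conflicts (1.8.f):
To merge a branch into the current branch, use git merge. If the same lines were changed on both branches, Git marks the conflicting sections in the affected files; edit them, stage the resolved files with git add, and commit to complete the merge (feature-branch and main are illustrative names):
git checkout main
git merge feature-branch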
Diff (1.8.g):
To view changes between the working directory, the staging area, specific files, or commits, use git diff:
git diff
git diff filename.txt
git diff commit-hash-1 commit-hash-2
2.1 Construct a REST API request to accomplish a task given API documentation
Understand the API endpoints, methods (GET, POST, PUT, DELETE), request
headers, parameters, and response format specified in the API documentation.
Determine the specific task or action you want to perform using the API
(e.g., create a new resource, retrieve data, update an existing record).
Format the request URL with the API endpoint and any additional parameters.
Send the Request:
Use a tool or programming language (e.g., cURL, Postman, Python requests library)
to send the constructed API request.
Here's an example of constructing a REST API request using cURL:
Let's assume we have an API endpoint to create a new user profile
with the following documentation:
Endpoint: https://api.example.com/users
Method: POST
Headers: Content-Type: application/json
Request Body (JSON format):
{
"username": "john_doe",
"email": "[email protected]",
"password": "securepassword123"
}
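Putting the documentation together, the request can be constructed with cURL like this:

curl -X POST https://api.example.com/users \
  -H "Content-Type: application/json" \
  -d '{"username": "john_doe", "email": "[email protected]", "password": "securepassword123"}'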
2.2 Describe common usage patterns related to webhooks
Topology Diagram:
Description:
NMS Detects Event: The NMS detects the event through monitoring
mechanisms or receives a request for a configuration change
from an external source.
NMS Sends Webhook POST Request: The NMS sends an HTTP POST request
to the webhook receiver's endpoint, passing the webhook payload in
the request body.
Webhook Receiver Processes Payload: The webhook receiver
receives the POST request, processes the payload, and triggers
the corresponding action or workflow based on the data received.
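A minimal sketch of such a webhook receiver (it assumes Flask is installed; the endpoint path and payload fields are illustrative):

from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def handle_webhook():
    payload = request.get_json()              # JSON body sent by the NMS
    print("Received event:", payload.get("event"))
    # Trigger the corresponding action or workflow here
    return "", 204                            # acknowledge receipt

if __name__ == "__main__":
    app.run(port=5000)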
4⃣ Data Format: 📝 Specific data formats (e.g., JSON, XML) must be used.
2.4 Explain common HTTP response codes associated with REST APIs
The list below maps common HTTP response codes to their meanings in the context of Cisco network device interactions via REST APIs.
1⃣ 200 OK: This response code indicates that the request was successful. For example, when retrieving information about a Cisco device, a 200 OK response means that the device information was successfully retrieved.
2⃣ 201 Created: This code signifies that a new resource has been successfully created. For instance, when adding a new VLAN configuration to a Cisco switch via API, a 201 Created response would confirm that the VLAN was successfully created.
3⃣ 400 Bad Request: This code indicates that the request was malformed or had invalid syntax. If you send an API request to configure an invalid VLAN ID on a Cisco switch, you might receive a 400 Bad Request response.
6⃣ 404 Not Found: This code indicates that the requested resource was not found on the server. For example, if you try to access a non-existent endpoint for retrieving interface information on a Cisco device, you would receive a 404 Not Found response.
7⃣ 405 Method Not Allowed: This code indicates that the HTTP method used in the request is not supported for the requested resource. For instance, if you attempt to use a POST request to retrieve information instead of a GET request on a Cisco device API endpoint, a 405 Method Not Allowed response would be returned.
8⃣ 500 Internal Server Error: This code indicates that there was an unexpected error on the server while processing the request. For example, if there is a configuration issue or a software bug on a Cisco device's API server, a 500 Internal Server Error response would be returned.
2.5 Troubleshoot a problem given the HTTP response code, request and API documentation
1⃣ Identify the HTTP response code
2⃣ Review the request details
3⃣ Consult the API documentation
4⃣ Check for misconfigurations
5⃣ Analyze the response details
6⃣ Consider network and server issues
7⃣ Implement debugging techniques
8⃣ Testing and validation
2.6 Identify the parts of an HTTP response (response code, headers, body)
Lab 2: Error Handling
Body:
{
  "error": "Invalid request body"
}
2.7 Utilize common API authentication mechanisms: basic, custom token, and API keys
Basic Authentication:
Request: the client sends the username and password, Base64-encoded, in an Authorization header with each request.
API Key Authentication:
Request:
Here, the API key is included in a custom header (x-api-key: your_api_key) as part
of the request to authenticate the API consumer.
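Minimal sketches of the three mechanisms using the Python requests library (URLs, tokens, and the x-api-key header name are placeholders):

import requests

# Basic authentication: username and password sent in the Authorization header
requests.get("https://api.example.com/data", auth=("user", "password"))

# Custom (bearer) token authentication
requests.get("https://api.example.com/data",
             headers={"Authorization": "Bearer your_token"})

# API key authentication in a custom header
requests.get("https://api.example.com/data",
             headers={"x-api-key": "your_api_key"})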
Communication Style:
Synchronous APIs follow a request-response pattern where the client waits for the
server to process and respond to each request before continuing.
Asynchronous APIs allow the client to send requests without waiting for immediate
responses, enabling non-blocking operations.
Latency:
Synchronous APIs typically have lower latency since the client waits for immediate responses.
Asynchronous APIs may have higher latency as the client doesn't wait for immediate responses
and processes responses asynchronously.
REST (Representational State Transfer):
Uses standard HTTP methods (GET, POST, PUT, DELETE).
Statelessness, uniform interface, cacheability principles.
Resource-based architecture with URLs.
Supports JSON, XML data formats.
Commonly used for web services and cloud APIs.
Asynchronous APIs:
Client can continue execution without waiting.
Delayed response possible.
Used for long-running tasks, batch processing.
Implemented with callbacks, promises.
2.9 Construct a Python script that calls a REST API using the requests library
import requests
import json

# DNA Center API endpoint and credentials
dnac_url = "https://your_dnac_server/api/v1"
username = "your_username"
password = "your_password"

# Obtain an authentication token first (legacy /api/v1 authentication shown here;
# newer DNA Center releases use /dna/system/api/v1/auth/token instead)
auth_response = requests.post(f"{dnac_url}/ticket", auth=(username, password), verify=False)
auth_token = auth_response.json()["response"]["serviceTicket"]

# Example API call to get network devices
devices_endpoint = f"{dnac_url}/network-device"
devices_response = requests.get(devices_endpoint,
                                headers={'X-auth-token': auth_token},
                                verify=False)

# Print the response
if devices_response.status_code == 200:
    devices_data = devices_response.json()
    print(json.dumps(devices_data, indent=2))
else:
    print(f"Request failed with status code {devices_response.status_code}")
3.1 Construct a Python script that uses a Cisco SDK given SDK documentation
3.2 Describe the capabilities of Cisco network management platforms and
APIs (Meraki, Cisco DNA Center, ACI, Cisco SD-WAN, and NSO)
3.6 Describe the device level APIs and dynamic interfaces for IOS XE and NX-OS
3.7 Identify the appropriate DevNet resource for a given scenario (Sandbox,
Code Exchange, support, forums, Learning Labs, and API documentation)
3.9.a Obtain a list of network devices by using Meraki, Cisco DNA Center,
ACI, Cisco SD-WAN, or NSO
# Docs: https://dnacentersdk.readthedocs.io/en/latest/api/api.html
from dnacentersdk import DNACenterAPI

dnac = DNACenterAPI(username="your_username",
                    password="your_password",
                    base_url="https://your_dnac_server")

devices = dnac.devices.get_device_list()
for device in devices['response']:
    print(device['hostname'], device['managementIpAddress'])
3.2 Describe the capabilities of Cisco network management platforms and APIs
(Meraki, Cisco DNA Center, ACI, Cisco SD-WAN, and NSO)
Meraki:
Meraki Dashboard API allows programmable access to the Meraki cloud infrastructure,
enabling automation and monitoring of Meraki devices such as switches, routers, and access points.
Cisco SD-WAN:
The vManage REST API provides centralized, programmable management and monitoring of the SD-WAN fabric, including device, template, and policy operations.
Cisco UCS Manager:
RESTful API: UCS Manager provides a RESTful API for programmatically managing
servers, allowing automation and orchestration of server operations and configurations.
Intersight:
Intersight is Cisco's cloud-based operations platform for UCS and HyperFlex infrastructure and exposes a REST API for automation and integration.
Webex:
APIs for Integration: Webex provides APIs for integrating with third-party
applications and services, allowing developers to build custom workflows,
automate tasks, and enhance user experiences.
Call Control: CUCM is a call control and telephony management solution that provides
call routing, signaling, call handling, and media processing for IP telephony, video conferencing,
and collaboration services.
AXL (Administrative XML Layer) API: AXL API enables administrative tasks such as user
management, device configuration, dial plan configuration, and call control operations through
XML-based requests and responses.
UDS (User Data Services) API: UDS API allows access to user data and profile information
stored in CUCM, facilitating integration with directory services, CRM systems, and identity
management platforms.
Endpoint Protection: Secure Endpoint (formerly known as Cisco AMP for Endpoints)
provides advanced endpoint protection capabilities, including malware detection and
prevention, endpoint detection and response (EDR), and sandboxing for threat analysis.
APIs for Integration: Secure Endpoint offers APIs that allow integration with security
orchestration, automation, and response (SOAR) platforms, SIEM solutions, and
third-party security tools. These APIs enable automated response actions, threat
intelligence sharing, and endpoint management.
Cisco Umbrella:
Cloud Security: Umbrella is a cloud-delivered security service that provides DNS-layer
security, secure web gateway (SWG) functionality, firewall integration, and threat
intelligence to protect users and devices from internet-based threats.
APIs for Customization: Umbrella offers APIs for custom integrations, policy
management, reporting, and event notifications. These APIs enable organizations
to customize security policies, automate security workflows, and integrate Umbrella
with other security solutions.
Cisco Firepower:
Next-Generation Firewall (NGFW): Firepower Threat Defense (FTD) is a NGFW
platform that combines firewall, intrusion prevention system (IPS), advanced malware
protection (AMP), and URL filtering capabilities for comprehensive network security.
APIs for Management: Firepower provides RESTful APIs for device management,
configuration, policy enforcement, and event monitoring. These APIs allow
administrators to automate firewall management tasks, orchestrate security
policies, and integrate Firepower with security orchestration platforms.
Cisco Identity Services Engine (ISE):
APIs for Integration: ISE offers APIs for integration with identity management
systems, endpoint security solutions, and network infrastructure. These APIs
enable automated user provisioning, access policy enforcement, and security
policy orchestration.
Cisco Secure Malware Analytics (formerly Threat Grid):
APIs for Threat Intelligence: Secure Malware Analytics offers APIs for threat intelligence
sharing, malware analysis, and automated threat response. These APIs enable security
teams to investigate and remediate threats, share threat intelligence, and integrate threat
data into security operations.
Cisco SecureX Threat Response (XDR):
APIs for Orchestration: SecureX Threat Response offers APIs for security orchestration,
incident response automation, and threat enrichment. These APIs enable security teams
to automate threat response actions, streamline incident investigations, and integrate XDR
with security workflows.
Python Script:
Explanation: This script uses the Secure Endpoint API to retrieve a list of security events filtered by
event type and start date. It then processes the events based on their severity, taking different actions
for high, medium, and low severity events.
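A sketch of the script described above using the requests library; the API host, credential names, and query parameters are assumptions based on the Secure Endpoint (AMP for Endpoints) API and should be verified against its documentation:

import requests

# Secure Endpoint API host and credentials (placeholders; the host is region-specific)
AMP_URL = "https://api.amp.cisco.com/v1/events"
client_id = "your_client_id"
api_key = "your_api_key"

# Filter events by type and start date (parameter values are placeholders)
params = {"event_type[]": "your_event_type_id", "start_date": "2024-01-01T00:00:00+00:00"}
response = requests.get(AMP_URL, auth=(client_id, api_key), params=params)

# Process events based on severity
for event in response.json().get("data", []):
    severity = str(event.get("severity", "")).lower()
    if severity == "high":
        print("High severity event - take immediate action:", event.get("id"))
    elif severity == "medium":
        print("Medium severity event - open an investigation:", event.get("id"))
    else:
        print("Low severity event - log for review:", event.get("id"))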
Cisco Umbrella API Example:
Objective: Block a malicious domain in Cisco Umbrella based on threat intelligence data.
Python Script:
Explanation: This script uses the Umbrella API to block a specified domain by categorizing
it as malicious and taking the "block" action. It demonstrates how APIs can be used to automate
security controls and response actions in Cisco Umbrella.
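A hypothetical sketch of such a script; the Enforcement API endpoint, the customerKey query parameter, and the payload fields below are assumptions and should be checked against the current Umbrella API documentation:

import requests

# Umbrella Enforcement API endpoint (assumed) and customer key placeholder
UMBRELLA_URL = "https://s-platform.api.opendns.com/1.0/events"
customer_key = "your_customer_key"

# Event payload describing the domain to block (field names are assumptions)
payload = {
    "deviceId": "your_device_id",
    "deviceVersion": "1.0",
    "eventTime": "2024-01-01T00:00:00.0Z",
    "alertTime": "2024-01-01T00:00:00.0Z",
    "dstDomain": "malicious-example.com",
    "dstUrl": "http://malicious-example.com/",
    "protocolVersion": "1.0a",
    "providerName": "Security Platform"
}

response = requests.post(UMBRELLA_URL, params={"customerKey": customer_key}, json=payload)
print("Domain submitted for blocking, status:", response.status_code)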
3.6 Describe the device level APIs and dynamic interfaces for IOS XE and NX-OS
IOS XE:
Device-Level APIs:
RESTCONF: This is a RESTful API for device configuration and management.
It uses HTTP methods like GET, POST, PUT, DELETE to interact with devices.
NETCONF: A protocol for managing network devices. It uses XML-based messages
over a secure connection to configure and monitor devices.
YANG Models: IOS XE devices support YANG data models, which provide a
structured way to define configuration and operational data for network elements.
Dynamic Interfaces:
Embedded Event Manager (EEM): Allows you to monitor events and take automated
actions based on predefined policies. For example, you can trigger scripts in response to
specific events like interface status changes.
Python Scripts: With the Python interpreter integrated into IOS XE, you can create custom
scripts to automate tasks and interact with device APIs programmatically.
NX-OS:
Device-Level APIs:
NX-API: Provides a RESTful API for NX-OS devices. It supports both XML and
JSON formats for data exchange and allows configuration and monitoring of devices.
NETCONF: Similar to IOS XE, NX-OS devices support NETCONF for device
management using XML-based messages.
NX-SDK: Offers a software development kit for building custom applications
and scripts to interact with NX-OS devices.
Dynamic Interfaces:
PowerOn Auto Provisioning (POAP): Automates the initial configuration and software
upgrades of devices. It dynamically assigns IP addresses and installs software based
on predefined policies.
NX-OS Python SDK: Allows you to write Python scripts to automate tasks and configure
devices using NX-OS APIs. It provides a Pythonic way to interact with the device.
Here's an example of using the RESTCONF API on an IOS XE device to retrieve
interface information using Python and the requests library:
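A minimal sketch (the device address and credentials are placeholders; verify=False is used only for lab devices with self-signed certificates):

import requests

url = "https://10.0.0.1/restconf/data/ietf-interfaces:interfaces"
headers = {"Accept": "application/yang-data+json"}

# Query the interfaces container defined by the ietf-interfaces YANG model
response = requests.get(url, headers=headers, auth=("admin", "password"), verify=False)
print(response.json())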
NX-API REST
DevNet Sandbox: Ideal for testing and experimenting with Cisco technologies in a virtual
environment. You can reserve a lab and get hands-on experience with various Cisco products.
https://developer.cisco.com/site/sandbox/
DevNet Code Exchange: Perfect for finding pre-built code samples and projects shared by
the community. It's a collaborative platform where you can also share your own code.
https://developer.cisco.com/codeexchange/search/?q=dnac
DevNet Learning Labs: Best suited for guided learning paths and interactive tutorials. Learning
Labs offer step-by-step instructions to help you understand and implement various Cisco technologies.
DevNet Forums: Ideal for engaging with the community, asking questions, sharing knowledge, and
discussing topics related to Cisco technologies. Forums are a great way to get insights and advice
from peers and experts.
DevNet API Documentation: This is where you can find comprehensive documentation for
various Cisco APIs. It includes detailed information on how to use the APIs, along with examples
and reference material.
Choose the right resource for a given scenario:
Scenario: You want to build a new integration using Cisco's DNA Center API.
Resource: DevNet API Documentation. This will provide you with all the necessary details about
the DNA Center API endpoints, methods, and usage examples.
Scenario: You are troubleshooting an issue with a script you wrote for automating
network configurations.
Resource: DevNet Support. You can get help from Cisco’s technical support team to
resolve your issue.
Scenario: You want to learn how to deploy a new Cisco technology in your lab environment.
Resource: DevNet Learning Labs. Interactive tutorials will guide you through the
deployment process step-by-step.
Scenario: You need a pre-built script to automate a repetitive task in your network.
Resource: DevNet Code Exchange. You can search for existing scripts and code samples that
meet your requirements.
Scenario: You are looking for advice and best practices from other network engineers.
Resource: DevNet Forums. Engaging with the community can provide valuable insights and
shared experiences.
Scenario: You want to experiment with Cisco's latest security solutions without setting up a
physical lab.
Resource: DevNet Sandbox. Virtual labs allow you to explore and test new solutions in a
risk-free environment.
3.8 Apply concepts of model driven programmability
(YANG, RESTCONF, and NETCONF) in a Cisco environment
YANG
RESTCONF
NETCONF
YANG Models
Identify the YANG Model: Find the appropriate YANG model for the
configuration or operational data you need. Cisco provides many standard
and custom YANG models.
Understand the Model: Study the structure and elements of the YANG model to
understand how data is organized.
Use Tools: Tools like pyang can be used to validate and visualize YANG models.
RESTCONF
Formulate REST API Requests: Use HTTP methods (GET, POST, PUT, DELETE)
to interact with the device’s RESTCONF interface.
NETCONF: Perform configuration and state management using XML over SSH.
These model-driven programmability tools enable efficient, scalable, and consistent network
management and automation in Cisco environments.
3.9 Construct code to perform a specific operation based on a set of requirements and given
API reference documentation such as these:
3.9.a Obtain a list of network devices by using Meraki, Cisco DNA Center,
ACI, Cisco SD-WAN, or NSO
3.9.b Manage spaces, participants, and messages in Webex
3.9.c Obtain a list of clients / hosts seen on a network using Meraki or Cisco DNA Center
3.9.a Obtain a list of network devices by using Meraki, Cisco DNA Center,
ACI, Cisco SD-WAN, or NSO
Meraki API
Cisco DNA Center API
Cisco ACI API
Cisco SD-WAN API
Cisco NSO API
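For example, a minimal sketch using the Meraki Dashboard API from the list above (the API key and organization ID are placeholders):

import requests

MERAKI_URL = "https://api.meraki.com/api/v1"
headers = {"X-Cisco-Meraki-API-Key": "your_api_key"}

# List the devices in an organization
response = requests.get(f"{MERAKI_URL}/organizations/your_org_id/devices", headers=headers)
for device in response.json():
    print(device.get("name"), device.get("model"), device.get("serial"))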
3.9.b Manage spaces, participants, and messages in Webex
Managing spaces, participants, and messages in Webex can be done
using the Webex Teams API. Here’s how you can use Python to interact
with the Webex API to manage spaces, participants, and messages.
Prerequisites
Webex API Token: Obtain an access token from your Webex account.
Managing Spaces: Use the /rooms endpoint to list and create spaces.
Managing Participants: Use the /memberships endpoint to list and add participants to spaces.
Managing Messages: Use the /messages endpoint to list and send messages in spaces.
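Minimal sketches of the three operations above using the requests library (the token, space ID, and email address are the placeholders noted below):

import requests

WEBEX_URL = "https://webexapis.com/v1"
headers = {"Authorization": "Bearer your_access_token"}

# List spaces (rooms) the token's user belongs to
rooms = requests.get(f"{WEBEX_URL}/rooms", headers=headers).json()
for room in rooms.get("items", []):
    print(room["id"], room["title"])

# Add a participant to a space
requests.post(f"{WEBEX_URL}/memberships", headers=headers,
              json={"roomId": "your_space_id", "personEmail": "[email protected]"})

# Send a message to a space
requests.post(f"{WEBEX_URL}/messages", headers=headers,
              json={"roomId": "your_space_id", "text": "Hello from the Webex API!"})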
These scripts demonstrate basic interactions with the Webex API. Replace placeholders
like your_access_token, your_space_id, and [email protected] with actual values.
This will allow you to effectively manage your Webex spaces, participants, and messages.
3.9.c Obtain a list of clients / hosts seen on a network using Meraki or Cisco DNA Center
To obtain a list of clients or hosts seen on a network using Meraki or Cisco DNA Center,
you can use their respective APIs. Below are examples demonstrating how to do this using Python.
1. Meraki API
The Meraki Dashboard API allows you to retrieve a list of clients seen on a network.
Python Example:
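A minimal sketch (the API key and network ID are placeholders; the timespan parameter limits results to the last 24 hours):

import requests

headers = {"X-Cisco-Meraki-API-Key": "your_api_key"}
url = "https://api.meraki.com/api/v1/networks/your_network_id/clients"

# List clients seen on the network in the last day
response = requests.get(url, headers=headers, params={"timespan": 86400})
for client in response.json():
    print(client.get("description"), client.get("ip"), client.get("mac"))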
Detailed Steps for Each Platform
Meraki API
Get API Key: Ensure you have your Meraki API key.
Network ID: Obtain the network ID for which you want to list clients.
API Endpoint: Use the /networks/{networkId}/clients endpoint to get the list of clients.
Authentication: Use the API key in the request headers.
Example Outputs For Meraki:
Cisco DNA Center API
Cisco DNA Center API provides various endpoints to retrieve information about clients connected to the network.
Python Example:
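A minimal sketch using the client health endpoint (the server URL is a placeholder, and the token is assumed to have been obtained from the authentication API first):

import requests

dnac_url = "https://your_dnac_server"
headers = {"X-Auth-Token": "your_auth_token"}

# Retrieve client health summaries for clients seen on the network
response = requests.get(f"{dnac_url}/dna/intent/api/v1/client-health",
                        headers=headers, verify=False)
print(response.json())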
For Cisco DNA Center:
Summary
Enhanced User Experience 🎮 - Provides faster, smoother interactions for applications like gaming and VR.
Cost Savings 💰 - Reduces operational costs by lowering cloud resource and bandwidth needs.
Enabling Emerging Technologies 🚀 - Supports 5G, IoT, and AI with necessary computational power and low latency.
Contextual Awareness 📍 - Gathers and processes contextual data for personalized applications.
Autonomous Operations 🤖 - Allows devices to operate independently in environments with limited connectivity.
1. Reduced Latency
By processing data closer to the source, edge computing significantly reduces the
latency associated with transmitting data to a centralized data center or cloud.
This is critical for applications that require real-time processing, such as autonomous
vehicles, industrial automation, and augmented reality.
2. Bandwidth Optimization
Edge computing reduces the amount of data that needs to be transmitted over
the network to central data centers. By processing and filtering data locally, only
relevant information is sent to the cloud, optimizing bandwidth usage and reducing costs.
3. Enhanced Privacy and Security
Processing data at the edge can improve privacy and security by keeping sensitive data local
rather than transmitting it across potentially insecure networks to centralized servers. This is
especially beneficial for applications in healthcare, finance, and smart cities where data privacy
is paramount.
4. Improved Reliability and Resilience
Edge computing can reduce operational costs by decreasing the need for extensive
cloud resources and bandwidth. By processing data locally, organizations can reduce
the amount of data transferred to the cloud and lower associated costs.
9. Enabling Emerging Technologies
Edge computing supports the deployment and operation of emerging technologies such as
5G, IoT, and AI by providing the necessary computational power and low latency required for
these applications to function effectively.
10. Contextual Awareness
Edge devices can gather and process contextual data such as location, environment, and
user behavior, enabling more personalized and context-aware applications. This capability
is valuable for smart homes, retail, and location-based services.
11. Autonomous Operations
Edge devices can operate independently in environments with limited or intermittent connectivity, processing data locally even when the connection to a central cloud is unavailable.
Comparison Table
4.3 Identify the attributes of these application deployment types
Attributes:
Isolation: Each VM runs in its own isolated environment with a separate operating system.
Overhead: Requires significant resources due to the need for a full OS for each VM.
Performance: Generally slower compared to bare metal due to the overhead of virtualization.
Flexibility: Supports multiple OS types and versions on the same physical hardware.
Scalability: Easy to scale by creating additional VMs, but limited by the underlying physical
hardware.
Portability: VMs can be moved between different physical hosts, provided the hypervisor
is supported.
Security: Strong isolation between VMs; vulnerabilities in one VM typically do not affect others.
Management: Requires a hypervisor (e.g., VMware, Hyper-V) for management,
which adds complexity.
Boot Time: Typically takes longer to boot compared to containers.
Bare Metal
Attributes:
Performance: Highest performance as there is no virtualization overhead.
Isolation: Full control of the hardware, providing strong isolation from other systems.
Overhead: No overhead from a hypervisor or virtualization layer.
Flexibility: Limited to running a single OS directly on the hardware; multiple OS
instances require multiple physical machines.
Scalability: Scaling requires adding more physical servers, which can be more
costly and time-consuming.
Portability: Less portable than VMs and containers; moving an application to
different hardware can be more complex.
Security: High security, as there is no shared hypervisor layer; however, physical security
and hardware-level vulnerabilities need consideration.
Management: Simpler in terms of not needing a hypervisor, but each server needs
individual management.
Boot Time: Faster than VMs, typically faster than containers depending on the
OS and application.
Containers
Attributes:
Management: Managed through container orchestration tools (e.g., Kubernetes, Docker Swarm),
simplifying deployment and scaling.
Boot Time: Very fast to start and stop, much faster than VMs and comparable to bare metal in
some cases.
Container Runtime Engine:
Containers:
Container 1, 2, 3, ...: Each container runs its own application along with the
necessary libraries and binaries, isolated from other containers.
Containers share the same OS kernel but operate in isolated user spaces.
Containers are lightweight and can be started or stopped quickly.
Comparison Table
4.4 Describe components for a CI/CD
pipeline in application deployments
Components of a CI/CD Pipeline
Components:
Environment Management: Manages different deployment environments.
Configuration Management: Ensures consistent application configuration across
environments.
5⃣Continuous Deployment (CD):
Function: Extends continuous delivery by automatically deploying code changes to
production after passing tests and validations.
Components:
Deployment Automation: Fully automates the process of deploying to production.
Canary Releases/Rolling Updates: Gradually deploys changes to subsets of users
to ensure stability.
8⃣ Security (DevSecOps):
Tools: Snyk, WhiteSource, SonarQube, OWASP ZAP
Function: Integrates security practices into the CI/CD pipeline to identify and
remediate vulnerabilities early in the development process.
Components:
Static Application Security Testing (SAST): Scans source code for vulnerabilities.
Dynamic Application Security Testing (DAST): Tests running applications for security
vulnerabilities.
Dependency Scanning: Checks for vulnerabilities in third-party libraries and
dependencies.
CI/CD Pipeline Diagram
4.5 Construct a Python unit test
You'll use the unittest module, which is part of Python's standard library. Here's an
example of how to create a unit test for a simple Python function.
Step-by-Step Example
Let's assume we have a function that adds two numbers:
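A minimal version of that function, saved for this sketch as math_operations.py (the module name is an assumption that matches the test file name used later):

def add(a, b):
    # Return the sum of two numbers
    return a + b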
Writing Unit Tests
1⃣Create a Test File:
Create a separate file for your tests, typically named test_<module>.py.
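A sketch of test_math_operations.py that matches the explanation below (it assumes the add function lives in math_operations.py):

import unittest
from math_operations import add

class TestMathOperations(unittest.TestCase):

    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

    def test_add_positive_and_negative(self):
        self.assertEqual(add(5, -3), 2)

    def test_add_zero(self):
        self.assertEqual(add(0, 7), 7)

if __name__ == '__main__':
    unittest.main()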
1⃣Imports:
unittest: The built-in module for unit testing in Python.
add: The function being tested.
2⃣ Test Class:
TestMathOperations: A subclass of unittest.TestCase that contains test methods.
3⃣Test Methods:
test_add_positive_numbers: Tests the addition of two positive numbers.
test_add_negative_numbers: Tests the addition of two negative numbers.
test_add_positive_and_negative: Tests the addition of a positive and a negative number.
test_add_zero: Tests the addition of zero to another number.
4⃣ Assertions:
self.assertEqual(a, b): Checks that a equals b. If not, the test fails.
5⃣ Running Tests:
unittest.main(): Runs all the test cases when the script is executed.
To run the tests, you can execute the test script from the command line:
python test_math_operations.py
4.6 Interpret contents of a Dockerfile
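The instruction-by-instruction explanation below refers to a Dockerfile along these lines (a sketch consistent with that explanation):

# Use a slim Python 3.9 base image
FROM python:3.9-slim

# Prevent .pyc files and avoid buffered output for cleaner logging
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set the working directory inside the container
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . /app/

# Start the application
CMD ["python", "app.py"]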
1⃣FROM:
FROM python:3.9-slim: Specifies the base image for the Docker image. In this case, it uses a
slim version of Python 3.9. This base image is pulled from Docker Hub if it doesn't exist locally.
2⃣ENV:
ENV PYTHONDONTWRITEBYTECODE=1: Sets an environment variable inside the container
to prevent Python from writing .pyc files.
ENV PYTHONUNBUFFERED=1: Sets an environment variable to ensure Python output is sent
straight to the terminal (stdout) without being buffered, which is helpful for logging.
3⃣WORKDIR:
WORKDIR /app: Sets the working directory inside the container to /app. All subsequent
instructions that use relative paths will be based in this directory.
4⃣ COPY:
COPY requirements.txt /app/: Copies the requirements.txt file from the host
machine to the /app/ directory inside the container.
COPY . /app/: Copies all files from the current directory on the host machine
to the /app/ directory in the container.
5⃣ RUN:
RUN pip install --no-cache-dir -r requirements.txt: Runs a command to install the Python
dependencies specified in requirements.txt. The --no-cache-dir option prevents pip
from caching the packages, which reduces the image size.
6⃣ CMD:
CMD ["python", "app.py"]: Specifies the command to run when the container starts.
Here, it runs the Python application app.py.
Summary
1⃣ Base Image: The image starts with a Python 3.9 slim base image.
3⃣ Working Directory: Sets the working directory inside the container to /app.
Useful Links:-
https://docs.docker.com/get-started/
https://code.visualstudio.com/docs/?dv=win64user
https://docs.docker.com/desktop/install/windows-install/
4.7 Utilize Docker images in local developer environment
sudo apt update
apt list --upgradable
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-cache policy docker-ce
sudo apt install docker-ce
sudo systemctl status docker
docker pull python:3.9-slim
Utilizing Docker images in a local developer environment can significantly streamline
development workflows by providing consistent and isolated environments. Here’s a
guide on how to use Docker images for development:
Steps to Utilize Docker Images in Local Development
1⃣ Install Docker:
Ensure Docker is installed on your machine. You can download it from
Docker's official website. https://www.docker.com/
2⃣ Pull a Docker Image:
Pull the required Docker image from Docker Hub or any other Docker registry.
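For example, to pull the image used in this chapter:
docker pull python:3.9-slim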
This command pulls the Python 3.9 slim image from Docker Hub.
3⃣Create a Dockerfile for Your Project:
Create a Dockerfile in your project directory to define your development
environment. Here's an example Dockerfile for a Python project:
# Use the official Python image
FROM python:3.9-slim
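The rest of the Dockerfile can follow the pattern shown in section 4.6.
4⃣ Build the Docker Image:
Build an image from the Dockerfile (the tag my-python-app matches the options explained below):
docker build -t my-python-app .
5⃣ Run the Container:
Start an interactive container from the image, mounting the project directory:
docker run -it --rm -v $(pwd):/app -w /app my-python-app /bin/bash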
Options:
-it: Runs the container in interactive mode with a terminal.
--rm: Automatically removes the container when it exits.
-v $(pwd):/app: Mounts the current directory ($(pwd)) to the /app directory
inside the container, allowing you to edit files locally and have changes
reflected in the container.
-w /app: Sets the working directory inside the container.
my-python-app: The name of the Docker image to use.
/bin/bash: The command to run inside the container (a Bash shell in this case).
6⃣ Develop Inside the Container:
With the container running, you can develop your application. Changes made
to files in the mounted directory will be reflected inside the container.
You can run your application and its tests inside the container, ensuring a
consistent environment.
7⃣Run Application/Tests:
Inside the container, you can run your application or tests just like you
would on your local machine.
python app.py
Or run tests:
pytest
Example Workflow
4⃣ Make changes to your code using your preferred editor on your local machine.
Inside the container shell, run your application or tests to validate changes.
5⃣ Cleanup:
Exit the container. The --rm option ensures the container is removed automatically.
Benefits of Using Docker for Local Development
Simplified Setup: Reduces the setup time for new developers by providing
a pre-configured environment.
4.8 Identify application security issues related to secret protection, encryption (storage and transport), and data handling
1⃣ Hardcoding Secrets:
Issue: Storing secrets such as API keys, passwords, and tokens directly in the codebase.
Mitigation: Use environment variables or secret management tools (e.g., AWS Secrets
Manager, HashiCorp Vault).
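A minimal sketch of the mitigation in Python: read the secret from an environment variable instead of hardcoding it (the variable name API_TOKEN is illustrative):

import os

# Fail fast if the secret has not been provided to the process environment
api_token = os.environ.get("API_TOKEN")
if api_token is None:
    raise RuntimeError("API_TOKEN environment variable is not set")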
Use Secret Management Tools: Tools like AWS Secrets Manager, HashiCorp
Vault, and Azure Key Vault help manage and rotate secrets securely.
Encrypt Data at Rest and in Transit: Ensure all sensitive data is encrypted
using strong, current encryption standards.
Implement Access Controls: Apply strict access controls to both data and
secrets, using principles like RBAC.
Sanitize and Validate Input: Always sanitize and validate input data to
prevent common vulnerabilities like SQL injection and XSS.
Regularly Update Security Practices: Stay updated with the latest security
best practices and regularly audit your application for potential vulnerabilities.
Secure Data Disposal: Ensure that data is securely deleted and not recoverable
once it is no longer needed.
Application Security Best Practices Diagram
4.9 Explain firewall, DNS, load balancers, and reverse proxy in application deployment
Firewall
A firewall is a network security device or software that monitors and controls incoming
and outgoing network traffic based on predetermined security rules.
Purpose: Protects the network from unauthorized access, attacks, and malicious activity.
Types:
Network Firewalls: Placed between internal and external networks, filtering traffic based
on IP addresses, ports, and protocols.
Application Firewalls: Focus on inspecting traffic at the application layer, identifying
and blocking malicious payloads.
Common Features:
Packet Filtering: Inspects individual packets and allows or blocks them based on predefined rules.
Stateful Inspection: Monitors the state of active connections and makes decisions based
on the context of the traffic.
Proxy Services: Intermediary between clients and servers to inspect and control traffic.
Intrusion Prevention Systems (IPS): Detects and prevents security threats in real-time.
DNS (Domain Name System)
How It Works:
User enters a domain name in their browser.
DNS resolver queries DNS servers to find the IP address.
DNS server responds with the IP address.
Browser uses the IP address to connect to the web server.
Load Balancer
A load balancer is a device or software that distributes incoming network traffic across
multiple servers to ensure no single server becomes overwhelmed.
Reverse Proxy
A reverse proxy sits in front of backend servers, accepts client requests, and forwards them to the appropriate server. Common uses include:
Web Acceleration: Reducing latency and improving load times by caching content and
compressing responses.
API Gateway: Routing API requests to the appropriate microservices and managing API
security, rate limiting, and logging.
Summary Table
4.10 Describe top OWASP threats (such as XSS, SQL injections, and CSRF)
The OWASP (Open Web Application Security Project) publishes a list of the top
security threats to web applications. Here's a description of some of the top OWASP
threats, including Cross-Site Scripting (XSS), SQL Injection, and Cross-Site Request
Forgery (CSRF):
OWASP Threat
Cross-Site Scripting (XSS)
SQL Injection
Cross-Site Request Forgery (CSRF)
Injection
Insecure Deserialization
Security Misconfiguration
Broken Authentication
Sensitive Data Exposure
Using Components with Known Vulnerabilities
Insufficient Logging and Monitoring
Cross-Site Scripting (XSS)
Description: XSS occurs when an attacker injects malicious scripts into content from
otherwise trusted websites. The scripts are then executed in the context of the victim’s browser.
Types of XSS:
Stored XSS: The malicious script is permanently stored on the target server (e.g., in a database).
Reflected XSS: The malicious script is reflected off a web server, such as in an error message
or search result.
DOM-Based XSS: The vulnerability exists in the client-side code rather than server-side.
Impact:
Stealing cookies, session tokens, or other sensitive information.
Defacing websites.
Redirecting users to malicious sites.
Mitigation:
Escape untrusted data based on the context (HTML, JavaScript, URL, etc.).
Use Content Security Policy (CSP) to reduce the impact of XSS.
Validate input on the server side.
SQL Injection
Description: SQL Injection happens when an attacker can execute arbitrary SQL
code on a database by manipulating user inputs that are not properly sanitized.
Impact:
Unauthorized access to data.
Data modification or deletion.
Compromising the entire database server.
Mitigation:
Use parameterized queries or prepared statements.
Use ORM (Object-Relational Mapping) frameworks that automatically
handle query parameterization.
Validate and sanitize inputs.
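A minimal sketch of a parameterized query using Python's built-in sqlite3 module (the table and column names are illustrative):

import sqlite3

conn = sqlite3.connect("example.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS users (username TEXT)")

# Hostile input is passed as a bound parameter, so it is treated as data, not SQL
username = "alice'; DROP TABLE users; --"
cur.execute("SELECT * FROM users WHERE username = ?", (username,))
print(cur.fetchall())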
Cross-Site Request Forgery (CSRF)
Description: CSRF tricks an authenticated user's browser into submitting an unwanted request to a web application in which the user is logged in, so the malicious action appears to come from the legitimate user.
Impact:
Unintended fund transfers.
Changing account settings, including passwords and email addresses.
Performing administrative functions.
Mitigation:
Use anti-CSRF tokens that are unique per session/request.
Validate the origin and referer headers.
Implement SameSite cookie attributes.
Injection
Description: Injection flaws occur when untrusted data is sent to an interpreter (for example SQL, OS commands, or LDAP) as part of a command or query, allowing an attacker to alter what the interpreter executes.
Impact:
Data loss or corruption.
Loss of accountability or denial of access.
Full system compromise.
Mitigation:
Use safe APIs.
Avoid using the interpreter directly.
Escape special characters in inputs.
Insecure Deserialization
Description: Occurs when untrusted serialized data is deserialized by an application, potentially enabling remote code execution, privilege escalation, or data tampering.
Security Misconfiguration
Description: Results from insecure default configurations, incomplete or ad hoc setups, unnecessary features left enabled, or verbose error messages.
Impact:
Unauthorized access to default accounts, unused pages, unpatched flaws, etc.
Exposure of sensitive data.
Mitigation:
Implement a repeatable hardening process.
Use automated scanners to detect misconfigurations.
Regularly patch and update software.
Broken Authentication
Description: Broken authentication involves issues that allow attackers to
compromise passwords, keys, or session tokens, or to exploit other
implementation flaws to assume other users’ identities.
Impact:
User account compromise.
Data theft or manipulation.
Unauthorized access to systems and sensitive data.
Mitigation:
Implement multi-factor authentication.
Store passwords using strong hashing algorithms.
Use secure mechanisms for session management.
Sensitive Data Exposure
Description: Sensitive data exposure occurs when applications do not
adequately protect sensitive information, such as financial, healthcare,
or personally identifiable information (PII).
Impact:
Identity theft.
Financial loss.
Legal repercussions.
Mitigation:
Encrypt data at rest and in transit.
Use strong cryptographic algorithms.
Limit data exposure by adhering to the principle of least privilege.
Using Components with Known Vulnerabilities
Impact:
Exploitation of known vulnerabilities to compromise systems.
Data breaches or corruption.
Full system compromise.
Mitigation:
Regularly update components and dependencies.
Use tools to monitor and manage vulnerabilities (e.g., OWASP Dependency-Check).
Prefer components that are actively maintained and supported.
Insufficient Logging and Monitoring
Impact:
Delayed detection of breaches.
Incomplete forensic analysis.
Unidentified and uncontained attacks.
Mitigation:
Implement comprehensive logging and monitoring.
Regularly review and analyze logs.
Establish an incident response plan.
Summary Table
4.11 Utilize Bash commands (file management, directory navigation, and environmental variables)
Essential Bash commands for file management, directory navigation, and handling environmental variables:
File Management
Directory Navigation Commands Summary Table
Environmental Variables
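Representative commands for working with environment variables (the variable name MY_VAR is illustrative):
export MY_VAR="some value"    # set a variable for the current shell and its child processes
echo $MY_VAR                  # print the value of a variable
env                           # list all environment variables
printenv PATH                 # print a specific variable
unset MY_VAR                  # remove a variable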
4.12 Identify the principles of DevOps practices
Principles of DevOps Practices
2⃣ Automation
Automate repetitive tasks such as testing, deployment, and infrastructure
provisioning.
Utilize CI/CD pipelines to streamline processes and reduce manual intervention.
5.3 Describe the use and roles of network simulation and test tools (such as Cisco Modeling
Labs and pyATS)
5.4 Describe the components and benefits of CI/CD pipeline in infrastructure automation
5.5 Describe principles of infrastructure as code
5.6 Describe the capabilities of automation tools such as Ansible, Terraform, and Cisco NSO
5.7 Identify the workflow being automated by a Python script that uses Cisco APIs including
ACI, Meraki, Cisco DNA Center, or RESTCONF
5.8 Identify the workflow being automated by an Ansible playbook (management
packages, user management related to services, basic service configuration, and start/stop)
5.9 Identify the workflow being automated by a bash script (such as file management, app
install, user management, directory navigation)
5.10 Interpret the results of a RESTCONF or NETCONF query
IP Topology:
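The explanation below walks through a YANG module along these lines (a sketch reconstructed from that description; the module name and namespace are illustrative, and ietf-inet-types is imported for the inet:ipv4-address type):

module example-interfaces {
  namespace "http://example.com/example-interfaces";
  prefix exif;

  import ietf-inet-types { prefix inet; }
  import ietf-interfaces { prefix if; }

  container interfaces {
    list interface {
      key "name";
      leaf name { type string; }
      leaf enabled { type boolean; default "true"; }
      container ipv4 {
        leaf address { type inet:ipv4-address; }
        leaf netmask { type inet:ipv4-address; }
      }
    }
  }
}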
Import Statement:
import ietf-interfaces { prefix if; }: Imports another YANG module called
ietf-interfaces and assigns it the prefix if.
Container:
container interfaces: Defines a container named interfaces that will
group related data nodes.
List:
list interface { key "name"; }: Defines a list of interfaces. Each entry in the
list is uniquely identified by the name key.
Leaf Nodes:
leaf name { type string; }: Defines a name leaf with a string type.
leaf enabled { type boolean; default "true"; }: Defines an enabled leaf
with a boolean type and a default value of true.
Nested Container:
container ipv4: Defines a container within each interface entry to hold
IPv4 configuration.
leaf address { type inet:ipv4-address; }: Defines an address leaf within the
ipv4 container for the IPv4 address.
leaf netmask { type inet:ipv4-address; }: Defines a netmask leaf within the ipv4
container for the subnet mask.
Conclusion:
The example YANG model defines a structured way to configure network interfaces,
including their names, enable/disable state, and IPv4 configuration. Understanding
these basic components and their structure is crucial for working with YANG models
effectively.
Here are the YANG scripts for the configurations mentioned in the lab scenario:
These YANG scripts define the configuration structure for Router1,
Switch1, Switch2, and Switch3, including interface configurations and
IP addresses. Adjust the namespaces, prefixes, and specific configuration
details as needed for your network environment.
Here are the corresponding XML payloads based on the provided YANG
scripts for Router1, Switch1, Switch2, and Switch3:
These XML payloads represent the configurations specified
in the respective YANG models for Router1, Switch1, Switch2,
and Switch3. Adjust the values accordingly based on your specific
requirements and network setup.
Lab Prerequisites
Network Device with NETCONF Support: Ensure you have a network device
(router or switch) that supports NETCONF and YANG. Cisco IOS XE devices
typically support these protocols.
Python Environment: If using ncclient, you need a Python environment set up.
Step-by-Step Guide
Step 1: Install Required Tools
If you are using Python and ncclient, install it using pip:
pip install ncclient
Step 2: Create YANG Model Configurations in XML
Prepare the XML payloads based on the YANG models as shown earlier.
Save each configuration in separate XML files.
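Step 3: Push the Configuration with ncclient
A minimal sketch of pushing one of the saved XML payloads to a device (the address, credentials, and file name are placeholders, and the XML file is assumed to be wrapped in a <config> element):

from ncclient import manager

with manager.connect(host="10.0.0.1", port=830,
                     username="admin", password="password",
                     hostkey_verify=False) as m:
    with open("router1_config.xml") as f:
        config_payload = f.read()
    # Apply the configuration to the running datastore
    reply = m.edit_config(target="running", config=config_payload)
    print(reply)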
Automated Testing:
Automates the execution of network tests, reducing the time
and effort required for manual testing.
3⃣ Configuration Management
Controller-Level Management: Configurations are applied from a single
interface.
Device-Level Management: Configurations need to be applied separately
on each device.
4⃣ Scalability
Controller-Level Management: Scales well for large networks.
Device-Level Management: Limited scalability, managing each
device becomes cumbersome.
Network simulation and test tools are essential for network
engineers to design, test, and troubleshoot network configurations
and protocols without impacting live environments. They offer a
safe and controlled way to explore and validate network changes.
Two widely used tools in this domain are Cisco Modeling Labs (CML)
and pyATS.
Cisco Modeling Labs (CML)
Description
Cisco Modeling Labs (CML) is a powerful network simulation tool
developed by Cisco. It enables users to create and run virtual
network topologies using Cisco's operating systems and software.
CML is designed for both individuals (CML-Personal) and enterprises
(CML-Enterprise).
Roles and Uses
4⃣ Performance Monitoring:
Monitors network performance metrics and alerts on deviations from
expected values.
Helps in identifying performance bottlenecks and ensuring network
reliability.
5⃣ Test Script Development:
Allows creation of custom test scripts to meet specific testing
requirements.
Uses Python, making it accessible to network engineers familiar
with scripting.
1. Configuration Management
Automates the configuration of systems and software, ensuring consistency
across environments.
Uses playbooks (written in YAML) to define tasks and roles, making it easy
to read and write configurations.
2. Application Deployment
Facilitates automated deployment of applications, managing dependencies,
configurations, and services.
Supports rolling updates and rollback procedures, ensuring minimal downtime
and consistency.
3. Orchestration
Coordinates multiple configurations and deployments across various systems.
Integrates with cloud services (AWS, Azure, Google Cloud) and container
orchestration platforms (Kubernetes, Docker).
4. Agentless Architecture
Operates without the need for agents installed on target machines, using
SSH for Linux/Unix systems and WinRM for Windows systems.
Simplifies management and reduces overhead.
5. Extensibility
Supports custom modules and plugins, allowing for tailored automation
workflows.
Large ecosystem of community-contributed modules for various applications
and services.
6. Idempotency
Ensures that repeated executions of playbooks result in the same system
state, preventing unintended changes.
Terraform
1. Infrastructure as Code (IaC)
Uses a declarative approach to define and provision infrastructure.
Configuration files (written in HCL) describe the desired state of
infrastructure, which Terraform then enforces.
2. Multi-Cloud Support
Supports multiple cloud providers (AWS, Azure, Google Cloud) and
on-premises solutions.
Enables consistent infrastructure management across different
environments and platforms.
3. Dependency Management
Automatically manages dependencies between resources, ensuring
that changes are applied in the correct order.
Detects and handles resource dependencies and dependencies
between modules.
4. State Management
Maintains a state file that tracks the real-world state of
infrastructure.
Facilitates incremental updates, ensuring that only necessary
changes are applied.
6. Extensibility
Supports custom providers and modules, enabling the extension of
Terraform's functionality.
Large ecosystem of community-contributed modules and providers.
7. Plan and Apply
terraform plan previews the changes Terraform will make, and terraform apply executes them, so changes can be reviewed before they touch real infrastructure.
1. Service Orchestration
Automates the deployment and lifecycle management of
network services.
Supports the creation, modification, and deletion of
network services across multi-vendor environments.
2. Model-Driven Approach
Uses YANG models to define network services and devices.
Ensures consistency and standardization across different
network components.
3. Multi-Vendor Support
Integrates with various network devices and technologies
from different vendors.
Provides a unified management interface for heterogeneous
network environments.
4. Configuration Management
Automates the configuration and provisioning of network devices.
Ensures consistent and accurate device configurations, reducing
manual errors.
5. Transactional Integrity
Supports atomic transactions, ensuring that configuration
changes are either fully applied or fully rolled back.
Maintains network stability and reliability by preventing
partial configurations.
6. Extensibility
Allows customization through service models and templates.
Supports integration with external systems via APIs and custom
scripts.
7. Real-Time Network Management
Provides real-time visibility into the network state and
configurations.
Enables rapid troubleshooting and resolution of network
issues.
1⃣ Actors/Participants: These are the entities that interact in the system. They can be users, systems, or other entities.
3⃣ Activation Bars: Thick vertical bars on a lifeline indicating the period an object is active and executing a process.
5⃣ Activations:
Thick bars on a lifeline indicate periods when a participant is
performing an action.
The length of the activation bar represents the duration of the action.
6⃣ Return Messages:
Dashed lines with an open arrowhead going back to the sender indicate a
response or return message from an API call.
Example Sequence Diagram Interpretation
2⃣ Flow of Messages:
Client sends a Request to the API Gateway.
API Gateway forwards the Request to the Service.
Service processes the request and sends a Response back to
the API Gateway.
API Gateway forwards the Response back to the Client.
3⃣ Types of Messages:
The initial Request from Client to API Gateway is a synchronous call.
The Forward from API Gateway to Service is also synchronous.
The Response messages are synchronous return messages.
4⃣ Activations:
The Client's activation starts with sending the Request and ends
after receiving the Response.
The API Gateway is active while forwarding the request and waiting
for the response from the Service.
The Service is active while processing the request and sending back
the response.
Interpretation Summary
NETCONF uses XML for encoding messages and communicates over SSH.
It retrieves and modifies data in a structured way, often
representing the configuration in a hierarchical XML format.
Interface Details:
<name>: The name of the interface ("GigabitEthernet0").
Summary
RESTCONF responses are typically in JSON or XML format and use
HTTP-based methods. They provide structured data in a straightforward
key-value format.
NETCONF responses are in XML format and use a hierarchical structure
to represent configuration data. They provide detailed, nested
information about network configurations.
5.7 Identify the workflow being automated by a Python script
that uses Cisco APIs including ACI, Meraki, Cisco DNA Center,
or RESTCONF
Cisco ACI (Application Centric Infrastructure) API
Cisco ACI API is used to manage and configure Cisco's data center solutions.
Common Workflows:
1⃣ Network Provisioning:
Automating the deployment of network devices.
Applying configuration templates to devices.
Configuring network policies and segmentation.
3⃣ Software Management:
Managing software images and upgrades for network devices.
Scheduling and automating firmware updates.
Part 1: Authentication and Device Retrieval
Part 2: Template Application to Device
5.8 Identify the workflow being automated by an Ansible
playbook (management packages, user management related
to services, basic service configuration, and start/stop)
An Ansible playbook can automate a variety of workflows related to
IT management. Here are four typical workflows that can be automated
using Ansible, each focusing on different aspects of system and service
management:
1⃣ Management Packages
2⃣ User Management Related to Services
3⃣ Basic Service Configuration
4⃣ Start/Stop Services
1⃣ Management Packages
This workflow involves installing, updating, and removing software
packages on managed hosts.
Playbook Example:
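A sketch of such a playbook (the package names match the workflow described below; the generic package module delegates to the platform's package manager):

---
- name: Manage packages
  hosts: all
  become: yes
  tasks:
    - name: Ensure required packages are installed
      package:
        name: httpd
        state: present

    - name: Update packages to the latest version
      package:
        name: nginx
        state: latest

    - name: Remove unwanted packages
      package:
        name: old-software
        state: absent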
Workflow:
Ensure required packages are installed: Installs the httpd package if it
is not already present.
Update packages to the latest version: Ensures the nginx package is
updated to its latest version.
Remove unwanted packages: Removes the old-software package
if it is installed.
2⃣ User Management Related to Services
This workflow involves managing user accounts and permissions,
particularly those related to specific services.
Playbook Example:
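A sketch of such a playbook (the user, group, and password values are placeholders; in practice the password would come from an encrypted vault variable):

---
- name: Manage service user
  hosts: all
  become: yes
  tasks:
    - name: Ensure the service group exists
      group:
        name: servicegroup
        state: present

    - name: Ensure the service user exists
      user:
        name: serviceuser
        groups: servicegroup
        append: yes
        state: present

    - name: Set user password
      user:
        name: serviceuser
        password: "{{ 'ChangeMe123' | password_hash('sha512') }}"

    - name: Grant sudo privileges
      copy:
        dest: /etc/sudoers.d/serviceuser
        content: "serviceuser ALL=(ALL) NOPASSWD: ALL\n"
        mode: "0440"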
Workflow:
Ensure the service user exists: Creates the serviceuser and adds it to
the servicegroup group.
Set user password: Sets the password for serviceuser.
Grant sudo privileges: Ensures serviceuser has passwordless sudo privileges.
3⃣ Basic Service Configuration
This workflow involves configuring services on managed hosts.
Playbook Example:
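A generic sketch (it assumes an httpd web service and a Jinja2 template named httpd.conf.j2; adjust for the service being configured):

---
- name: Basic service configuration
  hosts: all
  become: yes
  tasks:
    - name: Deploy the service configuration file from a template
      template:
        src: httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf
      notify: Restart httpd

  handlers:
    - name: Restart httpd
      service:
        name: httpd
        state: restarted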
Workflow:
1⃣File Management
This workflow involves tasks like creating, copying, moving, and deleting files.
Example Bash Script:
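A sketch wrapping the workflow listed below (all paths are placeholders):

#!/bin/bash
# Create a working directory, copy a file into it, move it on, then delete it
mkdir -p /path/to/new_directory
cp /path/to/source_file /path/to/new_directory/
mv /path/to/new_directory/source_file /path/to/another_directory/
rm /path/to/another_directory/source_file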
Workflow:
Create a new directory: mkdir -p /path/to/new_directory
Copy files: cp /path/to/source_file /path/to/new_directory/
Move files: mv /path/to/new_directory/source_file /path/to/another_directory/
Delete a file: rm /path/to/another_directory/source_file
2⃣Application Installation
This workflow involves installing and configuring software applications.
Workflow:
Update package lists: sudo apt-get update
Install a package: sudo apt-get install -y apache2
Start the service: sudo systemctl start apache2
Enable the service to start on boot: sudo systemctl enable apache2
3⃣User Management
This workflow involves creating and managing user accounts.
Example Bash Script:
Workflow:
Create a new user: sudo useradd -m newuser
Set password for the new user: echo "newuser:password" | sudo chpasswd
Add the new user to a group: sudo usermod -aG sudo newuser
Delete a user: sudo userdel -r olduser
4⃣ Directory Navigation
This workflow involves navigating and working with directories.
Example Bash Script:
Workflow:
Navigate to a directory: cd /path/to/directory
List files in the directory: ls -l
Create a new subdirectory: mkdir new_subdirectory
Change to the new subdirectory: cd new_subdirectory
Summary
By examining these examples, you can identify the automated workflow
being managed by a bash script. Here’s a quick summary of the workflows:
1⃣ File Management: Creating, copying, moving, and deleting files and
directories.
The hunk header line (for example, @@ -1,6 +1,6 @@) provides information about where the changes occur in the file:
1,6 refers to the range of lines in the original file (oldfile.txt).
Here, it starts at line 1 and spans 6 lines.
Lines removed from the original file start with a minus sign (-):
Lines added in the modified file start with a plus sign (+):
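The sample diff discussed below was not preserved in this copy; the listing here is a representative reconstruction consistent with the interpretation that follows (the unchanged context lines are placeholders):
diff
--- oldfile.txt
+++ newfile.txt
@@ -1,6 +1,6 @@
 This is the first line.
-It has a few lines of text.
+It has several lines of text.
 This is the third line.
 This is the fourth line.
 This is the fifth line.
-This line will be removed.
+This line has been modified.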
Interpretation of the Sample Diff
File Headers
The diff is comparing oldfile.txt with newfile.txt.
Hunk Information
The changes start at line 1 and affect 6 lines in both files.
Changes within the Hunk
Line 2:
Original: It has a few lines of text. (removed)
Modified: It has several lines of text. (added)
Line 6:
Original: This line will be removed. (removed)
Modified: This line has been modified. (added)
Line 2 in oldfile.txt was changed from "It has a few lines of text." to "It
has several lines of text." in newfile.txt.
Line 6 in oldfile.txt was changed from "This line will be removed." to "This line has been modified." in newfile.txt.
2⃣ Constructive Feedback:
Provides constructive and specific feedback.
Focuses on code improvement rather than personal criticism.
Collaboration and Knowledge Sharing: Promotes team collaboration and knowledge sharing. Enables learning opportunities for junior developers.
Focus on Functionality and Performance: Assesses code correctness, requirement fulfillment, and performance. Considers edge cases, error handling, and performance bottlenecks.
Security and Compliance: Identifies and addresses potential security vulnerabilities. Ensures compliance with relevant standards.
Automated Tools: Utilizes automated tools (e.g., linters, static analysis) for common issues. Allows human reviewers to focus on complex problems.
Benefits of Code Review
6⃣ Increased Security:
Identifies security vulnerabilities and ensures best practices.
Enhances overall application security.
7⃣ Compliance and Risk Management:
Ensures code complies with industry standards and regulations.
Mitigates risks of non-compliance and potential legal issues.
8⃣ Performance Optimization:
Identifies and suggests performance improvements.
Ensures application performs well under expected workloads.
Benefits of Code Review
Improved Code Quality: Catches bugs and potential issues early. Encourages best practices and coding standards.
Knowledge Sharing and Mentorship: Provides learning opportunities for junior developers. Spreads knowledge of the codebase across the team.
Early Bug Detection: More cost-effective than finding bugs in later stages. Helps maintain a stable and reliable codebase.
Compliance and Risk Management: Ensures code complies with industry standards and regulations. Mitigates risks of non-compliance and potential legal issues.
6.4 Interpret a basic network topology diagram with elements such as switches,
routers, firewalls, load balancers, and port values
6.5 Describe the function of management, data, and control planes in a network device
6.6 Describe the functionality of these IP Services: DHCP, DNS, NAT, SNMP, NTP
6.7 Recognize common protocol port values (such as, SSH, Telnet, HTTP, HTTPS,
and NETCONF)
6.8 Identify cause of application connectivity issues (NAT problem, Transport Port
blocked, proxy, and VPN)
Purpose:
Usage:
1⃣ Configuration: Configured on network switches, identified by VLAN IDs.
Purpose:
IP addresses (Internet Protocol addresses) are numerical labels
assigned to devices connected to a computer network that uses the
Internet Protocol for communication. They operate at the network
layer (Layer 3) of the OSI model. The primary purposes of IP
addresses are:
Usage:
Usage:
🟢 IPv4 Subnet Mask: An IPv4 subnet mask is a 32-bit number
(e.g., 255.255.255.0) that, when combined with an IP address,
identifies the network and host portions. For example, with
an IP address of 192.168.1.10 and a subnet mask of
255.255.255.0, the network is 192.168.1.0/24.
🟢 IPv6 Prefix: An IPv6 prefix is written in CIDR notation (e.g., 2001:0db8:85a3::/64) and indicates the network portion of the address. The prefix length specifies how many bits are used for the network portion.
Gateways
Purpose:
Usage:
🟢 Default Gateway: A default gateway is typically a router
that connects a local network to the internet or other
networks. Devices send traffic destined for external
networks to the default gateway, which then forwards
it appropriately.
Switches
Diagram: traffic from a User on the Internet passes through a Router (default gateway), a Firewall, and a Load Balancer to reach the Server (steps 1⃣ through 4⃣); the response returns along the same path (steps 5⃣ through 8⃣).
6.4 Interpret a basic network topology diagram with
elements such as switches, routers, firewalls, load
balancers, and port values
Basic Network Topology Diagram Interpretation
Diagram: an L3 spine-and-leaf fabric with a WAN router (WAN RTR) behind border leaves BL-1 and BL-2, spines SPINE-01 through SPINE-06, leaves LEAF-01 through LEAF-54, and virtual services attached to the fabric.
1. Internet
Represents the external global network to which the local
network is connected.
Port Values:
WAN Port (Port 1): Receives traffic from the router.
LAN Port (Port 2): Sends traffic to the load balancer.
4. Load Balancer
Function: Distributes incoming traffic across multiple servers to
ensure reliability, availability, and optimal performance.
Port Values:
Port 80: Common port for HTTP traffic. Balances web traffic among
connected servers.
5. Servers
Function: Host applications and services, responding to client requests.
IP Addresses:
Server 1: 192.168.1.2
Server 2: 192.168.1.3
6. Router (LAN IP: 192.168.2.1)
Connected Devices:
Server Response:
Load Balancer:
The router sends the response back to the user on the internet.
Internal Network Traffic Example (Inter-VLAN Routing)
2. Switch (VLAN 1)
4. Switch (VLAN 2)
6. Switch (VLAN 2)
8. Switch (VLAN 1)
Function:
The management plane is responsible for all the administrative tasks required
to configure, monitor, and manage the network device. It handles functions
that are necessary for the operation and maintenance of the device but do not
directly involve the forwarding of user data.
Key Features:
Function:
The data plane, also known as the forwarding plane, is responsible for
the actual movement of packets through the network device. It handles the
processing and forwarding of user data based on the rules and policies
established by the control plane.
Key Features:
Key Features:
Routing Protocols: Runs protocols like OSPF, BGP, and EIGRP to
discover and maintain the best paths through the network.
Control Plane
Function: Decision-making for packet forwarding paths.
Key Features: Routing protocols, switching protocols,
signaling, topology discovery.
Network Topology Diagram with Management, Data, and Control Planes
Interactions of Planes within the Network Topology
Management Plane:
Configure router settings (e.g., IP addresses, routing protocols)
via SSH, Telnet, or web interface.
Monitor router performance and log data.
Control Plane:
Runs routing protocols (e.g., BGP, OSPF) to exchange routing information
with other routers.
Builds and maintains the routing table.
Data Plane:
Forwards packets between the internet and the internal network based
on the routing table.
Applies access control lists (ACLs) to filter traffic.
2. Firewall (WAN IP: 192.168.1.1)
Management Plane:
Configure firewall rules and policies via a management interface.
Monitor firewall logs and performance.
Control Plane:
Determines the rules for allowing or blocking traffic based on
configured security policies.
Updates policies and rules dynamically based on network conditions.
Data Plane:
Inspects incoming and outgoing packets to enforce security rules.
Blocks or allows traffic according to the defined rules.
3. Load Balancer
Management Plane:
Configure load balancing algorithms and settings via a
management interface.
Monitor load balancer performance and server health.
Control Plane:
Determines which server should handle incoming traffic based
on the load balancing algorithm.
Maintains information about server availability and health.
Data Plane:
Distributes incoming traffic across multiple servers.
Performs SSL offloading if required.
4. Router (LAN IP: 192.168.2.1)
Management Plane:
Configure router settings for internal network routing.
Monitor internal network traffic and performance.
Control Plane:
Runs internal routing protocols (e.g., OSPF) to manage
internal network paths.
Maintains the routing table for the internal network.
Data Plane:
Routes packets between different VLANs and internal networks.
Applies ACLs for internal traffic filtering.
5. Switches (VLAN 1 and VLAN 2)
Management Plane:
Configure VLAN settings and port configurations via a management
interface.
Monitor switch performance and port status.
Control Plane:
Uses Spanning Tree Protocol (STP) to prevent loops and manage
port states.
Maintains MAC address tables.
Data Plane:
Forwards Ethernet frames based on MAC addresses within VLANs.
Enforces VLAN segmentation and traffic separation.
Communication Flow Examples
User Request from Internet to Web Server and Back
User Request from Internet
Management Plane: Not directly involved.
Control Plane: Routes request to internal network.
Data Plane: Forwards packet from router to firewall.
Diagram:
Lab Example:
PC Setup:
🟢 Ensure that PC1, PC2, and PC3 are set to obtain an IP address
automatically.
Operation:
🟢 When each PC boots up, it sends a DHCPDISCOVER message to locate
a DHCP server.
🟢 The DHCP server responds with a DHCPOFFER message.
🟢 The PC requests the offered configuration with a DHCPREQUEST message.
🟢 The DHCP server confirms the assignment with a DHCPACK message.
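If the PCs are Linux hosts, this exchange can be triggered and observed from the command line; a minimal sketch, assuming the interface is named eth0 and the ISC dhclient is installed:
bash
# Release the current lease, then request a new one; -v prints the
# DHCPDISCOVER / DHCPOFFER / DHCPREQUEST / DHCPACK exchange
sudo dhclient -r eth0
sudo dhclient -v eth0
ip addr show eth0    # confirm the address handed out by the DHCP server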
2. DNS (Domain Name System)
Functionality:
DNS translates human-readable domain names (e.g., www.example.com) into IP addresses that computers use to identify each other on the network.
Lab Example:
Setup:
🟢 DNS Server IP: 192.168.1.2
🟢 Domain: example.com
Process:
🟢 The client device sends a DNS query to resolve www.example.com.
🟢 The DNS server responds with the IP address 93.184.216.34.
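From the client, the same lookup can be reproduced with standard DNS utilities (assuming nslookup or dig is installed); both commands query the lab DNS server at 192.168.1.2 directly:
bash
nslookup www.example.com 192.168.1.2         # query a specific DNS server
dig @192.168.1.2 www.example.com A +short    # same query with dig, terse output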
3. NAT (Network Address Translation)
Functionality:
NAT modifies network address information in IP packet headers while
in transit. It allows multiple devices on a local network to share
a single public IP address for accessing external networks.
Diagram:
Process:
🟢 An internal device with IP 192.168.1.10 sends a request to the internet.
🟢 The router translates the source IP from 192.168.1.10 to 203.0.113.1.
🟢 The response from the internet is translated back to 192.168.1.10 by
the router.
Lab Example:
Router Setup:
🟢 Configure NAT on the router to translate private IP addresses
(e.g., 192.168.0.x) to a public IP address.
PC Setup:
🟢 PCs are configured with private IP addresses and the router as
their default gateway.
Operation:
4. SNMP (Simple Network Management Protocol)
Functionality:
SNMP is used for collecting and organizing information about managed
devices on IP networks and for modifying that information to change
device behavior.
Diagram:
Lab Example:
Setup:
SNMP Manager IP: 192.168.1.3
Managed Device IP: 192.168.1.4
Process:
The SNMP Manager sends a request to the Managed Device to get
interface statistics.
The Managed Device responds with the requested information.
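From the SNMP manager, this request can be reproduced with the net-snmp command-line tools; SNMPv2c and the community string public are assumptions for this lab:
bash
# Walk the interface description column (ifDescr) on the managed device
snmpwalk -v2c -c public 192.168.1.4 1.3.6.1.2.1.2.2.1.2
# Read the inbound byte counter (ifInOctets) for interface index 1
snmpget -v2c -c public 192.168.1.4 1.3.6.1.2.1.2.2.1.10.1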
5. NTP (Network Time Protocol)
Functionality:
NTP synchronizes the clocks of computers to some time reference. It
ensures that all devices on a network maintain accurate time, which
is crucial for logging events, security, and network management.
Diagram:
Lab Example:
Setup:
NTP Server IP: 192.168.1.5
Process:
A client device sends a request to the NTP server to synchronize
its clock.
The NTP server responds with the current time, and the client
adjusts its clock accordingly.
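On a Linux client, synchronization against the lab NTP server can be checked from the command line; which tool is available depends on the distribution, so both variants below are assumptions:
bash
ntpdate -q 192.168.1.5   # query the server without setting the local clock
chronyc sources -v       # on chrony-based systems, list the configured time sources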
Comprehensive Diagram
Detailed Functionality with the Network
DHCP:
Client devices obtain IP addresses and network configuration
from the DHCP server.
DNS:
Client devices resolve domain names to IP addresses using
the DNS server.
NAT:
The router translates private IP addresses to a public IP
address for internet access and vice versa.
SNMP:
The SNMP Manager monitors and manages network devices using
SNMP protocol.
NTP:
Client devices synchronize their clocks with the NTP server
for accurate timekeeping.
6.7 Recognize common protocol port values
(such as, SSH, Telnet, HTTP, HTTPS, and NETCONF)
Recognizing common protocol port values is essential for network
configuration, management, and troubleshooting. Below, I describe
the port values for common protocols such as SSH, Telnet, HTTP, HTTPS,
and NETCONF, along with a diagram and a lab example for each protocol.
SSH (Secure Shell)
Port: 22
Function: Provides encrypted remote login and command execution over an unsecured network.
Telnet
Port: 23
Function: Provides a bidirectional interactive text-oriented
communication facility using a virtual terminal connection.
HTTP (Hypertext Transfer Protocol)
Port: 80
Function: Used for transmitting hypertext requests and
information on the internet.
HTTPS (Hypertext Transfer Protocol Secure)
Port: 443
Function: Secure version of HTTP that encrypts traffic using SSL/TLS.
NETCONF (Network Configuration Protocol)
Port: 830
Function: Used for installing, manipulating, and deleting the
configuration of network devices.
Network Topology Diagram
Lab Example
1. SSH (Port 22)
Setup:
SSH Server IP: 192.168.1.10
Client Device IP: 192.168.1.100
Process:
Client uses an SSH client (e.g., PuTTY, OpenSSH) to connect to the server
using ssh admin@192.168.1.10, where admin is a placeholder username.
Server authenticates the user and establishes a secure connection.
2. Telnet (Port 23)
Setup:
Telnet Server IP: 192.168.1.11
Client Device IP: 192.168.1.100
Process:
Client uses a Telnet client to connect to the server using telnet 192.168.1.11.
Server provides a text-based interface for remote management.
3. HTTP (Port 80)
Setup:
Web Server IP: 192.168.1.12
Client Device IP: 192.168.1.100
Process:
Client uses a web browser to access http://192.168.1.12.
Server responds with the requested web page.
4. HTTPS (Port 443)
Setup:
HTTPS Server IP: 192.168.1.13
Client Device IP: 192.168.1.100
Process:
Client uses a web browser to access https://192.168.1.13.
Server establishes a secure SSL/TLS connection and responds with
the requested web page.
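The same checks can be scripted from the client with curl; -k is used because a lab HTTPS server typically presents a self-signed certificate (an assumption here):
bash
curl -v http://192.168.1.12      # plain HTTP request on port 80
curl -vk https://192.168.1.13    # HTTPS request on port 443, skipping certificate validation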
5. NETCONF (Port 830)
Setup:
NETCONF Server IP: 192.168.1.14
Client Device IP: 192.168.1.100
Process:
Client uses a NETCONF client (for example, netconf-console or the SSH netconf subsystem) to connect to the server on port 830.
Server allows the client to manipulate network device
configurations.
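One widely supported way to open the session is over the SSH netconf subsystem on port 830; a minimal sketch, with admin as an assumed username:
bash
# Opens an interactive NETCONF session; the server answers with a <hello>
# message listing its capabilities
ssh -p 830 admin@192.168.1.14 -s netconf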
Detailed Lab Example
Network Configuration:
1. SSH Connection:
2. Telnet Connection:
3. HTTP Connection:
4. HTTPS Connection:
5. NETCONF Connection:
Diagram with Protocols and Ports
6.8 Identify cause of application connectivity issues
(NAT problem, Transport Port blocked, proxy, and VPN)
Identifying the cause of application connectivity issues requires
a systematic approach to isolate and diagnose potential problems.
Network Topology
Lab Setup
1. NAT Problems
Symptoms:
Diagnosis:
1. Check NAT Configuration: Ensure proper NAT rules are configured on the router.
# Example of checking NAT rules on a typical router
show ip nat translations
2. Test Connectivity:
From an external device, try accessing the web server using the
public IP.
curl http://203.0.113.1
2. Transport Port Blocked
Symptoms:
Diagnosis:
Firewall Rules: Check firewall settings to ensure the necessary
ports are open.
# Example command to list firewall rules
sudo iptables -L -v -n
Lab Example:
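A quick reachability test helps confirm whether the port itself is blocked; the sketch below reuses the public address from the NAT example and assumes netcat (nc) is available on the external client:
bash
# Probe TCP port 80; a timeout or refusal suggests a blocked or filtered port
nc -zv 203.0.113.1 80
# If a Linux host firewall on the server is the culprit, allow the port
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT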
3. Proxy Issues
Symptoms:
Web traffic is slow or blocked.
Authentication prompts appear unexpectedly.
Diagnosis:
Proxy Configuration: Ensure the client devices are configured to
use the correct proxy settings.
# Example of setting proxy configuration in Linux
export http_proxy="http://192.168.1.20:8080"
export https_proxy="http://192.168.1.20:8080"
Proxy Logs: Check proxy server logs for any errors or blocked requests.
Lab Example:
Configure Proxy on Client:
Set up the client device to use the proxy server at 192.168.1.20.
export http_proxy="http://192.168.1.20:8080"
export https_proxy="http://192.168.1.20:8080"
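To confirm that traffic really traverses the proxy, a request can be sent through it explicitly (assuming curl is installed and the proxy listens on 192.168.1.20:8080 as configured above):
bash
# -x forces the request through the proxy; -I fetches only the response headers
curl -x http://192.168.1.20:8080 -I http://www.example.com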
4. VPN Issues
Symptoms:
Remote clients cannot reach internal resources, or the VPN connection drops frequently.
Diagnosis:
Routing Issues: Verify that the VPN server provides proper routes
to the internal network.
# Check routing tables
netstat -rn
Lab Example:
Connect a VPN client and verify access to the internal web server.
3. Proxy Issues:
Symptoms: Slow or blocked web traffic, unexpected authentication
prompts.
Diagnosis: Verify proxy settings and check proxy server logs.
4. VPN Issues:
Symptoms: Remote client connectivity issues, frequent
VPN drops.
Diagnosis: Check VPN configuration and routing tables.
Bandwidth Limitations: Reduced speeds, increased latency, packet loss.
Network Congestion: Performance degradation, service unavailability.
Network Topology and Distance: Increased latency, reliability issues.