Introduction To Kubernetes
**Which company originally developed Kubernetes?**
- A. Microsoft
- B. IBM
- C. Docker Inc.
- D. Google
**Answer:** D. Google
- A. Docker Engine
- B. kubelet
- C. Kubernetes Master
- D. Pod
- C. To manage databases
- A. `kubectl nodes`
- C. `kubeadm nodes`
- D. `kubelet status`
- A. Playing music
- C. Installing applications
- A. To store data
- C. To monitor containers
**Which Kubernetes object ensures that a specified number of pod replicas are running at all times?**
- A. Deployment
- B. ReplicaSet
- C. StatefulSet
- D. ConfigMap
**Answer:** B. ReplicaSet
**Which format is most commonly used to write Kubernetes configuration files?**
- A. JSON
- B. XML
- C. YAML
- D. CSV
**Answer:** C. YAML
**Which of the following is not a Kubernetes object?**
- A. Deployment
- B. StatefulSet
- C. DaemonSet
- D. Dockerfile
**Answer:** D. Dockerfile
- A. A physical server
**What is the default Service type in Kubernetes?**
- A. ClusterIP
- B. NodePort
- C. LoadBalancer
- D. ExternalName
**Answer:** A. ClusterIP
**Which Kubernetes object is used to store non-confidential configuration data?**
- A. Deployment
- B. ConfigMap
- C. Service
- D. ReplicaSet
**Answer:** B. ConfigMap
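As a quick illustration, a minimal ConfigMap manifest might look like this (the name and keys are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
  APP_MODE: production
```

Pods can consume these values as environment variables or as files mounted from a volume.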
**Which command is used to initialize the Kubernetes master node?**
- A. `kubeadm start`
- B. `kubeadm init`
- C. `kubectl init`
- D. `kubelet start`
**Answer:** B. `kubeadm init`
**Which command is used to join a worker node to a Kubernetes cluster?**
- B. `kubelet join`
- C. `kubeadm join`
- D. `kubectl connect`
**Answer:** C. `kubeadm join`
**Which command-line tool is used to interact with a Kubernetes cluster?**
- A. kubectl
- B. docker-compose
- C. terraform
- D. ansible
**Answer:** A. kubectl
- D. To deploy applications
**Which control plane component runs the Kubernetes controller processes?**
- A. kube-proxy
- B. kube-scheduler
- C. kube-controller-manager
- D. kube-apiserver
**Answer:** C. kube-controller-manager
- D. To manage networking
**Which component exposes the Kubernetes API?**
- A. kube-proxy
- B. kube-scheduler
- C. kube-apiserver
- D. kube-controller-manager
**Answer:** C. kube-apiserver
**Which Service type provisions an external load balancer from the cloud provider?**
- A. ClusterIP
- B. NodePort
- C. LoadBalancer
- D. ExternalName
**Answer:** C. LoadBalancer
- A. DNS
- B. IP addresses
- C. Ingress rules
- D. Round-robin algorithm
**Which Kubernetes resource manages external HTTP/HTTPS access to Services?**
- A. Service
- B. Deployment
- C. Ingress
- D. Pod
**Answer:** C. Ingress
**What is the default port range for NodePort Services?**
- A. 30000-32767
- B. 31000-34000
- C. 32000-35000
- D. 33000-36000
**Answer:** A. 30000-32767
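As a sketch, a Service that pins a specific port in this range might look like the following (names and port values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080   # must fall within the default 30000-32767 range
```

If `nodePort` is omitted, Kubernetes assigns a free port from the range automatically.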
**Which component maintains network rules on each node?**
- A. kubelet
- B. kube-proxy
- C. kube-apiserver
- D. kube-scheduler
**Answer:** B. kube-proxy
- D. To create backups
**Explanation:** Logs are used to record system events, transactions, and activities for
monitoring, troubleshooting, and analysis.
- A. Batch processing
- B. Real-time processing
- C. Scheduled reporting
- D. Data warehousing
- A. Monolithic
- B. Client-Server
- C. Distributed
- D. Peer-to-peer
**Answer:** C. Distributed
- A. By using APIs
**Explanation:** Splunk ingests data by indexing it from a wide variety of sources, including
log files, network streams, and application outputs.
**Which Splunk product is aimed at small IT environments?**
- A. Splunk Lite
- B. Splunk Basic
- C. Splunk Pro
- D. Splunk Advanced
**Answer:** A. Splunk Lite
**Explanation:** Splunk Lite is one of the products offered by Splunk, targeted at small IT environments for log search and analysis.
**Explanation:** Splunk Cloud provides all the features of Splunk Enterprise with the
advantage of being a managed service, eliminating the need for local infrastructure.
- B. Forwarder
- C. Indexer
- D. Deployment Server
**Answer:** C. Indexer
**Explanation:** The Indexer is responsible for processing incoming data, indexing it, and
storing it for search and analysis.
**Explanation:** Splunk Enterprise offers advanced data visualization tools, allowing users
to create detailed and interactive dashboards.
**Which Splunk product offers essential log search and analysis features at a lower cost?**
- A. Splunk Enterprise
- B. Splunk Light
- C. Splunk Cloud
- D. Splunk Mobile
**Answer:** B. Splunk Light
**Explanation:** Splunk Light is designed for small IT environments, offering essential log search and analysis features at a lower cost.
- A. By writing code
- B. By configuring networks
**Explanation:** The Splunk Processing Language (SPL) is used to query, analyze, and
visualize data within Splunk.
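For instance, a typical SPL pipeline chains a search with transforming commands; the source type and field names below are assumptions:

```
sourcetype=access_combined status>=500
| stats count BY host
| sort -count
```

This finds server-error events, counts them per host, and sorts the hosts by error volume.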
- A. To index data
- B. To search data
15. **Which Splunk product provides a cloud-based solution for operational intelligence?**
- A. Splunk Lite
- B. Splunk Enterprise
- C. Splunk Cloud
- D. Splunk Mobile
**Answer:** C. Splunk Cloud
**Which Splunk component is used for searching, analyzing, and visualizing data?**
- A. Indexer
- B. Search Head
- C. Forwarder
- D. Deployment Server
**Answer:** B. Search Head
**Explanation:** The Search Head is responsible for searching, analyzing, and visualizing
data in Splunk.
**Explanation:** The Indexer processes, indexes, and stores the data ingested by Splunk.
3. **Which component of Splunk is used for forwarding data from remote sources?**
- A. Search Head
- B. Heavy Forwarder
- C. Deployment Server
- D. Cluster Master
**Answer:** B. Heavy Forwarder
**Explanation:** The Heavy Forwarder is a full Splunk instance that can parse and index
data before forwarding it to the Indexer.
- D. To visualize data
**Answer:** A. To act as a lightweight agent for collecting and forwarding log data
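As a sketch, a Universal Forwarder is typically pointed at an indexer through an `outputs.conf` fragment like the one below (the group name, hostname, and port are assumptions; 9997 is a conventional receiving port):

```
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997
```

The forwarder then ships raw data to that indexer without parsing or indexing it locally.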
5. **Which Splunk component is responsible for managing configurations and updates across
the Splunk deployment?**
- A. Search Head
- B. Deployment Server
- C. Indexer
- D. Cluster Master
**Answer:** B. Deployment Server
- A. Search Head
- B. Deployment Server
- C. Cluster Master
- D. Universal Forwarder
**Which command is used to start Splunk after installation?**
- A. `splunk start`
- B. `splunk run`
- C. `splunk initialize`
- D. `splunk boot`
**Answer:** A. `splunk start`
**Explanation:** The command `splunk start` is used to start the Splunk service after
installation.
9. **How is data forwarded from a Universal Forwarder to an Indexer?**
10. **Which Splunk component can parse data before forwarding it to the Indexer?**
- A. Universal Forwarder
- B. Heavy Forwarder
- C. Search Head
- D. Deployment Server
**Answer:** B. Heavy Forwarder
**Explanation:** The Heavy Forwarder can parse, transform, and index data before
forwarding it to the Indexer.
**Which command-line executable is used to administer Splunk?**
- A. spctl
- B. splunkd
- C. splunk
- D. splunkcli
**Answer:** C. splunk
**Explanation:** A Search Head Cluster distributes search queries across multiple Search
Heads to improve performance and availability.
**Explanation:** To install Splunk on a Windows system, you need to download and run the
.msi installer file.
14. **Which Splunk component ensures high availability of indexed data?**
- A. Search Head
- B. Indexer Cluster
- C. Universal Forwarder
- D. Heavy Forwarder
**Answer:** B. Indexer Cluster
15. **Which Splunk component handles the indexing and searching of data?**
- A. Universal Forwarder
- B. Deployment Server
- C. Indexer
- D. Cluster Master
**Answer:** C. Indexer
**Explanation:** The Indexer is responsible for both indexing and enabling search
capabilities for the data in Splunk.
**Explanation:** In Splunk, "host" refers to the name of the server or device from which the
data originates.
**Explanation:** "Source" refers to the path, file, or name of the data input that Splunk
indexes.
- A. A category of users
- D. A network protocol
**Explanation:** "Source type" is used to classify data formats, helping Splunk to properly
parse and index the data.
- B. Network configurations
**Explanation:** Fields are key-value pairs that Splunk extracts from events to make data
searchable and analyzable.
**Explanation:** Tags are used to categorize and group similar events, making it easier to
search and analyze related data.
- B. A search query
- C. A network configuration
- D. A user role
**Explanation:** Index-time refers to the moment when data is ingested and indexed by
Splunk.
**Explanation:** Search-time is the time when data is queried, analyzed, and visualized by
the user.
9. **Which Splunk feature allows users to create alerts based on specific conditions?**
- A. Indexing
- B. Forwarding
- C. Reporting
- D. Alerting
**Answer:** D. Alerting
**Explanation:** Splunk's alerting feature allows users to set up alerts that trigger actions
based on specific conditions or thresholds.
- D. To index data
**Explanation:** Dashboards are used in Splunk to create visual representations of data for
monitoring and analysis purposes.
- A. Mobile applications
**Explanation:** Field extraction is the process of parsing and structuring raw data to make
it searchable and analyzable in Splunk.
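For example, an ad-hoc extraction can be done with the `rex` command; the source type, field name, and pattern here are illustrative:

```
sourcetype=syslog
| rex "user=(?<username>\w+)"
| stats count BY username
```

This pulls a `username` field out of raw events at search time and aggregates on it.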
- A. A group of forwarders
- D. A network configuration
15. **Which Splunk feature helps in correlating events across multiple data sources?**
- A. Data summarization
- B. Data forwarding
- C. Event correlation
- D. Data archiving
**Explanation:** Splunk's event correlation feature helps in linking and analyzing related
events from multiple data sources to identify patterns and anomalies.
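One common correlation approach is the `transaction` command, which groups related events across sources into a single logical unit; the index names and `session_id` field below are assumptions:

```
index=web OR index=app
| transaction session_id maxspan=5m
| where duration > 60
```

This stitches together all events sharing a session ID within a five-minute window, then keeps only the slow sessions.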
**1. What is Kubernetes and what is the difference between Docker and Kubernetes tool?**
- **Docker:** A containerization platform used to build, package, and run applications inside containers on a single host.
- **Kubernetes:** A container orchestration platform that automates the deployment, scaling, and management of containerized applications.
- **Usage:** Used for orchestrating and managing multiple containers deployed across
multiple hosts.
```bash
apt-get update
```
```bash
```
```bash
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
```bash
```
```bash
apt-get update
```
Obtain the join command from the master node (given during the `kubeadm init` process)
and run it on the worker node:
```bash
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
**Creating a Deployment:**
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image
        ports:
        - containerPort: 80
```
```bash
# apply the Deployment manifest above (filename assumed)
kubectl apply -f deployment.yaml
```
**Creating a Service:**
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```
```bash
# apply the Service manifest above (filename assumed)
kubectl apply -f service.yaml
```
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
```
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
```
```bash
# apply the ServiceAccount manifest above (filename assumed)
kubectl apply -f admin-user.yaml
```
3. **Create a ClusterRoleBinding:**
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
```bash
# apply the ClusterRoleBinding manifest above (filename assumed)
kubectl apply -f admin-user-binding.yaml
```
```bash
kubectl proxy
```
**Splunk Tool:** Splunk is a powerful platform for searching, monitoring, and analyzing
machine-generated big data via a web-style interface. It captures, indexes, and correlates
real-time data in a searchable repository from which it can generate graphs, reports, alerts,
dashboards, and visualizations.
- **Log Management:** Collecting and analyzing log data from various sources.
- **Security Information and Event Management (SIEM):** Monitoring and analyzing security
events.
- **Operational Intelligence:** Providing insights from machine data to improve IT operations
and business performance.
- **Splunk Cloud:** A SaaS offering that provides all the features of Splunk Enterprise without
the need for on-premises infrastructure.
```bash
```
2. **Install Splunk:**
```bash
rpm -i splunk-8.x.x-linux-2.6-x86_64.rpm
```
3. **Start Splunk:**
```bash
/opt/splunk/bin/splunk start --accept-license
```
4. **Enable Boot-start:**
```bash
/opt/splunk/bin/splunk enable boot-start
```
**Universal Forwarder:**
```bash
wget -O splunkforwarder-8.x.x-linux-2.6-x86_64.rpm 'https://www.splunk.com/page/download_track?file=8.x.x/universalforwarder/splunkforwarder-8.x.x-linux-2.6-x86_64.rpm'
```
```bash
rpm -i splunkforwarder-8.x.x-linux-2.6-x86_64.rpm
```
```bash
/opt/splunkforwarder/bin/splunk start --accept-license
```
4. **Enable Boot-start:**
```bash
/opt/splunkforwarder/bin/splunk enable boot-start
```
**Heavy Forwarder:**
```bash
```
**Deployment Server:**
- **Role:** Centrally distributes configuration bundles (deployment apps) to forwarders and other Splunk instances across the deployment.
**Cluster Master:**
- **Role:** Manages the configuration and health of indexer clusters, ensuring data
replication and high availability.
- **Use Case:**
Used in environments where data availability and redundancy are critical, managing the
replication of data across indexer nodes to prevent data loss.
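A minimal sketch of pointing an indexer peer at the cluster master in `server.conf` (the hostname, management port, and shared secret are placeholders; the `slave`/`master_uri` terminology matches Splunk 8.x):

```
[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = <secret>
```

Each peer configured this way participates in replication, so a copy of every bucket survives the loss of a single indexer node.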