Linux

Samba is Linux server software that facilitates file and printer sharing between Linux/Unix and Windows systems using the SMB protocol. The document also covers shell programming syntax, including case statements, file creation commands, and the structure of the Linux file system, detailing components like boot blocks, super blocks, inode tables, and data blocks. Additionally, it explains shell variables, command line arguments, conditional statements, and the Domain Name System (DNS) as a critical Internet infrastructure component for translating domain names into IP addresses.

Samba is server software in Linux that allows file and printer sharing between Linux/Unix systems and Windows systems.
It uses the SMB (Server Message Block) protocol to enable interoperability between different operating systems on a network.
Key Points:
Helps Linux act as a file/print server for Windows clients.
Supports network browsing and authentication.
Commonly used in mixed-OS environments (Linux + Windows).
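As a sketch of how such sharing is configured, a minimal smb.conf share definition might look like the following (the share name, path, and user shown here are hypothetical, not from the original notes):

```ini
[global]
   workgroup = WORKGROUP
   security = user          ; clients must authenticate with a Samba user

[shared]
   ; hypothetical share exported to Windows clients
   path = /srv/samba/shared
   read only = no
   valid users = alice
```

Windows clients would then see the share as \\servername\shared after the Samba services are restarted.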
Syntax of the case statement in shell programming (bash):

case expression in
    pattern1)
        commands ;;
    pattern2)
        commands ;;
    pattern3)
        commands ;;
    *)
        default commands ;;
esac
Important points:
case starts the block and esac ends it ("case" spelled backward).
;; ends each pattern’s command block.
* is the default case (like else).

Shell environment refers to the collection of settings, variables, and configurations that control how a shell behaves and interacts with the system and the user.
It includes environment variables (like PATH, HOME, USER), shell options, and configurations that affect command execution and scripting.
Commands used to create files in Linux:
cat > filename – Creates a new file and lets you add content.
touch filename – Creates an empty file (or updates the timestamp if the file exists).
vi filename – Opens the vi editor to create and edit a new file.
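A small non-interactive sketch of the first two commands (vi is omitted because it is interactive; the here-document below stands in for the text you would type after cat > filename):

```shell
#!/bin/bash
# Sketch: creating files with touch and cat, in a throwaway directory.
dir=$(mktemp -d)           # temporary directory so nothing is overwritten
cd "$dir"

touch empty.txt            # creates an empty file (or updates its timestamp)

cat > notes.txt <<'EOF'
hello
EOF
# the here-document above replaces interactive input to 'cat > notes.txt'

ls empty.txt notes.txt     # both files now exist
```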
difference between home directory and working directory in Linux:
Home Directory:
It is the default directory where a user is placed after logging into Linux.
Example: /home/username
It stores the user's personal files and settings.
Working Directory:
It is the current directory where you are working at any moment.
You can move (change) the working directory using the cd command.
You can check it using the pwd command.
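The distinction can be seen directly in the shell: $HOME stays fixed while pwd follows every cd:

```shell
#!/bin/bash
# Sketch: home directory vs. working directory.
echo "Home directory:    $HOME"    # fixed per user, e.g. /home/username
echo "Working directory: $(pwd)"   # wherever the shell currently is

cd /tmp                            # change the working directory...
echo "Now working in:    $(pwd)"   # ...pwd reflects the change
echo "Home is still:     $HOME"    # ...but the home directory is unchanged
```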
Linux File System
Introduction
In Linux, the file system plays a crucial role in organizing and managing data on physical drives. Each physical drive can
be divided into multiple partitions, and each partition can host one file system.
A file system is:
A logical structure used by the operating system to manage how data is stored and retrieved.
Composed of methods and data structures to keep track of files.
Used to organize and store files efficiently.

Structure of a Linux File System


When a partition or disk is formatted, the sectors (typically 512 bytes each) are grouped into blocks. In the classic Unix layout a block is 512 bytes, though modern file systems commonly use 1 KB to 4 KB blocks. These blocks are logically divided into four parts:
1. Boot Block
Located in the first few sectors of the file system.
Contains the initial bootstrap program used to load the operating system.
The bootstrap is loaded in stages—starting with a small one that loads larger parts.

2. Super Block
Describes the state and configuration of the file system.
Contains critical metadata such as:
Total size of the partition
Block size
Maximum number of files
Pointers to free blocks and free inodes
Number of allocated/free inodes
Size and structure of inode tables
Inode number of the root directory
Crucial for accessing files; hence, it's backed up in multiple areas on the disk.

3. Inode Table (Inode Block)


Every file and directory is represented by an inode.
An inode is a data structure storing metadata about the file (not the file content).
Information stored in an inode includes:
File size
User ID (owner)
Group ID
File type (e.g., regular, directory, symbolic link)
File permissions (read/write/execute)
Timestamps (access, modification, and status change)
File location (block pointers)
Protection flags and other metadata

4. Data Block
Contains the actual content of the files.
Starts immediately after the inode table.
Stores:
User files (text, media, code, etc.)
Special files (device files, symbolic links, directories)
Each block is allocated to one file only, and once deleted, the block becomes reusable.
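These on-disk structures can be inspected from the shell. As a sketch (the file here is a temporary scratch file), ls -i prints a file's inode number and stat prints the metadata held in its inode:

```shell
#!/bin/bash
# Sketch: viewing inode metadata for a scratch file.
f=$(mktemp)      # temporary file; its name is chosen by mktemp
ls -i "$f"       # first column is the inode number
stat "$f"        # size, owner, permissions, link count, timestamps
rm -f "$f"
```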

Common Linux File Systems


➤ EXT Family
ext (Extended File System): The first Linux-specific file system, introduced in 1992.
ext2: Default in Linux kernel 2.2, lacks journaling.
ext3: Adds journaling to ext2, improving data recovery and reliability.
ext4: Successor to ext3 with support for larger filesystems, better performance, and more robust structure.

➤ XFS
Developed for high-performance and high scalability.
Offers near-native I/O performance even across multiple storage devices.

➤ OCFS & OCFS2 (Oracle Cluster File System)


OCFS: Designed for Oracle RAC (Real Application Cluster), allowing multiple nodes to access the same file system.
OCFS2: Improved, POSIX-compliant version for general-purpose use, supports shared Oracle home directories.

➤ NFS (Network File System)


Used to access files over a network as if they were on a local drive.
Facilitates remote file sharing across systems.

➤ JFS (Journaled File System)


Developed by IBM for the AIX operating system, later ported to Linux.
Offers:
Low CPU usage
Good performance with both large and small files
Supports dynamic resizing (though not shrinking)

Conclusion
Linux supports a variety of file systems, each optimized for different use cases such as speed, reliability, scalability, or clustering. ext4 remains the most commonly used, but others like XFS, JFS, and OCFS2 serve critical roles in enterprise and performance-driven environments.

Shell Variables in Linux


In Linux shell scripting, variables are used to store data temporarily so it can be reused during the execution of a script or
command.
Shell variables are classified into two main types:

🔹 1. Built-in Shell Variables (System or Environmental Variables)


➤ Definition:
These are predefined variables created and maintained by the Linux system itself. They define the shell’s operating
environment and affect the behavior of the shell and processes.
Also known as environment variables.
Defined using UPPERCASE letters by convention.
Accessible in the shell using the echo command (e.g., echo $HOME).
View all using the set or printenv commands.

🗂 Common Built-in Variables and Their Descriptions:


Variable       Meaning
BASH           Path to the current shell binary (e.g., /bin/bash)
BASH_VERSION   Version of the bash shell
COLUMNS        Number of columns on the terminal screen
HOME           Path to the current user's home directory
LINES          Number of lines on the terminal screen
LOGNAME        The login name of the current user
OSTYPE         Operating system type (e.g., Linux)
PATH           Colon-separated list of directories where the shell looks for commands
PS1            Primary command prompt string
PS2            Secondary command prompt (used when a command continues on the next line)
PWD            Present working directory
SHELL          Default shell path for the current user
USERNAME       The current user's username
MAIL           Path to the user's mail file
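A few of these built-in variables can be read directly from any shell session:

```shell
#!/bin/bash
# Sketch: reading a few built-in (environment) variables.
echo "Home directory : $HOME"
echo "Search path    : $PATH"
echo "Working dir    : $PWD"

printenv HOME           # printenv can show one variable...
printenv | head -n 5    # ...or list the whole environment (first 5 lines here)
```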

🔹 2. User-Defined Variables (UDV)


➤ Definition:
These are variables created by the user within a shell session or a shell script.
Defined using lowercase letters by convention.
Temporary and limited to the session/scope of the script.
No type declaration is needed — the shell determines the data type at runtime.
The variable exists until:
The script ends, or
The terminal session is closed.

🧾 Syntax to Define a User Variable:


variablename=value
Important: No spaces around the = sign.
If the value has spaces, enclose it in double quotes:
greeting="Hello, World"
To access the value of the variable, prefix with $:
echo $greeting

🧪 Example:
#!/bin/bash
name="Alice"
echo "Hello, $name!"
Output:
Hello, Alice!

parameter passing and command line arguments


Overview of Command Line Arguments in Shell Scripts
When you run a shell script or command in Linux, you can pass additional arguments (data) to it. These arguments are
passed through the command line and are stored in variables called positional parameters. These parameters can be
used within the shell script to perform different tasks based on the input.
Key Concepts:
Command Line Arguments: When running a shell script, you can pass additional values (parameters) after the script
name. These are known as command-line arguments.
Positional Parameters: The arguments passed from the command line are stored in special variables called positional
parameters. The first argument is stored in $1, the second in $2, and so on.
$0: This special variable holds the name of the script itself (i.e., the shell script you are running).
Arguments from $1 to $9:
$1 to $9 store the first to the ninth command line arguments, respectively.
For arguments beyond $9, you need to enclose the argument number in curly braces (e.g., ${10}, ${11}, etc.).
Example of Passing Command Line Arguments:
1. Simple Example with 3 Arguments:
Let’s write a shell script example.sh that accepts at least 3 parameters:
#!/bin/bash
echo "First parameter is: $1"
echo "Second parameter is: $2"
echo "Third parameter is: $3"
How to run the script:
$ bash example.sh filename 456 arrange
Output:
First parameter is: filename
Second parameter is: 456
Third parameter is: arrange
Explanation:
$1 stores filename
$2 stores 456
$3 stores arrange
Special Variables:
$0: Represents the name of the script itself.
For example, running bash example.sh will make $0 equal to example.sh.
$#: Represents the number of arguments passed to the script.
For example, if you pass 3 arguments, $# will be equal to 3.
$@ and $*: Both represent all arguments passed to the script.
When quoted, "$@" treats each argument as a separate word, while "$*" joins all arguments into a single string.
Handling Arguments Beyond $9:
For arguments that exceed the 9th parameter, you must enclose the argument number in curly braces, as follows:
Example with $10, $11:
#!/bin/bash
echo "10th parameter is: ${10}"
echo "11th parameter is: ${11}"
If you run the script with more than 9 parameters:
$ bash example.sh one two three four five six seven eight nine ten eleven
Output:
10th parameter is: ten
11th parameter is: eleven
Practical Use of Command Line Arguments:
You can use command line arguments in shell scripts for various purposes, like:
File names or paths
User inputs for dynamic execution
Configuration parameters
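As a sketch, the special variables above can also be tried without a separate script file: the shell built-in set -- replaces the current positional parameters in place.

```shell
#!/bin/bash
# Sketch: 'set --' resets $1, $2, ..., so positional parameters
# and the related special variables can be demonstrated inline.
set -- filename 456 arrange

echo "First  (\$1): $1"    # filename
echo "Second (\$2): $2"    # 456
echo "Count  (\$#): $#"    # 3

for arg in "$@"; do         # "$@" keeps each argument as a separate word
    echo "arg: $arg"
done
```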

Conditional Statements (Branching Statements) in Shell Scripting + decision making

In a shell script, instructions are executed sequentially, one after another. However, in some cases, you may need to
execute different sets of instructions based on certain conditions. This is where branching statements come into play.
They allow the script to make decisions and control the flow of execution.
Linux has following types of branching statements:
The if statement
The case statement
Linux Shell supports following forms of if statement (Branching Statements)
If….fi statement
if else... fi statement
if...elif...else... fi statement
1. The if statement
The if statement checks whether a condition is true. If the condition is true, a set of commands is executed. If the
condition is false, no action is performed.
Syntax:
if [ condition ]
then
statement-block
fi
The condition can be any expression that returns true (0) or false (non-zero).
The statement-block is executed only if the condition is true.
Example:
#!/bin/bash
a=10
b=10

if [ $a -eq $b ]
then
echo "a and b are equal"
fi
Output:
a and b are equal

2. The if...else...fi statement


This structure allows you to execute one block of code if the condition is true, and a different block of code if the
condition is false.
Syntax:
if [ condition ]
then
true-block
else
false-block
fi
If the condition is true (returns 0), the true-block is executed.
If the condition is false (non-zero), the false-block is executed.
Example:
#!/bin/bash
a=10
b=20

if [ $a -eq $b ]
then
echo "a and b are equal"
else
echo "a and b are not equal"
fi
Output:
a and b are not equal

3. The if...elif...else...fi statement


The elif (short for "else if") statement allows you to test multiple conditions in a single if-else block. It checks for
additional conditions after the initial if condition.
Syntax:
if [ condition1 ]
then
statement-block1
elif [ condition2 ]
then
statement-block2
else
default-statements
fi
If condition1 is true, statement-block1 is executed.
If condition1 is false, it checks condition2.
If condition2 is true, statement-block2 is executed.
If none of the conditions are true, the else block is executed.
Example:
#!/bin/bash
a=10
b=20

if [ $a -gt $b ]
then
echo "$a is greater than $b"
elif [ $a -lt $b ]
then
echo "$a is less than $b"
else
echo "$a and $b are equal"
fi
Output:
10 is less than 20

4. Nested if statements
A nested if statement is used when one condition must be checked inside another condition. You can use an if inside the
true or false block of another if statement.
Syntax:
if [ condition1 ]
then
if [ condition2 ]
then
statement-block
else
statement-block2
fi
else
statement-block3
fi
Example:
#!/bin/bash
a=20
b=10

if [ $a -gt $b ]
then
if [ $a -gt 50 ]
then
echo "$a is greater than 50"
else
echo "$a is less than or equal to 50"
fi
else
echo "$a is less than $b"
fi
Output:
20 is less than or equal to 50

5. The test command


The test command evaluates an expression and returns a status code. It is commonly used inside if statements to
evaluate conditions (like comparing integers, strings, or files).
Syntax:
test <expression>
Returns 0 (true) if the expression evaluates to true.
Returns 1 (false) if the expression evaluates to false.
Example: Integer comparison
#!/bin/bash
a=10
b=20

if test $a -gt $b
then
echo "$a is greater than $b"
else
echo "$a is not greater than $b"
fi
Output:
10 is not greater than 20
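Besides integers, test also handles string comparisons and file checks; a short sketch (the file is a temporary scratch file, and [ ... ] is equivalent to test ...):

```shell
#!/bin/bash
# Sketch: test with strings and files.
name="linux"
if test "$name" = "linux"; then   # string equality uses '='
    echo "string match"
fi

f=$(mktemp)                 # scratch file for the file tests
if test -f "$f"; then       # -f: true if it exists and is a regular file
    echo "regular file"
fi
if test ! -d "$f"; then     # -d: true for directories; '!' negates
    echo "not a directory"
fi
rm -f "$f"
```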

The case statement


The case statement is used for multi-way branching based on pattern matching. It compares a variable to several patterns
and executes the corresponding block of commands.
Syntax:
case $variable in
pattern1)
statement-block1
;;
pattern2)
statement-block2
;;
*)
default-block
;;
esac
The value of $variable is checked against each pattern.
If a match is found, the corresponding statement block is executed.
If no match is found, the default block (*) is executed.
Example:
#!/bin/bash
echo "Enter two numbers"
read a b
echo "Enter an operator (+, -, *, /)"
read op

case $op in
+)
result=$(($a + $b))
echo "$a + $b = $result"
;;
-)
result=$(($a - $b))
echo "$a - $b = $result"
;;
\*)
result=$(($a * $b))
echo "$a * $b = $result"
;;
/)
result=$(($a / $b))
echo "$a / $b = $result"
;;
*)
echo "Invalid operator"
;;
esac
Example Input:
Enter two numbers
10 5
Enter an operator (+, -, *, /)
+
Output:
10 + 5 = 15

Conclusion
Key Points:
if statements allow decision-making in a script.
The test command is used for evaluating expressions (e.g., comparing integers, strings, or checking file conditions).
Use if...then...fi, if...else...fi, and if...elif...else...fi structures to control the flow based on conditions.
Nested if statements help in more complex decision-making.
The case statement is useful for multi-way branching and matching patterns.
Domain Name System (DNS)
The Domain Name System (DNS) is an essential component of the Internet infrastructure, often referred to as the
"phonebook of the Internet". DNS is responsible for translating human-readable domain names (such as example.com)
into machine-readable IP addresses (like 198.105.232.4).
Purpose of DNS
Human-friendly names: Humans access websites through domain names (e.g., www.example.com), which are easier to
remember than numeric IP addresses.
Machine-readable addresses: The Internet works based on IP addresses, so DNS servers are used to convert domain
names into IP addresses, allowing browsers and other services to connect to websites.
Components of DNS
DNS operates through various components, including:
Domain Names: These are the names we type in the browser, such as www.example.com.
IP Addresses: These are the numeric addresses used by machines to locate each other, like 198.105.232.4.
DNS Records: These are entries in DNS databases that define various attributes of a domain, such as:
A records: Map domain names to IPv4 addresses.
AAAA records: Map domain names to IPv6 addresses.
MX records: Specify mail servers for a domain.
CNAME records: Create an alias for another domain name.
NS records: Specify the name servers for a domain.
DNS Query Process:
When a user wants to access a website, their computer sends a query to a DNS server.
The DNS server responds with the IP address corresponding to the domain name.

Types of DNS Servers


DNS servers come in various types, each playing a role in the resolution process:
DNS Resolver (Recursive Resolver):
The DNS resolver is responsible for receiving DNS queries from clients (such as web browsers) and initiating the
resolution process.
The resolver queries multiple DNS servers (root DNS servers, authoritative DNS servers, etc.) until it finds the correct IP
address for the domain name.
Authoritative DNS Server:
An authoritative DNS server holds the DNS records for a domain and provides definitive answers about the domain’s IP
address.
These servers are managed by the domain's owner and contain information about the domain’s records.
Root DNS Server:
A root DNS server is the starting point for DNS queries. It doesn't contain domain records but points to the Top-Level
Domain (TLD) servers.
There are a limited number of root DNS servers globally.
Caching DNS Server:
A caching DNS server stores DNS query results for a period of time. If a domain name is queried frequently, this server
will cache the result to reduce latency and improve performance.

DNS Server Software


DNS server software is responsible for implementing DNS functionality on a server. There are many DNS server software
packages available, but the two most popular are:
BIND (Berkeley Internet Name Domain):
BIND is the most commonly used DNS server software for Linux/Unix systems.
It is open-source and widely adopted in the industry.
BIND implements the DNS protocol and is flexible, allowing system administrators to configure various types of DNS
records.
Microsoft DNS:
On Microsoft Windows Server systems, DNS functionality is provided through Microsoft DNS.
It is often bundled with Windows Server editions and integrates with Active Directory for domain name resolution in
Windows environments.

Common DNS Terminology


FQDN (Fully Qualified Domain Name):
An FQDN is the complete domain name for a specific computer or host on the Internet. It includes the host name and the
domain name, such as www.example.com.
TTL (Time to Live):
TTL is the amount of time a DNS record is cached by DNS servers and clients before it must be refreshed. It is set in
seconds and can help control the load on DNS servers.
Zone Files:
DNS zone files contain mappings between domain names and IP addresses. These files are maintained by authoritative
DNS servers and contain records such as A, MX, and NS records.
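As an illustrative sketch tying the record types together, a minimal BIND-style zone file for example.com might look like the following (the name server and mail host names, and the 198.105.232.5 address, are hypothetical; 198.105.232.4 follows the example above):

```text
$TTL 86400                        ; default TTL for records in this zone
@     IN  SOA  ns1.example.com. admin.example.com. (
              2024010101          ; serial
              3600                ; refresh
              900                 ; retry
              604800              ; expire
              86400 )             ; minimum TTL
@     IN  NS    ns1.example.com.
@     IN  A     198.105.232.4
www   IN  CNAME example.com.
@     IN  MX    10 mail.example.com.
mail  IN  A     198.105.232.5
```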
What is an Editor?
An editor is a program or tool used to create, modify, and manage text files. Editors are essential in programming, web
development, system administration, and general text manipulation. They allow users to interact with and manipulate text
in a structured and efficient manner.
There are two main categories of editors:
Text Editors
Code Editors
Both types of editors are designed for specific purposes but share common functionality like cutting, copying, pasting,
searching, and replacing text.

vi Editor Overview
The vi editor is a screen-oriented text editor that is widely used in Linux and Unix systems. It was developed by Bill Joy at
the University of California, Berkeley. The vi editor is an interactive, full-screen editor that allows users to create, edit, and
view files.
Key Features of vi Editor:
Widely used: Available in nearly all Linux and Unix distributions.
Multiple file support: Capable of handling multiple files at the same time (these are called buffers in vi terminology).
Efficient text manipulation: Allows operations like cutting, copying, pasting, searching, replacing text, importing/exporting
text, and even spell-checking.
Not just for text files: vi is also used for composing emails and editing command-line scripts.
Modes in vi Editor
The vi editor operates in three main modes:
Command Mode (Default Mode)
Insert Mode (Input Mode)
Line Mode (Ex Mode)
Each mode serves a different function, and understanding how to switch between them is crucial for efficient use of vi.

1. Command Mode (Default Mode)


Purpose: The default mode in vi. In this mode, every keystroke is interpreted as a command.
Key Actions in Command Mode:
Move the cursor (e.g., with arrow keys or h, j, k, l).
Delete, copy, or paste text.
Search and replace text.
Save or exit files.
Enter insert mode for text input.
Examples of commands in command mode:
dd → Delete the current line.
yy → Copy the current line (yank).
p → Paste the copied text.
/text → Search for "text" in the file.
:w → Save the file.
:q → Quit vi.
:wq → Save and quit vi.
How to enter command mode: When you first open vi, you are automatically in command mode. Pressing Esc from other
modes will also return you to command mode.

2. Insert Mode (Input Mode)


Purpose: This mode allows you to insert text into the file. Everything typed in insert mode is considered input and gets
inserted into the file.
Entering Insert Mode: You can enter insert mode by pressing the following keys in command mode:
i → Insert before the cursor position.
I → Insert at the beginning of the current line.
a → Append after the cursor position.
A → Append at the end of the current line.
o → Open a new line below the current one.
O → Open a new line above the current one.
Exiting Insert Mode: To exit insert mode and return to command mode, simply press the Esc key.

3. Line Mode (Ex Mode)


Purpose: This mode is used for executing more advanced commands that affect the entire file or perform operations that
require a prompt.
Entering Line Mode: To enter line mode, press the colon : key while in command mode. This brings you to the bottom of
the screen where you can type commands that affect the whole file.
Key Features:
You can save, quit, or execute more complex operations.
Commands are echoed to the screen, and you submit them by pressing the Return key.
Examples of Line Mode Commands:
:w → Save the file.
:q → Quit vi.
:wq → Save and quit vi.
:x → Save and quit vi (same as :wq).
:q! → Quit without saving changes (force quit).
:set nu → Display line numbers.
:set nonu → Hide line numbers.

Basic vi Commands for File Editing


Here’s a list of some basic vi commands you will need to know for file manipulation:
Navigating in Command Mode:
h → Move left (one character).
j → Move down (one line).
k → Move up (one line).
l → Move right (one character).
gg → Go to the beginning of the file.
G → Go to the end of the file.
0 → Go to the beginning of the current line.
$ → Go to the end of the current line.
w → Move forward by one word.
b → Move backward by one word.
Text Editing:
x → Delete the character under the cursor.
dd → Delete the current line.
yy → Copy the current line.
p → Paste the copied content.
u → Undo the last change.
Ctrl+r → Redo the undone change.
:s/old/new/ → Replace the first occurrence of "old" with "new" on the current line.
:s/old/new/g → Replace all occurrences of "old" with "new" on the current line.
:%s/old/new/g → Replace all occurrences of "old" with "new" in the entire file.
Saving and Exiting:
:w → Save the file (write changes).
:q → Quit vi (if no changes are made).
:q! → Quit vi without saving changes.
:wq → Save and quit vi.
ZZ → Save and quit (same as :wq).
:x → Save and quit (similar to :wq).

Advanced vi Features
Searching for Text:
/text → Search forward for "text".
?text → Search backward for "text".
n → Move to the next occurrence of the search.
N → Move to the previous occurrence of the search.
Replacing Text:
:s/old/new/g → Replace "old" with "new" globally in the line.
:%s/old/new/g → Replace "old" with "new" globally in the entire file.
Multiple Files (Buffers):
:e filename → Open a different file in vi.
:n → Switch to the next file (if multiple files are open).
:prev → Switch to the previous file.

Telnet (TELecommunication NETwork)


Introduction
Telnet is a networking protocol and software utility used for accessing remote computers and terminals over the Internet
or a TCP/IP-based network.
It was developed in 1969 and is one of the earliest Internet standards, standardized by the Internet Engineering Task Force
(IETF).

Definition
Telnet refers to both:
The protocol used to connect to remote computers.
The command-line utility used to establish those connections.
It provides a bidirectional, interactive text-based communication facility between two computers.

Features
Remote Access:
Enables logging into a remote machine and controlling it via command-line interface.
The remote machine can be in the next room, city, or even a different country.
Cross-Platform Support:
Telnet clients are available on Windows, Linux, macOS, and other Unix-like operating systems.
Most network equipment (routers, switches) and operating systems that support TCP/IP come with a built-in Telnet server
or client.
Server Control:
Commonly used by network administrators to configure and control servers remotely.
Frequently used to test network services, debug servers, or connect to legacy systems.
Command Execution:
Allows execution of commands remotely as if entered directly at the remote terminal.
Requires login with a valid username and password.

Advantages
Simple to use and understand.
Lightweight protocol with low bandwidth consumption.
Useful for connecting to legacy systems that do not support modern remote access tools.

Disadvantages
Lacks security: Data, including usernames and passwords, is sent in plain text, making it vulnerable to interception.
No encryption: Unlike modern protocols like SSH, Telnet does not encrypt the session.
Mostly replaced by SSH (Secure Shell) for secure communication.


FTP (File Transfer Protocol)


Introduction
FTP (File Transfer Protocol) is a standard network protocol used for transferring files between a client and a server over a
TCP/IP-based network such as the Internet.
It was developed in the 1970s and has since been a widely adopted method for file sharing and storage.
FTP allows files to be uploaded (put) to or downloaded (get) from a server using FTP client software.

Definition of FTP Server


An FTP Server is a software application running on a remote machine that enables file sharing using FTP.
It stores files and provides access to remote users, either with login credentials (username and password) or anonymous
access (limited, no login needed).
FTP servers are often used for centralized data sharing and software distribution.

Definition of FTP Client


An FTP Client is a software application that allows a user to connect to an FTP server to upload or download files.
Examples include: FileZilla, WinSCP, Cyberduck, and command-line FTP utilities.

Working of FTP
FTP uses the client-server model:
Client: initiates a request to connect to the server.
Server: responds and processes file operations.
FTP operates on two separate communication channels:
Command Channel: Used for sending control commands (e.g., login, change directory).
Data Channel: Used for transferring file data.

FTP Modes
Active Mode:
Client initiates the command connection to the server on port 21.
Server then initiates the data connection back to the client (traditionally from port 20).
Passive Mode:
Client initiates both command and data connections.
Works better with firewalls and NAT (Network Address Translation).
More secure and widely used in modern networks.

Operations Supported by FTP


Upload files to the server.
Download files from the server.
Rename, delete, move, or copy files.
List directories and their contents.
Resume interrupted downloads (Checkpoint restart support).

Features of an FTP Server


Allows both uploading and downloading of files.
Supports resumable downloads.
Access permissions can be configured by the administrator.
Supports anonymous access with restricted privileges.
File transfers can be done using web browsers (e.g., using ftp:// links), though without advanced features like secure FTP
(FTPS).

FTP Site Address Format


FTP site addresses start with:
ftp://servername_or_IP_address
Advantages of FTP
Advantage          Explanation
Speed              FTP is faster compared to many other file transfer protocols.
Efficiency         FTP allows for partial file transfers and resumes.
Security (basic)   Requires login credentials for access, which adds a basic level of security.
Two-Way Transfer   Files can be transferred both to and from the server, making it ideal for collaboration.

Disadvantages of FTP
Disadvantage               Explanation
Lack of Encryption         Data, including passwords, is transferred in plain text; vulnerable to eavesdropping and MITM attacks.
Limited Compatibility      Not all systems support FTP natively or securely.
No Multi-Session Support   Cannot perform simultaneous large file transfers efficiently.
Size Limit                 Some FTP implementations limit file size to 2 GB.
Vulnerable to Brute Force  Passwords can be guessed if not properly secured.

Hardware Requirements for Linux


Linux is a flexible and efficient operating system that can run on a wide range of hardware—from old legacy systems to
modern desktops and servers. The hardware requirements depend on the Linux distribution and the purpose of the
system (desktop, server, minimal install, etc.).

1. Processor (CPU)
Minimum Requirement:
A 400 MHz Pentium processor is typically the minimum for a Graphical User Interface (GUI) installation.
Linux can run on even older processors like the Intel 486, though this would be for very lightweight or minimal
installations.
Recommended:
A 32-bit processor (x86) is fine for basic applications.
A 64-bit processor (x86_64) is recommended for modern systems, especially if you plan to run virtual machines or
high-performance applications.
Special CPU Features:
For virtualization (e.g., using KVM), your CPU must support hardware virtualization technologies like:
Intel VT-x (Intel Virtualization Technology)
AMD-V (AMD Virtualization)
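On Linux these CPU features are exposed as flags in /proc/cpuinfo, so support can be checked from the shell (a sketch; the result depends on the machine it runs on):

```shell
#!/bin/bash
# Sketch: checking for hardware virtualization support on Linux.
# CPU flags in /proc/cpuinfo: 'vmx' = Intel VT-x, 'svm' = AMD-V.
if [ -r /proc/cpuinfo ] && grep -qE '\b(vmx|svm)\b' /proc/cpuinfo; then
    echo "Hardware virtualization: supported"
else
    echo "Hardware virtualization: not detected (or /proc/cpuinfo unavailable)"
fi
```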

2. RAM (Memory)
Minimum Requirement:
Linux can run on as little as 24MB of RAM for ultra-minimal distributions or embedded systems.
1GB of RAM is generally the minimum for a GUI-based Linux distribution.
Recommended:
2GB–3GB or more for a smooth desktop experience.
More RAM is needed for running heavy applications, multiple users, or virtual machines.

3. Disk Space (Storage)


Minimum Disk Space:
As low as 600MB for a minimal server installation (no GUI).
Typical Installation:
Requires about 10GB of disk space for a standard desktop environment.
Full Installation:
Can take up to 7GB or more, depending on the number of packages and desktop environments selected.
Additional Considerations:
Disk space requirements increase based on user data:
Documents consume little space.
Media files (videos, images) consume significant space.
Consider having at least 20GB–30GB free for comfortable usage.

4. Bootable Media (DVD/CD/USB)


You need a DVD drive, CD drive, or USB drive to boot the Linux installer.
If your system doesn’t support booting from DVD/CD:
You can boot from USB.
Alternatively, you can start the installation from a hard drive or network-based installer (PXE Boot).
Once installation begins, additional packages can be downloaded from the Internet.

5. Network Interface Card (NIC)


Required for network connectivity to:
Download software and updates.
Connect to online repositories.
Supported hardware:
Wired Ethernet cards.
Wireless (Wi-Fi) cards.
Most modern Linux distributions support a wide range of network adapters out of the box.

6. Optional & Special Hardware Features


Some advanced Linux features require special hardware support:
Virtualization: As mentioned earlier, to use KVM or QEMU efficiently, the CPU must support Intel-VT or AMD-V.
Graphics Acceleration: To use advanced desktop effects or for gaming, a compatible GPU (e.g., NVIDIA, AMD, Intel) with
proper driver support is necessary.
Touchscreens, Bluetooth, and Fingerprint sensors are also supported on many distributions with appropriate drivers.
Redirecting Input and Output in Linux
I/O Redirection is a powerful feature in Linux that allows users to change the default input and output sources when
running commands. By default, most Linux commands take input from the keyboard (stdin) and give output to the screen
(stdout). If errors occur, they're also displayed on the screen (stderr). Redirection allows these inputs and outputs to be
routed elsewhere, such as to files or other devices.

1. Standard Streams in Linux


There are three standard data streams used in input/output operations:
Stream Description File Descriptor Default Device
stdin Standard Input 0 Keyboard
stdout Standard Output 1 Screen (Display)
stderr Standard Error 2 Screen (Display)
stdin (0): Input stream. Commands read user input from here (usually the keyboard).
stdout (1): Output stream. The normal output of a command is displayed here (usually the screen).
stderr (2): Error stream. Error messages from commands are displayed here (also the screen by default).

2. Redirection Symbols
Symbol Purpose Description
> Output Redirection Overwrites file with command output
>> Output Redirection (Append) Appends command output to file
< Input Redirection Takes input from a file
<< Here Document (Input redirection) Multi-line input
2> Error Redirection Sends error messages to a file
2>> Error Redirection (Append) Appends error messages to a file
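
The symbols above can be exercised in one short, self-contained session (the filenames here are arbitrary, chosen only for the demonstration):

```shell
# Build a small input file: > overwrites, >> appends
echo "banana" >  fruits.txt
echo "apple"  >> fruits.txt

# < feeds the file to sort as stdin; > captures the sorted output
sort < fruits.txt > fruits_sorted.txt

# 2> catches the error from listing a file that does not exist
# (|| true keeps the session going despite ls's non-zero exit status)
ls no_such_file 2> errors.txt || true

cat fruits_sorted.txt   # apple, then banana
```

After this runs, fruits_sorted.txt holds the sorted lines and errors.txt holds the ls error message, with nothing error-related shown on the terminal.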

3. Output Redirection (>, >>)


By default, command output goes to stdout, which is the terminal screen. You can redirect this output to a file using > or
>>.
Syntax:
command > filename # Overwrites filename with output
command >> filename # Appends output to filename
Example 1:
cat sample > test
This redirects the output of the cat sample command to the file test.
If test does not exist, it is created.
If it exists, its contents are overwritten.

4. Input Redirection (<, <<)


Instead of reading input from the keyboard, you can redirect input from a file.
Syntax:
command < filename
Example:
sort < data.txt
This command sorts the contents of data.txt.

5. Error Redirection (2>, 2>>)


Errors generated by a command (stderr) can be redirected separately from standard output.
Syntax:
command 2> errorfile # Redirects errors to a file (overwrite)
command 2>> errorfile # Appends errors to a file
Example 2:
cat myfile 2> errorfile
If myfile does not exist, an error is generated.
Instead of showing on the terminal, the error is saved in errorfile.

6. Redirecting Both Output and Error


You can redirect both standard output and error to the same file.
Syntax:
command > outputfile 2>&1
2>&1 means redirect stderr (2) to the same place as stdout (1).
Alternative Syntax:
command &> outputfile
This redirects both stdout and stderr to outputfile.
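
A minimal sketch of 2>&1 in action, using a command group that writes to both streams (filenames are illustrative):

```shell
# echo writes to stdout; ls of a missing file writes to stderr.
# > sends stdout to combined.log, and 2>&1 points stderr at the same place.
{ echo "normal output"; ls missing_file || true; } > combined.log 2>&1

grep -c "" combined.log   # counts lines: one from each stream
```

Both the normal message and the error message end up in combined.log, which is exactly what you want for log files that must capture everything a script prints.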
7. Real-World Applications
Logging Output:
./my_script.sh > output.log 2> error.log
Silent Execution (suppressing all output and errors):
command > /dev/null 2>&1
Saving Program Results:
gcc program.c > compile_output.txt 2> compile_errors.txt

🔐 File Access Permissions in Linux (15 marks)
In Linux, file and directory access is controlled through a permission system. This system provides security and proper
access management in a multi-user environment. Every file and directory has defined permissions for the owner (user),
group, and others.

👥 Types of Ownership
User (u): The person who created the file. They are the owner.
Group (g): A group can contain multiple users. Members of the group can share file access.
Others (o): Any user who is not the owner or part of the group. This is sometimes referred to as "world".

📄 Types of Permissions
Symbol Permission Description
r Read Allows viewing the file’s contents or listing directory contents.
w Write Allows modifying file contents or adding/removing files in a directory.
x Execute Allows running the file as a program or entering a directory.
Each permission can be enabled (r, w, x) or disabled (-).

🔍 Viewing Permissions
Use the ls -l command to view file permissions:
ls -l filename
Example Output:
-rwxr-xr--
Breakdown:
- = regular file
rwx = permissions for user
r-x = permissions for group
r-- = permissions for others

✏️ Changing Permissions Using chmod

You can change file permissions using the chmod (change mode) command.
📌 Syntax:
chmod [options] mode filename

1️⃣ Symbolic Mode

In symbolic mode, letters and operators are used to assign or remove permissions.
🔠 Symbols
u = user (owner)
g = group
o = others


a = all (user + group + others)
Operators
Symbol Action
+ Adds a permission
- Removes a permission
= Assigns and overwrites

✅ Examples (Symbolic Mode)


Make a file executable for the owner:
chmod u+x sample.sh
Remove execute permission from group and others:
chmod go-x sample.sh
Add read and write permission to everyone:
chmod a+rw sample.sh
Copy user permissions to group:
chmod g=u sample.sh

2️⃣ Absolute (Numeric) Mode


In this mode, numbers (0-7) are used to define permission sets.
Permission Binary Value
--- 000 0
--x 001 1
-w- 010 2
-wx 011 3
r-- 100 4
r-x 101 5
rw- 110 6
rwx 111 7

✅ Examples (Absolute Mode)


Set rw-r--r-- (read-write for owner, read for group and others):
chmod 644 script.sh
Full permissions for owner, none for others:
chmod 700 script.sh
Read and write for all:
chmod 666 script.sh
Recursive permission change:
chmod -R 755 /home/mydirectory
-R applies changes recursively
755 = rwxr-xr-x (owner has all; group and others can read and execute)
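
A quick way to confirm that a mode change took effect is stat's octal output (the -c '%a' format shown here is a GNU coreutils feature; BSD/macOS stat uses different flags):

```shell
touch perm_demo.sh
chmod 644 perm_demo.sh        # rw-r--r--
stat -c '%a' perm_demo.sh     # prints 644

chmod u+x perm_demo.sh        # symbolic change on top: add execute for owner
stat -c '%a' perm_demo.sh     # prints 744 (rwxr--r--)
```

This also illustrates that numeric and symbolic modes are interchangeable views of the same nine permission bits.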

👑 Ownership Change: chown and chgrp


To change file ownership:
Change owner:
chown newuser filename
Change group:
chgrp newgroup filename
The uname and hostname Commands in Linux

1. uname Command
Purpose:
The uname command is used to display system information about the Linux operating system.
Syntax:
uname [options]
Common Options and Their Use:
Option Description
-a Displays all system information
-s Displays the kernel name
-n Displays the network node hostname
-r Displays the kernel release
-v Displays the kernel version
-m Displays the machine hardware name
-p Displays the processor type
-i Displays the hardware platform
-o Displays the operating system name
Example:
uname -a
Output:
Linux myhost 5.15.0-105-generic #115-Ubuntu SMP ... x86_64 GNU/Linux
Use in Practice:
To check the OS version and kernel details.
Useful for troubleshooting or system diagnostics.

2. hostname Command
Purpose:
The hostname command is used to display or set the name of the current host system.
Syntax:
hostname [new_name]
Common Uses:
Task Command Example
View the current hostname hostname
Set a new temporary hostname sudo hostname newname
View the full DNS domain name hostname -f
View the short hostname hostname -s
Note: Setting the hostname this way is temporary and will be reset after reboot unless changed in system config files (like
/etc/hostname).
Use in Practice:
Identify or modify the system's network name.
Helpful for network administration and SSH identification.
🌐 Apache Web Server (Expanded Notes)
🔷 Introduction
Apache (officially called Apache HTTP Server) is the most widely used web server software in the world.
It was developed by the Apache Software Foundation, with its first release in 1995.
Apache is open-source, free, and cross-platform (works on both Unix/Linux and Windows).

💡 What Is a Web Server?


A web server is software that:
Accepts HTTP requests from client browsers.
Processes and responds with the appropriate content (HTML, CSS, JavaScript, images, etc.).
Facilitates the client-server communication via the HTTP protocol.
Apache serves as this bridge between the server and the web browser (e.g., Chrome, Firefox, Safari).

⚙️ Key Features of Apache


Feature Description
Open-source Available free for personal and commercial use.
Modular Design Uses modules to add functionalities like security, URL rewriting, caching, etc.
Virtual Hosting Hosts multiple websites on a single Apache server.
Cross-platform Runs on Linux, Unix, and Windows.
Process-based structure Creates a new thread/process for each client connection.
Secure and Reliable Frequently updated with security patches.

🧩 Apache Modules
Modules are plug-ins that extend the core capabilities of Apache.
Some popular modules include:
mod_ssl: Enables HTTPS (SSL/TLS support)
mod_rewrite: URL rewriting
mod_auth: User authentication
mod_cache: Content caching
mod_proxy: Proxy and load balancing support
Admins can enable/disable these based on their needs.

🔄 How Apache Works (Client-Server Flow)


A user opens a browser and requests a website (e.g., www.example.com).
The browser sends an HTTP request to the server.
Apache receives this request.
Apache locates the requested file (HTML, image, etc.).
Apache sends the file back to the browser as an HTTP response.
The browser renders the content for the user.
This process happens quickly and seamlessly, managed entirely by Apache.

✅ Advantages of Apache Web Server


Open-source and Free: No cost for usage, including commercial purposes.
Reliable and Stable: Trusted by millions of websites since 1995.
Regular Security Patches: Maintained by a strong open-source community.
Highly Configurable: Easy to add or remove features with modules.
Beginner-Friendly: Simple setup, configuration, and good documentation.
Cross-platform: Compatible with both Unix/Linux and Windows.
Large Community: Active forums and support resources available.

❌ Disadvantages of Apache Web Server


Performance Limitations: Not ideal for websites with extremely high traffic.
Complex Configurations: Too many options may create security loopholes if not handled properly.
Frequent Updates Needed: Requires regular updating to stay secure and stable.


🗂️ File Processing Commands in Linux


1. wc – Word Count
Purpose: Displays the number of lines, words, and characters in a file.
Syntax:
wc [options] <filename>
Options:
Option Description
-l Count lines
-w Count words
-c Count bytes
-m Count characters
Example:
wc sample.txt # Shows lines, words, and characters
wc -l sample.txt # Only line count
wc file1 file2 file3 # Displays counts for all files + total
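
The counts are easy to verify against a file with known contents (wc_demo.txt is just a throwaway name for this sketch):

```shell
# A known three-line, six-word file
printf 'one two\nthree\nfour five six\n' > wc_demo.txt

wc -l wc_demo.txt   # 3 wc_demo.txt
wc -w wc_demo.txt   # 6 wc_demo.txt
```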

2. cut – Extract Specific Fields


Purpose: Extracts sections from lines of input, either by bytes, characters, or fields.
Syntax:
cut [options] <filename>
Options:
Option Description
-b Select by byte positions
-c Select by character positions
-f Select by field (columns)
-d Specify delimiter for fields (default: TAB)
Examples:
cut -b 1,2,5 sample.txt # Extracts byte positions 1, 2, and 5
cut -c 2,5,7 sample.txt # Extracts characters 2, 5, 7
cut -d " " -f 1 sample.txt # Extracts first word from each line

3. paste – Merge Lines from Files


Purpose: Merges lines of multiple files horizontally using TAB or custom delimiter.
Syntax:
paste [options] <file1> <file2>
Options:
Option Description
-s Sequential (instead of parallel merge)
-d Specify custom delimiter
Examples:
paste state.txt capital.txt # Merges line by line
paste -s state.txt capital.txt # Merges sequentially
paste -d ":" state.txt capital.txt # Uses ":" as delimiter

🧮 Mathematical Commands
1. expr – Evaluate Integer Expressions
Purpose: Perform arithmetic operations (integers only).
Syntax:
expr operand1 operator operand2
Operators:
Symbol Operation
+ Addition
- Subtraction
* Multiplication
/ Division
% Modulo
Note: Use \* for multiplication to avoid shell interpretation.
Examples:
expr 10 + 5 # Output: 15
expr 20 / 4 # Output: 5
expr 10 \* 3 # Output: 30

2. bc – Basic Calculator
Purpose: Performs calculations with integers and floating-point numbers.
To Use:
bc
Float Division:
scale=2
10/3 # Output: 3.33
Example:
echo "scale=2; 22/7" | bc # Output: 3.14

3. factor – Prime Factorization


Purpose: Displays the prime factors of a number.
Syntax:
factor [number]
Examples:
factor 100 # Output: 100: 2 2 5 5
factor # Accepts input interactively

Connecting Processes with Pipes in Linux


✅ What is a Pipe?
In Linux, a pipe (|) is a form of redirection that allows the output of one command to be used as the input to another
command. This process is called piping.
It acts like a real-world pipe where data flows from one end to another. In this case, the data flows from the output of the
first command to the input of the second command.

🔹 Key Features:
Pipes are unidirectional (data flows left to right).
Pipes allow multiple commands to work together in a chain.
Pipes help in reducing intermediate files by sending output directly between commands.
Pipes work in main memory (RAM), not on disk.

✅ Basic Syntax of Pipe:


command1 | command2 | command3 ...
command1: sends its output (stdout) to the pipe.
command2: reads from the pipe as its input (stdin).
This continues for as many commands as needed.

✅ How Pipes Work Internally:


stdin: Standard Input (usually keyboard)
stdout: Standard Output (usually terminal)
stderr: Standard Error (errors sent to terminal)
In a pipe:
First command: takes input from stdin and sends output to pipe.
Second command: takes input from the pipe and produces its own output.
Error messages are not piped — they still go to the terminal unless redirected.

✅ Examples of Using Pipes:


🔸 Example 1: View Long Output One Page at a Time
ls -l | more
ls -l lists files with details.
more displays them page by page.
The pipe (|) connects them.
Output is displayed one screen at a time.

🔸 Example 2: Sort a File and Show Only Unique Entries


sort file1 | uniq
sort arranges lines in order.
uniq removes duplicate lines.
The combination prints sorted and unique lines.
Sample Input (file1):
orange
apple
mango
orange
apple
grape
Output:
apple
grape
mango
orange

🔸 Example 3: Count the Number of Unique Words


cat file.txt | tr ' ' '\n' | sort | uniq -c
Breaks words by space, puts each on a new line, sorts them, and counts unique words.
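
Running the same pipeline on known input shows each stage doing its job (words.txt is a made-up file for the demo):

```shell
# Known input: "red" appears 3 times, "blue" twice
printf 'red blue red\nblue red\n' > words.txt

tr ' ' '\n' < words.txt | sort | uniq -c
#   2 blue
#   3 red
```

Note that sort must come before uniq -c, because uniq only collapses adjacent duplicate lines.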

✅ Using Multiple Pipes:


You can chain multiple commands using pipes:
cat file.txt | grep "apple" | sort | uniq
This command:
Reads a file,
Filters lines with "apple",
Sorts them,
Removes duplicates.


🔹 tee Command (Pipe + Save to File)
Definition:
The tee command reads from standard input, then:
Displays the data (to stdout), and
Writes the data into one or more files.
🔹 Syntax:
command1 | tee filename
Use tee when you want to save and view output at the same time.
🔹 Example:
ls -l | tee output.txt
Displays the result of ls -l on the screen and saves it in output.txt.
🔹 Example with Append:
ls -l | tee -a output.txt
Appends output to an existing file.
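
The overwrite/append distinction is easy to demonstrate with echo (tee_demo.txt is an arbitrary filename):

```shell
echo "first run"  | tee tee_demo.txt        # shown on screen AND written to the file
echo "second run" | tee -a tee_demo.txt     # -a appends instead of overwriting

cat tee_demo.txt   # both lines survive
```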

✅ Conclusion:
Pipes connect commands for efficient, real-time data processing.
They help eliminate temporary files and speed up scripting tasks.
Combined with commands like grep, sort, uniq, and tee, pipes make Linux a powerful tool for automation and processing.

Role of a System Administrator


A System Administrator (SysAdmin) is responsible for managing, maintaining, and ensuring the smooth functioning of a
computer system or network, especially in multi-user environments like Linux systems. The main objective is to provide
reliable, efficient, and secure services to users.
Key Responsibilities of a System Administrator

1. Adding New Users and Configuring User Accounts


Creating user accounts using commands like adduser or useradd.
Setting up home directories and assigning default files using skeleton directories (/etc/skel).
Configuring user privileges, setting file permissions, and assigning users to appropriate groups.
Ensuring secure password policies and authentication mechanisms.

2. Software Installation and Maintenance


Installing new application software as per user requirements or organizational needs.
Updating and patching the operating system and installed applications to fix bugs or security vulnerabilities.
Using package managers like apt, yum, or dnf depending on the Linux distribution.
Managing services and daemons (e.g., Apache, MySQL, SSH) to ensure they run properly.

3. File System Monitoring and Backup Management


Regularly checking disk usage using commands like df, du, and tools like ncdu.
Preventing users from consuming excessive disk space using disk quotas.
Planning and performing regular data backups using tools like rsync, tar, or backup scripts.
Ensuring backups are stored securely and are easily restorable in case of data loss or corruption.
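
A minimal backup-and-restore sketch with tar, under the assumption that backup_src and restore_test are just illustrative paths (a real backup would target user data directories and remote storage):

```shell
# Make a tiny directory to back up
mkdir -p backup_src
echo "important data" > backup_src/notes.txt

tar -czf backup_src.tar.gz backup_src   # create a compressed archive
tar -tzf backup_src.tar.gz             # list the archive contents to verify

# Restoring into a scratch directory proves the backup is actually usable
mkdir -p restore_test
tar -xzf backup_src.tar.gz -C restore_test
```

Test-restoring like this is the "easily restorable" part of the responsibility: a backup that has never been restored is unproven.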

4. User Support and Troubleshooting


Responding to user queries and issues related to system access, performance, or software problems.
Diagnosing system errors by analyzing log files in /var/log/ (e.g., syslog, auth.log, dmesg).
Collaborating with software vendors or hardware suppliers when issues exceed internal expertise.
Maintaining documentation for troubleshooting procedures and system configurations.

5. Installing and Configuring Hardware


Setting up and configuring new hardware components such as printers, scanners, storage devices, etc.
Ensuring compatibility with the operating system and installing necessary drivers.
Performing system upgrades involving CPU, RAM, and storage for better performance.
Handling hardware failure detection and replacement procedures.

6. Network Services Management


Managing and configuring network interfaces and protocols (TCP/IP, DNS, DHCP).
Ensuring stable and secure network access including internet connectivity and internal communications.
Administering services like:
Email (SMTP, IMAP, POP3) – for internal and external mail communication.
Remote Access (SSH, FTP, VPN) – for allowing users to connect securely to the system remotely.
Monitoring network traffic and security using tools like netstat, iftop, and firewalld.

7. Ensuring System Security


Applying security patches regularly to prevent vulnerabilities.
Setting up firewalls, access control, and intrusion detection systems.
Auditing user activities and maintaining logs for accountability.
Implementing data encryption and secure communication channels (e.g., using SSL/TLS).

🌐 Filter Commands in Linux


What Are Filters?
Filter commands are Linux utilities that:
Accept input from standard input (keyboard or pipe) or files.
Perform text manipulation or processing.
Produce output to standard output (screen).
These commands are used for tasks such as sorting, pattern searching, cutting, translating characters, etc.
🔹 1. pr – Format for Printing
Purpose: Formats a file with headers, footers, and pagination for printing.
Syntax: pr [options] <filename>
Common Options:
-n → Number lines.
-l <num> → Set page length (default: 66).
-h "header" → Set custom header.
-t → Omit headers and footers.
-w <width> → Set page width.
Example:
pr -n sample.txt → Format and number lines of the file.

🔹 2. head – Show Top Lines


Purpose: Displays the first N lines of a file.
Syntax: head [options] <filename>
Options:
-n <num> → Print first <num> lines.
-c <num> → Print first <num> bytes.
Example:
head -5 file.txt → First 5 lines of file.txt.

🔹 3. tail – Show Bottom Lines


Purpose: Displays the last N lines of a file.
Syntax: tail [options] <filename>
Options:
-n <num> → Print last <num> lines.
-c <num> → Print last <num> bytes.
Example:
tail -3 sample.txt → Last 3 lines of the file.
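
Head and tail are easiest to see on a numbered file (generated here with seq, so the expected lines are unambiguous):

```shell
seq 1 10 > numbers.txt   # lines containing 1 through 10

head -3 numbers.txt   # 1 2 3
tail -2 numbers.txt   # 9 10
```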

🔹 4. cut – Extract Columns or Fields


Purpose: Cuts selected parts (fields, bytes, or characters) from each line.
Syntax: cut [options] <filename>
Options:
-b → Bytes (e.g., cut -b 1,3 sample.txt)
-c → Characters (e.g., cut -c 2,5 sample.txt)
-f → Fields (requires -d delimiter)
-d → Delimiter (e.g., cut -d "," -f1 sample.csv)
Note: Must specify at least one option.

🔹 5. sort – Sort Text


Purpose: Sorts lines in a file based on ASCII values or numerically.
Syntax: sort [options] <filename>
Common Options:
-r → Reverse order.
-n → Numeric sort.
-f → Ignore case.
-o <file> → Save sorted output to a file.
-m <filelist> → Merge sorted files.
Example:
sort -r file.txt → Sort in reverse.

🔹 6. uniq – Remove Duplicate Lines


Purpose: Reports or removes repeated lines in sorted files.
Syntax: uniq [options] <filename>
Options:
-d → Show only duplicate lines.
-u → Show only unique lines.
-c → Count occurrences.
Example:
sort file.txt | uniq -c → Count unique lines in sorted file.

🔹 7. tr – Translate or Delete Characters


Purpose: Translates, deletes, or squeezes characters.
Syntax: tr [options] [SET1] [SET2]
Options:
-d → Delete characters.
-c → Use complement of SET1.
Useful Sets:
[:lower:], [:upper:], [:digit:], [:space:]
Examples:
Convert to uppercase: cat file | tr '[:lower:]' '[:upper:]'
Delete digits: cat file | tr -d '[:digit:]'
Remove spaces: cat file | tr -d ' '

🧠 Pattern Filtering with Regular Expressions
🔹 8. grep – Global Regular Expression Print
Purpose: Searches files for lines that match a pattern.
Syntax: grep [options] "pattern" <filename>
Options:
-i → Ignore case.
-v → Invert match.
-c → Count of matched lines.
-n → Show line numbers.
-w → Match whole word.
Example:
grep -w "Linux" file.txt → Matches "Linux" as a whole word.

🔹 9. egrep – Extended GREP


Purpose: Similar to grep but supports extended regex (e.g., |, +, ?).
Example:
egrep "Linux|Unix" file.txt → Matches either "Linux" or "Unix".

🔹 10. sed – Stream Editor


Purpose: Performs find and replace, deletion, insertion in files without opening them.
Syntax: sed [options] 'command' <filename>
Commands:
s/pattern/replacement/ → Substitute.
d → Delete lines.
i → Insert line before.
a → Append line after.
p → Print lines.
Examples:
Replace "Linux" with "UNIX": sed 's/Linux/UNIX/' file.txt
Delete line 2: sed '2d' file.txt
Insert text before every line: sed 'i\--- HEADER ---' file.txt
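
A worked run on known input (sed_demo.txt is an arbitrary demo file) shows that sed prints the transformed stream without modifying the file itself:

```shell
printf 'Linux is fun\nI use Linux daily\n' > sed_demo.txt

sed 's/Linux/UNIX/' sed_demo.txt   # first match on each line replaced
sed '1d' sed_demo.txt              # line 1 deleted from the output
```

To change the file in place, GNU sed offers the -i option (use with care, since there is no undo).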
