Linux and UNIX: I/O Hardware


I/O hardware is a key component of operating systems. It enables communication between the computer and peripheral devices such as keyboards, mice, and printers. I/O hardware has evolved over time, and modern computers offer many different I/O interfaces that can be chosen to match the needs of a specific project or application.

I/O software manages interaction with I/O devices such as mice, keyboards, USB devices, and printers. Commands issued through these devices are passed to the operating system, which acts on each of them in turn.
I/O software is organized into the following layers:
User-Level Libraries: provide a simple programming interface for input/output functions.
Kernel-Level Modules: provide the device drivers that interact with the device-independent I/O modules and the device controllers.
Hardware: the layer containing the hardware controllers and the actual devices that the device drivers interact with.
The sections below look at how these ideas play out in Linux and UNIX.
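The layering described above is visible from user space. A language runtime's buffered file object (Python's built-in open()) plays the role of a user-level library, while os.read() and os.write() issue the raw read(2)/write(2) system calls that the kernel-level modules and hardware layer ultimately service. A minimal sketch (the file path is illustrative):

```python
import os
import tempfile

# User-level library layer: a buffered file object from the runtime.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:          # buffered, user-space library call
    f.write("hello, I/O layers\n")

# Kernel interface layer: raw system calls on a file descriptor.
fd = os.open(path, os.O_RDONLY)     # open(2)
data = os.read(fd, 4096)            # read(2)
os.close(fd)                        # close(2)

print(data.decode(), end="")
```

The same descriptor-based calls work on device files under /dev, which is exactly the abstraction the layers above provide.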

Linux And UNIX:


UNIX is a groundbreaking operating system developed in the 1970s by Ken Thompson, Dennis Ritchie, and many others at AT&T's Bell Labs. It is the backbone of many modern operating systems, such as Solaris and the Linux distributions Ubuntu, Kali Linux, and Arch Linux, and it inspired the POSIX standard. Although originally designed for developers, UNIX played a central role in shaping software development and computing environments. Its distribution to government and academic institutions led to its widespread adoption across many kinds of hardware. At the core of the UNIX system lies its kernel, which is integral to the architecture, structure, and key functionality, making it the heart of the operating system.
The basic design philosophy of UNIX is to provide simple, powerful tools that can be combined to perform
complex tasks. It features a command-line interface that allows users to interact with the system through a series of
commands, rather than through a graphical user interface (GUI).
Some of the Key Features of UNIX Include
Multiuser support: UNIX allows multiple users to simultaneously access the same system and share resources.
Multitasking: UNIX is capable of running multiple processes at the same time.
Shell scripting: UNIX provides a powerful scripting language that allows users to automate tasks.
Security: UNIX has a robust security model that includes file permissions, user accounts, and network security
features.
Portability: UNIX can run on a wide variety of hardware platforms, from small embedded systems to large
mainframe computers.
Communication: UNIX supports user-to-user communication through commands such as write and mail.
Process Tracking: UNIX maintains a record of the jobs that users create. This improves system performance by monitoring CPU usage, and it also tracks how much disk space each user consumes so that this information can be used to regulate disk usage.

Linux
A popular open-source operating system, Linux was initially created by Linus Torvalds in 1991. At the time, Torvalds was a computer science student at the University of Helsinki, Finland, and began working on Linux as a personal project. The name Linux combines his first name, Linus, with Unix, the operating system that inspired the project. Most operating systems of that era were proprietary and expensive, and Torvalds wanted one that was freely available to anyone. He released Linux as free software under the GNU General Public License, which meant that anyone could use, modify, and redistribute the source code.
Early versions of Linux were used primarily by technology enthusiasts and software developers, but over time it has grown in popularity and is now used in many kinds of devices, such as servers, smartphones, and embedded systems. Linux is considered one of the most stable, secure, and reliable operating systems and is widely used on servers, supercomputers, and in enterprise environments. Today, Linux is one of the most widely used operating systems in the world, with an estimated 2.76% of all desktop computers and more than 90% of the world's top supercomputers running Linux, and approximately 71.85% of mobile devices running Android, which is Linux-based. The Linux community has expanded to include thousands of developers and users who work on the creation and upkeep of the operating system. Linux is available today in many distributions (versions), including:
Ubuntu
Fedora
Arch Linux
Debian
Mint

How does Linux Work?


Think of the operating system as the engine of a car. An engine can run on its own, but it becomes a functioning car only when connected to the gearbox, axles, and wheels; if the engine does not work properly, the rest of the car does not work either. Linux was designed to be similar to UNIX but has evolved to run on hardware ranging from phones to supercomputers. Every Linux-based operating system includes the Linux kernel, which manages hardware resources, plus a set of software packages that make up the rest of the operating system.
Kernel: the one component that is actually "Linux". The kernel, which controls the CPU, memory, and peripherals, serves as the brain of the system and sits at the most fundamental level of the operating system.
Desktop Environment: the layer the user actually interacts with. Numerous desktop environments are available (GNOME, Cinnamon, MATE, Pantheon, Enlightenment, KDE Plasma, Xfce, etc.), each shipping with pre-installed programs (file managers, configuration tools, web browsers, games, and so on).

Why Use Linux?


There are several reasons why one might choose to use Linux:
Open-source: Linux is open-source software, meaning that the source code is freely available for anyone to use,
modify, and distribute. This allows for a large and active community of developers to contribute to the
development and maintenance of the operating system.
Customizability: Linux is highly customizable, and users can easily install and configure different software
packages to suit their needs.
Stability and security: Linux is known for its stability and security, as it is less prone to crashes and viruses than
other operating systems.
Cost-effective: Linux is free to download and use, making it a cost-effective option for individuals and businesses.

Development of Linux
The Linux ecosystem is a constantly evolving and expanding platform, and a great deal of development is always under way. Notable developments from recent years include:
The Linux 5.11 kernel release, which added features such as AMD Zen 3 processor support, memory-management improvements, and new hardware support.
Continued development of the various Linux distributions. Ubuntu 21.04, released in April 2021, features an updated GNOME desktop environment, improved ZFS file system support, and new security features.
New open-source software and tools for Linux. For example, newer releases of the Ansible automation tool added features such as support for Windows Subsystem for Linux 2 (WSL2) and improved support for Kubernetes.
The rise of containerization and orchestration technologies such as Docker and Kubernetes, which are becoming more and more common for deploying and managing Linux-based applications.
Linux is growing in popularity in the cloud computing space, with many major cloud providers offering Linux-
based virtual machines and managed services.

Processes in Linux/Unix
When a program or command is executed, the system provides it with a special instance consisting of all the services and resources that the process may use during execution.
Whenever a command is issued in Unix/Linux, it creates (starts) a new process. For example, issuing pwd, which prints the user's current directory, starts a process.
Unix/Linux keeps track of processes through an ID number (by default at most five digits) called the process ID, or PID. Each process in the system has a unique PID.
Because the pool of possible PIDs is finite, used-up PIDs are eventually reused for newer processes. At any point in time, however, no two processes with the same PID exist, since the PID is what Unix uses to track each process.
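PID uniqueness is easy to observe from any program: os.getpid() returns the caller's PID, and a child created with os.fork() always receives a different one. A minimal sketch (os.fork() is Unix-only):

```python
import os

parent_pid = os.getpid()             # PID of the current process

child_pid = os.fork()                # create a child; it receives a fresh PID
if child_pid == 0:
    # Child branch: exit with 0 only if its PID differs from the parent's.
    os._exit(0 if os.getpid() != parent_pid else 1)

# Parent branch: reap the child and confirm the PIDs were distinct.
_, status = os.waitpid(child_pid, 0)
print("parent PID:", parent_pid, "child PID:", child_pid)
```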

Initializing a process
A process can be run in two ways:
Method 1: Foreground Process: By default, every process starts in the foreground, receiving input from the keyboard and sending output to the screen. For example, issuing the pwd command:
$ pwd
Output:
/home/geeksforgeeks/root
While a command is running in the foreground and taking a long time, no other process can be started, because the prompt is not available until the program finishes and exits.

Method 2: Background Process: A background process runs without tying up the keyboard: the shell prompt returns immediately, so other processes can run in parallel without waiting for the previous process to complete.
Appending & to a command starts it as a background process:
$ pwd &
Because pwd needs no keyboard input, it simply runs to completion in the background. On pressing Enter, the shell reports:
Output:
[1] + Done pwd
$
The first line contains information about the background process: the job number and an indication that the pwd background process finished successfully. The second line is the prompt for the next command.
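The same foreground/background distinction applies when launching processes programmatically: subprocess.Popen returns immediately, leaving the child running in the background, and wait() later blocks like a foreground command would. A sketch using the standard sleep utility (assumed to be installed):

```python
import subprocess

# Launch a child in the "background": Popen returns without waiting for it.
proc = subprocess.Popen(["sleep", "1"])

# poll() returns None while the child is still running, so the parent
# is free to do other work in the meantime.
still_running = proc.poll() is None
print("child still running:", still_running)

proc.wait()                           # now block, like a foreground process
print("child exit code:", proc.returncode)
```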

Memory management in Linux


Memory management in Linux is a crucial aspect of its operating system kernel, designed to efficiently handle the
system's memory resources, prevent fragmentation, and provide isolation between processes. Linux memory
management encompasses several important concepts such as virtual memory, paging, swapping, and memory
protection. Here's an overview of the main elements:

1. Virtual Memory

Linux, like most modern operating systems, uses virtual memory. This means that each process is given the
illusion of having its own large, contiguous block of memory, while in reality, the physical memory (RAM) is
fragmented and shared between processes.

 Virtual Address Space: Each process has its own virtual address space. The operating system's kernel
maps this virtual space to physical memory (RAM) via a process known as paging.
 Address Translation: The Memory Management Unit (MMU) in the processor handles the translation
between virtual addresses and physical addresses. This is done using page tables.

2. Paging

Paging is the mechanism by which the operating system breaks down both virtual and physical memory into fixed-
size blocks called pages (usually 4 KB in size).

 Page Tables: These store the mapping between virtual addresses and physical addresses. Each process has
its own page table.
 Page Fault: If a process tries to access a page not currently in memory, a page fault occurs, and the kernel
must load the required page from disk (usually from swap space).
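The page size mentioned above can be queried from user space; 4 KB is typical on x86-64, though other architectures use larger pages. A small sketch:

```python
import mmap
import os

# Ask the kernel for the page size; 4096 bytes is typical on x86-64.
page_size = os.sysconf("SC_PAGE_SIZE")
print("page size:", page_size, "bytes")

# Memory is handed out in whole pages: even a 1-byte anonymous mapping
# occupies one full page of virtual address space.
m = mmap.mmap(-1, 1)
m[0:1] = b"x"
value = m[0:1]
m.close()
print("wrote and read one byte in a freshly mapped page:", value)
```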

3. Swap Space

Swap space is an area on the disk used to extend the system's physical memory. When the system runs out of RAM,
pages that are not actively used are moved to swap space to free up physical memory for active processes.

 Swapping: The process of moving pages between RAM and swap space is called swapping. It can lead to
performance degradation (called "thrashing") if overused, as accessing disk is much slower than accessing
RAM.
 Swap Partitions/Files: Linux can use either a dedicated swap partition or a swap file located on a regular
file system.

4. Memory Allocation

The Linux kernel uses several algorithms for allocating memory to processes and the kernel itself.

 Slab Allocator: Used for allocating and deallocating memory for small, frequently used objects, like kernel
data structures.
 Buddy System: A memory allocation technique used for managing physical memory in larger chunks. It's
efficient in terms of reducing fragmentation.
 kmalloc and vmalloc: kmalloc allocates physically contiguous kernel memory, while vmalloc allocates memory that is contiguous only in virtual address space (and is slower than kmalloc).

5. Memory Protection

Memory protection ensures that processes cannot access the memory areas of other processes or the kernel. The
MMU plays a key role in this by enforcing permissions (e.g., read, write, execute) on memory regions.

 Segmentation: This provides a way to define different sections of memory (e.g., text, data, stack, heap)
with different access permissions.
 Page-level Protection: The MMU can enforce page-level protection (e.g., marking a page as read-only or
executable).
 Kernel vs User Space: The virtual memory of a user-space process is isolated from the kernel space. User
space typically has no access to kernel memory, except through system calls.
6. Kernel Memory Management

 Kernel Space: This is reserved for the operating system and drivers. It has a higher privilege level than
user space.
 User Space: This is where user processes run. Each process gets its own separate virtual address space.
 Direct Memory Access (DMA): For certain hardware devices, the kernel must be able to directly access
physical memory to perform I/O operations efficiently.

7. Memory Overcommitment

Linux allows memory overcommitment, meaning that the system can allocate more memory to processes than is
physically available, assuming not all processes will use their allocated memory at the same time.

 OOM Killer: If memory overcommitment causes the system to run out of memory, Linux uses the Out-
Of-Memory (OOM) killer to terminate a process and free up memory.
 vm.overcommit_memory: This kernel parameter controls the overcommit behavior. The three modes are:
o 0: Heuristic overcommit handling (default).
o 1: Always overcommit, never check if enough memory is available.
o 2: Never overcommit; refuse allocations that would exceed the commit limit.
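On a Linux system the current policy can be read directly from /proc. A sketch (the mode descriptions are our own wording, not the kernel's; the /proc path only exists on Linux):

```python
import os

# Descriptive labels for the three vm.overcommit_memory modes.
OVERCOMMIT_MODES = {
    0: "heuristic overcommit handling (default)",
    1: "always overcommit, never check available memory",
    2: "never overcommit beyond the commit limit",
}

path = "/proc/sys/vm/overcommit_memory"
if os.path.exists(path):              # guard: Linux-only file
    with open(path) as f:
        mode = int(f.read().strip())
    print("current policy:", mode, "-", OVERCOMMIT_MODES.get(mode, "unknown"))
```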

8. HugePages

HugePages is a feature that allows the system to use larger memory pages (e.g., 2 MB or 1 GB pages) instead of
the default 4 KB pages. This reduces the overhead of managing many smaller pages and can improve performance
for memory-intensive applications.

 HugePages are particularly useful for applications like databases or scientific computing, where large
memory chunks are accessed frequently.

9. Cache Management

The Linux kernel also includes various caches to improve performance:

 Page Cache: Caches file system data in memory to reduce disk I/O.
 Dentry and Inode Caches: Cache metadata structures of files and directories to speed up file system
operations.
 CPU Cache: The kernel is aware of the CPU's cache hierarchy (L1, L2, etc.) and optimizes memory
accesses accordingly.

10. Memory Compression

Linux also supports zswap and zram. zswap keeps a compressed cache of pages on their way to the swap device, while zram provides a compressed block device in RAM that can itself be used as swap. Both reduce the amount of data written to, and read back from, disk.

11. NUMA (Non-Uniform Memory Access)

On multi-processor systems, Linux supports NUMA, which allows each processor (or node) to access its local
memory faster than memory on other processors. Linux will try to allocate memory closer to the processor that
needs it for better performance, but it also ensures that memory is allocated across all nodes for balance.

12. Memory Management Metrics

Linux provides various tools for monitoring memory usage, such as:

 free: Shows the amount of free and used memory in the system.
 top or htop: Interactive tools that show real-time memory usage by processes.
 vmstat: Provides detailed information about the kernel's memory management, swapping, and paging.
 /proc/meminfo: Provides detailed statistics about memory usage, including total memory, free memory,
and cached memory.
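The /proc/meminfo format listed above is plain text and easy to consume from a script: one "Key: value kB" pair per line. A small parser sketch (the sample numbers are made up; on a real Linux system you would read /proc/meminfo itself):

```python
def parse_meminfo(text):
    """Turn /proc/meminfo-style lines into a dict of integer values (kB)."""
    info = {}
    for line in text.splitlines():
        key, sep, rest = line.partition(":")
        fields = rest.split()
        if sep and fields:
            info[key.strip()] = int(fields[0])  # first field is the number
    return info

# A small sample in the /proc/meminfo format.
sample = """MemTotal:       16299664 kB
MemFree:         1234567 kB
Cached:          8000000 kB"""
stats = parse_meminfo(sample)
print("free memory (kB):", stats["MemFree"])
```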
Summary

In Linux, memory management is a complex, highly optimized subsystem that balances the needs of processes,
handles memory overcommitment, ensures memory protection, and provides efficient use of both physical and
virtual memory. The kernel employs various techniques, including paging, swapping, and the use of multiple
allocators, to maximize performance while maintaining system stability. Through tools and kernel parameters,
administrators can fine-tune memory management to suit specific workloads.

I/O in LINUX:

I/O (Input/Output) management in Linux is a fundamental part of the operating system, providing the mechanisms
to interact with hardware devices, files, and networks. Linux abstracts device interactions via files and the
filesystem, allowing processes to read from and write to devices as if they were regular files. Understanding how
I/O works in Linux involves understanding several key concepts, including device management, file systems,
buffering, and I/O scheduling.

1. Types of I/O in Linux

In Linux, I/O operations generally fall into two categories:

 Block I/O: Refers to I/O that deals with block devices, such as hard drives, SSDs, and CD-ROMs. Block
I/O operates on fixed-size blocks (typically 512 bytes or 4 KB).
 Character I/O: Deals with character devices such as keyboards, mice, and serial ports, where data is
handled one character at a time, and there is no fixed block size.

2. I/O Abstraction in Linux

Linux abstracts I/O through a unified interface where devices appear as files. This abstraction allows users and
programs to interact with hardware devices via system calls like open(), read(), write(), and close(), just as they
would for normal files.

 Files as Devices: In Linux, devices are represented as files in the /dev directory (e.g., /dev/sda for hard
drives, /dev/tty for terminals).
 Device Drivers: The actual interaction with hardware is performed by device drivers, which translate high-
level system calls into low-level hardware operations.

3. System Calls for I/O

Linux provides a set of system calls that programs can use to perform I/O operations:

 open(): Opens a file or device for reading, writing, or both.
 read(): Reads data from a file or device.
 write(): Writes data to a file or device.
 close(): Closes a file or device after use.
 ioctl(): Performs device-specific operations or configuration.
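These calls map directly onto Python's os module, which exposes thin wrappers around the raw system calls. A sketch on a temporary file (the path is illustrative; the same calls work on device files):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "io-demo.bin")

# open(2) with O_CREAT, then write(2) and close(2).
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
written = os.write(fd, b"raw bytes via write(2)")
os.close(fd)

# open(2) again for reading, then read(2) and close(2).
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 1024)
os.close(fd)

print("wrote", written, "bytes, read back:", data)
```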

4. File Descriptors and File Types

When an application opens a file or device, the kernel returns a file descriptor (FD), a small integer that the
application uses to refer to the file or device in subsequent system calls.

 Standard Streams: Linux provides three default file descriptors:
o 0: Standard input (stdin).
o 1: Standard output (stdout).
o 2: Standard error (stderr).
 File Types: Linux supports several types of files, including regular files, directories, symbolic links, and
special device files (e.g., block and character devices).
5. File Systems

The Linux kernel supports many different file systems for storing data, such as ext4, XFS, Btrfs, F2FS, and others.
The file system handles how data is organized on disk and provides an interface for reading, writing, and managing
files.

 Inodes: Every file in Linux is represented by an inode, a data structure that stores metadata (e.g.,
permissions, owner, size, location on disk) about the file.
 Directory Structure: The file system organizes files in a hierarchical directory structure, with the root
directory (/) at the top.
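The inode metadata described above is what os.stat() returns: it reads the file's inode, not its contents. A sketch on a freshly created one-byte file:

```python
import os
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "inode-demo.txt")
with open(path, "w") as f:
    f.write("x")

st = os.stat(path)                    # reads the file's inode
print("inode number:", st.st_ino)     # unique within the file system
print("size in bytes:", st.st_size)   # metadata stored in the inode
print("regular file:", stat.S_ISREG(st.st_mode))
```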

6. Buffering and Caching

I/O in Linux involves buffering and caching to optimize performance. When a process reads or writes data, it
doesn't necessarily interact with the hardware immediately. Instead, the data is temporarily held in memory.

 Page Cache: Linux uses a page cache to store frequently accessed file data in memory, reducing disk I/O.
When a file is read, the kernel first checks if the data is in the cache before reading from disk.
 Buffer Cache: The buffer cache stores metadata (such as file system structures) and raw disk blocks.
 Writeback Cache: Data written to files is first placed in memory and then written back to disk
asynchronously.

Advantages: These caches minimize disk access and improve I/O performance by taking advantage of the speed of
RAM versus the much slower speed of storage devices.
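The effect of buffering layers can be demonstrated directly. A short write to a buffered file object sits in user-space memory until flush() hands it to the kernel; only fsync() forces the page cache out to the storage device (a sketch; this shows the user-space buffer layer, while the page cache below it is transparent to readers on the same machine):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "buffered.txt")

f = open(path, "w")
f.write("queued in the user-space buffer")
# The short write sits in the file object's buffer; nothing has reached
# the kernel yet, so a second reader sees an empty file.
before = open(path).read()

f.flush()                  # hand the data to the kernel (page cache)
after = open(path).read()  # reads are served from the same page cache

os.fsync(f.fileno())       # force dirty pages out to the storage device
f.close()
print(repr(before), "->", repr(after))
```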

7. I/O Scheduling

I/O scheduling refers to how the Linux kernel decides the order in which I/O requests are handled, especially in the
context of block devices (e.g., hard drives, SSDs).

 I/O Schedulers: The kernel uses I/O schedulers to optimize the sequence of read and write operations to
minimize disk head movement (for spinning disks) and improve performance. Common I/O schedulers
include:
o CFQ (Completely Fair Queuing): Tries to balance fairness between different processes
performing I/O.
o Deadline: Prioritizes I/O requests with deadlines to improve responsiveness.
o NOOP: A simple, minimal scheduler for devices that don’t require sophisticated scheduling (e.g.,
SSDs).
o BFQ (Budget Fair Queuing): Focuses on providing good I/O performance for interactive
applications.

 Asynchronous I/O: In some cases, Linux allows asynchronous I/O (AIO), where I/O operations are
initiated, and the program can continue execution while waiting for the I/O to complete. This can be
particularly useful for high-performance applications.

8. Blocking vs Non-Blocking I/O

 Blocking I/O: By default, most I/O operations in Linux are blocking. For example, if a program performs
a read() system call, the process will be blocked until data is available or the read operation completes.
 Non-Blocking I/O: Non-blocking I/O allows a process to check for data without getting stuck waiting for
it. If data is not ready, the system call immediately returns with an error (usually EAGAIN or
EWOULDBLOCK).
 Select, Poll, and Epoll: Linux provides several mechanisms to handle non-blocking I/O for multiple file
descriptors:
o select(): Allows a process to wait for I/O readiness on multiple file descriptors.
o poll(): Similar to select(), but more scalable for large numbers of file descriptors.
o epoll: A Linux-specific family of calls (epoll_create(), epoll_ctl(), epoll_wait()) that is more efficient and scalable for handling large numbers of I/O events, particularly in network servers.
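Readiness checking is easiest to see with select() on a pipe (select() is the portable choice here; epoll is Linux-only). With a zero timeout the call reports immediately instead of blocking. A sketch:

```python
import os
import select

r, w = os.pipe()                      # connected read/write descriptor pair

# With nothing written, a zero-timeout select() reports no readable
# descriptors instead of blocking the process.
before_ready, _, _ = select.select([r], [], [], 0)

os.write(w, b"ping")                  # data is now available on the read end
after_ready, _, _ = select.select([r], [], [], 1.0)

data = os.read(r, 4)
os.close(r)
os.close(w)
print("readable before:", before_ready, "after:", after_ready, "data:", data)
```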
9. Direct I/O and Memory-Mapped I/O

 Direct I/O: This method bypasses the kernel's buffer cache and allows user-space applications to directly
read or write data from/to disk, which is useful for applications requiring very high performance (e.g.,
database systems). Direct I/O is typically used for large files and raw devices.
 Memory-Mapped I/O (mmap): This allows a file or device to be mapped directly into the process's
address space. It is often used for large files or devices, enabling the process to manipulate file contents as
if they were part of memory.
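Memory-mapped I/O can be sketched with Python's mmap module, which wraps the mmap(2) system call. The file's bytes become addressable memory, so edits are plain slice assignments rather than read()/write() calls (file name is illustrative):

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mapped.bin")
with open(path, "wb") as f:
    f.write(b"hello mapped world")

# Map the file into this process's address space and edit it in place.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        m[6:12] = b"MAPPED"           # byte-level edit, no write() call
        m.flush()                     # push the dirty pages back to the file

result = open(path, "rb").read()
print(result)
```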

10. Network I/O

In addition to disk I/O, Linux also supports network I/O, which is handled using the socket API. Network
communication in Linux can use both stream-based sockets (TCP) and datagram-based sockets (UDP). The Linux
kernel manages network buffers, handles packet routing, and provides various I/O mechanisms (e.g., blocking/non-
blocking sockets, select/epoll).

 TCP/IP Stack: Linux implements a full TCP/IP stack, which is responsible for managing network I/O,
including handling incoming/outgoing packets, connection management, and more.
 Zero-Copy I/O: Linux supports zero-copy network I/O, which allows data to be transferred between user-
space buffers and network interfaces without being copied into kernel space, improving performance in
network-intensive applications.

11. I/O Performance Tuning

Linux provides several ways to tune I/O performance:

 sysctl parameters: Parameters like vm.swappiness, vm.dirty_ratio, and vm.dirty_background_ratio can be adjusted to control memory management and swap behavior.
 I/O Scheduler: You can change the I/O scheduler for block devices using the echo command to adjust
parameters in /sys/block/<device>/queue/scheduler.
 Mount options: File system performance can be improved with specific mount options like noatime (to
prevent updates to access times) or barrier=0 (disabling write barriers).
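The scheduler file under /sys/block lists the available schedulers with the active one in brackets (for example "mq-deadline kyber [bfq] none" on recent kernels; the names offered vary by kernel version). A small parser sketch for that format (the sample string is illustrative):

```python
import re

def active_scheduler(text):
    """Return the bracketed (active) entry from a queue/scheduler file."""
    match = re.search(r"\[([\w-]+)\]", text)
    return match.group(1) if match else None

# A sample of the /sys/block/<device>/queue/scheduler format.
sample = "mq-deadline kyber [bfq] none"
print("active scheduler:", active_scheduler(sample))
```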

Summary
Linux I/O is designed to be efficient, flexible, and scalable. I/O operations in Linux are abstracted through a unified
file-based interface, allowing for transparent interaction with hardware devices. Through advanced features like
buffering, caching, I/O scheduling, and non-blocking I/O, Linux optimizes performance for a wide range of use
cases, from interactive applications to high-performance servers. Additionally, Linux supports various file systems,
direct I/O, memory-mapped files, and networking, enabling efficient data handling across different types of
devices.

Linux File System


Operating systems, the software that powers your computer, rely on a crucial element known as the file system.
Think of it as a virtual organizational tool that manages, stores, and retrieves your data efficiently. In the Linux
world, a diverse range of file systems has emerged, each crafted to address specific needs and preferences. This
article aims to simplify the intricacies of Linux file systems, guiding beginners through their layers, characteristics,
and implementations. By shedding light on these nuances, we empower users to make informed choices in
navigating the dynamic landscape of Linux operating systems.
What is the Linux File System
The Linux file system is a multifaceted structure comprised of three essential layers. At its foundation, the Logical
File System serves as the interface between user applications and the file system, managing operations like
opening, reading, and closing files. Above this, the Virtual File System facilitates the concurrent operation of
multiple physical file systems, providing a standardized interface for compatibility. Finally, the Physical File
System is responsible for the tangible management and storage of physical memory blocks on the disk, ensuring
efficient data allocation and retrieval. Together, these layers form a cohesive architecture, orchestrating the
organized and efficient handling of data in the Linux operating system.
In this article, we will be focusing on the file system for hard disks on a Linux OS and discuss which type of file
system is suitable. The architecture of a file system comprises three layers mentioned below.
Linux File System Structure
A file system mainly consists of 3 layers. From top to bottom:
1. Logical File System:
The Logical File System acts as the interface between the user applications and the file system itself. It facilitates
essential operations such as opening, reading, and closing files. Essentially, it serves as the user-friendly front-end,
ensuring that applications can interact with the file system in a way that aligns with user expectations.
2. Virtual File System:
The Virtual File System (VFS) is a crucial layer that enables the concurrent operation of multiple instances of
physical file systems. It provides a standardized interface, allowing different file systems to coexist and operate
simultaneously. This layer abstracts the underlying complexities, ensuring compatibility and cohesion between
various file system implementations.
3. Physical File System:
The Physical File System is responsible for the tangible management and storage of physical memory blocks on the
disk. It handles the low-level details of storing and retrieving data, interacting directly with the hardware
components. This layer ensures the efficient allocation and utilization of physical storage resources, contributing to
the overall performance and reliability of the file system.

Architecture Of a File System


Characteristics of a File System
Space Management: how the data is stored on a storage device. Pertaining to the memory blocks and fragmentation
practices applied in it.
Filename: a file system may place restrictions on file names, such as name length, the use of special characters, and case sensitivity.
Directory: the directories/folders may store files in a linear or hierarchical manner while maintaining an index table
of all the files contained in that directory or subdirectory.
Metadata: for each file stored, the file system stores various information about that file’s existence such as its data
length, its access permissions, device type, modified date-time, and other attributes. This is called metadata.
Utilities: file systems provide features for initializing, deleting, renaming, moving, copying, backup, recovery, and
control access of files and folders.
Design: due to their implementations, file systems have limitations on the amount of data they can store.
Some important terms:
1) Journaling:
Journaling file systems keep a log, called the journal, of changes that have been made to files but not yet permanently committed to disk, so that in case of a system failure the lost changes can be recovered.
2) Versioning:
Versioning file systems store previously saved versions of a file, i.e., the copies of a file are stored based on
previous commits to the disk in a minutely or hourly manner to create a backup.
3) Inode:
The index node is the representation of any file or directory based on the parameters – size, permission, ownership,
and location of the file and directory.
Now we come to the part where we discuss the various implementations of the file system in Linux for disk storage devices.
Linux File Systems:
Note: Cluster and distributed file systems will not be included for simplicity.

Types of File System in Linux


1) ext (Extended File System):
Implemented in 1992, it is the first file system specifically designed for Linux. It is the first member of the ext
family of file systems.
2) ext2:
The second ext, developed in 1993, is a non-journaling file system that is preferred for flash drives and SSDs. It introduced separate timestamps for access, inode modification, and data modification. Because it is not journaled, recovery after an unclean shutdown requires a full file system check, which makes booting slow in that case.
3) Xiafs:
Also developed in 1993, this file system was less powerful and functional than ext2 and is no longer in use
anywhere.
4) ext3:
The third ext, developed in 1999, is a journaling file system. It is reliable and, unlike ext2, prevents long delays at system boot when the file system is in an inconsistent state after an unclean shutdown. Other improvements over ext2 include online file system growth and HTree indexing for large directories.
5) JFS (Journaled File System):
First created by IBM in 1990, the original JFS was taken to open source to be implemented for Linux in 1999. JFS
performs well under different kinds of load but is not commonly used anymore due to the release of ext4 in 2006
which gives better performance.
6) ReiserFS:
It is a journaling file system developed in 2001. Despite its earlier issues, it introduced tail packing as a scheme to reduce internal fragmentation. It uses a B+ tree that gives better than linear time for directory lookups and updates. It was the default file system in SUSE Linux from version 6.4 until SUSE switched to ext3 in 2006 with version 10.2.
7) XFS:
XFS is a 64-bit journaling file system and was ported to Linux in 2001. It now acts as the default file system for
many Linux distributions. It provides features like snapshots, online defragmentation, sparse files, variable block
sizes, and excellent capacity. It also excels at parallel I/O operations.
8) SquashFS:
Developed in 2002, this read-only, compressed file system is used mainly in embedded systems and live images where low overhead is needed.
9) Reiser4:
It is an incremental successor to ReiserFS, developed in 2004. However, it was never widely adopted and is not supported on many Linux distributions.
10) ext4:
The fourth ext, developed in 2006, is a journaling file system. It is backward compatible with ext3 and ext2 and provides several other features, including persistent pre-allocation, an unlimited number of subdirectories, metadata checksumming, and large file sizes. ext4 is the default file system for many Linux distributions and can also be accessed from Windows and macOS using third-party drivers.
11) btrfs (Better/Butter/B-tree FS):
It was developed in 2007. It provides many features such as snapshotting, drive pooling, data scrubbing, self-
healing and online defragmentation. It is the default file system for Fedora Workstation.
12) bcachefs:
This is a copy-on-write file system, first announced in 2015, with the goal of performing better than btrfs and ext4. Its features include full file-system encryption, native compression, snapshots, and 64-bit checksumming.
13) Others:
Linux also supports file systems from other operating systems, such as NTFS and exFAT, though these do not support standard Unix permission settings. They are mostly used for interoperability with other operating systems.
ext4 in Linux File System
Ext4 was designed to be backward compatible with ext3 and ext2, its previous generations. It’s better than the
previous generations in the following ways:
It provides a large file system as described in the table above.
Utilizes extents, which improve large-file performance and reduce fragmentation.
Provides persistent pre-allocation, which guarantees space allocation and helps keep files contiguous on disk.
Delayed allocation improves performance and reduces fragmentation by effectively allocating larger amounts of
data at a time.
It uses HTree indices to allow an unlimited number of subdirectories.
Performs journal checksumming which allows the file system to realize that some of its entries are invalid or out of
order after a crash.
Support for time-of-creation timestamps and improved timestamps with finer granularity.
Transparent encryption.
Allows inode tables to be initialized in the background, which speeds up file system creation; this process is called lazy initialization.
Enables write barriers by default, which ensures that file system metadata is correctly written and ordered on disk, even when write caches lose power.
Some features are still under development, such as metadata checksumming, first-class quota support, and large allocation blocks.
However, ext4 has some limitations. ext4 does not guarantee the integrity of your data: if data is corrupted while already on disk, it has no way of detecting or repairing such corruption. ext4 also does not support secure deletion of files, which would overwrite file contents on deletion; as a result, sensitive data can end up in the file-system journal.
Security in Linux

Security in Linux is a multi-layered, complex, and highly configurable aspect of the operating system that involves
protecting the system from unauthorized access, ensuring data integrity, managing user permissions, and defending
against various types of cyber threats. Linux security involves a combination of kernel-level protections, user-space
tools, and best practices that work together to create a secure environment. Below is an overview of key security
features and tools in Linux:

1. User Authentication and Authorization

 User Accounts: In Linux, every process runs under a user account. User accounts are authenticated via
passwords, public keys, or other authentication mechanisms. The /etc/passwd and /etc/shadow files store
user account details, while /etc/group defines group memberships.
 Pluggable Authentication Modules (PAM): PAM is a framework that allows system administrators to
configure different authentication methods for applications, such as two-factor authentication, biometric
authentication, or smart card authentication.
 sudo: The sudo command allows authorized users to execute commands with elevated (root) privileges,
reducing the need for users to log in as the root user directly. This helps limit the risk of unauthorized
access.
 Root User and Least Privilege: In Linux, the root user has unrestricted access to the entire system.
However, Linux security principles often emphasize the least privilege model, where users and processes
are given only the minimal level of access they need to perform their tasks.
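The account databases mentioned above can be queried with standard tools; a minimal sketch:

```shell
getent passwd root    # the root account's record (fields from /etc/passwd)
id root               # numeric uid/gid and group memberships
getent group root     # group record from /etc/group
# /etc/shadow (password hashes) is deliberately unreadable by normal users:
ls -l /etc/shadow 2>/dev/null || true
```

`getent` consults the system's name-service configuration, so it also covers accounts provided by LDAP or other NSS sources, not just the flat files.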

2. Access Control and File Permissions

Linux uses a permission-based model to control access to files and resources. This model is designed to prevent
unauthorized access to system files and user data.

 File Permissions: Every file and directory has three types of permissions: read (r), write (w), and execute
(x), and these are set for three categories of users: owner, group, and others. Permissions can be viewed and
set using the ls -l, chmod, chown, and chgrp commands.
 Access Control Lists (ACLs): ACLs provide more granular control over file permissions by allowing you
to define permissions for specific users or groups on a file or directory beyond the basic
owner/group/others model.
 Sticky Bit: The sticky bit is a special permission that can be set on directories. When set, only the file
owner (or root) can delete or rename files within the directory. It is often used on directories like /tmp to
prevent users from deleting other users' temporary files.
 Capabilities: Linux allows granular control over process privileges using capabilities. This provides a
more fine-grained control than simply granting full root privileges. For example, a process can be allowed
to bind to network ports without needing full root access.
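A short sketch of the permission commands in action, using a throwaway directory:

```shell
mkdir -p /tmp/perm-demo
echo "secret" > /tmp/perm-demo/data.txt

chmod 640 /tmp/perm-demo/data.txt    # rw- for owner, r-- for group, --- others
ls -l /tmp/perm-demo/data.txt        # mode shown as -rw-r-----
stat -c '%a %U:%G' /tmp/perm-demo/data.txt

# Sticky bit on a world-writable shared directory, as on /tmp itself:
chmod 1777 /tmp/perm-demo
stat -c '%a' /tmp/perm-demo          # prints 1777
```

With mode 1777, any user may create files in the directory, but only a file's owner (or root) may delete or rename them.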

3. File System Security

 SELinux (Security-Enhanced Linux): SELinux is a mandatory access control (MAC) system that
enforces security policies on processes and files. It uses labels to define security contexts for processes and
resources, preventing unauthorized access even by root users.
 AppArmor: Similar to SELinux, AppArmor is another MAC system that limits the capabilities of
applications based on predefined security profiles. AppArmor is generally considered easier to configure
than SELinux and is available in many distributions.
 Encryption:
o dm-crypt and LUKS: Linux supports full disk encryption using dm-crypt and LUKS (Linux
Unified Key Setup). This ensures that data on disk is encrypted and inaccessible without the proper
key.
o eCryptfs: A stacked cryptographic filesystem that allows users to encrypt individual directories or
files without encrypting the entire disk.
o GPG/PGP: For encrypting individual files and communications, tools like GPG (GNU Privacy
Guard) can be used for file encryption and email signing/encryption.
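Setting up dm-crypt/LUKS requires root and a block device, so as a small stand-in, the sketch below illustrates passphrase-based symmetric file encryption with OpenSSL (a tool not listed above; the passphrase and paths are made up for the example):

```shell
echo "confidential" > /tmp/plain.txt

# Encrypt: AES-256-CBC, with the key derived from the passphrase via PBKDF2
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in /tmp/plain.txt -out /tmp/plain.enc -pass pass:demo-passphrase

# Decrypt with the same passphrase
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in /tmp/plain.enc -out /tmp/roundtrip.txt -pass pass:demo-passphrase
```

GPG offers the equivalent workflow (plus public-key encryption), and LUKS applies the same idea transparently to a whole block device.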
4. Network Security

 Firewall (iptables/nftables): Linux includes powerful tools like iptables and the newer nftables to filter
network traffic. These tools allow administrators to configure rules for incoming and outgoing traffic based
on IP addresses, ports, protocols, and other factors.
 TCP Wrappers: This is a host-based access control mechanism that can be used to restrict access to
network services based on the source IP address.
 SELinux/AppArmor and Networking: SELinux and AppArmor can also enforce security policies on
network services and connections, limiting which processes can communicate over the network.
 SSH (Secure Shell): SSH is the primary method of remote access to Linux systems. It provides secure
encrypted connections for logging in, executing commands, and transferring files. Secure configurations
for SSH include using key-based authentication, disabling root login, and restricting access via firewalls.
 Fail2Ban: This tool is used to protect Linux systems from brute-force attacks by monitoring log files for
failed login attempts and temporarily blocking IP addresses that exhibit suspicious behavior.
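The SSH hardening measures above typically amount to a few directives in /etc/ssh/sshd_config; a hedged sketch (the user names are hypothetical, and available options vary by OpenSSH version):

```
# /etc/ssh/sshd_config (fragment)
# Disable direct root logins
PermitRootLogin no
# Require key-based authentication; disable password guessing
PasswordAuthentication no
PubkeyAuthentication yes
# Limit authentication attempts per connection
MaxAuthTries 3
# Hypothetical allow-list of login users
AllowUsers alice bob
```

Reload the daemon after editing (for example, `systemctl reload sshd`).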

5. Kernel Security

 Linux Security Modules (LSM): LSM is a framework that allows the implementation of security policies
in the Linux kernel. SELinux and AppArmor are examples of LSMs. LSMs provide mandatory access
control, allowing the kernel to enforce rules on process behavior.
 Kernel Hardening: Various techniques can be applied to "harden" the Linux kernel, making it more
resistant to exploits:
o ExecShield: Protects the system against buffer overflow attacks by marking certain memory areas
as non-executable.
o Stack Smashing Protection (SSP): A compiler feature that inserts canary values into function
stacks to detect and prevent stack buffer overflows.
o Kernel Address Space Layout Randomization (KASLR): Randomizes the memory address
space to make it harder for attackers to predict the location of important kernel data.
o Control Flow Integrity (CFI): A technique to prevent attackers from hijacking the control flow of
programs, making it more difficult to exploit vulnerabilities.

6. Security Auditing and Monitoring

 Auditd: The Linux audit daemon (auditd) provides a framework for auditing security-relevant events, such
as user logins, file accesses, and system calls. Audit logs are stored in /var/log/audit/audit.log and can be
analyzed to detect suspicious activity.
 Syslog: Syslog is a system for logging events generated by the operating system and applications. It can be
configured to forward logs to remote servers for centralized logging and monitoring.
 Logwatch: A tool that provides summaries of system logs to help administrators spot unusual activity or
security events.
 OSSEC: An open-source host-based intrusion detection system (HIDS) that can monitor system logs, file
integrity, and configuration changes, as well as detect rootkit installations and malware.

7. Security Updates and Patch Management

 Package Management: Most Linux distributions come with a package manager (e.g., apt for
Debian/Ubuntu, yum or dnf for RHEL/CentOS/Fedora) that allows easy installation and management of
software packages.
 Automatic Security Updates: It is crucial to keep the system up-to-date with the latest security patches.
Tools like unattended-upgrades (Debian/Ubuntu) or dnf-automatic (RHEL/CentOS/Fedora) allow
automatic installation of security updates.
 Security Advisories: Many distributions have mailing lists or websites that provide security advisories.
Administrators should monitor these to stay informed about vulnerabilities that affect their systems.

8. Backup and Disaster Recovery

 Regular Backups: Ensuring regular backups of important data and configurations is a crucial part of
maintaining a secure system. Tools like rsync, tar, or backup solutions like Bacula and Amanda can help
automate backups.
 Disaster Recovery Plans: Having a disaster recovery plan in place ensures that the system can be restored
to a secure state in the event of a breach or hardware failure.
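A minimal backup-and-restore round trip with `tar` (`rsync` is used similarly for incremental copies); all paths are illustrative:

```shell
mkdir -p /tmp/bk-src /tmp/bk-restore
echo "important config" > /tmp/bk-src/app.conf

# Archive the source directory into a compressed tarball
tar -czf /tmp/backup.tar.gz -C /tmp/bk-src .

# Restore the archive elsewhere and inspect the result
tar -xzf /tmp/backup.tar.gz -C /tmp/bk-restore
ls -l /tmp/bk-restore
```

In practice the archive would be copied off the machine, since a backup stored on the failed disk protects nothing.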
9. Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS)

 Snort: A widely-used open-source IDS/IPS that can detect and prevent a variety of network attacks, such
as SQL injection and buffer overflows.
 Suricata: A high-performance IDS/IPS engine that can be used to monitor network traffic for malicious
activity.

Summary

Linux security is multi-faceted and involves numerous tools, features, and best practices to protect systems from
unauthorized access, malicious code, and data breaches. Key areas of Linux security include user authentication,
file permissions, kernel hardening, network security, auditing, patch management, and backup strategies. Effective
security in Linux is achieved through a combination of proper configuration, regular updates, access control, and
monitoring, along with the use of tools such as SELinux, AppArmor, firewalls, and intrusion detection systems.

Android Operating System
Android is a mobile operating system based on a modified version of the Linux kernel and other open-source
software, designed primarily for touchscreen mobile devices such as smartphones and tablets. Android is developed
by a consortium of developers known as the Open Handset Alliance and commercially sponsored by Google. It was unveiled in November 2007, with the first commercial Android device, the HTC Dream, launched in September 2008.

It is free and open-source software; its source code is known as the Android Open Source Project (AOSP), primarily licensed under the Apache License. However, most Android devices ship with additional proprietary software pre-installed, mainly Google Mobile Services (GMS), which includes core apps such as Google Chrome, the digital distribution platform Google Play, and the associated Google Play Services development platform.

o About 70% of Android smartphones run Google's ecosystem, some with a vendor-customized user interface and software suite, such as TouchWiz and later One UI by Samsung, and HTC Sense.
o Competing Android ecosystems and forks include Fire OS (developed by Amazon) and LineageOS. However, the "Android" name and logo are trademarks of Google, which imposes standards restricting "uncertified" devices outside its ecosystem from using the Android branding.
Features of Android Operating System
Below are the following unique features and characteristics of the android operating system, such as:

1. Near Field Communication (NFC)

Most Android devices support NFC, which allows electronic devices to interact across short distances easily. The
main goal here is to create a payment option that is simpler than carrying cash or credit cards, and while the market
hasn't exploded as many experts had predicted, there may be an alternative in the works, in the form of Bluetooth
Low Energy (BLE).

2. Infrared Transmission

The Android operating system supports built-in infrared transmitters on devices that have them, allowing you to use your phone or tablet as a remote control.
3. Automation

Apps such as Tasker allow users to automate phone functions and control app behaviour.

4. Wireless App Downloads

You can browse and select apps on your PC using Google Play (formerly the Android Market) or third-party options like AppBrain, and they are automatically synced to your device over the air; no cable is required.

5. Storage and Battery Swap

Android phones have also offered unique hardware capabilities. Google's OS makes it possible to upgrade, replace, or remove a battery that no longer holds a charge. In addition, many Android phones come with SD card slots for expandable storage.

6. Custom Home Screens

While it's possible to hack certain phones to customize the home screen, Android comes with this capability from
the get-go. Download a third-party launcher such as Apex or Nova, and you can add gestures, new shortcuts, or even
performance enhancements for older-model devices.

7. Widgets

Apps are versatile, but sometimes you want information at a glance instead of having to open an app and wait for it
to load. Android widgets let you display just about any feature you choose on the home screen, including weather
apps, music widgets, or productivity tools that helpfully remind you of upcoming meetings or approaching
deadlines.

8. Custom ROMs

Because the Android operating system is open-source, developers can tweak the current OS and build their own versions, which users can download and install in place of the stock OS. Some are filled with features, while others change
the look and feel of a device. Chances are, if there's a feature you want, someone has already built a custom ROM
for it.

Architecture of Android OS
The Android architecture contains a number of different components to support any Android device's needs. Android software consists of an open-source Linux kernel with many C/C++ libraries exposed through application framework services.

Among all the components, the Linux kernel provides the main operating system functions for the smartphone, and the Dalvik Virtual Machine (DVM) provides a platform for running Android applications. The Android operating system is a stack of software components roughly divided into five sections and four main layers, as shown in the architecture diagram below.

o Applications
o Application Framework
o Android Runtime
o Platform Libraries
o Linux Kernel
1. Applications

An application is the top layer of the android architecture. The pre-installed applications like camera, gallery,
home, contacts, etc., and third-party applications downloaded from the play store like games, chat applications,
etc., will be installed on this layer.

It runs within the Android runtime with the help of the classes and services provided by the application framework.

2. Application framework

The Application Framework provides several important classes used to create an Android application. It provides a generic abstraction for hardware access and helps in managing the user interface and application resources. In general, it supplies the services through which applications are built.

It includes different types of services, such as activity manager, notification manager, view system, package
manager etc., which are helpful for the development of our application according to the prerequisite.

The Application Framework layer provides many higher-level services to applications in the form of Java classes.
Application developers are allowed to make use of these services in their applications. The Android framework
includes the following key services:

o Activity Manager: Controls all aspects of the application lifecycle and activity stack.
o Content Providers: Allows applications to publish and share data with other applications.
o Resource Manager: Provides access to non-code embedded resources such as strings, colour settings and
user interface layouts.
o Notifications Manager: Allows applications to display alerts and notifications to the user.
o View System: An extensible set of views used to create application user interfaces.
3. Android Runtime

Android Runtime environment contains components like core libraries and the Dalvik virtual machine (DVM). It
provides the base for the application framework and powers our application with the help of the core libraries.

Like Java Virtual Machine (JVM), Dalvik Virtual Machine (DVM) is a register-based virtual machine designed
and optimized for Android to ensure that a device can run multiple instances efficiently.

It depends on the Linux kernel layer for threading and low-level memory management. The core libraries enable us to implement Android applications using the standard Java or Kotlin programming languages.

4. Platform libraries

The Platform Libraries include various C/C++ core libraries and Java-based libraries such as Media, Graphics,
Surface Manager, OpenGL, etc., to support Android development.

o app: Provides access to the application model and is the cornerstone of all Android applications.
o content: Facilitates content access, publishing and messaging between applications and application
components.
o database: Used to access data published by content providers and includes SQLite database management classes.
o OpenGL: A Java interface to the OpenGL ES 3D graphics rendering API.
o os: Provides applications with access to standard operating system services, including messages, system
services and inter-process communication.
o text: Used to render and manipulate text on a device display.
o view: The fundamental building blocks of application user interfaces.
o widget: A rich collection of pre-built user interface components such as buttons, labels, list views, layout
managers, radio buttons etc.
o WebKit: A set of classes intended to allow web-browsing capabilities to be built into applications.
o media: The media library provides support for playing and recording audio and video formats.
o surface manager: It is responsible for managing access to the display subsystem.
o SQLite: It provides database support, and FreeType provides font support.
o SSL: Secure Sockets Layer is a security technology to establish an encrypted link between a web server
and a web browser.
5. Linux Kernel

Linux Kernel is the heart of the android architecture. It manages all the available drivers such as display, camera,
Bluetooth, audio, memory, etc., required during the runtime.

The Linux Kernel will provide an abstraction layer between the device hardware and the other android architecture
components. It is responsible for the management of memory, power, devices etc. The features of the Linux kernel
are:

o Security: The Linux kernel handles the security between the application and the system.
o Memory Management: It efficiently handles memory management, thereby providing the freedom to
develop our apps.
o Process Management: It manages the process well, allocates resources to processes whenever they need
them.
o Network Stack: It effectively handles network communication.
o Driver Model: It ensures that applications work properly on the device; hardware manufacturers are responsible for building their drivers into the Linux build.
Android Applications
Android applications are usually developed in the Java language using the Android Software Development Kit. Once developed, Android applications can be packaged easily and sold through a store such as Google Play, SlideME, Opera Mobile Store, Mobango, F-Droid, or the Amazon Appstore.

Android powers hundreds of millions of mobile devices in more than 190 countries around the world. It's the
largest installed base of any mobile platform and growing fast. Every day more than 1 million new Android devices
are activated worldwide.
Android Emulator
The Android Emulator is a tool used to develop and test Android applications without using any physical device.

The Android emulator provides most of the hardware and software features of a real mobile device, except the ability to place actual phone calls. It provides a variety of navigation and control keys, as well as a screen to display your application. The emulator uses Android Virtual Device (AVD) configurations. Once your application is running on it, it can use the services of the Android platform to invoke other applications, access the network, play audio and video, and store and retrieve data.

Advantages of Android Operating System
Android compares favourably with other platforms in many respects. Below are some important advantages of the Android OS:

o Android Google Developer: Android's greatest advantage is Google, which owns and develops the operating system. Google is one of the most trusted names on the web, and its backing gives users the confidence to buy Android devices.
o Android Users: Android is the most widely used mobile operating system, with more than a billion users, and it is also the fastest-growing operating system in the world. This large user base keeps increasing the number of applications and programs available for Android.
o Android Multitasking: Most of us appreciate this feature. Users can perform many tasks at once, opening several applications simultaneously and managing them with ease. Android's excellent UI makes multitasking simple.
o Google Play Store App: One of the best parts of Android is the availability of many applications. The Google Play store is reported to be the world's largest mobile app store, offering practically everything from movies to games and much more, all easily downloadable on an Android phone.
o Android Notification and Easy Access: Notifications for SMS messages, emails, and missed calls appear on the home screen and in the notification bar of an Android phone. The user can view all notifications in the top bar, and the UI makes it easy to view more than five notifications at once.
o Android Widget: The Android operating system offers many widgets. Widgets improve the user experience and help with multitasking; you can add any widget to your home screen depending on the features you need, and view notifications, messages, and much more without opening applications.
Disadvantages of Android Operating System
We know that the Android operating system is hugely popular with users nowadays, but it also has a few weaknesses. Below are the main disadvantages of the Android operating system:

o Android Advertisement pop-ups: Applications are freely available in the Google Play store, but many of them display large numbers of advertisements in the notification bar and inside the application. These ads are intrusive and can be a real nuisance when using an Android phone.
o Android requires a Google ID: You cannot fully use an Android device without a Google account (email ID and password). A Google ID is also useful for unlocking an Android phone.
o Android Battery Drain: Android handsets are considered among the most battery-consuming devices, because many processes run in the background and drain the battery. It is difficult to stop these applications, as the majority of them are system applications.
o Android Malware/Virus/Security: Android devices are not considered as safe as some other platforms. Hackers keep trying to steal user data; it is easy to target an Android phone, and millions of attack attempts are made on Android phones every day.