Module 4 File System
File structure
Logical storage unit
Collection of related information
File system resides on secondary storage (disks)
Provides user interface to storage, mapping logical to physical
Provides efficient and convenient access to disk by allowing data
to be stored, located, and retrieved easily
Disk provides in-place rewrite and random access
I/O transfers performed in blocks of sectors (usually 512 bytes)
File control block – storage structure consisting of information
about a file
Device driver controls the physical device
File system organized into layers
Layered File System
File System Layers
Device drivers manage I/O devices at the I/O control layer
Given commands like “read drive1, cylinder 72, track 2, sector 10,
into memory location 1060”, outputs low-level, hardware-specific
commands to the hardware controller
Basic file system, given a command like “retrieve block 123”, translates
it to the device driver
Also manages memory buffers and caches (allocation, freeing,
replacement)
Buffers hold data in transit
Caches hold frequently used data
File organization module understands files, logical addresses, and
physical blocks
Translates logical block # to physical block #
Manages free space, disk allocation
File System Layers (Cont.)
Logical file system manages metadata information
Translates file name into file number, file handle, location by
maintaining file control blocks (inodes in UNIX)
Directory management
Protection
Layering useful for reducing complexity and redundancy, but adds
overhead and can decrease performance
Logical layers can be implemented by any coding method
according to OS designer
File System Layers (Cont.)
Many file systems, sometimes many within an operating system
Each with its own format (CD-ROM is ISO 9660; Unix has
UFS, FFS; Windows has FAT, FAT32, NTFS as well as floppy,
CD, DVD, and Blu-ray formats; Linux has more than 40 types, with
the extended file systems ext2 and ext3 leading; plus
distributed file systems, etc.)
New ones still arriving – ZFS, GoogleFS, Oracle ASM, FUSE
File-System Implementation
We have system calls at the API level, but how do we implement
their functions?
On-disk and in-memory structures
Boot control block contains info needed by system to boot OS from
that volume
Needed if volume contains OS, usually first block of volume
Volume control block (superblock, master file table) contains
volume details
Total # of blocks, # of free blocks, block size, free block pointers
or array
Directory structure organizes the files
Names and inode numbers, master file table
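To make these on-disk structures concrete, here is a minimal C sketch of a volume control block; the field names and sizes are illustrative assumptions, not any real file system's layout.

    #include <stdint.h>

    /* Hypothetical on-disk volume control block (superblock) layout.
     * Field names and sizes are illustrative only. */
    struct volume_control_block {
        uint32_t magic;            /* identifies the file-system type */
        uint32_t block_size;       /* bytes per block */
        uint64_t total_blocks;     /* total # of blocks in the volume */
        uint64_t free_blocks;      /* current # of free blocks */
        uint64_t free_list_head;   /* pointer to the first free-block structure */
        uint64_t root_dir_inode;   /* where the root directory starts */
    };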
File-System Implementation (Cont.)
Per-file File Control Block (FCB) contains many details about the
file
inode number, permissions, size, dates
NTFS stores the info in its master file table using relational database
structures
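A per-file FCB can be sketched as a small C struct; the fields below follow the bullets above (inode number, permissions, size, dates) plus a few block pointers, and are illustrative only.

    #include <stdint.h>
    #include <time.h>

    /* Hypothetical file control block (roughly a simplified UNIX inode). */
    struct file_control_block {
        uint64_t inode_number;           /* unique id of the file on this volume */
        uint16_t mode;                   /* permission and file-type bits */
        uint16_t link_count;             /* # of directory entries pointing here */
        uint32_t owner_uid;              /* file owner */
        uint64_t size;                   /* file size in bytes */
        time_t   atime, mtime, ctime;    /* access / modify / change dates */
        uint64_t direct[12];             /* direct data-block pointers */
        uint64_t indirect;               /* single-indirect index block */
    };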
In-Memory File System Structures
The API is to the VFS interface, rather than any specific type of
file system
Virtual File System Implementation
For example, Linux has four object types:
inode, file, superblock, dentry
VFS defines set of operations on the objects that must be implemented
Every object has a pointer to a function table
Function table has addresses of routines to implement that
function on that object
For example:
• int open(. . .)—Open a file
• int close(. . .)—Close an already-open file
• ssize_t read(. . .)—Read from a file
• ssize_t write(. . .)—Write to a file
• int mmap(. . .)—Memory-map a file
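The function-table idea can be sketched in C as a struct of function pointers that each file-system type fills in; the names below echo the operations listed above but form a simplified, hypothetical interface rather than the actual Linux VFS definitions.

    #include <sys/types.h>

    struct vfs_file;   /* opaque per-open-file object */

    /* Hypothetical VFS operation table: each file-system type supplies its
     * own implementations, and every file object points at one of these. */
    struct vfs_file_ops {
        int     (*open)(struct vfs_file *f);
        int     (*close)(struct vfs_file *f);
        ssize_t (*read)(struct vfs_file *f, void *buf, size_t len, off_t off);
        ssize_t (*write)(struct vfs_file *f, const void *buf, size_t len, off_t off);
        int     (*mmap)(struct vfs_file *f /* , mapping parameters ... */);
    };

    struct vfs_file {
        const struct vfs_file_ops *ops;  /* pointer to the function table */
        void *fs_private;                /* file-system-specific state */
    };

    /* The VFS layer dispatches through the table without knowing the fs type. */
    static ssize_t vfs_read(struct vfs_file *f, void *buf, size_t len, off_t off)
    {
        return f->ops->read(f, buf, len, off);
    }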
Directory Implementation
Linear list of file names with pointer to the data blocks
Simple to program
Time-consuming to execute
Linear search time
Could keep ordered alphabetically via linked list or use B+
tree
Hash Table – linear list with hash data structure
Decreases directory search time
Collisions – situations where two file names hash to the same
location
Only good if entries are fixed size, or use chained-overflow
method
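To make the trade-off concrete, here is a hedged C sketch of both directory lookup strategies; the entry layout, bucket count, and hash function are assumptions chosen for illustration.

    #include <stdint.h>
    #include <string.h>

    #define NAME_MAX_LEN 255
    #define HASH_BUCKETS 128

    struct dir_entry {
        char name[NAME_MAX_LEN + 1];
        uint64_t inode_number;     /* leads to the file's data blocks via its FCB */
        struct dir_entry *next;    /* list link / chained overflow */
    };

    /* Linear list: simple to program, O(n) search time. */
    struct dir_entry *lookup_linear(struct dir_entry *list, const char *name)
    {
        for (struct dir_entry *e = list; e != NULL; e = e->next)
            if (strcmp(e->name, name) == 0)
                return e;
        return NULL;
    }

    /* Hash table with chained overflow: near O(1) search;
     * collisions only lengthen one bucket's chain. */
    static unsigned hash_name(const char *name)
    {
        unsigned h = 5381;                     /* djb2-style string hash */
        while (*name)
            h = h * 33 + (unsigned char)*name++;
        return h % HASH_BUCKETS;
    }

    struct dir_entry *lookup_hashed(struct dir_entry *buckets[HASH_BUCKETS],
                                    const char *name)
    {
        for (struct dir_entry *e = buckets[hash_name(name)]; e != NULL; e = e->next)
            if (strcmp(e->name, name) == 0)
                return e;
        return NULL;
    }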
Allocation Methods - Contiguous
Mapping a logical address LA with 512-byte blocks: Q = LA / 512, R = LA mod 512
Block to be accessed = Q + starting address of the file; displacement into block = R
Many newer file systems (e.g., Veritas File System) use a modified
contiguous allocation scheme
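Worked as code, the contiguous mapping is a single division; the helper below is a minimal sketch assuming the 512-byte block size used above.

    #include <stdint.h>

    #define BLOCK_SIZE 512

    /* Contiguous allocation: physical block = start + LA / 512,
     * displacement into that block = LA % 512. */
    void map_contiguous(uint64_t start_block, uint64_t la,
                        uint64_t *phys_block, uint32_t *offset)
    {
        uint64_t q = la / BLOCK_SIZE;   /* Q: which block of the file */
        uint32_t r = la % BLOCK_SIZE;   /* R: displacement into the block */
        *phys_block = start_block + q;
        *offset = r;
    }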
Allocation Methods - Linked
Mapping a logical address LA (each 512-byte block holds 511 bytes of data
plus a pointer to the next block): Q = LA / 511, R = LA mod 511
Block to be accessed is the Qth block in the linked chain of blocks
representing the file
Displacement into block = R + 1
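Since only 511 of the 512 bytes in each block hold data (the rest store the next-block pointer), reaching the Qth block means following Q links from the file's first block. A minimal sketch, where get_next_block() is an assumed helper that reads a block's pointer field:

    #include <stdint.h>

    #define DATA_PER_BLOCK 511   /* 512-byte block minus the pointer field */

    /* Assumed helper: reads the "next block" pointer stored in a block. */
    extern uint64_t get_next_block(uint64_t block_no);

    /* Linked allocation: follow Q = LA / 511 links, then use
     * R = LA % 511 as the displacement past the pointer field. */
    uint64_t map_linked(uint64_t first_block, uint64_t la, uint32_t *offset)
    {
        uint64_t q = la / DATA_PER_BLOCK;
        uint32_t r = la % DATA_PER_BLOCK;
        uint64_t block = first_block;
        while (q-- > 0)
            block = get_next_block(block);   /* one chained lookup per block */
        *offset = r + 1;   /* skip the pointer stored at the start of the block */
        return block;
    }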
Indexed allocation
Each file has its own index block(s) of pointers to its data blocks
Logical view: index table of pointers to the file's data blocks
Example of Indexed Allocation
Indexed Allocation (Cont.)
Random access
Mapping a logical address LA in a file of unbounded length (512-byte blocks):
Linked scheme – link blocks of the index table (no limit on size):
Q1 = LA / (512 x 511), R1 = LA mod (512 x 511)
Q1 = block of the index table; R1 is used as follows:
Q2 = R1 / 512, R2 = R1 mod 512
Q2 = displacement into the block of the index table, R2 = displacement into the block of the file
Two-level index (maximum file size is 512^3):
Q1 = LA / (512 x 512), R1 = LA mod (512 x 512)
Q1 = displacement into the outer index; R1 is used as follows:
Q2 = R1 / 512, R2 = R1 mod 512
Q2 = displacement into the block of the index table, R2 = displacement into the block of the file
More index blocks than can be addressed with 32-bit file pointer
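The two-level mapping above can be coded directly; index_entry() below is an assumed helper that returns slot q of a given index block, and the 512 x 512 geometry matches the formulas above.

    #include <stdint.h>

    #define BLOCK_SIZE     512
    #define PTRS_PER_BLOCK 512   /* entries per index block in this sketch */

    /* Assumed helper: returns the block number stored at slot q
     * of the given index block. */
    extern uint64_t index_entry(uint64_t index_block, uint32_t q);

    /* Two-level indexed allocation: Q1 picks the outer-index slot,
     * Q2 picks the slot in the second-level index block,
     * R2 is the displacement into the data block. */
    uint64_t map_two_level(uint64_t outer_index_block, uint64_t la,
                           uint32_t *offset)
    {
        uint64_t q1 = la / (BLOCK_SIZE * PTRS_PER_BLOCK);
        uint64_t r1 = la % (BLOCK_SIZE * PTRS_PER_BLOCK);
        uint64_t q2 = r1 / BLOCK_SIZE;
        uint32_t r2 = r1 % BLOCK_SIZE;

        uint64_t inner = index_entry(outer_index_block, (uint32_t)q1);
        uint64_t data_block = index_entry(inner, (uint32_t)q2);
        *offset = r2;
        return data_block;
    }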
Performance
Best method depends on file access type
Contiguous great for sequential and random
Linked good for sequential, not random
Declare access type at creation -> select either contiguous or
linked
Indexed more complex
Single block access could require 2 index block reads then
data block read
Clustering can help improve throughput, reduce CPU
overhead
Performance (Cont.)
Adding instructions to the execution path to save one disk I/O is
reasonable
Intel Core i7 Extreme Edition 990x (2011) at 3.46 GHz = 159,000
MIPS
http://en.wikipedia.org/wiki/Instructions_per_second
Typical disk drive at 250 I/Os per second
159,000 MIPS / 250 = 636 million instructions during one
disk I/O
Fast SSD drives provide 60,000 IOPS
159,000 MIPS / 60,000 = 2.65 million instructions during
one disk I/O
Free-Space Management
Bit vector or bit map (n blocks, numbered 0, 1, 2, …, n-1):
bit[i] = 1 if block[i] is free
bit[i] = 0 if block[i] is occupied
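A bit map is typically kept as an array of words; the sketch below marks blocks free or occupied and scans for the first free block, matching the bit[i] definition above (array size and word width are illustrative).

    #include <stdint.h>

    #define N_BLOCKS      4096
    #define BITS_PER_WORD 32

    static uint32_t free_map[N_BLOCKS / BITS_PER_WORD];   /* 1 = free, 0 = occupied */

    void mark_free(uint64_t i)     { free_map[i / BITS_PER_WORD] |=  (1u << (i % BITS_PER_WORD)); }
    void mark_occupied(uint64_t i) { free_map[i / BITS_PER_WORD] &= ~(1u << (i % BITS_PER_WORD)); }

    /* Scan word by word; the first non-zero word contains a free block. */
    int64_t first_free_block(void)
    {
        for (uint64_t w = 0; w < N_BLOCKS / BITS_PER_WORD; w++) {
            if (free_map[w] != 0) {
                for (int b = 0; b < BITS_PER_WORD; b++)
                    if (free_map[w] & (1u << b))
                        return (int64_t)(w * BITS_PER_WORD + b);
            }
        }
        return -1;   /* no free blocks */
    }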
Grouping
Modify linked list to store address of next n-1 free blocks in first
free block, plus a pointer to next block that contains free-block-
pointers (like this one)
Counting
Because space is frequently used and freed contiguously, with
contiguous allocation, extents, or clustering
Keep address of first free block and count of following free
blocks
Free space list then has entries containing addresses and
counts
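Under the counting scheme each free-space entry is an (address, count) pair; the sketch below frees a run of blocks and coalesces it with an adjacent entry when possible (structure and names are assumptions).

    #include <stdint.h>
    #include <stddef.h>

    /* One free-space list entry under the counting scheme:
     * 'count' consecutive free blocks starting at 'start'. */
    struct free_extent {
        uint64_t start;
        uint64_t count;
        struct free_extent *next;
    };

    /* Freeing blocks [start, start+count): extend an adjacent extent if
     * possible; otherwise a new entry would be inserted (omitted here). */
    void free_run(struct free_extent *list, uint64_t start, uint64_t count)
    {
        for (struct free_extent *e = list; e != NULL; e = e->next) {
            if (e->start + e->count == start) {   /* new run follows e */
                e->count += count;
                return;
            }
            if (start + count == e->start) {      /* new run precedes e */
                e->start = start;
                e->count += count;
                return;
            }
        }
        /* not adjacent to any existing extent: insert a new entry (not shown) */
    }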
Free-Space Management (Cont.)
Space Maps
Used in ZFS
Consider meta-data I/O on very large file systems
Full data structures like bit maps couldn’t fit in memory ->
thousands of I/Os
Divides device space into metaslab units and manages metaslabs
Given volume can contain hundreds of metaslabs
Each metaslab has associated space map
Uses counting algorithm
But records to log file rather than file system
Log of all block activity, in time order, in counting format
Metaslab activity -> load space map into memory in balanced-tree
structure, indexed by offset
Replay log into that structure
Combine contiguous free blocks into single entry
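The replay step can be illustrated with a much-simplified sketch: apply alloc/free records in time order to an in-memory free-space structure. The record format and helper names below are invented for illustration, and a sorted extent structure stands in for ZFS's balanced tree.

    #include <stdint.h>

    enum sm_op { SM_ALLOC, SM_FREE };

    /* One hypothetical space-map log record, in counting format. */
    struct sm_record {
        enum sm_op op;
        uint64_t offset;   /* first block of the run */
        uint64_t length;   /* number of blocks */
    };

    /* Assumed helpers over an in-memory free-extent structure
     * (ZFS indexes it by offset; any ordered map works for the sketch). */
    extern void extent_insert(uint64_t offset, uint64_t length);  /* add + coalesce */
    extern void extent_remove(uint64_t offset, uint64_t length);  /* carve out */

    /* Replay the metaslab's log in time order to rebuild current free space. */
    void replay_space_map(const struct sm_record *log, uint64_t n)
    {
        for (uint64_t i = 0; i < n; i++) {
            if (log[i].op == SM_FREE)
                extent_insert(log[i].offset, log[i].length);
            else
                extent_remove(log[i].offset, log[i].length);
        }
    }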
Efficiency and Performance
Performance
Keeping data and metadata close together
Buffer cache – separate section of main memory for frequently
used blocks
Synchronous writes sometimes requested by apps or needed by
OS
No buffering / caching – writes must hit disk before
acknowledgement
Asynchronous writes more common, buffer-able, faster
Free-behind and read-ahead – techniques to optimize
sequential access
Reads frequently slower than writes
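To illustrate the synchronous/asynchronous distinction with standard POSIX calls, the sketch below performs one buffered write and one O_SYNC write that is acknowledged only after reaching stable storage; the file names are hypothetical.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "log record\n";

        /* Asynchronous (default): write() returns once the data is in the
         * buffer cache; the OS flushes it to disk later. */
        int fd = open("async.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open async"); return 1; }
        write(fd, msg, strlen(msg));
        close(fd);

        /* Synchronous: O_SYNC makes each write() block until the data
         * (and required metadata) have reached the device. */
        int sfd = open("sync.log", O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
        if (sfd < 0) { perror("open sync"); return 1; }
        write(sfd, msg, strlen(msg));   /* acknowledged only after hitting disk */
        close(sfd);
        return 0;
    }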
Page Cache
A page cache caches pages rather than disk blocks using virtual memory
techniques and addresses
Routine I/O through the file system uses the buffer (disk) cache
A unified buffer cache uses the same page cache to cache both
memory-mapped pages and ordinary file system I/O to avoid
double caching
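The two access paths that can lead to double caching look like this in POSIX terms: one read() through ordinary file-system I/O and one access through a memory mapping. With a unified buffer cache both are served from the same cached pages. The file name is hypothetical and the snippet only sketches the two paths, not the cache itself.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.txt", O_RDONLY);   /* hypothetical file name */
        if (fd < 0) { perror("open"); return 1; }

        /* Path 1: ordinary file-system I/O (buffer/disk cache). */
        char buf[64];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n < 0) { perror("read"); close(fd); return 1; }
        buf[n] = '\0';

        /* Path 2: memory-mapped I/O (page cache via virtual memory). */
        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }
        char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* With a unified buffer cache, both paths hit the same cached pages. */
        printf("read(): %.20s\nmmap(): %.20s\n", buf, map);

        munmap(map, st.st_size);
        close(fd);
        return 0;
    }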
Three major layers of the NFS architecture:
UNIX file-system interface (based on the open, read, write, and close
calls, and file descriptors)
Virtual File System (VFS) layer – distinguishes local files from remote
ones, and local files are further distinguished according to their file-
system types
The VFS activates file-system-specific operations to handle local
requests according to their file-system types
Calls the NFS protocol procedures for remote requests