IT - Linux Course Book
A-150-1980
SYSTEMS ADMINISTRATION COURSE
INTRODUCTION TO UNIX/LINUX/SOLARIS
TRAINEE GUIDE
2 Mar 2017
NAME______________________________
CLASS NUMBER____________________
Safety/Hazard Awareness
All personnel involved in operation or maintenance of electronic equipment must be thoroughly familiar
with the electronic equipment safety precautions contained in Electronic Installation and Maintenance
Book, General, NAVSEA SE000-00-EIM-100, Section 3, and Naval Ships' Technical Manual, Chapter
300, S9086-KC-STM-010/CH-300, Section 2. In addition, attention is directed to the Navy Safety
Program Instructions, OPNAVINST 5100.19(series) and 5100.23(series), and the safety training
requirements contained in NETCINST 5100.1(series).
This equipment employs voltages that are dangerous and may be fatal if contacted by operating or
maintenance personnel. There are mechanical safety devices associated with this equipment that must be
maintained in a constant state of readiness to preclude causing injury to personnel and/or damage to
equipment. Extreme caution must be exercised when working with or handling this equipment. Some
components are extremely heavy. Rigid pre-inspections must be made to handling equipment to ensure
their safety and safety summaries must be read to the handling teams prior to conducting dangerous
evolutions. Hazard awareness dictates that this equipment must always be viewed as an integral part of a
system and not as a component. While every practical precaution has been incorporated into this
equipment, it is not possible or practical to try to list every condition or hazard that you may encounter.
Therefore, all operating or maintenance personnel must at all times observe, as a minimum, the
following:
submitted to the Naval Safety Center, Norfolk, VA, in accordance with OPNAVINST 5102.1(series).
This will ensure that this hazard will be investigated, publicized, or corrected, as required.
Additionally, SSPINST 3100.1(series) requires SWS personnel to submit special check TFRs when a
potential or actual unsafe condition is noticed that could cause injury to personnel and/or damage to
equipment. When a problem/failure occurs involving the safety of personnel or equipment and it cannot
be immediately resolved by command/technical assistance on-site, the TFR data shall be transmitted to
SSP and others by Naval Message.
This publication is now in your custody and is for your use while learning about Unix/Linux basics in
system administration.
Upon completion of this course of instruction, return this Trainee Guide to your instructor.
This Trainee Guide was prepared to guide your training on the Unix/Linux basics.
Several other pertinent publications will be referred to frequently during the course.
The effectiveness of this Trainee Guide depends upon the conscientious accomplishment of the reading
and study assignments in the reference publications.
The Unix/Linux basics course material is divided into lessons and sections presented in a logical
sequence. The knowledge and skills to be acquired are stated for each section so that you can check your
progress.
Testing for the Unix/Linux basics consists of knowledge exams and practical performance tests that will
be administered by the instructor.
SAFETY PRECAUTIONS
When performing procedures on equipment, you must adhere to all safety precautions, including Notes,
Warnings, and Cautions contained in the procedural documentation. Practice safety while learning and
while maintaining equipment. Take time to be safe.
SECURITY
In the event that classified information is added to the Trainee Guide as a result of trainee notes, the
Trainee Guide shall be marked and handled in accordance with the regulations of the latest edition of the
Department of the Navy (DoN) Information Security Program (ISP) Regulation (SECNAVINST
5510.36(series)).
TERMINAL OBJECTIVES
A. INTRODUCTION
Navy System Administrators may frequently encounter server systems utilizing some version
of Unix or Linux as their operating system. Basic familiarity with the environment, architecture
and command structure as well as an understanding of available resources is essential to
successful administration.
B. OBJECTIVE:
Upon successful completion of this section, you will be able to:
ENTER the correct Bourne Again Shell (BASH) command to identify the running OS
LABEL the layers of a Linux/Unix OS
LIST core functions of an OS
DIFFERENTIATE between kernel and user mode
IDENTIFY similarities between Linux/Unix and Windows kernel designs
IDENTIFY Linux/Unix OS processes from Power-On Self-test (POST) to user interaction
IDENTIFY the default command shell in Linux/Unix
DEMONSTRATE the process of creating a shell script using Visual Interface (vi).
EXECUTE a shell script from the command line
ENTER common Linux/Unix Command Line Interface (CLI) system commands
INTERPRET output of common Linux/Unix CLI system commands
TROUBLESHOOT error messages resulting from common Linux/Unix system commands
EMPLOY cron/crontabs to schedule daily system checks
PERFORM daily system checks by reviewing generated logs for error messages
LOCATE system configuration files/scripts
C. SECTION OUTLINE
1. Introduction
2. Unices
a. Linux/Unix Family Tree
3. Identify the running OS
4. Layers of an OS
5. Linux Kernel Primary Functions
6. Kernel and User Mode(s)
7. Similarities between Unices and Windows
8. Linux/Unix Boot Process
9. Default Command Shell
A. INTRODUCTION
This information sheet is designed to introduce the history and evolution of the Unix family of
operating systems and the basics of their architecture, including similarities with other types of
operating systems. It includes the boot process and procedures, basic commands for identifying
the operating system, troubleshooting error messages, an introduction to the vi editor, cron, and
daily system checks.
B. REFERENCES
C. INFORMATION
UNICES
a. Background and history: Versions 1 – 4 of Unix were released between ‘71 and ’73.
b. Unix evolved from an unnamed DEC OS for the PDP-7, a machine originally released in 1965.
c. Unix/Linux have a shared history, the origins of which can be traced as far back
as 1965.
3. As early Unix evolved, it split repeatedly. By 1980, major variants included BSD, Xenix
and System III. Of these, only BSD was not purely proprietary.
4. By 1990, variants included (among others):
a. NEXTSTEP (proprietary)
b. System V, R4
c. HP-UX
5. The forking of the various releases continued, producing many more variants, including the
now fully open-source BSD and Minix. Minix, like Linux, is considered a clone rather
than a true Unix: despite the similarities, its code base was written from scratch,
primarily to avoid intellectual property issues.
d. Used for all levels of computing, from “smart devices” to 498 of the world’s 500
fastest supercomputers.
f. The first prerelease of Linux appeared in 1991. It was not formally released as
version 1.0 until 1994.
g. Linux, like Minix, is considered a Unix clone. In practice, that means only that it was coded
from the ground up rather than evolving from an earlier code base. The command
structure and the hardware support mirror those of a “true” Unix.
16. Linux/Unix Family Tree: Illustrates the “family tree” and evolution of Linux/Unix from
the earliest PDP-7 OS through 2013. Intended to show the high degree of
interrelatedness and similarity between the various unices, Linux, Minix and even
Mac OS X (next page):
b. Similar command shell(s), with bash (Bourne Again Shell) the most common
across a wide range of OSes, including recent availability in Windows 10
(though not installed by default).
a. Unices tend to be more targeted to specific hardware, and more consistent within
a particular flavor of Unix. Differences between unices can be as great as the
differences between any one of them and linux.
b. Unices tend to have fewer filesystem options, most of which linux will support (to
varying degrees—some may be read-only) in addition to its native filesystem
formats.
d. Most core applications are similar, if not apparently identical, to the user.
e. It is easier for developers to target a single Unix; development for Linux is
similar to attempting to develop for all unices.
f. System administration is typically simpler on Unix, with each vendor developing its
own tool. Linux lacks a standard administration tool, though multiple options are
available.
g. This is essentially the same problem Microsoft encounters with Windows when
attempting to support a large potential variation in platforms.
h. Specific Differences?
1. Using bash:
a. How to identify the running OS from the command line in both Linux (CentOS)
and Unix (BSD).
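A minimal illustration (sample outputs are representative, not exact; the release file shown is CentOS-specific):
uname -srv                  # kernel name, release and version
                            # CentOS: Linux 3.10.0-... ; BSD: FreeBSD 10.3-RELEASE ...
cat /etc/centos-release     # CentOS only; prints the distribution release string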
LAYERS OF AN OS
1. Applications Layer (Ring 3): (Sometimes referred to as User Mode): User mode consists
of running user processes. Each user’s processes are protected from another user’s
processes. Processes and their threads start at user mode and since the user cannot talk
directly to hardware, all hardware requests are handled via the kernel. Access to kernel
routines is only possible through the system call interface. User processes cannot directly
access kernel memory space.
2. Shell: The Linux CLI is a separate program referred to as the shell. The shell provides
the user interface to the kernel. The shell, commands, and utilities are separate programs
that are part of the OS distribution, but not part of the kernel. There are literally dozens
of shells available, and several different shells can be used at any given time. Each shell
has its own specific set of features, and the default shell varies between distributions.
a. The shell presents each user with a prompt, executes user commands, and
supports a custom environment for each user. It is both an interpreter and a
scripting language.
3. Kernel (Ring 0): In kernel mode, the executing code has full, unrestricted access to the
hardware. It can execute any CPU instruction and reference any memory address. The
kernel performs a privileged task on behalf of the calling process (API/system call).
Tasks may include reading or writing a file, sending commands to a device, creating a
process, or managing memory. Kernel mode is normally reserved for the lowest-level,
most trusted functions of the OS to prevent kernel crashes. Since the kernel is the OS,
kernel crashes are catastrophic and will halt the entire PC.
4. The two modes:
a. Kernel (Ring 0)
b. User (Ring 3)
5. Ring structure:
a. Ring structure allows for differing levels of access, providing protection from any
number of possible sources.
6. Linux kernel primary functions:
a. Drivers/dynamic modules
b. Memory management
c. Network stack
d. Inter-process communication
e. Process management
7. User-mode components:
a. Libraries
b. Window manager
8. User Mode: Includes user-executable and controllable functions. Less protection from
outside threats, hardware access intermediated by kernel.
a. Another model for viewing the OS security structure is that of rings, with
decreasing degrees of access as we work outward from the innermost ring 0, to
(typically) ring 3.
b. Ring 0 is essentially the equivalent of Kernel Mode, with the least protection and
the most access to resources. OS runs in this mode during startup.
c. Ring 3 is equivalent to User mode. Used for most applications. Most protection
and least resource access. Interface to kernel handled with system calls (specifics
and terminology vary depending on OS and hardware).
e. “Rings 1 and 2 are unneeded, as device drivers can run in either ring.”
11. Further discussion of ring structure and its relationship to security, including the vestigial
rings 1 and 2:
a. wiki.osdev.org/Security
b. Set up applications,
d. Configure drivers,
e. Configure networking,
2. Comparison, including path location of key files, with the directory structure of unices
and Windows:
4. Commands performing the same task in Windows or in Unix, e.g. dir for a directory
listing in Windows is ls in Unix.
5. A good translator between Windows and Linux/Unix commands may be found at:
www.covingtoninnovations.com/mc/winforunix.html
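A few common equivalents (Windows command → Linux/Unix command):
dir → ls
copy → cp
move → mv
del → rm
type → cat
cls → clear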
6. The differences most likely to affect you as an administrator are the topology of the
system/filesystem layout, and
b. They can be made permanent using the ~/.bashrc or ~/.bash_profile files and a
text editor, like vi, gedit or nano.
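For example, a minimal sketch of making an alias permanent for the current user (the alias itself is illustrative):
echo "alias ll='ls -l'" >> ~/.bashrc    # append the alias to the per-user startup file
source ~/.bashrc                        # re-read the file so it takes effect immediately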
a. When the system is powered on, the Basic Input/Output System (BIOS) performs the
Power-On Self-Test (POST) and the master boot record (MBR) is executed.
3. MBR Phase
a. Contains the primary bootloader, partition table, and MBR validity check.
c. Usually GRUB.
6. Depending on OS, bootloader may be either Grand Unified Bootloader (GRUB) or Linux
Loader (LILO).
7. GRUB Phase
a. This is the single most important piece of software on the system. GRUB is
dynamically configurable with the capability to make changes during boot. These
changes include altering boot entries, selecting different kernels, and modifying
the initial RAM disk (initrd).
8. OS selection screen where if no choice is made, default kernel specified in the grub
configuration file (/boot/grub/grub.conf) is loaded.
9. Loads and executes kernel and initrd images – initial ramdisk; a scheme for loading a
temporary root file system into memory—used as part of the Linux startup process.
a. During the kernel phase, the Linux kernel sets up the rest of the system by
mounting the root file system and executing /sbin/init.
b. Mounts root file system specified in the root= entry in grub.conf file.
c. Executes /sbin/init daemon, the very first process started with a process ID (PID)
of 1.
a. Since the earliest releases of System V UNIX (Original AT&T version), the init
process (daemon) was used to place and maintain the system in a specified run
level or state.
b. At any point in time, the system is in one of up to eight possible run levels.
c. The number of run levels varies between different versions of UNIX. A run level
is a software configuration where only a select group of processes exists.
d. Processes spawned by init for each of these run levels are defined in /etc/inittab.
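For example, a default run level of 5 is set with a single line in /etc/inittab:
id:5:initdefault: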
12. systemd
c. An in-depth comparison of sys v init with systemd is well beyond the scope of
this course; however, given the trend of most distributions to make the change to
systemd, it must be addressed. Older versions of unices using sys v init are going
to predominate for the foreseeable future and will be our primary focus. Systemd
is supposed to be backwards compatible with sys v init.
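As a brief illustration of that compatibility (assuming a CentOS 7 system; the sshd service is used only as an example):
service sshd status      # SysV-style invocation, still accepted for compatibility
systemctl status sshd    # native systemd equivalent
systemctl get-default    # systemd analog of the SysV default run level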
a. 0 – Halt mode: all processes terminated; orderly halt (do not set as initdefault)
c. 2 – Multi-user mode: allows users to access the system; limited network services
2. The currently running shell can be determined with the “echo $SHELL”
command:
3. Linux:
4. Unix:
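Typical output is the path of the default shell binary: /bin/bash on CentOS; /usr/bin/bash on Solaris 11 (see the Solaris installation job sheet).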
a. In Linux environments, the binary in use is the open source variant vim (invoked
with vi).
a. Command Mode:
b. Input Mode:
b. The colon (:) moves the cursor to the bottom of the screen, where the rest of the command is entered.
(2) We can determine the available paths with the echo $PATH command:
(3) We will place our script in the traditional location for scripts everyone on
a system is allowed to use: /usr/local/bin
6. vi /<path>/<filename>
7. vi /usr/local/bin/myscript
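A minimal script body (the message text is illustrative):
#!/bin/bash
echo 'myscript ran successfully'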
14. Attempting to run the script without first making it executable results in a ‘Permission
denied’ error:
15. In order to run, we must first adjust the permissions to allow it to execute.
16. We can adjust permissions to allow anyone to execute with the chmod command,
specifically chmod 755 /usr/local/bin/myscript (we will be exploring the
details and fundamentals of Unix/Linux permissions and chmod command later in the
curriculum).
18. Neither owner, group nor world is allowed to execute (these are the three security
principals to whom permissions are assigned).
20. Now all principals have the execute permission. Note the change in permissions: the
x present for all principals. Let’s try it:
21. Success!
22. We can also view the script (file) from the command line with the cat command:
23. man
a. Man pages – initiates the online manual/help. Provides detailed information on all
standard commands and features. <space> to page, <enter> for line-by-line
display, q (or Ctrl+C) to exit:
b. Truncated output:
2. clear
3. mkdir
4. Command elements:
5. ls
6. ls -a
b. Note appearance of hidden files/directories starting with a “.” using the -a switch,
with examples of results:
7. ls -l
a. Long/verbose format:
8. ls -al
a. cd
c. cat file1 file2 >newfile ; combine file1 and file2 into a new file
a. more filename
15. cp – copy
a. cp source destination
b. cp file1 file2
c. cp /root/Documents/file1 /root/Documents2/file1
a. mv source destination
18. du – disk usage; summarizes and displays disk usage in 512-byte blocks. For
/root/Documents:
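For example (the -h option is a GNU/BSD extension and may be absent on older unices):
du /root/Documents       # per-directory usage in blocks
du -sh /root/Documents   # single human-readable summary total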
22. su – superuser; use alternate credentials (not necessarily root); similar to ‘runas’ in
Windows cmd.
23. Bootup/shutdown.
c. shutdown command:
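Representative invocations on Linux (root privileges required):
shutdown -h now                        # halt the system immediately
shutdown -r +5 'Rebooting in 5 min'    # reboot after a 5-minute warning to logged-in users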
a. Used for backup & restore; will be covered in much greater detail in Module 4.
b. Wrong switch/argument
c. Wrong command
2. Use man command or command --help to determine correct switches and usage for a
given command.
(1) 45 23 * * 6 /home/oracle/scripts/export_dump.sh
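Reading the entry field by field:
45    minute (45)
23    hour (2300)
*     every day of the month
*     every month
6     day of week (Saturday)
That is, export_dump.sh runs every Saturday at 23:45.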
a. /error – the search command (in vi, less, or more) to locate the string “error” in a log.
9. Note that many GUI-based tools are available for parsing logs, and the use of any GUI-
based text editor (gedit) makes the process easier…
SYSTEM CONFIGURATION FILES/SCRIPTS
1. Solaris OS default run level (3) scripts in:
a. /etc/rc2.d/
b. /etc/rc3.d
2. Linux default run level (5) scripts in:
a. /etc/rc.d/rc5.d
3. Service initialization scripts in:
a. /etc/init.d
4. Directory - Description
a. /usr/lib/systemd/system/ - Systemd unit files distributed with installed packages.
b. /run/systemd/system/ - Systemd unit files created at run time. This directory takes
precedence over the directory with installed service unit files.
a. Note the single line at the end that actually does something.
A. INTRODUCTION
This job sheet details the procedures for the initial setup of a workstation and VMware to support
the labs for the Linux/Unix basics module.
B. EQUIPMENT
Trainee Workstation with VMWare workstation installed and a copy of the 64 bit CentOS
Version 7 DVD installer .iso image (official filename: CentOS-7-x86_64-DVD-1511.iso).
C. REFERENCES
D. SAFETY PRECAUTIONS
E. JOB STEPS
Setup: Trainees are logged on to the Local machine as student with an empty copy (no virtual machines
present in console) of VMWare Workstation running. The recommended directory structure for virtual
machines is to create a subdirectory under “C:\users\%user%\My Documents\VMS\”. All pathnames
used will be relative to the VMS hierarchy, as in ..\VMS\ISO or ..\VMS\CentOSSrv1. It is also
recommended that the VMS hierarchy be compressed using the NTFS compression option for
performance reasons unless you’re lucky enough to be running your VMs on an SSD. Your CentOS 7
ISO image should be placed in the ..\VMS\ISO subdirectory.
2. In the New Virtual Machine Wizard Dialog, on the Welcome screen, select the Custom (advanced)
radio button and next.
3. In the Hardware Compatibility dialog, select Workstation 9.0 (should be default) and next.
4. On the Guest Operating System Installation screen, select Installer disc image file (iso), and browse
to ..\VMS\ISO and select your .iso file. (You may receive a “Could not detect which operating
system is in this disc image.” message. This is expected.) Select next to continue.
5. On the Select a Guest Operating System screen, select the Linux radio button, and on the dropdown,
select CentOS 64-bit, and next.
6. On the Name the Virtual Machine screen, change the default to CentOSSRV-1, click the browse
button, browse to and select ..\VMS, adding CentOSSRV-1 to the end, so it will appear as
“..\VMS\CentOSSRV-1” for the location.
7. On the Processor Configuration screen, select a single processor and 2 cores, and next.
8. On the Memory for the Virtual Machine screen, change the entry to 2048MB (assuming you’re
running a workstation with at least 8GB of RAM. If you’re on an older machine with less than 8GB,
reduce this value to no less than 1024MB). Select next.
10. On the Network Type screen, select “Use host-only networking” radio button and next.
11. On the Select I/O Controller Types screen, leave at defaults and select next.
12. On the Select a Disk Type screen, leave at defaults and select next.
13. On the Select a Disk screen, ensure “Create a new virtual disk” is selected.
14. On the Specify Disk Capacity screen, leave at defaults (20GB split into multiple files) and select
next.
15. On the Specify Disk File screen, leave at defaults and select next.
17. The new virtual machine will appear in VMWare console, labeled “CentOS 64-bit”. Select “Edit
Virtual machine settings” on the left side of the screen, then the options tab. Ensure General is
selected, and under Virtual machine name, change to CentOSSRV-1, and click OK. The name at
the top of the console and the tab will change to the new machine name.
19. Click inside the VM window, select “I”, and press the Enter key again to begin installation.
20. When the CentOS installation screen appears, ensure the default of English (United States) is selected, and
click continue.
21. On the installation summary screen, select DATE & TIME and select time zone (Americas, New
York for EST, Chicago for CST, Los Angeles for PST), ensure date & time are set correctly and
click done. Scroll down and select SOFTWARE SELECTION, the “Server with a GUI” radio
button, and the DNS Name Server, FTP Server, File and Storage Server, and KDE checkboxes.
Click Done. Also under SYSTEM, click INSTALLATION DESTINATION and Done to
accept/confirm the defaults. Upon returning to the INSTALLATION SUMMARY screen, the Begin
Installation button will be enabled. Click it.
22. On the configuration screen, select ROOT PASSWORD, and set as P@ssw0rd (this is considered a
weak password, forcing you to click Done twice). The install process will continue. When complete,
the Reboot button will appear; click it to restart the VM. The reboot process will stop at the “Initial
setup of CentOS Linux 7 (Core)” screen for License acceptance. Enter 1 and c, then 2 and c and c
again to complete the acceptance process.
23. At the Welcome screen, click Next, at the Typing screen, click Next, at the Time Zone screen, enter
New York, Chicago, or Los Angeles, corresponding to your time zone. Select and click Next. On the
About You screen, enter Student One for Full Name and student1 for username and Next. At the Set
a Password screen, enter P@ssw0rd twice, and Next.
24. Click the Start using CentOS Linux button. After a brief pause, the desktop and the Getting Started
screen will be displayed.
25. Select the dropdown in the upper right of the screen next to the power button, select wired and
wired settings, click the button to enable the interface and click the gear button. Select IPv4 and
change the dropdown to manual; add 192.168.10.101/24 for SRV-1. Click apply and ensure the
Wired button is still on and close the dialog.
26. Pause your CentOSSRV-1 VM, take a snapshot and label it Initial Installation Complete.
Solaris Installation
A. INTRODUCTION
This job sheet details the procedures for installation of Solaris in a VMware virtual machine.
B. EQUIPMENT
Trainee Workstation with VMWare workstation installed and a copy of the Solaris x86 text
based 64 bit installer .iso image (official filename: sol-11_3-text-x86.iso).
C. REFERENCES
D. SAFETY PRECAUTIONS
E. JOB STEPS
Setup: Logged on to the Local machine as student with an empty copy (no virtual machines present in
console) of VMWare Workstation running. The recommended directory structure for virtual machines is to
create a subdirectory under “C:\users\%user%\My Documents\VMS\”. All pathnames used will be
relative to the VMS hierarchy, as in ..\VMS\ISO or ..\VMS\Solaris11-x64. It is also recommended that the
VMS hierarchy be compressed using the NTFS compression option for performance reasons unless
you’re lucky enough to be running your VMs on an SSD. The Solaris 11 ISO image should be placed in the
..\VMS\ISO subdirectory.
2. In the New Virtual Machine Wizard Dialog, on the Welcome screen, select the Custom (advanced)
radio button and next.
3. In the Hardware Compatibility dialog, select Workstation 9.0 (should be default) and next.
4. On the Guest Operating System Installation screen, select Installer disc image file (iso), and browse
to ..\VMS\ISO and select .iso file. A “Solaris 11 64-bit detected.” message should appear. Select
next to continue.
5. On the Name the Virtual Machine screen, change the default to “Solaris11-x64”, click the browse
button, browse to and select ..\VMS, adding “Solaris11-x64” to the end, so it will appear as
“..\VMS\Solaris11-x64” for the location.
6. On the Processor Configuration screen, select a single processor and 2 cores, and next.
7. On the Memory for the Virtual Machine screen, keep the default 2048MB (assuming a workstation
with at least 8GB of RAM. If it’s an older machine with less than 8GB, reduce this value to no less
than 1024MB). Select next.
8. On the Network Type screen, select “Use host-only networking” radio button and next.
9. On the Select I/O Controller Types screen, leave at defaults and select next.
10. On the Select a Disk Type screen, leave at defaults and select next.
11. On the Select a Disk screen, ensure “Create a new virtual disk” is selected.
12. On the Specify Disk Capacity screen, leave at defaults (16GB split into multiple files) and select
next.
13. On the Specify Disk File screen, leave at defaults and select next.
14. On the Ready to Create Virtual Machine screen, clear the Power on this virtual machine after
creation checkbox and click Finish.
15. The new virtual machine will appear in VMWare console, labeled “Solaris11-x64”. Select “Edit
Virtual machine settings” on the left side of the screen, then the options tab. Ensure General is
selected, and under Virtual machine name, change to Solaris11-x64, and click OK. The name at the
top of the console and the tab will change to the new machine name.
17. On the first “USB Keyboard” screen, type 27 and enter (DO NOT USE THE NUMBERPAD
KEYS).
19. At the Welcome to the Oracle Solaris installation menu screen, select 1 and enter.
20. At the Welcome to Oracle Solaris screen, read the messages and select F2 to continue.
21. At the Select discovery method for disks screen, accept the default of Local Disks (will be
highlighted), and F2 to continue.
23. At the GPT Partitions: 16.0GB scsi Boot screen, accept the default partitioning scheme and Use the
entire disk. F2 to continue.
24. At the System Identity screen, backspace over Solaris and replace with Solaris11-x64. F2 to
continue.
27. On the Time Zones: Locations screen, scroll down and select United States. F2 to continue.
28. On the Time Zone screen, scroll down to time zone, select and F2 to continue.
30. On Locale: Territory, accept the default of United States and F2 to continue.
31. On the Date and Time screen, ensure date and time values are correct, fix if necessary and F2 to
continue.
33. At the Users screen, enter a root password (P@ssw0rd recommended), and user information used to
log in following restarting Solaris VM. F2 to continue.
34. At the Support – Registration screen, enter a simulated email address and Oracle Support password.
35. Installer will attempt to contact Oracle, upon failing it will offer the option to press F2 to save
anyway. Take it.
36. At the Support – Network Configuration screen, select No proxy and F2 to continue.
37. At the Installation Summary screen, review selections and press F2 to begin the install process.
39. At the GNU GRUB screen, press enter to speed the process of booting the highlighted selection.
40. At the console login, enter username (not root!) and password to log in.
42. Enter su and the root password to elevate console to root privileges.
45. Enter echo $SHELL to confirm /usr/bin/bash as the default shell (note that although
Solaris is considered a true Unix, the default shell is the same as on CentOS and most Linux
distributions).
A. INTRODUCTION
Navy System Administrators may be frequently tasked with administering Linux/Unix
servers, and in that capacity a high degree of familiarity with the procedures of file and
permissions management, navigation of the filesystem structure and the tools available to
accomplish those tasks is required.
B. OBJECTIVE:
Upon successful completion of this section, you will be able to:
NAVIGATE the OS hierarchy using GUI and CLI
EXPLAIN file system structure
IDENTIFY file system folder content type
INTERPRET Linux/Unix Access Control List (ACL) permissions
MODIFY directory and file permissions
C. SECTION OUTLINE
1. Introduction
2. Navigating the OS Hierarchy
3. Linux/Unix Filesystem Structure
4. OS Hierarchy and Navigation
5. Filesystem and ACL Permissions
6. Summary and Review
A. INTRODUCTION
Navy System Administrators may be frequently tasked with administering Linux/Unix servers,
and in that capacity a high degree of familiarity with the procedures of file and permissions
management, navigation of the filesystem structure and the tools available to accomplish those
tasks is required.
B. REFERENCES
C. INFORMATION
3. cd – change directory
b. Konqueror
c. Dolphin
d. Krusader
e. Nautilus
f. Thunar
g. PCmanFM
h. XFE
j. Midnight Commander (Technically not a GUI file manager. It will run from the
command line without an X-Windows environment, but it shares much in terms
of layout and functionality with pure GUI-based managers.)
8. Files (and most other GUI-based managers) will feel familiar to anyone used to Windows
Explorer.
9. Root directory displaying icons; note scope pane to the left, current location highlighted:
(1) /bin - Binary directory. Executables used at the command line. (ls, mkdir,
cp…)
(2) /dev - Device files, including memory, block, and character devices. Devices may
create subdirectories and file identifiers during install.
2. ls -l output breakdown:
b. Directory (d) – Files containing other files; may include information about
location and attributes of files.
d. Named pipe (p) – Also called First In First Out (FIFO) files. Allows unrelated
programs to exchange information.
2. Permissions field split into file type, owner, group and all others.
(1) r – Read
(2) w – Write
(3) x - execute
e. Permission definitions:
(1) Files:
(2) Directories:
f. Default permissions
3. Setting Permissions:
(1) Octal mode uses three octal digits, one each for owner, group
and other:
a. r read 4
b. w write 2
c. x execute 1
d. A digit of 4 gives r--
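For example, 755 decomposes one digit per principal: 7 = 4+2+1 (rwx) for the owner, 5 = 4+1 (r-x) for group and for other:
chmod 755 /usr/local/bin/myscript    # results in rwxr-xr-x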
a. r read
b. w write
c. x execute
d. u (user) owner
e. g group
f. o other
i. - removes permissions
(2) umask modifies settings by subtracting from the defaults; if set to 0022, it subtracts
from the default 0666 (files) and 0777 (directories), resulting in 0644 and 0755.
a. umask 0022 gives u=rwx,g=rx,o=rx
b. umask 0077 gives u=rwx,g=,o=
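A short demonstration (the file and directory names are illustrative):
umask 0022       # new files now default to 0644, new directories to 0755
touch demofile
mkdir demodir
ls -l            # demofile shows rw-r--r--; demodir shows rwxr-xr-x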
A. INTRODUCTION
This job sheet details common command line commands and options.
B. EQUIPMENT
Trainee Workstation with VMWare workstation installed and an installed copy of CentOS
Version 7 running in a virtual machine.
C. REFERENCES
D. SAFETY PRECAUTIONS
E. JOB STEPS
Setup: Logged on to Linux system as a user (student1) and will be executing common command line
operations in the bash shell. You may be required to utilize the “su” command to elevate your
privileges/access level for some operations. Answer any questions asked on a piece of lined notebook
paper.
1. Determine the OS version using the uname and uname -srv commands. Which version of the Linux
kernel is running? (3.10) What is the kernel timestamp? What is the response of the plain vanilla
uname command?
2. Determine the current runlevel with the who -r command. What is the current runlevel?
3. Force a system restart with the init command. What argument is required to achieve a restart? Log
back in following the restart.
4. Determine you current shell with the “echo $SHELL” command. What is your current shell?
5. Create an empty file in the /tmp directory named file1, with the following command: touch
/tmp/file1
6. After enabling superuser mode, rename file1 to file1.txt and place it in the /root directory
without making a copy, using the following command: mv /tmp/file1 /root/file1.txt
7. Use the appropriate command to change to the directory containing file1.txt. What is this command?
8. Using the appropriate command, (ls) determine if the file1.txt file is present in the /root directory.
9. Within your present working directory (/root) create with a single command nested directories of
SysAdmin and Stuff using: mkdir -p SysAdmin/Stuff Using the ls command, determine if
the new directory structure was created. What does the -p switch do for the mkdir command?
10. Determine the usage percentage of /boot. What command(s) will accomplish this? What is the
result?
11. What type of file is /root/file1.txt (note: the correct answer is not text)? What command reports this?
12. Execute the find command with find ~/ -name '*.txt' What is the result?
13. Open a new terminal window. What directory are you in? How did you determine this?
15. Change to the root directory. What is the last listing of the ‘ls’ command? How did you change to
the root directory?
16. Execute the ls -l command on the root directory. What type of file is usr?
A. INTRODUCTION
B. EQUIPMENT
Trainee Workstation with VMWare workstation installed and an installed copy of CentOS
Version 7 running in a virtual machine.
C. REFERENCES
D. SAFETY PRECAUTIONS
E. JOB STEPS
Setup: You are logged on to your Linux system as a user (student1) and will be executing common
command line operations in the bash shell. You may be required to utilize the “su” command to elevate
your privileges/access level for some operations. Answer any questions asked on a piece of lined
notebook paper.
1. Logged in as Student1, open a new terminal window and place it in superuser mode.
3. Change to the /temp directory with cd /temp, and create three empty files with touch file1
file2 test.
4. View full file permissions and information. What command did you use? What are the current
permissions in numerical form?
5. Using the numerical chmod command, change the permissions on file1 to add group execute.
6. Check your results with ls -lisa. How do the permissions for file1 appear?
7. View the permissions on the /etc/shadow file. What is the numeric equivalent? Who has the ability
to read this file?
A. INTRODUCTION
B. EQUIPMENT
Trainee Workstation with VMWare workstation installed and an installed copy of CentOS
Version 7 running in a virtual machine.
C. REFERENCES
D. SAFETY PRECAUTIONS
E. JOB STEPS
Setup: You are logged on to your Linux system as a user (student1) and will be executing common
command line operations in the bash shell. You may be required to utilize the “su” command to elevate
your privileges/access level for some operations. Answer any questions asked on a piece of lined
notebook paper.
1. Open a new terminal in user mode. Your present working directory should be /home/student1.
4. Type the following, making sure to press Enter at the end of each line.
The quick brown fox jumped over the lazy brown dog.
Sentence two.
Sentence three.
Sentence four.
Vi will never become vii.
Sentence six.
6. Press 1G to move to the first line, and then type 4l to position the cursor over the q in quick.
7. Type cw, which removes the word quick and puts you into insert mode.
8. Type the word slow, and then press Esc. Now your file should look like this:
The slow brown fox jumped over the lazy brown dog.
Sentence two.
Sentence three.
Sentence four.
Vi will never become vii.
Sentence six.
9. Type 2j to move down two lines. The cursor will be on the last e in Sentence on line 3.
10. Type r and then type E. The sentence should look like this:
SentencE three.
11. Type k once to move up one line. Type 2yy to copy two lines.
12. Type 4j to move to the last line, and then type p to paste the text from the buffer. Here’s the result:
The slow brown fox jumped over the lazy brown dog.
Sentence two.
SentencE three.
Sentence four.
Vi will never become vii.
Sentence six.
Sentence two.
SentencE three.
13. Press Esc, :, and then type q! to exit the file without saving the output. If you get your modes mixed,
press Esc to return to command mode, then : and q! to quit.
14. Let’s now apply what we’ve learned with permissions and vi and create a script. With your terminal
still in user mode, enter vi /tmp/testscript
#!/bin/bash
# We will now tell the world of our plans…
echo 'Hello World! We are the Navy System Administrators'
echo 'All of your base are belong to us!'
echo 'Be afraid. Be warned.'
18. Attempt to run the script: /tmp/testscript Were you successful? Why not?
19. Let’s change the permissions on /tmp/testscript to allow anyone to execute. This is done with chmod
command. What number will give read and execute permissions to all, but only write to owner?
20. You (or any other user) should now be able to execute. What do the listed permissions now look like
after executing the command ls -la /tmp/testscript ?
21. Execute the script to see the result, and close out the terminal window.
A. INTRODUCTION
Navy System Administrators may be frequently tasked with administering and managing
users and groups on Linux/Unix servers, and a high degree of familiarity with the procedures
of user and group management and the tools available to accomplish those tasks is required.
B. OBJECTIVE:
Upon successful completion of this section, you will be able to:
CREATE user accounts and associated files
MANAGE user accounts and user associated files
CREATE user groups
MANAGE user group membership
C. SECTION OUTLINE
1. Introduction
2. User associated files
3. Creating user accounts
4. Managing user accounts
5. Creating group accounts
6. Summary and Review
A. INTRODUCTION
Navy System Administrators may be frequently tasked with administering and managing users
and groups on Linux/Unix servers, and a high degree of familiarity with the procedures of user
and group management and the tools available to accomplish those tasks is required.
B. REFERENCES
C. INFORMATION
a. /etc/passwd – Identification (user account information)
b. /etc/shadow – Authentication
c. /etc/group – Authorization
2. /etc/passwd:
5. System Accounts
(1) daemon
(2) bin
(3) sys
(4) adm
(5) lp
(6) listen
(7) nobody
e. Do not delete!
6. /etc/passwd considerations:
a. Root: First entry, UID of 0, home directory of ‘/’, root user’s home ‘/root’.
b. When adding a user to the end of the file (editing /etc/passwd directly), if the last field is
left blank, the Bourne shell (sh) is assigned by default;
c. When adding a user with the useradd command, default new-user information, including
the shell assignment, is used:
7. /etc/shadow file:
e. 11000 – Number of days between 01Jan1970 and the last password modification
date.
k. Only root has read permission – the other class cannot read it. Best security.
CREATING USER ACCOUNTS
1. useradd – creates a new user or alters fields in /etc/shadow and /etc/passwd. Options:
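A representative invocation (the account and group names are illustrative):
useradd -m -g users -G instructors -c 'Training account' dan
# -m creates the home directory, -g sets the primary group,
# -G sets supplementary group(s), -c fills the comment field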
2. passwd – Enables a user to change their password or the administrator to set and
modify password settings. Options:
a. Establishes group membership. Each configured group has an entry, all group
members are listed:
d. 4 – Group ID (GID)
f. root (group) has GID of 0; may also belong to other groups such as bin, sys or
adm.
g. Truncated contents:
2. /etc/group file:
c. Change to a new group with newgrp (similar to su). Example (user in others
group, wishes to change membership to create a file in the admin group – of
which the user is also a member):
(2) vi testfile
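A minimal sketch of that sequence (assuming the user is already a member of the admin group):
newgrp admin       # make admin the current primary group; members are not prompted for a password
vi testfile        # testfile is created with group ownership admin
ls -l testfile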
A. INTRODUCTION
This job sheet details common command line commands and options to manage, create and
modify User and Group Accounts.
B. EQUIPMENT
Trainee Workstation with VMWare workstation installed and an installed copy of CentOS
Version 7 running in a virtual machine.
C. REFERENCES
D. SAFETY PRECAUTIONS
E. JOB STEPS
Setup: You are logged on to your Linux system as a user (student1) and will be executing common
command line operations in the bash shell. You may be required to utilize the “su” command to elevate
your privileges/access level for some operations. Answer any questions asked on a piece of lined
notebook paper.
1. Log into your Linux system, open a terminal window and enable superuser mode.
2. Run the command useradd -D to see what the system default group is when adding a new user.
3. What are the default group, home directory and shell when adding a new user?
GROUP=100
HOME=/home
SHELL=/bin/bash
4. Create a group called instructors with a GID of 110 using the following command: groupadd -g 110 instructors
5. Create two new users named dan and joe, both with a primary group of users and a supplementary
group of instructors. Neither user’s home directory already exists, but should be created. Create each
account separately with the following commands:
useradd -m -g users -G instructors dan
passwd dan
useradd -m -g users -G instructors joe
passwd joe
Note: you will get a ‘password too simplistic’ message. Ignore and re-enter.
7. Create another user named sally. Create her home directory. Make her primary group users, but do
not add her to the instructors group. Give her a password of abcd.
8. Verify supplementary group membership for the instructors group only with the following command: grep instructors /etc/group
9. The /etc/group file shows the users group isn’t used by anyone as a secondary group. However, dan
and joe have instructors as their secondary group. su to dan and create a file; check ownership:
su – dan
touch file1
ls -l
Note: file1 is owned by dan, and its group ownership is set to its primary group, users.
10. In order to create files that are owned (and are accessible) by members of a group, use newgrp to
switch to that group before creating the file, just as if su-ing to another user:
newgrp instructors
touch file2
ls -l
The example above shows a different group owns each file, depending on which group was set as the
primary at creation time. Notice dan wasn’t prompted for a password when running newgrp. The
reason for this is that dan is a member of the instructors group.
11. What happens when a user who isn’t a member of that group attempts the same command:
su – sally
newgrp instructors
newgrp: Password
This time sally was prompted for the instructors group password since she is neither root nor a
member of that group. If no password exists, she will be unable to create files owned by members in
that group. Logout from user sally and log back in as user root.
12. Create a user nancy, create her home directory making it if it does not exist, and add a comment field
of ‘finance’: useradd -m -c finance nancy
13. Create a user beth as above, but assign her to the system’s default group instead of her own private
group. Add a comment field of “HR”: useradd -m -N -c HR beth
14. Verify the creation of both accounts and note their primary groups with the following command: id nancy
Substitute beth in the above command as needed. What are Nancy and Beth’s primary groups?
A. INTRODUCTION
Navy System Administrators may be frequently tasked with backing up and/or restoring
Linux/Unix servers and a high degree of familiarity with the procedures of system and file
backup and restoration and the tools available to accomplish those tasks is required.
B. OBJECTIVE:
Upon successful completion of this section, you will be able to:
IDENTIFY backup mechanisms
EXPLAIN typical backup/recovery schemes in a Linux/Unix environment
IDENTIFY optional backup and recovery applications
EXECUTE a backup Tape Archive (TAR) command
EXECUTE a backup dd disk imaging command
C. SECTION OUTLINE
1. Introduction
2. Backup mechanisms
3. Backup & recovery schemes
4. Summary and Review
A. INTRODUCTION
Navy System Administrators may be frequently tasked with backing up and/or restoring
Linux/Unix servers and a high degree of familiarity with the procedures of system and file
backup and restoration and the tools available to accomplish those tasks is required.
B. REFERENCES
C. INFORMATION
a. Typical file and folder backups; backups based on the individual files and folders
and the characteristics of those files and folders, such as path or date.
(1) tar (tape archive) is the most common file/folder mechanism in the
linux/unix world.
(2) Many backup programs such as “Kdat” are simply user-friendly front-ends
for tar.
a. Image backups, such as those created with dd or dump, may also be used as the
starting point.
a. Incremental backups will back up the files created or modified since the most
recent backup, either incremental or full. This means a full restore requires the last
full backup plus every incremental taken since.
b. Differential backups will back up all files created or modified since the last full
backup, regardless of any other backups performed in the interim.
c. Pros/cons of each:
3. Archive bits:
c. These options limit tar to operate only on files modified after the date specified; a
file’s status is considered to have changed if its contents, owner permissions, or
other attribute have changed.
4. tar syntax:
(2) -v ;verbose
b. To create a tarfile with the contents of the entire /etc directory to a properly
configured tape device named /dev/rmt0: tar -cvf /dev/rmt0 /etc
d. .tar extension standard practice to make tar files (“tarballs”) easily recognizable.
Other options include:
6. Disk Imaging
b. During the process of imaging, best results are obtained with the target
filesystems in an offline state, with dd (or dump) invoked from a “live” OS, such
as booting from a CD/DVD or flash drive. Pre-configured distributions such as
Clonezilla (clonezilla.org) can make this a simple and painless procedure, walking
through the steps and options of dd.
7. dd syntax examples:
e. if – input file
f. of – output file
g. bs – block size
h. conv – options
8. In addition to disk imaging, dd also has the capability to perform several other related and
useful tasks:
b. Disk wipe
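Representative invocations (device names are illustrative; dd overwrites its target without prompting, so verify if= and of= before running):
dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync    # clone disk sda onto disk sdb
dd if=/dev/zero of=/dev/sdb bs=1M                      # wipe sdb by overwriting it with zeros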
A. INTRODUCTION
This job sheet details use of the clonezilla live distribution to image (clone) a Linux/Unix
system.
B. EQUIPMENT
Trainee Workstation with VMWare workstation installed and an installed copy of CentOS
Version 7 running in a virtual machine, with an .iso image of Clonezilla dated 20161121 to be
mounted to the CD/DVD drive in the virtual machine. Filename as downloaded from
clonezilla.org is clonezilla-live-20161121-yakkety-amd64.iso
C. REFERENCES
D. SAFETY PRECAUTIONS
E. JOB STEPS
Setup: You are logged on to your Linux system as a user (student1) and will be executing common
command line operations in the bash shell. You may be required to utilize the “su” command to elevate
your privileges/access level for some operations. Answer any questions asked on a piece of lined
notebook paper.
1. In order to prepare for imaging of your system using clonezilla, first power down your virtual
machine. First, we need to add an additional hard disk to the virtual machine to clone your image to.
With your VM powered down, in the VMWare console, click the “Edit virtual machine settings”
link.
4. In the Hardware Type window, select Hard Disk and Next. On the Select a Disk Type, accept the
defaults with Next. On the Select a Disk screen, ensure the default of Create a new virtual disk is
selected, and next. For Specify Disk Capacity, accept the default value of 20 GB with Split virtual
disk into multiple files selected, and next. Accept the default under Specify Disk File and Finish.
5. While still on the Hardware tab, select the CD/DVD (IDE) device, click the Browse button under
Use ISO image file:, and select your clonezilla live .iso file. It should be in the same location as your
CentOS7 .iso.
6. Before performing the clonezilla imaging, we first want to log onto our CentOS system to perform
some before and after comparisons, so power on normally, and log in this time as root by clicking
the Not Listed? link below the existing names. Enter “root” and “P@ssw0rd” to log in. If this is the
first time you’ve logged in as root, you’ll need to click through the welcome screens and accept the
defaults, Start using CentOS Linux and close the final Getting Started screen.
7. Logged in as root, click the Applications dropdown, select Utilities and Disks. Observe the first 21
GB Hard Disk, its two partitions and the /dev/sda1 device path. For the second Hard Disk, note that
it is not initialized or partitioned. Close the Disks application and power off your virtual machine.
8. In order to clone our system using clonezilla, we’ll need to boot to the mounted .iso image. In
vmware, select the VM menu item, power, then power on to BIOS.
9. When the BIOS Setup Utility appears, arrow over to the Boot menu item. Arrow down so the CD-
ROM Drive is selected. Tap the + key twice to move it to the top of the listing. Press F10 to save and
exit, yes to accept the changes. You will boot to the clonezilla live CD.
10. On the initial opening screen, you may hit enter to accept the default resolution.
11. On the choose language screen, accept the default by tabbing to <Ok> and enter.
12. On Configuring console-data, accept the default “Don’t touch keymap” by tabbing to OK, and enter.
14. On the first Clonezilla – OCS screen, arrow down to device-device, tab to Ok, and enter.
19. For the on-the-fly parameters, select -sfsck Skip… (the default), Ok, and enter.
20. On the action to perform when finished screen, select the final Shutdown option, Ok, and enter.
23. On the second WARNING!! Are you sure screen, Y and enter.
25. Clonezilla will now proceed to image sda to sdb. Press enter to continue one more time. System will
now power down.
27. In the BIOS screen, arrow over to Boot once more, select the CD-ROM Drive, and hit the '-' key
twice to move the CD-ROM option back to its original location.
28. F10 to save and exit, Yes to confirm. The system will now boot back into CentOS.
31. Select the second sdb hard disk. It now shows the identical partitioning as sda, though in an
unmounted state. In the event of a failure of your first sda drive, you could use clonezilla in much
the same fashion to clone the sdb disk back onto its replacement.
A. INTRODUCTION
B. EQUIPMENT
Trainee Workstation with VMWare workstation installed and an installed copy of CentOS
Version 7 running in a virtual machine.
C. REFERENCES
D. SAFETY PRECAUTIONS
E. JOB STEPS
Setup: You are logged on to your Linux system as a user (student1) and will be executing common
command line operations in the bash shell. You may be required to utilize the “su” command to elevate
your privileges/access level for some operations. Answer any questions asked on a piece of lined
notebook paper.
3. Backup the /etc directory to /backup/etc.tar using tar with the following command: tar -cvf
/backup/etc.tar /etc The breakdown of this command is -c: begin writing at the beginning
of the tarfile, -v: verbose mode, and -f: allows the user to provide a target filename. /backup/etc.tar is the target
file, /etc is the directory hierarchy to be backed up.
4. You can view the contents of the tarfile with tar -tvf /backup/etc.tar This will yield
essentially a ls -l listing of the tar file. The -t switch will list the contents.
5. Extract the file to the /backup directory with tar -xvf /backup/etc.tar. Upon completion,
you will see a /backup/etc directory containing the extracted contents of the tar file. Do ls -l
/backup/etc to view.
DATABASE ADMINISTRATION
A. INTRODUCTION
Navy System Administrators may be frequently tasked with database management and a high
degree of familiarity with the concepts of database administration including maintenance,
backup and restoration is required.
B. OBJECTIVE:
Upon successful completion of this section, you will be able to:
EXPLAIN the architecture and functioning of a transactional database
IDENTIFY the different file types and file purposes in a transactional database
EXPLAIN the different types of database backups and the impact of backup type on files backed
up and the clearing of transaction logs
PERFORM common database administration tasks, including backup, compression and integrity
verification
C. SECTION OUTLINE
1. Introduction
2. Architecture and functioning of a transactional database
3. Transactional database file types
4. Database backup
5. Common database administration tasks
6. Summary and Review
A. INTRODUCTION
Navy System Administrators may be frequently tasked with database management and a high
degree of familiarity with the concepts of database administration including maintenance, backup
and restoration is required.
B. REFERENCES
1. en.wikipedia.org/wiki/Database_transaction
2. msdn.Microsoft.com/en-us/library/ms189563.aspx “Database Files and Filegroups”
3. www.veritas.com/support/en_US/article.v53937423_v113799574
4. technet.Microsoft.com/en-us/library/dd894051(v=sql.100).aspx “Data Compression:
Strategy, Capacity Planning and Best Practices”
5. msdn.Microsoft.com/en-us/library/ms139858.aspx “Check Database Integrity Task”
C. INFORMATION
(1) Provide reliable units of work that allow correct recovery from failures
and keep the database consistent even in cases of system failure.
(2) Provide isolation between programs accessing a database at the same time.
c. ACID:
(1) Atomic
(2) Consistent
(3) Isolated
(4) Durable
3. Transaction pattern:
a. Begin transaction
b. Execute the data manipulations/queries
c. If no errors occur, commit the transaction
d. If errors occur, roll back the transaction, restoring the pre-transaction state
5. Different database engines (even those sharing a common language, like SQL) may differ
in their transactional implementation.
a. Primary: Contains startup information and points to other files in the database.
User data and objects may be contained in this or in secondary files. Each
database has a single primary file. Filename extension is .mdf
b. Secondary: Optional, user defined, and store user data. Can be used to spread data
across multiple disks and to reduce the size of individual files in the event of
filesystem limitation(s). Filename extension .ndf
c. Transaction log: Contains log information used to recover the database. At least
one per database. Filename extension .ldf
d. Other types of files may be present depending on the specific database engine. An
example of this is a checkpoint file, used to indicate which of the logged
transactions have been completed.
DATABASE BACKUP
7. Database backup types:
8. Full Backup: Backs up entire database. Will clear transaction logs/checkpoint files or
whatever mechanism the database engine uses to determine transaction completion (SQL
differential baseline).
9. Full Copy: Backs up entire database. No effect on or clearing of, transaction logs,
checkpoint files or differential baseline. No effect on future backups.
10. Incremental backup: Backs up transaction logs/differential baseline only and clears
following the backup as in a full backup. Individual backups only capture the time since
the most recent full or incremental backup. Does not backup primary/secondary database
files.
11. Differential backup: Backs up transaction logs/differential baseline and does not clear
those files. All differential backups capture the period of time since the last full backup,
regardless of differential backups made in the interim. Does not backup
primary/secondary database files.
a. Full or copy only: Database offline, files restored, transaction logs replayed into
database and cleared. Database may now be brought online.
b. Full + Incremental: Database offline, files restored including all transaction logs
from every incremental performed since the last full backup. Transaction log(s)
replayed into database and cleared. Database brought online.
c. Full + Differential: Database offline, files restored from the full and most recent
differential only. Transaction log(s) replayed into database and cleared. Database
brought online.
13. For backup and recovery, specifics and options may vary depending on the database
engine in use, the backup software and the running environment of the database.
COMMON DATABASE ADMINISTRATION TASKS
14. Compression
a. Over time, the primary/secondary files of a database will typically grow, with
increasing amounts of “dead space” where changed data is marked as stale and
rewritten elsewhere.
b. Most database engines will perform some kind of online compression, rewriting
data in sequential order when idle, in order to minimize read/write latency.
c. True compression, reducing the size of the database, typically requires taking
the database offline and utilizing a utility such as eseutil (extensible storage engine
specific) to create a fresh copy of the database containing only the active data.
Stale data is truncated, and the size of the resulting new file is reduced.
b. Depending on the severity, external utilities may be able to repair such errors with
the database in an offline state.
c. These include isinteg for the extensible storage engine, and various utilities for
other database engines.
e. In the event corruption is too severe to be repaired, recovery from good backups
will be necessary.
DATABASE ADMINISTRATION
A. INTRODUCTION
Included references cover those online reference resources not readily available due to
network restrictions or other reasons.
B. INCLUDED REFERENCES
NAME
uname - print system information
SYNOPSIS
uname [OPTION]...
DESCRIPTION
Print certain system information. With no OPTION, same as -s.
-a, --all
print all information, in the following order, except omit -p
and -i if unknown:
-s, --kernel-name
print the kernel name
-n, --nodename
print the network node hostname
-r, --kernel-release
print the kernel release
-v, --kernel-version
print the kernel version
-m, --machine
print the machine hardware name
-p, --processor
print the processor type or "unknown"
-i, --hardware-platform
print the hardware platform or "unknown"
-o, --operating-system
print the operating system
AUTHOR
Written by David MacKenzie.
COPYRIGHT
Copyright © 2013 Free Software Foundation, Inc. License GPLv3+: GNU
GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
SEE ALSO
arch(1), uname(2)
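For example, to orient yourself on an unfamiliar host (a representative session; the
output differs from system to system):

uname -s    # kernel name (the default when no option is given)
uname -r    # kernel release
uname -a    # all information: kernel, hostname, release, version, machine, OS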
dd (Unix)
From Wikipedia, the free encyclopedia
dd is a command-line utility for Unix and Unix-like operating systems whose primary purpose is
to convert and copy files.[1]
On Unix, device drivers for hardware (such as hard disk drives) and special device files (such as
/dev/zero and /dev/random) appear in the file system just like normal files; dd can also read
and/or write from/to these files, provided that function is implemented in their respective driver.
As a result, dd can be used for tasks such as backing up the boot sector of a hard drive, and
obtaining a fixed amount of random data. The dd program can also perform conversions on the
data as it is copied, including byte order swapping and conversion to and from the ASCII and
EBCDIC text encodings.[2]
The name dd is an allusion to the DD statement found in IBM's Job Control Language (JCL),[3][4]
in which the initials stand for "Data Definition".[5] The command's syntax resembles the JCL
statement more than it does other Unix commands, so the syntax may have been a joke.[3]
Originally intended to convert between ASCII and EBCDIC, dd first appeared in Version 5
Unix.[6] The dd command is specified by IEEE Std 1003.1-2008, which is part of the Single
UNIX Specification.
Contents
• 1 Usage
• 2 Output messages
• 3 Block size
• 4 Uses
o 4.1 Data transfer
o 4.2 Master boot record backup and restore
o 4.3 Data modification
o 4.4 Disk wipe
o 4.5 Data recovery
o 4.6 Benchmarking drive performance
o 4.7 Generating a file with random data
o 4.8 Converting a file to upper case
• 5 Limitations
• 6 Dcfldd
• 7 See also
• 8 Notes
• 9 References
• 10 External links
Usage
The command line syntax of dd differs from many other Unix programs, in that it uses the
syntax option=value for its command line options, rather than the more-standard -option
value or --option=value formats. By default, dd reads from stdin and writes to stdout,
but these can be changed by using the if (input file) and of (output file) options.
Usage varies across different operating systems. Also, certain features of dd will depend on the
computer system capabilities, such as dd's ability to implement an option for direct memory
access. Sending a SIGINFO signal (or a USR1 signal on Linux) to a running dd process makes it
print I/O statistics to standard error once and then continue copying. dd can read standard input
from the keyboard. When end-of-file (EOF) is reached, dd will exit. Signals and EOF are
determined by the software. For example, Unix tools ported to Windows vary as to the EOF:
Cygwin uses Ctrl + D (the usual Unix EOF) and MKS Toolkit uses ctrl + z (the usual Windows
EOF).
Output messages
The GNU variant of dd as supplied with coreutils on Linux does not describe the format of the
messages displayed on standard output on completion. However, these are described by other
implementations, e.g. that with BSD.
Each of the "Records in" and "Records out" lines shows the number of complete blocks
transferred + the number of partial blocks, e.g. because the physical medium ended before a
complete block was read, or a physical error prevented reading the complete block.
Block size
A block is a unit measuring the number of bytes that are read, written, or converted at one time.
Command line options can specify a different block size for input/reading (ibs) compared to
output/writing (obs), though the block size (bs) option will override both ibs and obs. The
default value for both input and output block sizes is 512 bytes (the traditional block size of
disks, and POSIX-mandated size of "a block"). The count option for copying is measured in
blocks, as are both the skip count for reading and seek count for writing. Conversion
operations are also affected by the "conversion block size" (cbs).
For some uses of the dd command, block size may have an effect on performance. For example,
when recovering data from a hard disk, a small block size will generally cause the most bytes to
be recovered. Issuing many small reads is an overhead and may be non-beneficial to execution
performance. For greater speed during copy operations, a larger block size may be used.
However, because the amount of bytes to copy is given by bs×count, it is impossible to copy a
prime number of bytes in a single dd command without making one of two bad choices, bs=N
count=1 (memory use) or bs=1 count=N (read request overhead). Alternative programs
(see below) permit specifying bytes rather than blocks. When dd is used for network transfers,
the block size may have also an impact on packet size, depending on the network protocol used.
The value provided for block size options is interpreted as a decimal (base 10) integer and can
also include suffixes to indicate multiplication. The suffix w means multiplication by 2, b means
512, k means 1024, m means 1024 × 1024, G means 1024 × 1024 × 1024, and so on.
Additionally, some implementations understand the x character as a multiplication operator for
both block size and count parameters.
Uses
Data transfer
dd can duplicate data across files, devices, partitions and volumes. The data may be input or
output to and from any of these; but there are important differences concerning the output when
going to a partition. Also, during the transfer, the data can be modified using the conv options
to suit the medium.
An attempt to copy the entire disk using cp may omit the final block if it is of an unexpected
length[citation needed]; whereas dd may succeed. The source and destination disks should have the same
size.
The noerror option means to keep going if there is an error, while the sync option causes
output blocks to be padded.
Master boot record backup and restore
It is possible to repair a master boot record. It can be transferred to and from a repair file.
To create an image of the entire x86 master boot record (including a MS-DOS partition table and
MBR magic bytes):
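A representative command (assuming the disk is /dev/sda):

dd if=/dev/sda of=mbr.img bs=512 count=1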
To create an image of only the boot code of the master boot record (without the partition table
and without the magic bytes required for booting):
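A representative command (again assuming /dev/sda; the first 446 bytes of the sector hold
the boot code):

dd if=/dev/sda of=mbr-boot.img bs=446 count=1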
Data modification
dd can modify data in place. For example, this overwrites the first 512 bytes of a file with null
bytes:
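A representative command (path/to/file is a placeholder):

dd if=/dev/zero of=path/to/file bs=512 count=1 conv=notrunc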
The notrunc conversion option means do not truncate the output file — that is, if the output
file already exists, just replace the specified bytes and leave the rest of the output file alone.
Without this option, dd would create an output file 512 bytes long.
Disk wipe
For security reasons, it is sometimes necessary to wipe the disk of a discarded device.
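A representative command (assuming /dev/sda is the discarded drive; this irreversibly
destroys its contents):

dd if=/dev/zero of=/dev/sda bs=4k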
When compared to the data modification example above, notrunc conversion option is not
required as it has no effect when the dd's output file is a block device.[8]
The bs=4k option makes dd read and write 4 kilobytes at a time. For modern systems, an even
greater block size may be beneficial due to the transport capacity (think RAID systems). Note
that filling the drive with random data will always take a lot longer than zeroing the drive,
because the random data must be rendered by the CPU and/or HWRNG first, and different
designs have different performance characteristics. (The PRNG behind /dev/urandom may be
slower than libc's.) On most relatively modern drives, zeroing the drive will render any data it
contains permanently irrecoverable.[9]
Zeroing the drive will render any data it contains irrecoverable by software; however it still may
be recoverable by special laboratory techniques.[citation needed]
The shred program provides an alternate method for the same task. Finally, the wipe[10]
program present in many Linux distributions provides a more elaborate tool, with many
ways of clearing, in the Unix tradition of doing one job well.
Data recovery
The early history of open-source software for data recovery and restoration of files, drives and
partitions included the GNU dd, whose copyright notice starts in 1985,[11] with one block size per
dd process, and no recovery algorithm other than the user's interactive session running one form
of dd after another. Then, a C program called dd_rescue[12] was written in October 1999,
having two block sizes in its algorithm. However, the author of the 2003 shell script dd_rhelp,
which enhances dd_rescue's data recovery algorithm, recommends GNU ddrescue,[13][14] a
data recovery program unrelated to dd that was initially released in 2004.
To help distinguish the newer GNU program from the older script, alternate names are
sometimes used for GNU's ddrescue, including addrescue (the name on freecode.com and
freshmeat.net), gddrescue (Debian package name), and gnu_ddrescue (openSUSE
package name). Another open-source program called savehd7 uses a sophisticated algorithm,
but it also requires the installation of its own programming-language interpreter.
Benchmarking drive performance
To benchmark sequential (and usually single-threaded) system read and write performance
using 1024-byte blocks:
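Representative commands (a sketch; testfile is a hypothetical scratch file of roughly
1 GB, and results vary with caching and hardware):

dd if=/dev/zero of=testfile bs=1024 count=1000000   # sequential write test
dd if=testfile of=/dev/null bs=1024                 # sequential read test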
Generating a file with random data
To make a file of 100 random bytes using the kernel random driver:
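A representative command (the output file name is hypothetical):

dd if=/dev/urandom of=myrandom bs=100 count=1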
Limitations
As stated in a part of documentation provided by Seagate, "certain disc [sic] utilities, such as
DD, which depend on low-level disc [sic] access may not support 48-bit LBAs until they are
updated".[15][citation needed] Using ATA hard disk drives over 128 GiB in size requires system support
48-bit LBA; however, in Linux, dd uses the kernel to read or write to raw device files instead of
accessing hardware directly.[a] At the same time, support for 48-bit LBA has been present since
version 2.4.23 of the kernel, released in 2003.[16][17]
Dcfldd
dcfldd is an enhanced fork of dd developed by Nick Harbour, who at the time
was working for the United States' Department of Defense Computer Forensics Lab.[18][19][20]
Compared to dd, dcfldd allows for more than one output file, supports simultaneous multiple
checksum calculations, provides a verification mode for file matching, and can display the
percentage progress of an operation.
See also
• Backup
• Disk cloning
• Disk Copy
• Disk image
• .img (filename extension)
• List of Unix programs
Notes
1. This is verifiable with strace.
References
1. Austin Group. "POSIX standard: dd invocation". Retrieved 2016-09-29.
2. Sam Chessman. "How and when to use the dd command?". CodeCoffee. Retrieved 2008-02-19.
3. Eric S. Raymond. "dd". Retrieved 2008-02-19.
4. Dennis Ritchie (Feb 17, 2004). "Re: origin of the UNIX dd command". Newsgroup:
alt.folklore.computers. Usenet: [email protected]. Retrieved January 10, 2016. "dd was
always named after JCL dd cards."
5. Barry Shein (Apr 22, 1990). "Re: etymology of the Unix "dd" command". Newsgroup:
alt.folklore.computers. Usenet: [email protected]. Retrieved 2016-07-14.
6. McIlroy, M. D. (1987). A Research Unix reader: annotated excerpts from the Programmer's
Manual, 1971–1986 (PDF) (Technical report). CSTR. Bell Labs. 139.
7. Reading an ISO image from a CD, DVD, or BD, Arch Linux documentation, accessed 2017-01-22.
8. "linux - Why using conv=notrunc when cloning a disk with dd?". Stack Overflow. 2013-12-11.
Retrieved 2014-03-24.
9. Wright, Craig; Kleiman, Dave; Sundhar R.S., Shyaam (2008). "Overwriting Hard Drive Data:
The Great Wiping Controversy". Lecture Notes in Computer Science. Information Systems
Security. 5352: 243–257. doi:10.1007/978-3-540-89862-7_21. Retrieved 7 March 2012.
10. "Wipe: Secure File Deletion". Wipe.sf.net. Retrieved 2014-03-24.
11. "Savannah Git Hosting – coreutils.git/blob – src/dd.c". git.savannah.gnu.org. Retrieved
January 21, 2015.
12. "dd_rescue". garloff.de.
13. "Ddrescue - GNU Project - Free Software Foundation (FSF)". gnu.org.
14. LAB Valentin (19 September 2011). "dd_rhelp author's repository". "Important note: For
some times, dd_rhelp was the only tool (AFAIK) that did this type of job, but since a few
years, it is not true anymore: Antonio Diaz did write an ideal replacement for my tool:
GNU 'ddrescue'."
15. Windows 137GB (128 GiB) Capacity Barrier - Seagate Technology (March 2003).
16. "ChangeLog-2.4.23". www.kernel.org. Retrieved 2009-12-07.
17. Linux-2.4.23 released, Linux kernel mailing list, 2003.
18. "DCFLDD at Source Forge". Source Forge. Retrieved 2013-08-17.
19. Jeremy Faircloth, Chris Hurley (2007). Penetration Tester's Open Source Toolkit.
Syngress. pp. 470–472. ISBN 9780080556079.
20. Jack Wiles, Anthony Reyes (2011). The Best Damn Cybercrime and Digital Forensics Book
Period. Syngress. pp. 408–411. ISBN 9780080556086.
External links
• dd: convert and copy a file – Commands & Utilities Reference, The Single UNIX®
Specification, Issue 7 from The Open Group
• dd: manual page from the GNU Core Utilities.
• dd(1) – Darwin and macOS General Commands Manual
• dd for Windows.
• savehd7 - Save a potentially damaged harddisk partition
• dd_rhelp
• Softpanorama dd page.
Cron
From Wikipedia, the free encyclopedia
The software utility Cron is a time-based job scheduler in Unix-like computer operating
systems. People who set up and maintain software environments use cron to schedule jobs
(commands or shell scripts) to run periodically at fixed times, dates, or intervals. It typically
automates system maintenance or administration—though its general-purpose nature makes it
useful for things like downloading files from the Internet and downloading email at regular
intervals.[1] The origin of the name cron is from the Greek word for time, χρόνος (chronos).[2]
(Ken Thompson, author of cron, has confirmed this in a private communication with Brian
Kernighan.)
cron is most suitable for scheduling repetitive tasks. Scheduling one-time tasks is often more
easily accomplished using the associated at utility.
Contents
• 1 Overview
o 1.1 Nonstandard predefined scheduling definitions
o 1.2 cron permissions
o 1.3 Timezone handling
• 2 History
o 2.1 Early versions
o 2.2 Multi-user capability
o 2.3 Modern versions
• 3 CRON expression
o 3.1 Non-Standard Characters
• 4 See also
• 5 References
• 6 External links
Overview
Cron is driven by a crontab (cron table) file, a configuration file that specifies shell commands to
run periodically on a given schedule. The crontab files are stored where the lists of jobs and other
instructions to the cron daemon are kept. Users can have their own individual crontab files and
often there is a system-wide crontab file (usually in /etc or a subdirectory of /etc) that only
system administrators can edit.
Each line of a crontab file represents a job, and is composed of a CRON expression, followed by
a shell command to execute. Some cron implementations, such as in the popular 4th BSD edition
written by Paul Vixie and included in many Linux distributions, add a sixth field: an account
username that runs the specified job (subject to user existence and permissions). This is allowed
only in the system crontabs—not in others, which are each assigned to a single user to configure.
The sixth field is alternatively sometimes used for year instead of an account username—the
nncron daemon for Windows does this.
While normally the job is executed when the time/date specification fields all match the current
time and date, there is one exception: if both "day of month" (field 3) and "day of week" (field 5)
are restricted (not "*"), then one or both must match the current day.[3]
The following clears the Apache error log at one minute past midnight (00:01) every day,
assuming that the default shell for the cron user is Bourne shell compliant:
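A representative entry (the log path varies by distribution):

1 0 * * * printf "" > /var/log/apache/error_log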
This example runs a shell program called export_dump.sh at 23:45 (11:45 PM) every Saturday.
45 23 * * 6 /home/oracle/scripts/export_dump.sh
The configuration file for a user can be edited by calling crontab -e regardless of where the
actual implementation stores this file.
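For example (these options are common to standard implementations):

crontab -l    # list the current user's crontab
crontab -e    # edit it, using the editor named by the VISUAL or EDITOR variable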
Entry                     Description                                 Equivalent to
@yearly (or @annually)    Run once a year at midnight of 1 January    0 0 1 1 *
@reboot configures a job to run once when the daemon is started. Since cron is typically
never restarted, this usually corresponds to the machine being booted. This behavior is
enforced in some variations of cron, such as that provided in Debian,[5] so that simply
restarting the daemon does not re-run @reboot jobs.
@reboot can be useful if there is a need to start up a server or daemon under a particular user,
and the user does not have access to configure init to start the program.
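A representative entry (the script path is hypothetical):

@reboot /home/alice/bin/start_server.sh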
cron permissions
• /etc/cron.allow - If this file exists, it must contain your username for you to use cron
jobs.
• /etc/cron.deny - If the cron.allow file does not exist but the /etc/cron.deny file does exist
then, to use cron jobs, you must not be listed in the /etc/cron.deny file.
Note that if neither of these files exist then, depending on site-dependent configuration
parameters, either only the super user can use cron jobs, or all users can use cron jobs.
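For example, an administrator could authorize a single user this way (the username is
hypothetical; writing to /etc/cron.allow requires root privileges):

echo alice >> /etc/cron.allow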
Timezone handling
Most cron implementations simply interpret crontab entries in the system time zone setting that
the cron daemon runs under. This can be a source of dispute if a large multi-user machine has
users in several time zones, especially if the system default timezone includes the potentially
confusing DST. Thus, a cron implementation may, as a special case, accept
"CRON_TZ=<timezone>" environment variable setting lines in user crontabs, interpreting
subsequent crontab entries relative to that timezone.[6]
History
Early versions
The cron in Version 7 Unix was a system service (later called a daemon) invoked from
/etc/inittab when the operating system entered multi-user mode. Its algorithm was
straightforward:
1. Read /usr/etc/crontab
2. Determine if any commands must run at the current date and time, and if so, run them as
the superuser, root.
3. Sleep for one minute
4. Repeat from step 1.
This version of cron was basic and robust but it also consumed resources whether it found any
work to do or not. In an experiment at Purdue University in the late 1970s to extend cron's
service to all 100 users on a time-shared VAX, it was found to place too much load on the
system.
Multi-user capability
The next version of cron, with the release of Unix System V, was created to extend the
capabilities of cron to all users of a Unix system, not just the superuser. Though this may seem
trivial today with most Unix and Unix-like systems having powerful processors and small
numbers of users, at the time it required a new approach on a one MIPS system having roughly
100 user accounts.
In the August, 1977 issue of the Communications of the ACM, W. R. Franta and Kurt Maly
published an article entitled "An efficient data structure for the simulation event set" describing
an event queue data structure for discrete event-driven simulation systems that demonstrated
"performance superior to that of commonly used simple linked list algorithms," good behavior
given non-uniform time distributions, and worst case complexity of O(√n), "n" being the
number of events in the queue.
A graduate student, Robert Brown, reviewing this article, recognized the parallel between cron
and discrete event simulators, and created an implementation of the Franta–Maly event list
manager (ELM) for experimentation. Discrete event simulators run in virtual time, peeling
events off the event queue as quickly as possible and advancing their notion of "now" to the
scheduled time of the next event. Running the event simulator in "real time" instead of virtual
time created a version of cron that spent most of its time sleeping, waiting for the scheduled time
to execute the task at the head of the event list.
The following school year brought new students into the graduate program, including Keith
Williamson, who joined the systems staff in the Computer Science department. As a "warm up
task" Brown asked him to flesh out the prototype cron into a production service, and this multi-
user cron went into use at Purdue in late 1979. This version of cron wholly replaced the
/etc/cron that was in use on the computer science department's VAX 11/780 running 32/V.
1. On start-up, look for a file named .crontab in the home directories of all account
holders.
2. For each crontab file found, determine the next time in the future that each command
must run.
3. Place those commands on the Franta–Maly event list with their corresponding time and
their "five field" time specifier.
4. Enter main loop:
1. Examine the task entry at the head of the queue, compute how far in the future it
must run.
2. Sleep for that period of time.
3. On awakening and after verifying the correct time, execute the task at the head of
the queue (in background) with the privileges of the user who created it.
4. Determine the next time in the future to run this command and place it back on
the event list at that time value.
Additionally, the daemon responds to SIGHUP signals to rescan modified crontab files and
schedules special "wake up events" on the hour and half-hour to look for modified crontab files.
Much detail is omitted here concerning the inaccuracies of computer time-of-day tracking, Unix
alarm scheduling, explicit time-of-day changes, and process management, all of which account
for the majority of the lines of code in this cron. This cron also captured the output of stdout and
stderr and e-mailed any output to the crontab owner.
The resources consumed by this cron scale only with the amount of work it is given and do not
inherently increase over time with the exception of periodically checking for changes.
Williamson completed his studies and departed the University with a Masters of Science in
Computer Science and joined AT&T Bell Labs in Murray Hill, New Jersey, and took this cron
with him. At Bell Labs, he and others incorporated the Unix at command into cron, moved the
crontab files out of users' home directories (which were not host-specific) and into a common
host-specific spool directory, and of necessity added the crontab command to allow users to
copy their crontabs to that spool directory.
This version of cron later appeared largely unchanged in Unix System V and in BSD and their
derivatives, Solaris from Sun Microsystems, IRIX from Silicon Graphics, HP-UX from Hewlett-
Packard, and AIX from IBM. Technically, the original license for these implementations should
be with the Purdue Research Foundation who funded the work,[citation needed] but this took place at a
time when little concern was given to such matters.
Modern versions
With the advent of the GNU Project and Linux, new crons appeared. The most prevalent of these
is the Vixie cron, originally coded by Paul Vixie in 1987. Version 3 of Vixie cron was released
in late 1993. Version 4.1 was renamed to ISC Cron and was released in January 2004. Version
3, with some minor bugfixes, is used in most distributions of Linux and BSDs.
In 2007, Red Hat forked vixie-cron 4.1 to the cronie project and included anacron 2.3 in 2009.
Other popular implementations include anacron, dcron, and fcron. However, anacron is not an
independent cron program. Another cron job must call it. dcron was made by DragonFly BSD
founder Matt Dillon, and its maintainership was taken over by Jim Pryor in 2010.[7]
A webcron solution schedules recurring tasks to run on a regular basis wherever cron
implementations are not available in a web hosting environment.
CRON expression
A CRON expression is a string comprising five or six fields separated by white space[8] that
represents a set of times, normally as a schedule to execute some routine.
Field           Required   Allowed values     Allowed special characters
Day of month    Yes        1-31               * , - ? L W
Month           Yes        1-12 or JAN-DEC    * , -
In some uses of the CRON format there is also a seconds field at the beginning of the pattern. In
that case, the CRON expression is a string comprising 6 or 7 fields.[9]
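For orientation, the five standard fields read, left to right: minute, hour, day of month,
month, and day of week, followed by the command. A representative entry (the script path
is hypothetical):

# minute hour day-of-month month day-of-week  command
# The entry below runs at 04:30 every weekday (Monday-Friday):
30 4 * * 1-5 /path/to/nightly_task.sh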
Comma ( , )
Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the
5th field (day of week) means Mondays, Wednesdays and Fridays.
Hyphen ( - )
Hyphens define ranges. For example, 2000-2010 indicates every year between 2000 and
2010, inclusive.
Percent ( % )
Percent-signs (%) in the command, unless escaped with backslash (\), are changed into
newline characters, and all data after the first % are sent to the command as standard
input.[10]
Non-Standard Characters
The following are non-standard characters that exist only in some cron implementations,
such as the Quartz Java scheduler.
L
'L' stands for "last". When used in the day-of-week field, it allows you to specify
constructs such as "the last Friday" ("5L") of a given month. In the day-of-month field, it
specifies the last day of the month.
W
The 'W' character is allowed for the day-of-month field. This character is used to specify
the weekday (Monday-Friday) nearest the given day. As an example, if you were to
specify "15W" as the value for the day-of-month field, the meaning is: "the nearest
weekday to the 15th of the month." So, if the 15th is a Saturday, the trigger fires on
Friday the 14th. If the 15th is a Sunday, the trigger fires on Monday the 16th. If the 15th
is a Tuesday, then it fires on Tuesday the 15th. However, if you specify "1W" as the
value for day-of-month, and the 1st is a Saturday, the trigger fires on Monday the 3rd, as
it does not 'jump' over the boundary of a month's days. The 'W' character can be specified
only when the day-of-month is a single day, not a range or list of days.
Hash ( # )
'#' is allowed for the day-of-week field, and must be followed by a number between one
and five. It allows you to specify constructs such as "the second Friday" of a given
month.[11] For example, entering "6#3" in the day-of-week field corresponds to the third
Friday of every month.
Question mark ( ? )
In some implementations, used instead of '*' for leaving either day-of-month or day-of-
week blank. Other cron implementations substitute "?" with the start-up time of the cron
daemon, so that ? ? * * * * would be updated to 25 8 * * * * if cron started up at 8:25 am,
and would run at this time every day until restarted again.[12]
Slash ( / )
In vixie-cron, slashes can be combined with ranges to specify step values.[4] For example,
*/5 in the minutes field indicates every 5 minutes (see note). It is shorthand for the more
verbose POSIX form 5,10,15,20,25,30,35,40,45,50,55,00. POSIX does not define
a use for slashes; its rationale (commenting on a BSD extension) notes that the definition
is based on System V format but does not exclude the possibility of extensions.[3]
Note that frequencies in general cannot be expressed; only step values that evenly divide
their range express accurate frequencies (for minutes and seconds, that's /2, /3, /4, /5,
/6, /10, /12, /15, /20 and /30, because 60 is evenly divisible by those numbers; for hours,
that's /2, /3, /4, /6, /8 and /12). All other possible "steps" and all other fields yield
inconsistent "short" periods at the end of the time-unit before it "resets" to the next
minute, second, or day. For example, entering */5 for the day field sometimes executes
after 1, 2, or 3 days, depending on the month and leap year. This is because cron is
stateless: it does not remember the time of the last execution nor count the difference
between it and now, as would be required for accurate frequency counting; instead, cron is
a mere pattern-matcher.
See also
• at (Unix)
• Launchd
• List of Unix utilities
• Scheduling (computing)
References
1. Jump up ^ "Newbie Introduction to cron". Unixgeeks.org. Retrieved 2013-11-06.
2. Jump up ^ "What is the etymology of cron". Retrieved 2015-12-23.
3. ^ Jump up to: a b "crontab", The Open Group Base Specifications Issue 7 — IEEE Std 1003.1, 2013 Edition,
The Open Group, 2013, retrieved May 18, 2015
4. ^ Jump up to: a b "FreeBSD File Formats Manual for CRONTAB(5)". The FreeBSD Project.
5. Jump up ^ "Bugs.debian.org". Bugs.debian.org. Retrieved 2013-11-06.
6. Jump up ^ "crontab(5): tables for driving cron - Linux man page". Linux.die.net. Retrieved 2013-11-06.
7. Jump up ^ "[arch-general] [arch-dev-public] Cron". Mailman.archlinux.org. 2010-01-05. Retrieved
2013-11-06.
8. Jump up ^ "Ubuntu Cron Howto". Help.ubuntu.com. 2013-05-04. Retrieved 2013-11-06.
9. Jump up ^ "CronTrigger Tutorial". Quartz Scheduler Website. Retrieved 24 October 2011.
10. Jump up ^ "mcron crontab reference". Gnu.org. Retrieved 2013-11-06.
11. Jump up ^ "Oracle® Role Manager Integration Guide". Docs.oracle.com. Retrieved 2013-11-06.
12. Jump up ^ "Cron format". nnBackup. Retrieved 2014-05-27.
External links
• crontab: schedule periodic background work – Commands & Utilities Reference, The
Single UNIX® Specification, Issue 7 from The Open Group
• GNU cron (mcron)
• ISC Cron 4.1
• cronie
• ACM Digital library – Franta, Maly, "An efficient data structure for the simulation event
set" (requires ACM pubs subscription)
• Crontab syntax tutorial - Crontab syntax explained
• UNIX / Linux cron tutorial - a quick tutorial for UNIX like operating systems with
sample shell scripts.
• Dillon's cron (dcron)
• Cron Expression translator - Small site that tries to convert a cron tab expression to
English
• CronSandbox - Cron simulator, try out crontab timing values in a sandbox. - Put in the
cron time/date values and get back a list of future run-times.
• Crontab syntax generator - Choose date/time, enter command, output path and Email
address (for receiving notification) to generate a Crontab syntax.
• CronKeep - Web-based crontab manager that allows running cron jobs on demand or
adding new ones in a human-friendly way.
• Set cron job in linux
Contents:
A UNIX-Windows phrase book
Operating system history
Security
Essential Windows skills
UNIX users moving to Windows 2000/XP are often unaware of just how powerful this
operating system is. This is a brief and incomplete guide to help them get oriented, with an
emphasis on systems-related topics that ordinary Windows users can often ignore.
For help on most Windows commands, simply type the command followed by: /?
UNIX                          Windows
cat (to concatenate files)    copy (for details type help copy)
/dev/tty                      CON:
ftp                           ftp
ls, ls -al                    dir
man                           help (also type any command followed by /?)
mkdir                         mkdir
netstat                       netstat, net view
nslookup                      nslookup
Perl and Python               available for Windows too, with the same functionality
ping                          ping
rmdir                         rmdir
set                           set
sort                          sort (pipes and redirection work much the same as in UNIX)
telnet                        telnet
traceroute                    tracert
w, who                        net session
There's a lot more. For further guidance, I recommend the Windows books by Mark Minasi.
• Windows 1.0, 2.0, 3.0, and 3.1, a graphical user interface for DOS that grew until it
almost became a separate operating system.
• Windows NT, 2000, and XP, a multitasking operating system derived from the
Microsoft/IBM OS/2 project (though the code is not derived from OS/2, despite
similarities). Windows NT was originally architected by Dave Cutler, designer of
VAX/VMS.
• Windows 95, 98, and ME, a compromise between the two products, distinctly less
reliable but compatible with DOS device drivers (which the NT line isn't).
In my opinion, only the NT/2000/XP product line is the real thing, and that's what this
document is about.
Windows is bigger, newer, and more elaborate than UNIX. The days of DOS are long gone,
but there are still people who don't realize Windows is a fully multitasking, virtual-memory
operating system, and some who aren't even aware that it has a command prompt.
Different architectures
When UNIX was invented, recursion was an exciting new concept, and structured programming
was just getting started. Internally, UNIX relies on arrays and text files as its fundamental data
structures.
Windows is built on considerably newer technology. Object orientation and default inheritance
pervade it. For example, file and directory permissions are inherited from the parent directory.
Also, under UNIX you can only assign file permissions to everyone, group members, or the file
owner. Under Windows you manipulate Access Control Lists (ACLs) which let you assign
permissions to any user, set of users, computer, or combination thereof.
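For example, the classic command-line tool cacls can edit a file's ACL; the following
grants the user Alice read access while editing (/E) rather than replacing the existing
entries (file and user names are hypothetical):

cacls report.txt /E /G Alice:R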
UNIX system administration requires the sysadmin to memorize, or look up, a large number of
arbitrary codes, file names, and the like. Windows system administration is largely menu-driven.
Security
It is often claimed that Windows is harder to keep secure than UNIX. My experience has been
the opposite, partly because a Windows system normally offers less for the outside user to break
into. (For instance, Telnet service is normally turned off, and many of the functions of the
machine simply aren't available from outside.) To keep Windows secure:
• Don't log in with administrator privileges if you don't have to. Make yourself a non-
administrator account for everyday use. That greatly limits the damage a virus can do.
• Use NTFS, not FAT or FAT32 (DOS-compatible) filesystems. To convert a disk to
NTFS, use a command such as:
convert c: /fs:ntfs
• Set reasonable permissions on all files and directories. Do not give "write" or "full
control" permission to "Everyone" unless you have a good reason.
• Run antivirus software and keep it up to date.
• Use Microsoft's Windows Update facility (built in) and keep it up to date (or even use the
newer Automatic Updates option).
• If you run IIS (the HTTP and FTP server), run URLSCAN (available free from
Microsoft) to block abnormal or suspicious HTTP requests.
To get to a command prompt, look under Start, Programs, Accessories. (It is wise to put
shortcuts to the Command Prompt in lots of convenient places.)
To adjust the properties of almost any object, right-click on it and choose Properties.
To select multiple objects, select the first one, then hold down Ctrl while clicking on each of the
others.
To select a whole range of objects, select the first, then hold down Shift while selecting the last.
Many Windows programs display tables which you can sort by clicking on the headings (e.g.,
Name). That's how to sort files by name or type, sort e-mail messages by name or date, and so
on.
The content and opinions expressed on this Web page do not necessarily reflect the views of,
nor are they endorsed by, the University of Georgia or the University System of Georgia.
History of Unix
From Wikipedia, the free encyclopedia
The history of Unix dates back to the mid-1960s when the Massachusetts Institute of
Technology, AT&T Bell Labs, and General Electric were jointly developing an experimental
time sharing operating system called Multics for the GE-645 mainframe.[1] Multics introduced
many innovations, but had many problems.
Bell Labs, frustrated by the size and complexity of Multics but not the aims, slowly pulled out of
the project. Their last researchers to leave Multics, Ken Thompson, Dennis Ritchie, Doug
McIlroy, and Joe Ossanna among others,[2] decided to redo the work on a much smaller scale.[3] In
1979, Dennis Ritchie described their vision for Unix:[3]
What we wanted to preserve was not just a good environment in which to do programming, but a
system around which a fellowship could form. We knew from experience that the essence of
communal computing, as supplied by remote-access, time-shared machines, is not just to type
programs into a terminal instead of a keypunch, but to encourage close communication.
Contents
• 1 1969
• 2 1970s
• 3 1980s
o 3.1 Standardization and the Unix wars
• 4 1990s
• 5 2000s
• 6 See also
• 7 References
• 8 Further reading
• 9 External links
1969
In the late 1960s, Bell Labs was involved in a project with MIT and General Electric to develop
a time-sharing system, called Multiplexed Information and Computing Service (Multics),
allowing multiple users to access a mainframe simultaneously. Dissatisfied with the project's
progress, Bell Labs management ultimately withdrew.
Ken Thompson, a programmer in the Labs' computing research department, had worked on
Multics. He decided to write his own operating system. While he still had access to the Multics
environment, he wrote simulations for the new file and paging system[clarification needed] on it. He also
programmed a game called Space Travel, but it needed a more efficient and less expensive
machine to run on, and eventually he found a little-used PDP-7 at Bell Labs.[4][5] On the PDP-7, in
1969, a team of Bell Labs researchers led by Thompson and Ritchie, including Rudd Canaday,
developed a hierarchical file system, the concepts of computer processes and device files, a
command-line interpreter, and some small utility programs.[3] The resulting system, much smaller
than the envisioned Multics system, was to become Unix. In about a month's time, Thompson
had implemented a self-hosting operating system with an assembler, editor and shell, using a
GECOS machine for bootstrapping.[6]
1970s
The new operating system was initially without organizational backing, and also without a name.
In 1970, Peter G. Neumann coined the project name Unics (UNiplexed Information and
Computing Service) as a pun on Multics (Multiplexed Information and Computer Services): the
new operating system was an emasculated Multics.[7] McIlroy attributes the name Unix to
Brian W. Kernighan, whom he also credits with popularizing Thompson's Unix philosophy.[8]
When the Computing Sciences Research Center wanted to use Unix on a machine larger than the
PDP-7, while another department needed a word processor, Thompson and Ritchie added text
processing capabilities to Unix and received funding for a PDP-11/20.[5] For the first time in
1970, the Unix operating system was officially named and ran on the PDP-11/20. A text
formatting program called roff and a text editor were added. All three were written in PDP-11/20
assembly language. Bell Labs used this initial text processing system, consisting of Unix, roff,
and the editor, for text processing of patent applications. Roff soon evolved into troff, the first
electronic publishing program with full typesetting capability.
As the system grew in complexity and the research team wanted more users, the need for a
manual grew apparent. The UNIX Programmer's Manual was published on 3 November 1971;
commands were documented in the "man page" format that is still used, offering terse reference
information about usage as well as bugs in the software, and listing the authors of programs to
channel questions to them.[8]
After other Bell Labs departments purchased PDP-11s, they also chose to run Unix instead of
DEC's own operating system. By Version 4 it was widely used within the laboratory and a Unix
Support Group was formed, helping the operating system survive by formalizing its
distribution.[5][8]
In 1972, Unix was rewritten in the higher-level language C, contrary to the general notion at the
time that an operating system's complexity and sophistication required it to be written in
assembly language.[9][5] The C language appeared as part of Version 2. Thompson and Ritchie
were so influential on early Unix that McIlroy estimated that they wrote and debugged about
100,000 lines of code that year, stating that "[their names] may safely be assumed to be attached
to almost everything not otherwise attributed".[8] Although assembly did not disappear from the
man pages until Version 8,[8] the migration to C resulted in much more portable software,
requiring only a relatively small amount of machine-dependent code to be replaced when porting
Unix to other computing platforms.
The Unix operating system was first presented formally to the outside world at the 1973
Symposium on Operating Systems Principles, where Ritchie and Thompson delivered a paper.
This led to requests for the system, but under a 1956 consent decree in settlement of an antitrust
case, the Bell System (the parent organization of Bell Labs) was forbidden from entering any
business other than "common carrier communications services", and was required to license any
patents it had upon request.[6] Unix could not, therefore, be turned into a product. Bell Labs
instead shipped the system for the cost of media and shipping.[6] Ken Thompson quietly began
answering requests by shipping out tapes and disks, each accompanied by – according to
legend – a note signed, "Love, Ken”.[10]
In 1973, AT&T released Version 5 Unix and licensed it to educational institutions, and licensed
1975's Version 6 to companies for the first time. While commercial users were rare because of
the US$20,000 (equivalent to $89,017 in 2016) cost, the latter was the most widely used version
into the early 1980s. Anyone could purchase a license, but the terms were very restrictive;
licensees only received the source code, on an as is basis.[11] The licenses also included the
machine-dependent parts of the kernel, written in PDP-11 assembly language. Copies of the
Lions' Commentary on UNIX 6th Edition, with Source Code circulated widely, which led to
considerable use of Unix as an educational example. The first meeting of Unix users took place
in New York in 1974, attracting a few dozen people; this would later grow into the USENIX
organization. The importance of the user group stemmed from the fact that Unix was entirely
unsupported by AT&T.[6]
Versions of the Unix system were determined by editions of its user manuals;[11] for example,
"Fifth Edition UNIX" and "UNIX Version 5" have both been used to designate the same version.
The Bell Labs developers did not think in terms of "releases" of the operating system, instead
using a model of continuous development, and sometimes distributing tapes with patches
(without AT&T lawyers' approval).[6] Development expanded, adding the concept of pipes, which
led to the development of a more modular code base, and quicker development cycles. Version 5,
and especially Version 6, led to a plethora of different Unix versions both inside and outside Bell
Labs, including PWB/UNIX and the first commercial Unix, IS/1.
Unix still only ran on DEC systems.[11] As more of the operating system was rewritten in C (and
the C language extended to accommodate this), portability also increased; in 1977, Bell Labs
procured an Interdata 8/32 with the aim of porting Unix to a computer that was as different from
the PDP-11 as possible, making the operating system more machine-independent in the process.
Unix next ran as a guest operating system inside a VM/370 hypervisor at Princeton.
Simultaneously, a group at the University of Wollongong ported Unix to the similar Interdata
7/32.[12] Target machines of further Bell Labs ports for research and AT&T-internal use included
an Intel 8086-based computer (with custom-built MMU) and the UNIVAC 1100.[13][5]
In May 1975, ARPA documented the benefits of the Unix time-sharing system which "presents
several interesting capabilities" as an ARPA network mini-host in RFC 681.
In 1978, UNIX/32V was released for DEC's then new VAX system. By this time, over 600
machines were running Unix in some form. Version 7 Unix, the last version of Research Unix to
be released widely, was released in 1979. In Version 7, the number of system calls was only
around 50, although later Unix and Unix-like systems would add many more:[14]
Version 7 of the Research UNIX System provided about 50 system calls, 4.4BSD provided about
110, and SVR4 had around 120. The exact number of system calls varies depending on the
operating system version. More recent systems have seen incredible growth in the number of
supported system calls. Linux 3.2.0 has 380 system calls and FreeBSD 8.0 has over 450.
A microprocessor port of Unix, to the LSI-11, was completed in 1978,[15] and an Intel 8086
version was reported to be "in progress" the same year.[12] The first microcomputer versions of
Unix, and Unix-like operating systems like Whitesmiths' Idris, appeared in the late 1970s.[11]
1980s
USENIX 1984 Summer speakers. USENIX was founded in 1975, focusing primarily on the
study and development of Unix and similar systems.
Bell developed multiple versions of Unix for internal use, such as CB UNIX (with improved
support for databases) and PWB/UNIX, the "Programmer's Workbench", aimed at large groups
of programmers. It advertised the latter version, as well as 32V and V7, stating that "more than
800 systems are already in use outside the Bell System" in 1980,[16] and "more than 2000" the
following year.[17] Research Unix versions 8, 9, and 10 were developed through the 1980s but
were only released to a few universities, though they did generate papers describing the new
work. This research led to the development of Plan 9 from Bell Labs, a new portable distributed
system.
By the early 1980s, thousands of people used Unix at AT&T and elsewhere, and as computer
science students moved from universities into companies they wanted to continue to use it.
Observers began to see Unix as a potential universal operating system, suitable for all computers.
Less than 20,000 lines of code – almost all in C – composed the Unix kernel as of 1983, and
more than 75% was not machine-dependent. By that year Unix or a Unix-like system was
available for at least 16 different processors and architectures from about 60 vendors; BYTE
noted that computer companies "may support other [operating] systems, but a Unix
implementation always happens to be available",[5][11][18] and that DEC and IBM supported Unix as
an alternative to their proprietary operating systems.[19]
Microcomputer Unix became commercially available in 1980, when Onyx Systems released its
Zilog Z8000-based C8002[11] and Microsoft announced its first Unix for 16-bit microcomputers
called Xenix, which the Santa Cruz Operation (SCO) ported to the 8086 processor in 1983. Other
companies began to offer commercial versions of Unix for their own minicomputers and
workstations. Many of these new Unix flavors were developed from the System V base under a
license from AT&T; others were based on BSD. One of the leading developers of BSD, Bill Joy,
went on to co-found Sun Microsystems in 1982 and created SunOS for its workstations.
AT&T announced UNIX System III – based on Version 7, and PWB – in 1981. Licensees could
sell binary sublicenses for as little as US$100 (equivalent to $263.43 in 2016), which observers
believed indicated that AT&T now viewed Unix as a commercial product.[11] This also included
support for the VAX. AT&T continued to issue licenses for older Unix versions. To end the
confusion between all its differing internal versions, AT&T combined them into UNIX System V
Release 1. This introduced a few features such as the vi editor and curses from the Berkeley
Software Distribution of Unix developed at the University of California, Berkeley Computer
Systems Research Group. This also included support for the Western Electric 3B series of
machines. AT&T provided support for System III and System V through the Unix Support
Group (USG), and these systems were sometimes referred to as USG Unix.[citation needed]
In 1983, the U.S. Department of Justice settled its second antitrust case against AT&T and broke
up the Bell System. This relieved AT&T of the 1956 consent decree that had prevented them
from turning Unix into a product. AT&T promptly rushed to commercialize Unix System V, a
move that nearly killed Unix.[10] The GNU Project was founded in the same year by Richard
Stallman.
Since the newer commercial UNIX licensing terms were not as favorable for academic use as the
older versions of Unix, the Berkeley researchers continued to develop BSD Unix as an
alternative to UNIX System III and V. Many contributions to Unix first appeared in BSD
releases, notably the C shell with job control (modelled on ITS). Perhaps the most important
aspect of the BSD development effort was the addition of TCP/IP network code to the
mainstream Unix kernel. The BSD effort produced several significant releases that contained
network code: 4.1cBSD, 4.2BSD, 4.3BSD, 4.3BSD-Tahoe ("Tahoe" being the nickname of the
Computer Consoles Inc. Power 6/32 architecture that was the first non-DEC release of the BSD
kernel), Net/1, 4.3BSD-Reno (to match the "Tahoe" naming, and that the release was something
of a gamble), Net/2, 4.4BSD, and 4.4BSD-lite. The network code found in these releases is the
ancestor of much TCP/IP network code in use today, including code that was later released in
AT&T System V UNIX and early versions of Microsoft Windows. The accompanying Berkeley
sockets API is a de facto standard for networking APIs and has been copied on many platforms.
During this period, many observers expected that UNIX, with its portability, rich capabilities,
and support from companies like DEC and IBM, was likely to become an industry-standard
operating system for microcomputers.[19][20] Citing its much smaller software library and installed
base than that of MS-DOS and the IBM PC, others expected that customers would prefer
personal computers on local area networks to Unix multiuser systems.[21] Microsoft planned to
make Xenix MS-DOS's multiuser successor;[11] by 1983 a Xenix-based Altos 586 with 512 KB
RAM and 10 MB hard drive cost US$8,000 (equivalent to $19,237 in 2016).[22] BYTE reported
that the Altos "under moderate load approaches DEC VAX performance for most tasks that a
user would normally invoke", while other computers from Sun and Masscomp were much more
expensive but equaled the VAX. The magazine added that both PC/IX and Venix on the IBM PC
outperformed Venix on the PDP-11/23.[19] uNETix, a commercial microcomputer Unix,
implemented the first Unix color windowing system.[citation needed]
In 1986, Computerworld wrote that "Until very recently, almost no one associated Unix with
corporate data processing. [...] the operating system traveled almost exclusively in academic and
technical circles ... But now — almost entirely because of strenuous efforts by AT&T — some
people are beginning to perceive Unix as a viable option for large commercial installations."
Unix reached the mainframe; while Amdahl UTS had been available for several years, IBM
started offering Unix as VM/IX. The total installed base of Unix, however, remained small at
some 230,000 machines.[23]:37,44
Despite its academic reputation – InfoWorld stated in 1989, "Until recently, Unix conjured up
visions of long-haired bearded technoids stuck in the bowels of an R&D lab, coding software
until the wee hours of the morning" – the increasing power of microcomputers in the late 1980s,
and in particular the introduction of the 32-bit Intel 80386, caused Unix to "explode" in
popularity for business applications; Xenix, 386/ix, and other Unix systems for the PC-
compatible market competed with OS/2 in terms of networking, multiuser support, multitasking,
and MS-DOS compatibility.[24]
During this time a number of vendors including Digital Equipment, Sun, Addamax and others
began building trusted versions of UNIX for high security applications, mostly designed for
military and law enforcement applications.
A problem that plagued Unix in this period was the multitude of implementations, based on
either System V, BSD, or what Poul-Henning Kamp later described as a "more or less
competently executed" combination of the two,[25] usually with home-grown extensions to the
base systems from AT&T or Berkeley.[23]:38 Xenix was effectively a third lineage, being based on
the earlier System III.[26] The rivalry between vendors was called the Unix wars; customers soon
demanded standardization.[26]
AT&T responded by issuing a standard, the System V Interface Definition (SVID, 1985), and
required conformance for operating systems to be branded "System V". In 1984, several
European computer vendors established the X/Open consortium with the goal of creating an
open system specification based on Unix (and eventually the SVID).[27] Yet another
standardization effort was the IEEE's POSIX specification (1988), designed as a compromise
API readily implemented on both BSD and System V platforms. POSIX was soon[when?] mandated
by the United States government for many of its own systems.[citation needed]
In the spring of 1988, AT&T took the standardization a step further. First, it collaborated with
SCO to merge System V and Xenix into System V/386.[26] Next, it sought collaboration with Sun
Microsystems (vendor of the 4.2BSD derivative SunOS and its Network File System) to merge
System V, BSD/SunOS and Xenix into a single unified Unix, which would become System V
Release 4. AT&T and Sun, as UNIX International, acted independently of X/Open and drew ire
from other vendors, which started the Open Software Foundation to work on their own unified
Unix, OSF/1, ushering in a new phase of the Unix wars.[26]
1990s
Unix workstations of the 1990s, including those made by DEC, HP, SGI, and Sun
The Common Desktop Environment (CDE) was widely used on Unix workstations.
The Unix wars continued into the 1990s, but turned out to be less serious of a threat than it
originally looked: AT&T and Sun went their own ways after System V.4, while OSF/1's
schedule slipped behind.[26] By 1993, most commercial vendors had changed their variants of
Unix to be based on System V with many BSD features added. The creation of the Common
Open Software Environment (COSE) initiative that year, by the major players in Unix, marked
the end of the most notorious phase of the Unix wars, and was followed by the merger of UI and
OSF in 1994. The new combined entity retained the OSF name and stopped work on OSF/1. By
that time the only vendor using it was Digital Equipment Corporation, which continued its own
development, rebranding their product Digital UNIX in early 1995. POSIX became the unifying
standard for Unix systems (and some other operating systems).[26]
Meanwhile, the BSD world saw its own developments. The group at Berkeley moved its
operating system toward POSIX compliance and released a stripped down version of its
networking code, supposedly without any code that was the property of AT&T. In 1991, a group
of BSD developers (Donn Seeley, Mike Karels, Bill Jolitz, and Trent Hein) left the University of
California to found Berkeley Software Design, Inc. (BSDi), which sold a fully functional
commercial version of BSD Unix for the Intel platform, which they advertised as free of AT&T
code. They ran into legal trouble when AT&T's Unix subsidiary sued BSDi for copyright
infringement and various other charges in relation to BSD; subsequently, the University of
California countersued.[28] Shortly after it was founded, Bill Jolitz left BSDI to pursue distribution
of 386BSD, the free software ancestor of FreeBSD, OpenBSD, and NetBSD.
Shortly after UNIX System V Release 4 was produced, AT&T sold all its rights to UNIX to
Novell. Dennis Ritchie likened this sale to the Biblical story of Esau selling his birthright for the
mess of pottage.[29] Novell developed its own version, UnixWare, merging its NetWare with
UNIX System V Release 4. Novell tried to use this as a marketing tool against Windows NT, but
their core markets suffered considerably. It also quickly settled the court battles with BSDi and
Berkeley.[28]
In 1993, Novell decided to transfer the UNIX trademark and certification rights to the X/Open
Consortium.[30] In 1996, X/Open merged with OSF, creating the Open Group. Various standards
by the Open Group now define what is and what is not a UNIX operating system, notably the
post-1998 Single UNIX Specification.
In 1995, the business of administering and supporting the existing UNIX licenses, plus rights to
further develop the System V code base, was sold by Novell to the Santa Cruz Operation.[31]
Whether Novell also sold the copyrights would later become the subject of litigation (see below).
With the legal troubles between AT&T/Novell and the University of California over, the latter
made two more releases of BSD before disbanding its Computer Systems Research Group in 1995.
The BSD code lived on, however, in its free derivatives and in what Garfinkel et al. call a second
generation of commercial Unix systems, based on BSD. The first exponent of these was BSDi's
offering, popular at internet service providers but eventually not successful enough to sustain the
company.[26]:22 The other main exponent would be Apple Computer.
In 1997, Apple sought a new foundation for its Macintosh operating system and chose
NEXTSTEP, an operating system developed by NeXT. The core operating system, which was
based on BSD and the Mach kernel, was renamed Darwin after Apple acquired it. The
deployment of Darwin in Mac OS X makes it, according to a statement made by an Apple
employee at a USENIX conference, the most widely used Unix-based system in the desktop
computer market.
Meanwhile, Unix faced competition from the open source Linux operating system, a
reimplementation of Unix from scratch, using parts of the GNU project that had been underway
since the mid-1980s. Work on Linux proper was begun in 1991 by Linus Torvalds; in 1998, a
confidential memo at Microsoft stated, "Linux is on track to eventually own the x86 UNIX
market," and further predicted, "I believe that Linux – moreso than NT – will be the biggest
threat to SCO in the near future."[32]
2000s
In 2000, SCO sold its entire UNIX business and assets to Caldera Systems, which later changed
its name to The SCO Group.
The bursting of the dot-com bubble (2001–03) led to significant consolidation of versions of
Unix. Of the many commercial variants of Unix that were born in the 1980s, only Solaris, HP-
UX, and AIX were still doing relatively well in the market, though SGI's IRIX persisted for quite
some time. Of these, Solaris had the largest market share in 2005.[33]
In 2003, the SCO Group started legal action against various users and vendors of Linux. SCO
alleged that Linux contained copyrighted Unix code now owned by the SCO Group. Other
allegations included trade-secret violations by IBM and contract violations by former Santa Cruz
customers who had since converted to Linux. However, Novell disputed the SCO Group's claim
to hold copyright on the UNIX source base. According to Novell, SCO (and hence the SCO
Group) are effectively franchise operators for Novell, which also retained the core copyrights,
veto rights over future licensing activities of SCO, and 95% of the licensing revenue. The SCO
Group disagreed with this, and the dispute resulted in the SCO v. Novell lawsuit. On 10 August
2007, a major portion of the case was decided in Novell's favor (that Novell had the copyright to
UNIX, and that the SCO Group had improperly kept money that was due to Novell). The court
also ruled that "SCO is obligated to recognize Novell's waiver of SCO's claims against IBM and
Sequent". After the ruling, Novell announced they have no interest in suing people over Unix
and stated, "We don't believe there is Unix in Linux".[34][35][36] SCO successfully got the 10th
Circuit Court of Appeals to partially overturn this decision on 24 August 2009 which sent the
lawsuit back to the courts for a jury trial.[37][38][39]
On 30 March 2010, following a jury trial, Novell, and not The SCO Group, was "unanimously
[found]" to be the owner of the UNIX and UnixWare copyrights.[40] The SCO Group, through
bankruptcy trustee Edward Cahn, decided to continue the lawsuit against IBM for causing a
decline in SCO revenues.[41]
In 2005, Sun Microsystems released the bulk of its Solaris system code (based on UNIX System
V Release 4) into an open source project called OpenSolaris. New Sun OS technologies, notably
the ZFS file system, were first released as open source code via the OpenSolaris project. Soon
afterwards, OpenSolaris spawned several non-Sun distributions. In 2010, after Oracle acquired
Sun, OpenSolaris was officially discontinued, but the development of derivatives continued.
Since the early 2000s, Linux has been the leading Unix-like operating system, with other Unix
variants holding only a negligible market share (see Usage share of operating systems).
References
1. Stuart, Brian L. (2009). Principles of Operating Systems: Design & Applications. Boston,
Massachusetts: Thompson Learning. p. 23. ISBN 1-4188-3769-5.
2. "In the Beginning: Unix at Bell Labs".
3. Ritchie, Dennis M. (1984). "The Evolution of the Unix Time-sharing System". AT&T Bell
Laboratories Technical Journal. 63 (6 Part 2): 1577–93. Archived from the original on 8 April 2015.
4. "The Creation of the UNIX* Operating System: The famous PDP-7 comes to the rescue". Bell-
labs.com. Archived from the original on 2 April 2014. Retrieved 20 April 2015.
5. "The History of Unix". BYTE. August 1983. p. 188. Retrieved 31 January 2015.
6. Salus, Peter H. (2005). The Daemon, the Gnu and the Penguin. Groklaw.
7. Salus, Peter H. (1994). A Quarter Century of UNIX. Addison Wesley. p. 9. ISBN 0-201-54777-5.
8. McIlroy, M. D. (1987). A Research Unix Reader: Annotated Excerpts from the Programmer's
Manual, 1971–1986 (PDF) (Technical report). CSTR. Bell Labs. 139.
9. Stallings, William (2005). Operating Systems: Internals and Design Principles (5th ed.).
Pearson Education. p. 91. ISBN 8131703045.
10. "Origins and History of Unix, 1969–1995". Faqs.org. Retrieved 9 November 2010.
11. Fiedler, Ryan (October 1983). "The Unix Tutorial / Part 3: Unix in the Microcomputer
Marketplace". BYTE. p. 132. Retrieved 30 January 2015.
12. Johnson, Stephen C.; Ritchie, Dennis M. (1978). "Portability of C Programs and the UNIX
System". Bell System Technical Journal. 57 (6): 2021–48. doi:10.1002/j.1538-7305.1978.tb02141.x.
13. Bodenstab, D. E.; Houghton, T. F.; Kelleman, K. A.; Ronkin, G.; Schan, E. P. (1984). "UNIX
Operating System Porting Experiences". AT&T Bell Laboratories Technical Journal. 63 (8): 1769–90.
doi:10.1002/j.1538-7305.1984.tb00064.x.
14. Stevens, W. Richard; Rago, Stephen A. (2013). "1.11 System Calls and Library Functions".
Advanced Programming in the UNIX Environment (3rd ed.). Addison-Wesley. p. 21. ISBN 032163800X.
15. Lycklama, Heinz (1978). "UNIX Time-Sharing System: UNIX on a Microprocessor". Bell
System Technical Journal. 57 (6): 2087–2101. doi:10.1002/j.1538-7305.1978.tb02143.x.
16. Bell System Software (April 1980). "(Advertisement)" (PDF). Australian Unix Users Group
Newsletter. 2 (4). p. 8.
17. Ritchie, Dennis M. "Unix Advertising". Former Bell Labs Computing and Mathematical
Sciences Research. Retrieved 17 February 2014.
18. Tilson, Michael (October 1983). "Moving Unix to New Machines". BYTE. p. 266. Retrieved 31
January 2015.
19. Hinnant, David F. (August 1984). "Benchmarking UNIX Systems". BYTE. pp. 132–135,
400–409. Retrieved 23 February 2016.
Further reading
Books
• Salus, Peter H. (1994). A Quarter Century of UNIX. Addison Wesley. ISBN 0-201-54777-
5.
Database transaction
From Wikipedia, the free encyclopedia
A transaction symbolizes a unit of work performed within a database management system (or
similar system) against a database, and treated in a coherent and reliable way independent of
other transactions. A transaction generally represents any change in a database. Transactions in a
database environment have two main purposes:
1. To provide reliable units of work that allow correct recovery from failures and keep a
database consistent even in cases of system failure, when execution stops (completely or
partially) and many operations upon a database remain uncompleted, with unclear status.
2. To provide isolation between programs accessing a database concurrently. If this
isolation is not provided, the programs' outcomes are possibly erroneous.
Contents
• 1 Purpose
• 2 Transactional databases
o 2.1 In SQL
• 3 Object databases
• 4 Distributed transactions
• 5 Transactional filesystems
• 6 See also
• 7 References
• 8 Further reading
• 9 External links
Purpose
Databases and other data stores which treat the integrity of data as paramount often include the
ability to handle transactions to maintain the integrity of data. A single transaction consists of
one or more independent units of work, each reading and/or writing information to a database or
other data store. In such cases, it is important to ensure that all such processing leaves
the database or data store in a consistent state.
Examples from double-entry accounting systems often illustrate the concept of transactions. In
double-entry accounting every debit requires the recording of an associated credit. If one writes a
check for $100 to buy groceries, a transactional double-entry accounting system must record the
following two entries to cover the single transaction:
• Debit $100 to the Groceries (expense) account
• Credit $100 to the Checking (asset) account
A transactional system ensures that either both entries succeed or both fail. By treating the
recording of multiple entries as an atomic unit of work, the system maintains the integrity of
the data recorded: nobody ends up with a situation in which a debit is recorded without its
associated credit, or vice versa.
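As a minimal sketch, the grocery-check example might be wrapped in a single SQL transaction as
follows; the accounts table and the account names are hypothetical illustrations, not part of any
particular system:

    BEGIN;                                -- start the transaction
    UPDATE accounts SET balance = balance - 100.00
           WHERE name = 'Checking';       -- credit the asset account
    UPDATE accounts SET balance = balance + 100.00
           WHERE name = 'Groceries';      -- debit the expense account
    COMMIT;                               -- both entries persist together

If either UPDATE fails, issuing ROLLBACK instead of COMMIT undoes the other, so the books
never show a debit without its matching credit.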
Transactional databases
A transactional database is a DBMS in which write transactions on the database can be rolled
back if they are not completed properly (e.g., due to power or connectivity loss).
Most modern relational database management systems fall into the category of databases that
support transactions.
If no errors occurred during the execution of the transaction then the system commits the
transaction. A transaction commit operation applies all data manipulations within the scope of
the transaction and persists the results to the database. If an error occurs during the transaction,
or if the user specifies a rollback operation, the data manipulations within the transaction are not
persisted to the database. In no case can a partial transaction be committed to the database since
that would leave the database in an inconsistent state.
Internally, multi-user databases store and process transactions, often by using a transaction ID or
XID.
Transactions can be implemented in ways beyond the simple model documented above. Nested
transactions, for example, are transactions that contain statements which themselves start new
transactions (i.e., sub-transactions). Multi-level transactions are a variant of nested
transactions in which the sub-transactions take place at different levels of a layered system
architecture (e.g., one operation at the database-engine level, another at the operating-system
level).[2] Another type of transaction is the compensating transaction.
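Many SQL implementations approximate sub-transactions with savepoints. A minimal sketch, with
hypothetical table and column names, that rolls back only the inner unit of work:

    BEGIN;
    UPDATE orders SET status = 'PAID' WHERE id = 42;      -- outer work
    SAVEPOINT ship;                                       -- sub-transaction begins
    UPDATE inventory SET qty = qty - 1 WHERE sku = 'ABC'; -- inner work
    ROLLBACK TO SAVEPOINT ship;                           -- undo only the inner work
    COMMIT;                                               -- the outer UPDATE still commits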
In SQL
Transactions are available in most SQL database implementations, though with varying levels of
robustness. (MySQL, for example, does not support transactions in the MyISAM storage engine,
which was its default storage engine before version 5.5.)
A transaction is typically started using the command BEGIN (although the SQL standard specifies
START TRANSACTION). When the system processes a COMMIT statement, the transaction ends with
successful completion. A ROLLBACK statement can also end the transaction, undoing any work
performed since the transaction began. If autocommit was disabled when the transaction started,
autocommit is re-enabled when the transaction ends.
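A minimal sketch of these commands, using a hypothetical audit_log table:

    START TRANSACTION;                     -- the SQL-standard spelling of BEGIN
    INSERT INTO audit_log (event) VALUES ('login');
    COMMIT;                                -- makes the INSERT permanent
    -- Had an error occurred before COMMIT, ROLLBACK would instead have
    -- discarded all work performed since START TRANSACTION.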
The isolation level can be set for individual transactional operations as well as globally. At the
READ COMMITTED level, work done inside a transaction that has commenced but not yet ended
remains invisible to other database users until the transaction ends. At the lowest level
(READ UNCOMMITTED), which may occasionally be used to achieve higher concurrency, such
uncommitted changes are visible to other transactions.
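Standard SQL lets the isolation level be set per transaction, though the exact syntax and where
the statement may appear vary by implementation. A sketch against the same hypothetical
accounts table:

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- applies to the next transaction
    START TRANSACTION;
    SELECT balance FROM accounts WHERE name = 'Checking';
    COMMIT;
    -- READ UNCOMMITTED trades safety for concurrency: other transactions'
    -- uncommitted changes become visible (dirty reads).
    -- SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;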
Object databases
Relational databases traditionally consist of tables with fixed-size fields, and thus fixed-size
records. Object databases instead store variable-sized blobs, possibly serialized or tagged with a
MIME type. The fundamental similarity, though, is the transaction pattern: a start followed by a
commit or rollback.
After a transaction starts, database records or objects are locked, either read-only or
read-write, and the actual reads and writes can then occur. Once the user and application are
satisfied, the changes are committed or rolled back atomically, so that at the end of the
transaction there is no inconsistency.
Distributed transactions
Transactional filesystems
The Namesys Reiser4 filesystem for Linux[3] supports transactions, and as of Microsoft Windows
Vista, the Microsoft NTFS filesystem[4] supports distributed transactions across networks.
See also
• Concurrency control
References
1. A transaction is a group of operations that are atomic, consistent, isolated, and
durable (ACID).
2. Beeri, C.; Bernstein, P. A.; Goodman, N. (1989). "A Model for Concurrency in Nested
Transaction Systems". Journal of the ACM. 36 (1): 230–269.
3. namesys.com
4. "MSDN Library". Retrieved 16 October 2014. [dead link]
Further reading
• Philip A. Bernstein, Eric Newcomer (2009): Principles of Transaction Processing, 2nd
Edition, Morgan Kaufmann (Elsevier), ISBN 978-1-55860-623-4
• Gerhard Weikum, Gottfried Vossen (2001), Transactional information systems: theory,
algorithms, and the practice of concurrency control and recovery, Morgan Kaufmann,
ISBN 1-55860-508-8
External links
• c2:TransactionProcessing
• https://docs.oracle.com/database/121/CNCPT/transact.htm#CNCPT016
• https://docs.oracle.com/cd/B28359_01/server.111/b28318/transact.htm