Linux CPU

Uploaded by

jivamoy145
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
17 views12 pages

Linux CPU

Uploaded by

jivamoy145
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 12

Shells usually understand a number of special characters, such as:

Ampersand (&)
Placed at the end of a command, executes the command in the
background (see also "Job control")
Backslash (\)
Used to continue a command on the next line, for better readability of
long commands
Pipe (|)
Connects the stdout of one process with the stdin of the next process,
allowing you to pass data along without having to store it in a
temporary file
PIPES AND THE UNIX PHILOSOPHY
While pipes might seem not too exciting at first glance, there’s much
more to them. I once had a nice interaction with Doug McIlroy, the
inventor of pipes. I wrote an article, “Revisiting the Unix Philosophy in
2018”, in which I drew parallels between UNIX and microservices.
Someone commented on the article, and that comment led to Doug
sending me an email (very unexpectedly, and I had to verify to believe
it) to clarify things.
Again, let’s see some of the theoretical content in action. Let’s try to figure
out how many lines an HTML file contains by downloading it using curl
and then piping the content to the wc tool:
$ curl https://example.com 2> /dev/null | \
wc -l
46
Use curl to download the content from the URL, and discard the status
that it outputs on stderr. (Note: in practice, you’d use the -s option of
curl, but we want to learn how to apply our hard-gained knowledge,
right?)
The stdout of curl is fed to stdin of wc, which counts the number of
lines with the -l option.
Now that you have a basic understanding of commands, streams, and
redirection, let’s move on to another core shell feature, the handling of
variables.
Variables
A term you will come across often in the context of shells is variables.
Whenever you don't want to or cannot hardcode a value, you can use a
variable to store and change a value. Use cases include the following:
When you want to handle configuration items that Linux exposes—for
example, the place where the shell looks for executables captured in
the $PATH variable. This is kind of an interface where a variable might
be read/write.
When you want to interactively query the user for a value, say, in the
context of a script.
When you want to shorten input by defining a long value once—for
example, the URL of an HTTP API. This use case roughly corresponds
to a const value in a programming language since you don't change the
value after you have declared the variable.
We distinguish between two kinds of variables:
Environment variables
Shell-wide settings; list them with env.
Shell variables
Valid in the context of the current execution; list with set in bash. Shell
variables are not inherited by subprocesses.
You can, in bash, use export to create an environment variable. When you
want to access the value of a variable, put a $ in front of it, and when you
want to get rid of it, use unset.
OK, that was a lot of information. Let’s see how that looks in practice (in
bash):
$ MY_VAR=42
$ set | grep MY_VAR
MY_VAR=42
$ export MY_GLOBAL_VAR="fun with vars"
$ set | grep 'MY_*'
MY_GLOBAL_VAR='fun with vars'
MY_VAR=42
$ env | grep 'MY_*'
MY_GLOBAL_VAR=fun with vars
$ bash
$ echo $MY_GLOBAL_VAR
fun with vars
$ set | grep 'MY_*'
MY_GLOBAL_VAR='fun with vars'
$ exit
$ unset MY_VAR
$ set | grep 'MY_*'
MY_GLOBAL_VAR='fun with vars'
Create a shell variable called MY_VAR, and assign a value of 42.
List shell variables and filter for MY_VAR.
Create a new environment variable called MY_GLOBAL_VAR.
List shell variables and filter out all that start with MY_. We see, as
expected, both of the variables we created in the previous steps.
List environment variables. We see MY_GLOBAL_VAR, as we would hope.
Create a new shell session—that is, a child process of the current shell
session that doesn’t inherit MY_VAR.
Access the environment variable MY_GLOBAL_VAR.
List the shell variables, which gives us only MY_GLOBAL_VAR since
we’re in a child process.
Exit the child process, remove the MY_VAR shell variable, and list our
shell variables. As expected, MY_VAR is gone.
In Table 3-1 I put together common shell and environment variables. You
will find those variables almost everywhere, and they are important to
understand and to use. For any of the variables, you can have a look at the
respective value using echo $XXX, with XXX being the variable name.

Table 3-1. Common shell and environment variables
Variable   Type          Semantics
EDITOR     Environment   The path to the program used by default to edit files
HOME       POSIX         The path of the home directory of the current user
HOSTNAME   bash shell    The name of the current host
IFS        POSIX         List of characters to separate fields; used when the shell splits words on expansion
PATH       POSIX         Contains a list of directories in which the shell looks for executable programs (binaries or scripts)
PS1        Environment   The primary prompt string in use
PWD        Environment   The full path of the working directory
OLDPWD     bash shell    The full path of the directory before the last cd command
RANDOM     bash shell    A random integer between 0 and 32767
SHELL      Environment   Contains the currently used shell
TERM       Environment   The terminal emulator used
UID        Environment   Current user unique ID (integer value)
USER       Environment   Current user name
_          bash shell    Last argument to the previous command executed in the foreground
?          bash shell    Exit status; see "Exit status"
$          bash shell    The ID of the current process (integer value)
0          bash shell    The name of the current process
Further, check out the full list of bash-specific variables, and also note that
the variables from Table 3-1 will come in handy again in the context of
“Scripting”.
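To get a feel for the variables in Table 3-1, you can simply echo a few of them; the concrete values shown in the comments are only examples and will differ on your system:

```shell
# Inspect a few of the variables from Table 3-1; the concrete
# values will differ per system and session.
echo "$HOME"      # e.g., /home/jane
echo "$SHELL"     # e.g., /bin/bash
echo "$RANDOM"    # a random integer between 0 and 32767 (bash)
echo "$$"         # process ID of the current shell
echo "$0"         # name of the current process
```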
Exit status
The shell communicates the completion of a command execution to the
caller using what is called the exit status. In general, it is expected that a
Linux command returns a status when it terminates. This can either be a
normal termination (happy path) or an abnormal termination (something
went wrong). A 0 exit status means that the command was successfully run,
without any errors, whereas a nonzero value between 1 and 255 signals a
failure. To query the exit status, use echo $?.
Be careful with exit status handling in a pipeline, since some shells make
only the last status available. You can work around that limitation by using
$PIPESTATUS.
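Here is a short sketch of both points, assuming bash: querying $? after a failing command, and reading the PIPESTATUS array right after a pipeline, before the next command overwrites it:

```shell
# A failing command yields a nonzero exit status:
ls /nonexistent-dir 2> /dev/null
echo $?                    # nonzero (2 with GNU ls)

# In a pipeline, $? reflects only the last command:
ls /nonexistent-dir 2> /dev/null | wc -l
echo $?                    # 0, because wc succeeded

# bash records every status in the PIPESTATUS array;
# read it immediately after the pipeline:
ls /nonexistent-dir 2> /dev/null | wc -l > /dev/null
echo "${PIPESTATUS[@]}"
```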
Built-in commands
Shells come with a number of built-in commands. Some useful examples
are yes, echo, cat, or read (depending on the Linux distro, some of those
commands might not be built-ins but located in /usr/bin). You can use the
help command to list built-ins. Do remember, however, that everything else
is a shell-external program that you usually can find in /usr/bin (for user
commands) or in /usr/sbin (for administrative commands).
How do you know where to find an executable? Here are some ways:
$ which ls
/usr/bin/ls
$ type ls
ls is aliased to `ls --color=auto'
NOTE
One of the technical reviewers of this book rightfully pointed out that which is a non-
POSIX, external program that may not always be available. Also, they suggested using
command -v rather than which to get the program path and/or shell alias/function. See
also the shellcheck docs for further details on the matter.
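Following that advice, here is what command -v (and bash's type) report on a typical system; the exact output depends on your setup and aliases:

```shell
# command -v works for binaries, aliases, functions, and builtins:
command -v ls     # e.g., /usr/bin/ls (or an alias, if one is defined)
command -v cd     # cd (a shell builtin)

# bash's type can additionally tell you which kind it is:
type -t cd        # builtin
```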
Job control
A feature most shells support is called job control. By default, when you
enter a command, it takes control of the screen and the keyboard, which we
usually call running in the foreground. But what if you don't want to run
something interactively, or, in case of a server, what if there is no input
from stdin at all? Enter job control and background jobs: to launch a
process in the background, put an & at the end, or to send a foreground
process to the background, press Ctrl+Z.
The following example shows this in action, giving you a rough idea:
$ watch -n 5 "ls" &
$ jobs
Job    Group    CPU    State      Command
1      3021     0%     stopped    watch -n 5 "ls" &
$ fg
Every 5.0s: ls                    Sat Aug 28 11:34:32 2021

Dockerfile
app.yaml
example.json
main.go
script.sh
test
By putting the & at the end, we launch the command in the background.
List all jobs.
With the fg command, we can bring a process to the foreground. If you
want to quit the watch command, use Ctrl+C.
If you want to keep a background process running, even after you close the
shell, you can prepend the nohup command. Further, for a process that is
already running and wasn’t prepended with nohup, you can use disown
after the fact to achieve the same effect. Finally, if you want to get rid of a
running process, you can use the kill command with various levels of
forcefulness (see “Signals” for more details).
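The following sketch shows these three techniques together; sleep merely stands in for a real long-running program:

```shell
# Keep a process running after the shell exits:
nohup sleep 300 > /dev/null 2>&1 &
echo $!           # PID of the background process

# For a job that is already running in the background:
sleep 300 &
disown            # detach the most recent job from this shell

# Terminate a running process by PID (TERM first; -9 only as a last resort):
kill "$!"
```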
Rather than job control, I recommend using a terminal multiplexer, as
discussed in "Terminal Multiplexer". These programs take care of the most
common use cases (shell closes, multiple processes running and need
coordination, etc.) and also support working with remote systems.

Let's move on to discuss modern replacements for frequently used core
commands that have been around forever.
Modern Commands
There are a handful of commands you will find yourself using over and
over again on a daily basis. These include commands for navigating
directories (cd), listing the content of a directory (ls), finding files (find),
and displaying the content of files (cat, less). Given that you are using
these commands so often, you want to be as efficient as possible—every
keystroke counts.
Modern variations exist for some of these often-used commands. Some of
them are drop-in replacements, and others extend the functionality. All of
them offer somewhat sane default values for common operations and rich
output that is generally easier to comprehend, and they usually lead to you
typing less to accomplish the same task. This reduces the friction when you
work with the shell, making it more enjoyable and improving the flow. If
you want to learn more about modern tooling, check out Appendix B. In
this context, a word of caution, especially if you’re applying this knowledge
in an enterprise environment: I have no stake in any of these tools and
purely recommend them because I have found them useful myself. A good
way to go about installing and using any of these tools is to use a version of
the tool that has been vetted by your Linux distro of choice.
Listing directory contents with exa
Whenever you want to know what a directory contains, you use ls or one
of its variants with parameters. For example, in bash I used to have l
aliased to ls -GAhltr. But there’s a better way: exa, a modern replacement
for ls, written in Rust, with built-in support for Git and tree rendering. In
this context, what would you guess is the most often used command after
you’ve listed the directory content? In my experience it’s to clear the
screen, and very often people use clear. That’s typing five characters and
then hitting ENTER. You can have the same effect much faster—simply use
Ctrl+L.

Viewing file contents with bat
Let’s assume that you listed a directory’s contents and found a file you want
to inspect. You’d use cat, maybe? There’s something better I recommend
you have a look at: bat. The bat command, shown in Figure 3-3, comes
with syntax highlighting, shows nonprintable characters, supports Git, and
has an integrated pager (the page-wise viewing of files longer than what can
be displayed on the screen).
Finding content in files with rg
Traditionally, you would use grep to find something in a file. However,
there’s a modern command, rg, that is fast and powerful.
We’re going to compare rg to a find and grep combination in this
example, where we want to find YAML files that contain the string
“sample”:
$ find . -type f -name "*.yaml" -exec grep "sample" '{}' \; -print
    app: sample
    app: sample
./app.yaml
$ rg -t "yaml" sample
app.yaml
9:    app: sample
14:    app: sample
Use find and grep together to find a string in YAML files.
Use rg for the same task.
If you compare the commands and the results in the previous example, you
see that not only is rg easier to use but the results are more informative
(providing context, in this case the line number).

Figure 3-3. Rendering of a Go file (top) and a YAML file (bottom) by bat
JSON data processing with jq

And now for a bonus command. This one, jq, is not an actual replacement
but more like a specialized tool for JSON, a popular textual data format.
You find JSON in HTTP APIs and configuration files alike.
So, use jq rather than awk or sed to pick out certain values. For example,
by using a JSON generator to generate some random data, I have a 2.4 kB
JSON file example.json that looks something like this (only showing the
first record here):
[
{
"_id": "612297a64a057a3fa3a56fcf",
"latitude": -25.750679,
"longitude": 130.044327,
"friends": [
{
"id": 0,
"name": "Tara Holland"
},
{
"id": 1,
"name": "Giles Glover"
},
{
"id": 2,
"name": "Pennington Shannon"
}
],
"favoriteFruit": "strawberry"
},
...
Let’s say we’re interested in all “first” friends—that is, entry 0 in the
friends array—of people whose favorite fruit is “strawberry.” With jq you
would do the following:
$ jq 'select(.[].favoriteFruit=="strawberry") | .[].friends[0].name' example.json
"Tara Holland"
"Christy Mullins"
"Snider Thornton"
"Jana Clay"
"Wilma King"

That was some CLI fun, right? If you're interested in finding out more
about the topic of modern commands and what other candidates there might
be for you to replace, check out the modern-unix repo, which lists
suggestions. Let’s now move our focus to some common tasks beyond
directory navigation and file content viewing and how to go about them.
Common Tasks
There are a number of things you likely find yourself doing often, and there
are certain tricks you can use to speed up your tasks in the shell. Let’s
review these common tasks and see how we can be more efficient.
Shorten often-used commands
One fundamental insight with interfaces is that commands that you are
using very often should take the least effort—they should be quick to enter.
Now apply this idea to the shell: rather than git diff --color-moved, I
type d (a single character), since I’m viewing changes in my repositories
many hundreds of times per day. Depending on the shell, there are different
ways to achieve this: in bash this is called an alias, and in Fish (“Fish
Shell”) there are abbreviations you can use.
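For instance, in bash the d shortcut mentioned above could be set up like this (the function name dlog is just an illustration):

```shell
# In ~/.bashrc: a one-letter alias for a command used all the time.
alias d='git diff --color-moved'

# Aliases are simple text substitutions; if you need to place
# arguments somewhere specific, define a function instead:
dlog() { git log --oneline "$@"; }
```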
Navigating
When you enter commands on the shell prompt, there are a number of
things you might want to do, such as navigating the line (for example,
moving the cursor to the start) or manipulating the line (say, deleting
everything left of the cursor). Table 3-2 lists common shell shortcuts.

Table 3-2. Shell navigation and editing shortcuts
Action                              Command   Note
Move cursor to start of line        Ctrl+a    -
Move cursor to end of line          Ctrl+e    -
Move cursor forward one character   Ctrl+f    -
Move cursor back one character      Ctrl+b    -
Move cursor forward one word        Alt+f     Works only with left Alt
Move cursor back one word           Alt+b    -
Delete current character            Ctrl+d    -
Delete character left of cursor     Ctrl+h    -
Delete word left of cursor          Ctrl+w    -
Delete everything right of cursor   Ctrl+k    -
Delete everything left of cursor    Ctrl+u    -
Clear screen                        Ctrl+l    -
Cancel command                      Ctrl+c    -
Undo                                Ctrl+_    bash only
Search history                      Ctrl+r    Some shells
Cancel search                       Ctrl+g    Some shells
Note that not all shortcuts may be supported in all shells, and certain actions
such as history management may be implemented differently in certain
shells. In addition, you might want to know that these shortcuts are based
on Emacs editing keystrokes. Should you prefer vi, you can use set -o vi
in your .bashrc file, for example, to perform command-line editing based on
vi keystrokes. Finally, taking Table 3-2 as a starting point, try out what
your shell supports and see how you can configure it to suit your needs.
File content management
You don’t always want to fire up an editor such as vi to add a single line of
text. And sometimes you can’t do it—for example, in the context of writing
a shell script (“Scripting”).
So, how can you manipulate textual content? Let's have a look at a few
examples:

$ echo "First line" > /tmp/something
$ cat /tmp/something
First line
$ echo "Second line" >> /tmp/something && \
cat /tmp/something
First line
Second line
$ sed 's/line/LINE/' /tmp/something
First LINE
Second LINE
$ cat << 'EOF' > /tmp/another
First line
Second line
Third line
EOF
$ diff -y /tmp/something /tmp/another
First line        First line
Second line       Second line
                > Third line
Create a file by redirecting the echo output.
View content of file.
Append a line to file using the >> operator and then view content.
Replace content from file using sed and output to stdout.
Create a file using the here document.
Show differences between the files we created.
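Note that sed as used above only writes the result to stdout and leaves the input file untouched. If you actually want to modify the file, GNU sed offers the -i option; here is a minimal sketch, using a .bak backup suffix for safety:

```shell
# Starting from a file like the one created above:
printf 'First line\nSecond line\n' > /tmp/something

# Rewrite it in place, keeping a backup with a .bak suffix (GNU sed):
sed -i.bak 's/line/LINE/' /tmp/something
cat /tmp/something        # now contains: First LINE, Second LINE
cat /tmp/something.bak    # the original, unchanged content
```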
Now that you know the basic file content manipulation techniques, let’s
have a look at the advanced viewing of file contents.
Viewing long files
For long files—that is, files that have more lines than the shell can display
on your screen—you can use pagers like less or bat (bat comes with a
built-in pager). With paging, a program splits the output into pages, where
each page fits into what the screen can display, and provides commands to
navigate the pages (view next page, previous page, etc.).
Another way to deal with long files is to display only a select region of the
file, like the first few lines. There are two handy commands for this: head
and tail.

For example, to display the beginning of a file:
$ for i in {1..100} ; do echo $i >> /tmp/longfile ; done
$ head -5 /tmp/longfile
1
2
3
4
5
Create a long file (100 lines here).
Display the first five lines of the long file.
Or, to get live updates of a file that is constantly growing, we could use:
$ sudo tail -f /var/log/Xorg.0.log
[ 36065.898] (II) event14 - ALPS01:00 0911:5288 Mouse: device is a pointer
[ 36065.900] (II) event15 - ALPS01:00 0911:5288 Touchpad: device is a touchpad
[ 36065.901] (II) event4 - Intel HID events: is tagged by udev as: Keyboard
[ 36065.901] (II) event4 - Intel HID events: device is a keyboard
...
Display the end of a log file using tail, with the -f option meaning to
follow, or to update automatically.
Lastly, in this section we look at dealing with date and time.
Date and time handling
The date command can be a useful way to generate unique file names. It
allows you to generate dates in various formats, including the Unix time
stamp, as well as to convert between different date and time formats.
$ date +%s
1629582883
$ date -d @1629582883 '+%m/%d/%Y:%H:%M:%S'
08/21/2021:21:54:43
Create a UNIX time stamp.
Convert a UNIX time stamp to a human-readable date.

ON THE UNIX EPOCH TIME
The UNIX epoch time (or simply UNIX time) is the number of seconds
elapsed since 1970-01-01T00:00:00Z. UNIX time treats every day as
exactly 86,400 seconds long.
If you’re dealing with software that stores UNIX time as a signed 32-bit
integer, you might want to pay attention since this will cause issues on
2038-01-19, as then the counter will overflow, which is also known as
the Year 2038 problem.
You can use online converters for more advanced operations,
supporting microseconds and milliseconds resolutions.
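Coming back to the unique file names use case: combining command substitution with a date format string is a common pattern. A small sketch (the backup- and report- prefixes are just illustrations):

```shell
# A timestamped name, handy for backups and log files:
fname="backup-$(date +%Y-%m-%d_%H%M%S).tar.gz"
echo "$fname"             # e.g., backup-2021-08-28_113432.tar.gz

# Or use the UNIX time stamp when sortable names matter:
fname="report-$(date +%s).txt"
echo "$fname"             # e.g., report-1629582883.txt
```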
With that we wrap up the shell basics section. By now you should have a
good understanding of what terminals and shells are and how to use them to
do basic tasks such as navigating the filesystem, finding files, and more. We
now move on to the topic of human-friendly shells.
Human-Friendly Shells
While the bash shell is likely still the most widely used shell, it is not
necessarily the most human-friendly one. It has been around since the late
1980s, and its age sometimes shows. There are a number of modern,
human-friendly shells I strongly recommend you evaluate and use instead
of bash.
We’ll first examine in detail one concrete example of a modern, human-
friendly shell called the Fish shell and then briefly discuss others, just to
make sure you have an idea about the range of choices. We wrap up this
section with a quick recommendation and conclusion in “Which Shell
Should I Use?”.
Fish Shell

The Fish shell describes itself as a smart and user-friendly command-line
shell. Let’s have a look at some basic usage first and then move on to
configuration topics.
Basic usage
For many daily tasks, you won’t notice a big difference from bash in terms
of input; most of the commands provided in Table 3-2 are valid. However,
there are two areas where fish is different from and much more convenient
than bash:
There is no explicit history management.
You simply type and you get previous executions of a command shown.
You can use the up and down keys to select one (see Figure 3-4).
Autosuggestions are available for many commands.
This is shown in Figure 3-5. In addition, when you press Tab, the Fish
shell will try to complete the command, argument, or path, giving you
visual hints such as coloring your input red if it doesn’t recognize the
command.
Figure 3-4. Fish history handling in action
Figure 3-5. Fish autosuggestion in action

Table 3-3 lists some common fish commands. In this context, note
specifically the handling of environment variables.

Table 3-3. Fish shell reference
Task                                             Command
Export environment variable KEY with value VAL   set -x KEY VAL
Delete environment variable KEY                  set -e KEY
Inline env var KEY for command cmd               env KEY=VAL cmd
Change path length to 1                          set -g fish_prompt_pwd_dir_length 1
Manage abbreviations                             abbr
Manage functions                                 functions and funced
Unlike other shells, fish stores the exit status of the last command in a
variable called $status instead of in $?.
If you’re coming from bash, you may also want to consult the Fish FAQ,
which addresses most of the gotchas.