N. Tolaram - Software Development With Go. Cloud-Native Programming Using Golang With Linux and Docker (2023)
Software Development with Go: Cloud-Native Programming using Golang
with Linux and Docker
Nanik Tolaram
Sydney, NSW, Australia
About the Author
Nanik Tolaram is a big proponent of open source software with over 20
years of industry experience. He has dabbled in different programming
languages like Java, JavaScript, C, and C++. He has developed different
products from the ground up while working in start-up companies. He is
a software engineer at heart, but he loves to write technical articles and
share his knowledge with others. He learned to program with Go during
the COVID-19 pandemic and hasn’t looked back.
About the Technical Reviewer
Fabio Claudio Ferracchiati is a senior consultant and a senior
analyst/developer using Microsoft technologies. He works for BluArancio
(www.bluarancio.com). He is a Microsoft Certified Solution Developer for
.NET, a Microsoft Certified Application Developer for .NET, a Microsoft
Certified Professional, and a prolific author and technical reviewer.
Over the past ten years, he’s written articles for Italian and international
magazines and coauthored more than ten books on a variety of
computer topics.
Acknowledgments
Thanks to everyone on the Apress team who helped and guided me so
much. Special thanks to James Robinson-Prior who guided me through
the writing process and to Nirmal Selvaraj who made sure everything was
done correctly and things were on track.
Thanks to the technical reviewers for taking time from their busy
schedules to review my book and provide great feedback.
Finally, thanks to you, the reader, for spending time reading this book
and spreading the love of Go.
Introduction
Go has been available for more than ten years, and countless open source
projects have been developed with it. The aim of this book is to show you how
to use Go to write a variety of applications that are useful in cloud-based systems.
Deploying applications into the cloud is a routine part of developers'
daily work. There are many questions that developers ask
themselves about the cloud, like
CHAPTER 1
System Calls
Linux provides a rich set of features and gives applications access to
everything that the operating system itself can access. When discussing
system calls, most people turn their attention to C, because
it is the most common language for interfacing with the
operating system.
In this chapter, you will explore what system calls are and how you can
program in Go to make system calls. By the end of this chapter, you will
have learned the following:
If you are using Go for the first time, refer to the online documentation
at https://go.dev/doc/install. The online documentation will walk you
through the steps to install Go on your local computer. Go through the Go
tutorial that the Go documentation provides at https://go.dev/doc/.
Source Code
The source code for this chapter is available from the https://github.
com/Apress/Software-Development-Go repository.
C System Call
In this section, you will briefly look at how system calls normally work
inside a C program. This will give you an idea of how system calls are done
in C compared to how they are done in Go.
You will see a simple example of using a socket to connect to a server
and read the response. The code can be found inside the chapter1/c
directory. The code creates a socket and uses it to connect to a public
website named httpbin.org and print the response it receives to the
screen. Listing 1-1 shows the sample code.
#include<netdb.h>
server.sin_family = AF_INET;
server.sin_port = htons(80);
To test the code, make sure you have a C compiler installed on your
machine. Follow the instructions outlined on the GCC website
(https://gcc.gnu.org/) to install the compiler and tools. Use the following
command to compile the code:
cc sample.c -o sample
Connected
Data Send
Reply received
HTTP/1.1 200 OK
The code sample uses the gethostbyname function to resolve the
address of httpbin.org to an IP address. It then uses the connect function
to connect the newly created socket to the server.
In the next section, you will start exploring Go by using the standard
library to write code using system calls.
sys/unix Package
The sys/unix package, provided as part of the Go project, offers a
system-level interface for interacting with the operating system. Go
runs on a variety of operating systems, which means it provides
different interfaces to applications on different operating systems.
Complete package documentation can be found at
https://pkg.go.dev/golang.org/x/sys/unix. Figure 1-3 shows different system calls in
different operating systems, in this case between Darwin and Linux.
Listing 1-2 shows how to use system calls using the sys/unix package.
package main
import (
u "golang.org/x/sys/unix"
"log"
)
func main() {
    c := make([]byte, 512)
    _, err := u.Getcwd(c)
    if err != nil {
        log.Fatal(err)
    }
    log.Println(string(c))
}
The other system call the application makes retrieves the current
working directory, using the Getcwd function.
System Call in Go
In the previous section, you looked at a simple example of using the sys/
unix package. In this section, you will explore more on system calls by
File: ./README.md
Size: 476 Blocks: 8
IO Block: 4096 regular file
Device: fd01h/64769d Inode: 2637168
Links: 1
Access: (0644/-rw-r--r--) Uid: (1000/ nanik)
Gid: (1000/ nanik)
Access: 2022-02-19 18:10:29.919351223 +1100 AEDT
Modify: 2022-02-19 18:10:29.919351223 +1100 AEDT
Change: 2022-02-19 18:10:29.919351223 +1100 AEDT
Birth: 2022-02-19 18:10:29.919351223 +1100 AEDT
Attrs: 0000000000000000 (-----....)
How does the application get all this information about the file? It
obtains the information from the operating system by making a system
call. Let's take a look at the code in Listing 1-3.
import (
....
"golang.org/x/sys/unix"
)
....
func main() {
log.SetFlags(0)
flag.Parse()
Summary
In this chapter, you learned what system calls are and how to write a
simple application to interface with the operating system by using the
sys/unix package. You dug deeper into system calls by looking at an open
source project to learn how it uses the system calls to provide statistical
information about a particular file.
In the next chapters, you will explore system calls more and you will
look at various ways to interface with the operating system using Go.
CHAPTER 2
System Calls Using Go
Source Code
The source code for this chapter is available from the
https://github.com/Apress/Software-Development-Go repository.
Syscall Package
The syscall package is part of the Go standard library. It provides
function calls that interface with the low-level operating system. The
following are some of the functionalities provided by the package:
• Change directory
syscall Application
Let’s take the existing application from Chapter 1 and convert it to use
the syscall package. The app can be seen inside the chapter2/syscalls
directory. Open a terminal and run the sample as follows:
go run main.go
The sample code uses system calls to get information about itself, such
as the process id assigned to it by the operating system, its parent process
id, and more. The following shows how it uses the syscall package:
package main
import (
"log"
s "syscall"
)
func main() {
...
...
}
go run main.go
The output shows, in gigabytes, the total size of the drive, the amount
of disk space used, and the amount of disk space free. The following code snippet
shows how the disk information is obtained using the syscall package:
func main() {
var statfs = syscall.Statfs_t{}
var total uint64
var used uint64
var free uint64
err := syscall.Statfs("/", &statfs)
if err != nil {
fmt.Printf("[ERROR]: %s\n", err)
} else {
total = statfs.Blocks * uint64(statfs.Bsize)
free = statfs.Bfree * uint64(statfs.Bsize)
used = total - free
}
...
}
As seen in the above code snippet, the application uses the
syscall.Statfs function call to get information about the path, in this case the
root directory. The result is populated into the statfs variable, which is of
type Statfs_t. The Statfs_t struct declaration looks like the following:
Fsid Fsid
Namelen int64
Frsize int64
Flags int64
Spare [4]int64
}
go run main.go
The web server is now ready to accept connections on port 8888. Open
your browser and type in http://localhost:8888. You will get the following response
in your browser: Server with syscall
The following code snippet shows the function that takes care of
starting up the server that listens on port 8888:
for {
cSock, cAddr, err := syscall.Accept(fd)
ELF Package
The standard library provides several packages for interacting
with different parts of the operating system. In the previous
sections, you looked at system-level interaction through those
packages. In this section, you will look at the debug/elf
package.
Dump Example
In this section, you will take a look at an open source project named
GoPlay, which is hosted at https://github.com/n4ss/GoPlay. It can also
be found inside the chapter2/GoPlay directory. This is a simple app that
dumps the contents of a Go ELF executable file. You will look at how the
application uses the Go standard library to read the ELF file.
Compile the GoPlay application to create an executable using the
following command:
go build main.go
....
runtime.(*cpuProfile).addNonGo
....
_cgo_init
runtime.mainPC
go.itab.syscall.Errno,error
runtime.defaultGOROOT.str
runtime.buildVersion.str
type.*
runtime.textsectionmap
....
Let's analyze how the code works and which calls it uses to extract
information from the executable file.
func main() {
....
file, err := os.Stat(*filename)
....
f, err := os.Open(*filename)
....
switch *action {
....
case "dump": os.Exit(dump_elf(*filename))
}
} else {
goto Usage
}
....
}
The following is the struct definition of the Symbol struct. As you can
see, it contains useful information.
This function is used to obtain certain parts of the ELF file, selected by
the tag passed as a parameter to the file.DynString function. For
example, when calling
dynstrs, _ = file.DynString(elf.DT_SONAME)
the code will get information about the shared library name of the file.
/sys Filesystem
In this section, you will look at a different way of reading system-level information.
You will not use a function to read system information; rather, you will use system
directories that are made available by the operating system for user applications.
The directory that you want to read is the /sys directory, which is a
virtual filesystem containing device drivers, device information, and other
kernel features. Figure 2-4 shows what the /sys directory contains on a
Linux machine.
Reading AppArmor
Some of the information that is provided by Linux inside the /sys
directory is related to AppArmor (short for Application Armor). What is
AppArmor? It is a kernel security module that gives system administrators
the ability to restrict application capabilities with a profile. This gives
system administrators the power to select which resources a particular
application can have access to. For example, a system administrator can
define Application A to have network access or raw socket access, while
Application B does not have access to network capabilities.
Let’s look at an example application to read AppArmor information
from the /sys filesystem, specifically whether AppArmor is enabled and
whether it is enforced. The following is the sample code that can be found
inside the chapter2/apparmor directory:
import (
"fmt"
...
)
const (
    appArmorEnabledPath = "/sys/module/apparmor/parameters/enabled"
    appArmorModePath    = "/sys/module/apparmor/parameters/mode"
)
func main() {
fmt.Println("AppArmor mode : ", appArmorMode())
fmt.Println("AppArmor is enabled : ", appArmorEnabled())
}
Since the code is accessing a system filesystem, you must run it using
root. Compile the code and run it as follows:
sudo ./apparmor
The code reads the information using ioutil.ReadFile from the standard
library, which works just like reading a regular file, so it's simpler
than using the function calls that you looked at in the previous sections.
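A compact variant of this program, using os.ReadFile (the modern replacement for ioutil.ReadFile) and a fallback for kernels without AppArmor; the helper name readParam is hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const (
	appArmorEnabledPath = "/sys/module/apparmor/parameters/enabled"
	appArmorModePath    = "/sys/module/apparmor/parameters/mode"
)

// readParam returns the trimmed file contents, or a fallback when
// the file cannot be read (e.g. a kernel without AppArmor).
func readParam(path, fallback string) string {
	b, err := os.ReadFile(path)
	if err != nil {
		return fallback
	}
	return strings.TrimSpace(string(b))
}

func main() {
	fmt.Println("AppArmor is enabled :", readParam(appArmorEnabledPath, "unavailable"))
	fmt.Println("AppArmor mode       :", readParam(appArmorModePath, "unavailable"))
}
```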
Summary
In this chapter, you looked at using system calls to interface with the
operating system. You looked at using the syscall standard library that
provides a lot of function calls to interface with the operating system
and wrote a sample application to print out disk space information.
You looked at how the debug/elf standard library is used to read Go
ELF file information. Lastly, you looked at the /sys filesystem to extract
information that you want to read to understand whether the operating
system supports AppArmor.
CHAPTER 3
Accessing proc File System
Source Code
The source code for this chapter is available from the
https://github.com/Apress/Software-Development-Go repository.
ls /proc -la
...
-r--r--r-- 1 root root 0 Jul 17 17:56 execdomains
-r--r--r-- 1 root root 0 Jul 17 17:56 fb
-r--r--r-- 1 root root 0 Jul 17 17:55 filesystems
dr-xr-xr-x 5 root root 0 Jul 17 17:56 fs
-r--r--r-- 1 root root 0 Jul 17 17:56 interrupts
-r--r--r-- 1 root root 0 Jul 17 17:56 iomem
-r--r--r-- 1 root root 0 Jul 17 17:56 ioports
dr-xr-xr-x 59 root root 0 Jul 17 17:56 irq
-r--r--r-- 1 root root 0 Jul 17 17:56 kallsyms
-r--r--r-- 1 root root 0 Jul 17 17:56 keys
Table 3-1. Information from /proc/4280

Directory            Content
/proc/4280/cmdline   /bin/sh ./goland.sh
/proc/4280/cgroup    14:misc:/
                     13:rdma:/
                     11:hugetlb:/
                     10:net_prio:/
                     9:perf_event:/
                     8:net_cls:/
                     7:freezer:/
                     6:devices:/
                     4:blkio:/
                     3:cpuacct:/
                     2:cpu:/
                     1:cpuset:/
                     0::/user.slice/user-1000.slice/[email protected]/app.slice/app-org.gnome.Terminal.slice/vte-spawn-9c827742-8e1f-42d8-bb25-79119712b0d8.scope
As you can see from the table, a great deal of information relevant to
process id 4280 can be extracted. This information gives us better
visibility into the application: the resources it uses, user
and group information, and more.
go run main.go
func main() {
sampler := &sampler{
rate: 1 * time.Second,
}
...
for {
select {
case sampleSet := <-sampler.sample:
...
fmt.Printf("total = %v KB, free = %v KB, used = %v KB\n",
    s.total, s.free, s.used)
}
}
}
On startup, the code initializes the sampler struct and goes into a loop
waiting for data to be made available from the sample channel. Once the
data arrives, it prints the memory information to the console.
The data sampling code that collects the data and sends it to the
channel is seen below. The startup code spins off a goroutine
that calls getMemorySample to extract the memory information and sleeps after
sending the data to the sample channel.
go func() {
for {
var ss sample
ss.memorySample = getMemorySample()
s.sample <- ss
time.Sleep(s.rate)
}
}()
...
}
reader := bufio.NewReader(bytes.NewBuffer(contents))
for {
line, _, err := reader.ReadLine()
if err == io.EOF {
break
}
...
if ok && len(fields) == 3 {
...
switch fieldName {
case "total:":
samp.total = val
case "free:":
samp.free = val
}
}
}
...
}
MemTotal: 32320240 kB
MemFree: 927132 kB
MemAvailable: 5961720 kB
...
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
...
Table 3-2 explains the meaning of the different fields shown in the raw
information above.
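The sampler pieces above can be condensed into one self-contained program. The field names here follow /proc/meminfo's own format ("MemTotal:", "MemFree:") rather than the book's exact struct, and the helper name readMemInfo is hypothetical:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

// readMemInfo parses /proc/meminfo into a map of field name to
// value in kilobytes.
func readMemInfo() (map[string]uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	m := make(map[string]uint64)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Lines look like: "MemTotal:       32320240 kB"
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 {
			continue
		}
		val, err := strconv.ParseUint(fields[1], 10, 64)
		if err != nil {
			continue
		}
		m[strings.TrimSuffix(fields[0], ":")] = val
	}
	return m, sc.Err()
}

func main() {
	m, err := readMemInfo()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("total = %v KB, free = %v KB, used = %v KB\n",
		m["MemTotal"], m["MemFree"], m["MemTotal"]-m["MemFree"])
}
```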
Now that you have a good idea of what the different values mean, let’s
take a look at how to extract this information using Go. The sample code is
inside the chapter3/sockstat directory. Open a terminal and run the code
using the following command:
go run main.go
Let's explore the code to understand what it is doing. When the app
starts up, it opens the /proc/net/sockstat file. On success, the code
reads and parses it into a format suitable for displaying to the console.
const (
...
netstat = "/proc/net/sockstat"
)
...
func main() {
fs, err := os.Open(netstat)
...
m := make(map[string]int64)
for {
line, err := readLine(reader)
if bytes.HasPrefix(line, []byte(sockets)) ||
    bytes.HasPrefix(line, []byte(tcp)) ||
    bytes.HasPrefix(line, []byte(udp)) {
    idx := bytes.Index(line, []byte(colon))
    ...
}
...
}
...
}
• How will you parse the raw data properly and handle
data parsing issues?
Code Sample
Open your terminal and change to the chapter3/jandreprocfs directory
and run the code using the following command:
go run main.go
The following code snippet uses the jandre/procfs library to read the
information:
package main
import (
"github.com/jandre/procfs"
...
)
func main() {
    processes, _ := procfs.Processes(false)
    table := tablewriter.NewWriter(os.Stdout)
    ...
    table.Render()
}
The sample code is simpler than the code you looked at in the previous
sections. It uses the procfs.Processes(..) function call to obtain all the
current processes.
...
todo := len(pids)
...
for todo > 0 {
proc := <-done
todo--
if proc != nil {
processes[proc.Pid] = proc
}
}
At a high level, Figure 3-3 shows what the function is actually doing.
All the heavy lifting of opening and traversing through the /proc
directory including parsing the results is taken care of by the library. The
application can just focus on the output that it receives.
Summary
In this chapter, you looked at the /proc file system and learned about the
system information that applications have access to. You looked at sample
code to read information from inside the /proc directory that is related to
the network and memory on the device. You also learned that the bulk of
the code that needs to be written when extracting system information is in
terms of reading and parsing the information. You also looked at an open
source library that can provide functionality in reading the /proc directory
that performs all the heavy lifting, leaving you to focus on writing simpler
code to read all the system information that you need.
CHAPTER 4
Simple Containers
In this chapter, you will look at using Go to explore the container world.
You will look at different container-related projects to get a better
understanding about containers and some of the technologies they
use. There are many different aspects of containers such as security,
troubleshooting, and scaling container registries. This chapter will give you
an understanding of the following topics:
Linux Namespace
In this section, you will look at namespaces, which are key components in
running containers on your local or cloud environment. Namespaces are
features that are only available in the Linux kernel, so everything that you
will read here is relevant to the Linux operating system.
A namespace is a feature provided by the Linux kernel for applications
to use. So what actually is it? It creates an isolated environment for
processes that you want to run with their own resources.
You can create namespaces using tools that are already available in
the Linux system. One of the tools you are going to experiment with is
called unshare. It is a tool that allows users to create namespaces and run
applications inside that namespace.
Before you run unshare, let's take a look at my local host machine
compared to when I run an app using unshare. We will compare the
following:
ps au
ip link
As you can see, there are many processes running in the local host
machine and there are many network interfaces.
Run the following command to create a new namespace and run bash
inside it as the application (one typical invocation; the exact flags depend
on which namespaces you want to create):

sudo unshare --fork --pid --net --mount-proc bash
Inside the new namespace, as seen in Figure 4-2, it will only display
two processes and one network interface (local interface). This shows that
the namespace is isolating access to the host machine.
You have looked at using unshare to create namespaces and run bash
as an application isolated in its own namespace. Now that you have a basic
understanding of namespaces, you will explore another piece of the puzzle
called cgroups in the next section.
cgroups
cgroups stands for control groups, which is a feature provided by the
Linux kernel. Namespaces, which we discussed in the previous section,
go hand in hand with cgroups. cgroups gives users the ability to limit
resources such as the CPU, memory, and network bandwidth allocated to a
particular process or set of processes. A host machine's resources are finite,
and if you want to run multiple processes in separate namespaces, you need
to allocate those resources across the different namespaces.
List the directories inside the newly created directory using the
following command:
The directories that you see are actually configuration entries where you
can set values for the resources that you want to allocate to a particular
process. Let's take a look at an example.
You will run a tool called stress (https://linux.die.net/man/1/stress),
which you need to install on your local machine. If you are using Ubuntu,
you can use the command

sudo apt install stress

Open a terminal and run the stress tool as follows. The application will
run for 60 seconds using one core and consuming 100% of CPU usage:

stress --cpu 1 --timeout 60
Open another terminal and run the following command to obtain the
process id of the stress application:
top
Now insert the value of the process id into the cgroups directory as
follows:
The command allocates 20% of the CPU usage for all processes inside
the example cgroups, and for this example, the stress application process
id is marked as part of the example cgroups. If you have your terminal
running top open, you will see that the stress application will now only
consume 20% instead of 100%.
This example shows that by applying cgroups to processes, you can
restrict the amount of resource it is consuming based on how you want to
allocate it.
You looked at cgroups (control groups) in this section and learned how
to allocate resources to processes. In the next section, you will learn about
rootfs, which you must understand because it is a crucial component in
understanding containers.
rootfs
In this section, you will explore rootfs and how it is applied in containers.
First, let’s understand what rootfs actually is. rootfs stands for root
filesystem, which simply means it is the filesystem containing all the basic
necessary files required to boot the operating system. Without the correct
rootfs, the operating system will not boot up and no application can run.
rootfs is required so that the operating system can allow other file
systems to be mounted, which includes configuration, essential startup
processes and data, and other filesystems that are located in other disk
partitions. The following shows the minimal directories found in a rootfs:
/bin
/sbin
/etc
/root
/lib
/lib/modules
/dev
/tmp
/boot
/mnt
/proc
/usr
/var
/home
gunzip ./alpine-minirootfs-3.15.4-x86_64.tar.gz
tar -xvf ./alpine-minirootfs-3.15.4-x86_64.tar
.
├── bin
│ ├── arch -> /bin/busybox
...
├── dev
├── etc
...
│ ├── modprobe.d
...
├── home
...
├── sbin
│ ├── acpid -> /bin/busybox
│ ├── adjtimex -> /bin/busybox
...
├── srv
├── sys
├── tmp
├── usr
│ ├── bin
│ │ ├── [ -> /bin/busybox
│ │ ├── [[ -> /bin/busybox
...
│ │ └── yes -> /bin/busybox
│ ├── lib
│ │ ├── engines-1.1
...
│ │ └── modules-load.d
│ ├── local
│ │ ├── bin
...
│ ├── man
│ ├── misc
│ └── udhcpc
│ └── default.script
├── var
│ ├── cache
│ ├── empty
│ ├── lib
Now that you have a good idea of what rootfs is all about and what it
contains, in the next section you will put everything together and run an
application the way it normally runs as a container.
Gontainer Project
So far, you have looked at the different things required to run an
application in isolation: namespaces, cgroups, and rootfs. In this section,
you will look at a sample app that puts everything together and runs an
application inside its own namespace. In other words, you are going to run
the application as a container.
The code can be checked out from https://github.com/nanikjava/gontainer.
Make sure you download and extract the rootfs as explained in the
"rootfs" section. Once the rootfs has been extracted to your local machine,
change to the gontainer directory and compile the project using the
following command:
go build
Once compiled, you will get an executable called gontainer. Run the
application using the following command:
You will get the prompt /usr # and you'll be able to execute any normal
Linux commands. Figure 4-5 shows some of the commands executed
inside gontainer.
Let’s take a look at the code to understand how the whole thing works.
There is only one file called gontainer.go. As you saw earlier, the way you
run the app is by supplying the argument run sh, which is processed by
the main() function shown here:
func main() {
// outline cleanup tasks
wg.Add(1)
...
...
}
The run() function, which takes care of running the application specified
with the run parameter, is shown here:
func run() {
defer cleanup()
infof("run as [%d] : running %v", os.Getpid(), args[1:])
You can see that the code is using /proc/self/exe, so what is this?
The Linux manual at https://man7.org/linux/man-pages/man5/proc.5.html says:
/proc/self
When a process accesses this magic symbolic link, it
resolves to the process's own /proc/[pid] directory.
/proc/[pid]/exe
Under Linux 2.2 and later, this file is a symbolic link
containing the actual pathname of the executed command.
This symbolic link can be dereferenced normally;
attempting to open it will open the executable.
Let’s explore what the arguments passed to the application are telling
the application to do. The init() function declares the following flags that
it can receive as arguments:
func init() {
pflag.StringVar(&chroot, "chrt", "", "Where to chroot to.
Should contain a linux filesystem. Alpine is recommended.
GONTAINER_FS environment is default if not set")
pflag.StringVar(&chdir, "chdr", "/usr", "Initial chdir
executed when running container")
Table 4-1 explains the mapping of the argument passed via lst.
The only parameter not shown in the table is the child parameter,
which is handled separately: it is processed by the main() function by
executing the child() function in a goroutine, as shown in the following
code snippet:
func main() {
// outline cleanup tasks
...
// actual program
switch args[0] {
...
case "child":
go child()
...
}
The child() function does all the heavy lifting of running the new
process in a container-like environment. The following shows the code of
the child() function:
func child() {
defer cleanup()
infof("child as [%d]: chrt: %s, chdir:%s", os.Getpid(),
chroot, chdir)
infof("running %v", args[1:])
must(syscall.Sethostname([]byte("container")))
must(syscall.Chroot(chroot), "error in 'chroot ", chroot+"'")
syscall.Mkdir(chdir, 0600)
cntcmd.Stdin = os.Stdin
...
Table 4-2 explains what each section of code is doing. Ignore the must
function call as this is an internal function call that checks the return value
of each system call.
The following code snippet tells the operating system to use the
standard input, output, and error streams for the application that is executed:
...
cntcmd.Stdin = os.Stdin
cntcmd.Stdout = os.Stdout
cntcmd.Stderr = os.Stderr
...
Summary
In this chapter, you explored the different parts required to run an
application inside a container: namespaces, cgroups, and rootfs. You
experimented with the different available Linux tools to create namespaces
and configured resources for particular namespaces.
You also explored rootfs, which is a key component in running the
operating system, thus allowing applications to run. Finally, you looked at
a sample project that shows how to use the different components together
in Go by using the Alpine rootfs.
CHAPTER 5
Containers
with Networking
In Chapter 4, you learned about the different features of the Linux kernel
used for containers. You also explored namespaces and how they help
applications isolate from other processes. In this chapter, you will focus
solely on the network namespace and understand how it works and how to
configure it.
The network namespace gives applications that run in their own
namespaces a network interface, allowing the running processes to
send and receive data to the host or to the Internet. In this chapter, you will
learn how to do the following:
Source Code
The source code for this chapter is available from the https://github.com/Apress/Software-Development-Go repository.
Network Namespace
In Chapter 4, you looked at namespaces, which create virtual isolation
for an application, one of the key ingredients in running applications
inside a container. The network namespace is another isolation feature
that applications need because it allows them to communicate with the
host or the Internet.
Why is the network namespace important?
Looking at Figure 5-1, you can see two different applications running
on a single host in different namespaces, and each namespace has its own
network namespace.
The applications are allowed to talk to each other, but they are
not allowed to talk to the host and vice versa. This not only makes the
applications more secure, but it also makes them easier to maintain
because they do not need to worry about services outside the host.
Using a network namespace requires a few things to be configured
properly in order for the application to use it. Figure 5-2 shows the
different things that are needed.
Network (lo): On your computer, you normally access servers that are
running locally using localhost. Inside the network namespace this is
configured the same way; it is known as lo.
Network (peer0): This is known as a peer name, and it is configured for the
namespace that will communicate with traffic outside the namespace. As
shown in Figure 5-2, it communicates with veth0.
veth0: This is called a virtual ethernet, and it is configured on the host
computer. The virtual ethernet, in this case veth0, communicates between
the host and the namespace.
br0: This is a virtual switch, also known as a bridge. Any network attached
to the bridge can communicate with the others. In this case, there is only
one virtual ethernet (veth0), but if there were another virtual ethernet,
they could communicate with each other.
Now that you have a good understanding of the different things that
need to be configured in a network namespace, in the next section you will
explore using a Linux tool to play around with network namespaces.
The script sets up the network namespaces so that they can access
each other, but they cannot communicate with any external services. The
script can be found inside the chapter5/ns directory. Change to this
directory and execute it as follows (make sure you run it as root):
sudo ./script.sh
...
64: virt0: <NO-CARRIER,BROADCAST,MULTICAST,UP,LOWER_UP> mtu
1500 qdisc pfifo_fast state DOWN mode DEFAULT group default
qlen 1000
...
66: virt1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000
...
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.052 ms
...
The script creates two different namespaces called ns1 and ns2,
assigning virtual networks to both of them, as explained in the previous
section. The virtual networks are assigned IP addresses 10.0.0.10 and
10.0.0.11, and both networks are connected to each other via a bridge that
is assigned IP address 10.0.0.1.
Let’s go through the script to understand what it is doing. The
following snippet creates two network namespaces labeled ns1 and ns2:
Once the namespace has been set up, it will set up a local network
interface inside the namespace.
Now, you need to create a network bridge and assign 10.0.0.1 as its IP
address.
Once the bridge has been set up, the script will link the virtual
networks to the network namespaces and also link them to the bridge. This
will connect all the different virtual networks together through the bridge. The
script will also assign the different IP addresses to the virtual networks.
The last step the script performs is to route traffic through the bridge.
This allows traffic to flow between the ns1 and ns2 namespaces.
Once the script has run successfully, you can view the routing
information using the following command:
You will see the output shown below. The output shows that bridge br0
has been registered into the routing table to allow traffic through.
After executing the script, you can remove the br0 routing information
by using the following command. Replace the value 1 with the chain
number you obtained when running the above command to print out the
routing information.
go build -o cnetwork
You will see a prompt (/ #) where you can enter commands inside the
container. Try the ifconfig command, which prints out the configured
network interfaces.
/ # ifconfig
As you can see, the virtual ethernet network has been configured with
IP address 172.29.69.160. The bridge configured on the host looks like the
following when you run ifconfig on the host:
...
gocker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.29.0.1 netmask 255.255.0.0 broadcast
172.29.255.255
inet6 fe80::5851:6bff:fe0e:1768 prefixlen 64 scopeid
0x20<link>
ether ce:cc:2c:e2:9e:97 txqueuelen 1000 (Ethernet)
RX packets 61 bytes 4156 (4.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 110 bytes 15864 (15.8 KB)
TX errors 0 dropped 0 overruns 0 carrier
0 collisions 0
...
veth0_7ea0e6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::e8a3:faff:fed2:2ee9 prefixlen 64 scopeid
0x20<link>
ether ea:a3:fa:d2:2e:e9 txqueuelen 1000 (Ethernet)
RX packets 11 bytes 866 (866.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 46 bytes 7050 (7.0 KB)
TX errors 0 dropped 0 overruns 0 carrier
0 collisions 0
...
The gocker0 bridge is configured with IP 172.29.0.1 and you can ping it
from the container.
Let's test the network communication between the container and the
host. Open a terminal and run the following command:
ip addr show
Once you get the IP address of your container, run the following
command in the container:
nc -l -p 4000
Then, from the host, connect to the listener:
nc <container_ip_address> 4000
Let’s take a look at the code to understand how the application is able
to do all this inside Go. The application performs a two-step execution
process. The first step is setting up the bridge and virtual networks, and the
second step is setting up the network namespaces, setting up the different
configurations of the virtual networks, and executing the container inside
the namespace.
Let’s take a look at the first step of creating the bridge and virtual
networks, as shown here:
return nil
}
...
prepareAndExecuteContainer(mem, swap, pids, cpus,
containerID, imageShaHex, args)
...
}
cmd := &exec.Cmd{
Path: "/proc/self/exe",
Args: []string{"/proc/self/exe", "setup-netns",
containerID},
...
}
cmd.Run()
cmd = &exec.Cmd{
Path: "/proc/self/exe",
Args: []string{"/proc/self/exe", "setup-veth",
containerID},
...
}
cmd.Run()
...
opts = append(opts, "--img="+imageShaHex)
args := append([]string{containerID}, cmdArgs...)
args = append(opts, args...)
args = append([]string{"child-mode"}, args...)
Once all the network setup is done, it calls itself again, passing in
child-mode as the parameter, which is performed by the following code
snippet:
...
case "child-mode":
fs := flag.FlagSet{}
fs.ParseErrorsWhitelist.UnknownFlags = true
...
execContainerCommand(*mem, *swap, *pids, *cpus, fs.Args()[0],
*image, fs.Args()[1:])
...
Once all setup is done, the final step is to set up the container by calling
execContainerCommand(..) to allow the user to execute the command
inside the container.
In this section, you learned the different steps involved in setting
up virtual networks for a container. The sample application used in this
section performs operations such as downloading images, setting up
rootfs, setting up network namespaces, and configuring all the different
virtual networks required for a container.
Summary
In this chapter, you learned about virtual networks that are used
inside containers. You went through the steps of configuring network
namespaces along with virtual networks manually using a Linux tool
called ip. You looked at configuring iptables to allow communication to
happen between the different network namespaces.
After understanding how to configure a network namespace with
virtual networks, you looked at a Go example of how to configure virtual
networks in a container. You went through the different functions that
perform different tasks that are required to configure the virtual networks
for a container.
CHAPTER 6
Docker Security
In this chapter, you will look at seccomp profiles, one of the security features
provided by Docker, which uses the seccomp facility built into the Linux
kernel. Standalone Go applications can also implement seccomp security
without using Docker, and you will look at how to do this using the
seccomp library.
You will also look at how Docker communicates using sockets by
writing a proxy that listens to Docker communication. This is super useful
to know because it gives you a better idea of how to secure Docker in your
infrastructure.
Source Code
The source code for this chapter is available from the https://github.com/Apress/Software-Development-Go repository.
seccomp Profiles
seccomp is short for secure computing mode. It is a feature provided
out of the box by the Linux kernel, which means it is ready to be used. What
is it actually? It is a security feature that allows applications to make only
certain system calls, and this can be configured per application. As a
developer, you can specify what kind of restriction you want to put in place
so, for example, application A can only make system calls to read and write
text files but it cannot make any other system calls, while application B can
only make network system calls but can’t read or write files. You will look
at how to do this in the application and how to make restrictions when
running the application as a Docker container.
This kind of restriction provides more security for your infrastructure
because you don't want an application to run on your infrastructure
without any restrictions. seccomp, when used with Docker containers,
provides more layers of security for the host operating system because it
can be configured to allow certain system call access to applications that
are currently running inside the container.
In order to use seccomp, first you must check whether your operating
system supports it. Open your terminal and run the following command:
If your operating system supports seccomp, you will get the following output:
CONFIG_SECCOMP=y
libseccomp
In order to use the seccomp security feature inside the application, you
must install the library. In this case, the library is called libseccomp
(https://github.com/seccomp/libseccomp). Not all distros install
libseccomp by default, so you need to install it using your operating system's
package manager. In Ubuntu, you can install it by using the following
command:
Now that the default seccomp library has been installed, you can start
using it in your application. Run the sample application that is inside the
chapter6/seccomp/libseccomp directory as follows:
go run main.go
package main
...
func main() {
...
...
}
...
wd, err := syscall.Getwd()
if err != nil {
...
}
...
}
What’s so special about the code? There is nothing special in what the
code is doing. What’s special is the way you configured seccomp inside the
sample code. The code uses a Go library called libseccomp-golang, which
can be found at github.com/seccomp/libseccomp-golang.
The libseccomp-golang library is a Go binding library for the native
seccomp library, which you installed in the previous section. You can think
of the library as a wrapper to the C seccomp library that can be used inside
the Go program. The library is used inside an application to configure
itself, specifying what system calls it is allowed to make.
So why do you want to do this? Well, say you are working in a multi-team
environment and you want to make sure that the code written can
only perform system calls that are configured internally. This removes
the possibility of introducing code that makes system calls that are not
allowed in the configuration; any such call will trigger an error and crash
the application.
Looking at the sample code snippet, you can see the following
allowed system calls, declared as an array of strings in the whitelist
variable:
var (
whitelist = []string{"getcwd", "exit_group",
"rt_sigreturn", "mkdirat", "write"})
The listed system calls are the system calls that are required by the
application. You will see later what happens if the code makes a system call
that is not configured. The function configureSeccomp() is responsible for
registering the defined system calls with the library.
The first thing the function does is create a new filter by calling
seccomp.NewFilter(..), passing in the action (seccomp.ActErrno)
as parameter. The parameter specifies the action to be taken when the
application calls system calls that are not allowed. In this case, you want it
to return an error number.
Once it creates a new filter, it loops through the whitelisted system
calls, first obtaining the correct system call ID by calling
seccomp.GetSyscallFromName(..) and then registering the ID with the
library using the filter.AddRule(..) function. The parameter seccomp.ActAllow
specifies that the ID is a system call the application is allowed to make.
On completion of the configureSeccomp() function, the application is
configured to allow only the calls that have been whitelisted.
The system calls that the application makes are simple. It creates a
directory using the following snippet:
func main() {
...
if err := syscall.Mkdir(dirPath, 0600); err != nil {
return
}
...
}
Get the current working directory using the following system call:
func main() {
...
wd, err := syscall.Getwd()
if err != nil {
...
}
...
}
The question that pops up now is, what will happen if the application
makes a system call that it is not configured for? Let’s modify the code a bit.
Change the whitelist variable as follows:
var (
whitelist = []string{
"exit_group", "rt_sigreturn", "mkdirat", "write",
}
...
)
This removed getcwd from the list. Now run the application. You will
get an error as follows:
...
2022/07/05 22:53:06 Failed getting current working directory:
invalid argument -
The code fails to make the system call to get the current working
directory and returns an error. You can see that removing the registered
system call from the list stops the application from functioning properly. In
the next section, you will look at using seccomp for applications that run as
containers using Docker.
Docker seccomp
Docker provides seccomp security for applications running in a container
without having to add security inside the code. This is done by specifying
the seccomp file when running the container. Open the file chapter6/
dockerseccomp/seccomp.json to see what it looks like:
{
"defaultAction": "SCMP_ACT_ERRNO",
"architectures": [
"SCMP_ARCH_X86_64"
],
"syscalls": [
{
"names": [
"arch_prctl",
...
"getcwd"
],
"action": "SCMP_ACT_ALLOW"
}
]
}
The syscalls section outlines the different system calls that are
permitted inside the container. Let's build a Docker container using the
Dockerfile inside the chapter6/dockerseccomp directory. Open your
terminal, change the directory to chapter6/dockerseccomp, and run
the following command:
This will build the sample main.go inside that directory and package it
into a container. Executing docker images shows the following images in
your local repository:
REPOSITORY                        TAG      IMAGE ID       CREATED        SIZE
...
docker-seccomp                    latest   4cebeb0b7fce   47 hours ago   21.3MB
...
gcr.io/distroless/base-debian10   latest   a5880de4abab   52 years ago   19.2MB
docker run docker-seccomp:latest
You will get the same output as when you run the sample in a terminal:
The reason you are able to run the container without any problem
even after adding seccomp is that seccomp.json contains all the
necessary permitted syscalls for the container.
Let’s remove some syscalls from seccomp.json. You have another file
called problem_seccomp.json that has removed mkdirat and getcwd from
the allowable syscall list. Run the following from your terminal:
The container will not run successfully, and you will get the
following output:
You have successfully run the container, applying restricted syscalls for
the application.
In the next section, you will look at building a Docker proxy to listen to
the Docker communication to understand how Docker actually works in
terms of receiving a command and responding to it.
Docker Proxy
Docker comprises two main components: the client tool, normally
called docker, which you run from your terminal, and the server, which
runs as a daemon and listens for incoming commands. The Docker client
communicates with the server using what is known as a socket, an
endpoint that passes data between different processes. Docker uses what
is known as a non-networked socket, which is mostly used for local
machine communication and is called a Unix domain socket (or IPC socket).
Docker by default uses Unix socket /var/run/docker.sock to
communicate between client and server, as shown in Figure 6-1.
In this section, you will look at sample code of how to intercept the
communication between Docker client and server. You will step through
the code to understand what it is actually doing and how it is performed.
The code is inside the chapter6/docker-proxy directory. Run it on your
terminal as follows:
go run main.go
DOCKER_HOST=unix:///tmp/docker.sock docker ps
On the terminal that is running the proxy, you will see the Docker
output in JSON format. On my local machine, the output looks as follows:
{
"Id": "56f68f7cafb7e5f8b1b1f6263ac6b26f4d47b7a06536842212d577ddf1910a11",
"Names": [
"/redis"
],
"Image": "redis",
"ImageID": "sha256:bba24acba395b778d9522a1adf5f0d6bba3
e6094b2d298e71ab08828b880a01b",
"Command": "docker-entrypoint.sh redis-server",
"Created": 1657331859,
...
},
{
"Id": "2ab2942c2591dcd8eba883a1d57f1183a1d99bafb60be8f17edf8794e9295e53",
"Names": [
"/postgres"
],
"Image": "postgres",
"ImageID": "sha256:1ee973e26c6564a04b427993f47091cd3ae4d5156fbd46d331b17a8e7ab45d39",
"Command": "docker-entrypoint.sh postgres",
"Created": 1657331853,
...
}
]
The proxy prints out the request from the Docker client and the
response from the Docker server to the console. The Docker command
line still prints out as normal, and the output looks as follows:
CONTAINER ID   IMAGE      COMMAND                   CREATED       STATUS       PORTS                                       NAMES
56f68f7cafb7   redis      "docker-entrypoint.s..."  4 hours ago   Up 4 hours   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp   redis
2ab2942c2591   postgres   "docker-entrypoint.s..."  4 hours ago   Up 4 hours   0.0.0.0:5432->5432/tcp, :::5432->5432/tcp   postgres
As you can see, the response that the Docker client receives is in the
JSON format and it contains a lot of information. Now let’s dig into the
code to understand how things work internally.
Figure 6-2 shows the command flow from client to proxy to Docker
server. The communication from the Docker client passes through the
proxy before reaching the Docker daemon.
The following snippet shows the code that listens to the socket /tmp/
docker.sock:
func main() {
in := flag.String("in", proxySocket, "Proxy docker socket")
...
func main() {
...
dhandler := &handler{dsocket}
...
err = http.Serve(sock, dhandler)
...
}
The resp variable now contains the response from the original Docker
socket. The code extracts the relevant information and forwards it back to
the caller via the response object, as shown in the following code snippet:
reader := bufio.NewReader(resp.Body)
for {
line, _, err := reader.ReadLine()
...
// write the response back to the caller
response.Write(line)
...
}
}
In the next section, you will look at how to configure your Dockerfile to
minimize the container attack surface.
Container Attack Surface
Once it’s successfully built, you will get output in your terminal
as shown:
Run the newly created Docker image using the following command:
FROM scratch
...
ENTRYPOINT ["/sample"]
By using the scratch image, you have minimized the attack surface
of your container because this image does not have many applications
installed, unlike other Docker base images (for example, Ubuntu and Debian).
Summary
In this chapter, you learned about Docker security. The first thing you
looked at was seccomp and why it is useful. You looked at the sample code
and how to restrict a Go application using seccomp. You looked at setting up
libseccomp, which allows you to apply restrictions on what system calls
your application can make.
The next thing you looked at was using the libseccomp-golang library in
your application and how to apply system call restrictions inside your
code. Applying restrictions inside code is good, but it is hard to keep
changing that code once it is running in production, so you also looked at
using seccomp profiles when running Docker containers.
CHAPTER 7
Gosec and AST
Source Code
The source code for this chapter is available from the https://github.com/Apress/Software-Development-Go repository.
Let’s take a quick peek at what AST looks like in comparison to the
original code. Figure 7-2 shows the comparison between the original Go
code and when it is converted into AST during the compilation process.
To the normal eye, the AST looks like a bunch of text, but for the
compiler it is very helpful because the data structure allows it to go
through different parts of the code to check for errors, warnings, and many
other things.
Go provides a built-in module that makes it easy for applications to
convert source code into an AST, and this module is used by tools like
golangci-lint (github.com/golangci/golangci-lint) for reading and linting Go
source code.
What does the AST data structure look like? Figure 7-3 shows a brief
view of the AST structure.
There are many real-world use cases that benefit from using an AST:
Modules
The modules that you will be using in this chapter are go/parser and
go/ast. The godocs can be found at https://pkg.go.dev/go/parser and
https://pkg.go.dev/go/ast, respectively. Each module provides different
functions, as explained here:
How the AST works will become clearer in the next section, when you
look at different examples.
Sample Code
You will explore different samples in this section using the different Go
AST modules. The examples will give you a good idea of how to use the
different AST modules and what can be done with the AST results.
Inspecting
Run the code inside the chapter7/samplecode/inspecting folder as
follows:
go run main.go
2:9: id: p
3:7: id: c
3:11: bl: 1.0
4:5: id: X
4:9: id: f
4:11: bl: 3.14
4:17: bl: 2
4:21: id: c
The code creates an AST data structure for the source code that is
provided to the parser and filters out the declared constants and
variables. Let's go through the sample code to understand what each part
of the code does.
The code declares a variable named src that contains the source code.
It's simple Go code containing const and var declarations. Successfully
parsing the source code returns a value of type ast.File, which contains
the AST data structure that the program then traverses.
package main
import (
...
)
func main() {
src := `
package p
const c = 1.0
var X = f(3.14)*2 + c
`
fset := token.NewFileSet()
f, err := parser.ParseFile(fset, "", src, 0)
...
}
package main
import (
...
)
func main() {
...
Parsing a File
The sample code in this section creates an AST data structure of
main.go and prints out the names of the imported modules, the
function names declared in the code, and the line number of each return
statement. The code can be found inside the chapter7/samplecode/parsing
directory. Run the sample in a terminal as follows:
go run main.go
package main
import (
...
)
func main() {
...
f, err := parser.ParseFile(fset, "./main.go", nil, 0)
...
ast.Inspect(f, func(n ast.Node) bool {
ret, ok := n.(*ast.ReturnStmt)
if ok {
...
}
return true
})
}
package main
import (
...
)
func main() {
...
f, err := parser.ParseFile(fset, "./main.go", nil, 0)
...
log.Println("Imports:")
The value returned from ParseFile is an ast.File, and one of the fields
in that structure is Imports, which contains all the imports declared in the
source code. The code ranges over the Imports field and prints each
import name to the console. The code also prints the declared
function names, which is done by the following code:
func main() {
...
for _, f := range f.Decls {
fn, ok := f.(*ast.FuncDecl)
...
log.Println(" ", fn.Name.Name)
}
}
The Decls field contains all the declarations found in the source code;
the code filters out only values of type ast.FuncDecl, which contain the
function declarations.
You have looked at different AST example code and should now have
a better understanding of how to use it and what information you can get out
of it. In the next section, you will look at how the AST is used in an open source
security project.
gosec
The gosec project is an open source tool (https://github.com/securego/gosec)
that provides security static code analysis. The tool provides a set
of secure code best practices for the Go language, and it scans your source
code to check whether any code breaks those rules.
Use the following command to install it if you are using Go 1.16 or above:
go install github.com/securego/gosec/v2/cmd/gosec@latest
Then run it from the root of your project:
gosec ./...
The tool will scan your sample code recursively and print its findings on the console.
Summary:
Gosec : dev
Files : 3
Lines : 105
Nosec : 0
Issues : 1
The tool scans through all the .go files inside the directory recursively and, after completing the parsing and scanning process, prints out the final result. In my directory, it found one issue, which is labeled G104. The tool performs its code analysis using the go/ast package, similar to the examples earlier in this chapter.
Inside gosec
Figure 7-4 shows at a high level how gosec works.
The tool loads up rules (step 1) that have been defined internally.
These rules define functions that are called to check the code being
processed. This is discussed in detail in the next section.
Once the rules have been loaded, it proceeds to process the directory given as a parameter and recursively collects all the .go files it finds (step 4). This is performed by the following code (helpers.go):
result := []string{}
for path := range paths {
result = append(result, path)
}
return result, nil
}
Once all the filenames are collected, the last step is to loop through the files and call ast.Walk. ast.Walk is called with two arguments: gosec and file. gosec is the visitor that the AST package calls back into, while the file argument passes the file information to the walker.
The gosec visitor implements the Visit(..) function, which the AST package calls for each node it encounters. The tool's Visit(..) function can be seen here:
...
issue, err := rule.Match(n, gosec.context)
if err != nil {
...
}
if issue != nil {
...
}
}
return gosec
}
The Visit(..) function calls the rules that were loaded in step 2 by calling the Match(..) function, passing in the ast.Node. The rule source checks whether the ast.Node fulfills the conditions for that particular rule.
The last step (step 7) is to print out the report collected from the different rules executed.
Rules
The tool defines rules, which are essentially Go functions that validate an ast.Node to check whether it fulfills certain conditions. The function that generates the rules can be seen here (inside rulelist.go):
package rules
import (
...
)
...
return &credentials{
pattern: regexp.MustCompile(pattern),
entropyThreshold: entropyThreshold,
...
MetaData: gosec.MetaData{
ID: id,
What: "Potential hardcoded credentials",
Confidence: gosec.Low,
Severity: gosec.High,
},
}, []ast.Node{(*ast.AssignStmt)(nil), (*ast.ValueSpec)(nil),
(*ast.BinaryExpr)(nil)}
}
Summary
In this chapter, you looked at what an abstract syntax tree is and what it looks like. Go provides packages that make it easy for applications to work with the AST data structure. This opens up the possibility of writing tools such as static code analyzers like the open source project gosec.
The sample code provided for this chapter shows how to use AST for
simple things like calculating the number of global variables and printing
out the package name from the import declaration. You also looked in
depth at the gosec tool to understand how it uses AST to provide secure
code analysis by going through the different parts of the source code.
CHAPTER 8
Scorecard
In this chapter, you will look at an open source security tool called
Scorecard. Scorecard provides security metrics for projects you are
interested in. The metrics will give you visibility on the security concerns
that you need to be aware of regarding the projects.
You will learn how to create GitHub tokens using your GitHub account.
The tokens are needed by the tool to extract public GitHub repository
information. You will walk through the steps of installing and using the
tool. To understand the tool better, you will look at the high-level flow of
how the tool works and also at how it uses the GitHub API.
One of the key takeaways of this chapter is how to use the GitHub API
and the information that can be extracted from repositories hosted on
GitHub. You will learn how to use GraphQL to query repository data from
GitHub using an open source library.
Source Code
The source code for this chapter is available from the https://github.
com/Apress/Software-Development-Go repository.
What Is Scorecard?
Scorecard is an open source project that analyzes your project’s
dependencies and gives ratings about them. The tool performs several
In the next section, you will look at setting up the GitHub token key so
that you can use it to scan the GitHub repository of your choice.
Setting Up Scorecard
Scorecard requires a GitHub token key to scan the repository. The reason
behind this is the rate limit imposed by GitHub for unauthenticated
requests. Let’s walk through the following steps to create a token key
in GitHub.
In the next section, you will use the token you generated to build and
run Scorecard.
Running Scorecard
Download the tool from the project GitHub repository. For this chapter, you'll use v4.4.0; the binary can be downloaded from https://github.com/ossf/scorecard/releases/tag/v4.4.0. Once you download the archive file, unzip it to a directory on your local machine.
Execute Scorecard to check that it's working.
/directory/scorecard help
Usage:
./scorecard --repo=<repo_url> [--checks=check1,...]
[--show-details]
or ./scorecard --{npm,pypi,rubgems}=<package_name>
[--checks=check1,...] [--show-details] [flags]
./scorecard [command]
...
Flags:
...
Now that Scorecard is working on your machine, let's use the token you
generated in the previous section to scan a repository. For this example,
GITHUB_AUTH_TOKEN=<github_token> /directory_of_scorecard/
scorecard --repo=github.com/ossf/scorecard
Replace <github_token> with your GitHub token. The tool will take a
bit of time to run because it is scanning and doing checks on the GitHub
repository. Once complete, you will see output something like Figure 8-9.
You have successfully run the tool to scan a GitHub repository and
received an output with a high score of 8.0. A higher score indicates that
the repository is doing all the right things as per the predefined checks in
the tool.
In the next section, you will further explore the tool to understand how
it works and go through code for different parts of the tool.
High-Level Flow
In this section, you will go in depth to understand what the tool is doing
and look at code from the different parts of the tool. In digging through the
code, you will uncover new things that can be used when designing your
own application. First, let’s take a high-level look at the process of the tool,
as shown in Figure 8-10.
Use this diagram as a reference when you read the different parts
of the application along with the code. The first thing that the tool does
when it starts up is check whether it is able to use the provided token to
access GitHub. It is hard-coded to test GitHub connectivity by accessing
the github.com/google/oss-fuzz repository (step 2). This is shown in the
following code snippet (checker/client.go):
func GetClients(...) (
...
) {
...
client.repo = repo
client.repourl = &repoURL{
owner: repo.Owner.GetLogin(),
...
commitSHA: commitSHA,
}
client.contributors.init(client.ctx, client.repourl)
...
client.webhook.init(client.ctx, client.repourl)
client.languages.init(client.ctx, client.repourl)
return nil
}
Figure 8-11 outlines the subset of GitHub handlers that use the
different GitHub connections.
...
return ret, nil
}
func runEnabledChecks(...
resultsCh chan checker.CheckResult,
) {
...
wg := sync.WaitGroup{}
for checkName, checkFn := range checksToRun {
checkName := checkName
checkFn := checkFn
wg.Add(1)
go func() {
defer wg.Done()
runner := checker.NewRunner(
checkName,
repo.URI(),
&request,
)
The final step of the tool is collecting, formatting, and scoring the results (step 8). The output destination depends on the configuration: results can be displayed on the console (the default) or written to a file. The code snippet is shown here (scorecard/cmd/root.go):
...
)
if err != nil {
log.Panic(err)
}
repoResult.Metadata = append(repoResult.Metadata,
o.Metadata...)
...
resultsErr := pkg.FormatResults(
o,
&repoResult,
checkDocs,
pol,
)
...
}
One thing you learn from the tool is how to use the GitHub API. The tool uses the GitHub API extensively to perform its checks, downloading information about the repository and validating that information against the predefined security checks. You are now going to take a look at how to use the GitHub API to do some GitHub exploration.
GitHub
Anyone who works with software knows about GitHub and has used it in one way or another. You can find most kinds of open source software on GitHub, hosted for free. It has become the go-to destination for anyone who dabbles in software.
GitHub provides an API that allows external tools to interact with its services. The API opens up enormous potential for developers to build tools on top of the GitHub service that provide value for their organizations. This has allowed a proliferation of third-party solutions (free and paid) to become available to the general public. The Scorecard project in this chapter is one of the tools made possible by the GitHub API.
GitHub API
There are two kinds of GitHub APIs: REST and GraphQL (https://docs.
github.com/en/graphql). There are different projects that implement
both APIs, which you will look at a bit later.
The REST-based API is accessed through normal HTTP calls. For example, you can type the following address into your browser:
https://api.github.com/users/test
You will get a JSON response like the following:
{
"login": "test",
"id": 383316,
"node_id": "MDQ6VXNlcjM4MzMxNg==",
"avatar_url": "https://avatars.githubusercontent.com/
u/383316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/test",
"html_url": "https://github.com/test",
...
"created_at": "2010-09-01T10:39:12Z",
"updated_at": "2020-04-24T20:58:44Z"
}
https://api.github.com/orgs/golang/repos
This address returns the list of repositories hosted publicly under a particular organization on GitHub, in this example the Golang organization. You will get the following response:
[
{
"id": 1914329,
"node_id": "MDEwOlJlcG9zaXRvcnkxOTE0MzI5",
"name": "gddo",
"full_name": "golang/gddo",
"private": false,
"owner": {
"login": "golang",
"id": 4314092,
...
},
"html_url": "https://github.com/golang/gddo",
"description": "Go Doc Dot Org",
"fork": false,
...
"license": {
...
},
...
"permissions": {
...
}
},
{ ... }
]
The response is in JSON format. The information you are seeing is the
same when you visit the Golang project page at https://github.com/
golang. The GitHub documentation at https://docs.github.com/en/
rest provides a complete list of REST endpoints that are accessible.
Using the API from a Go application would require you to wrap each endpoint in a function yourself, which is time consuming, so instead you can use the open source Go library from https://github.com/google/go-github. Let's run the example that uses this library, which can be found inside the chapter8/simple folder. Open your terminal and run it as follows:
go run main.go
2022/07/16 18:43:43 {
"id": 23096959,
"node_id": "MDEwOlJlcG9zaXRvcnkyMzA5Njk1OQ==",
"owner": {
"login": "golang",
"id": 4314092,
...
},
"name": "go",
"full_name": "golang/go",
"description": "The Go programming language",
"homepage": "https://go.dev",
...
"organization": {
"login": "golang",
"id": 4314092,
...
},
"topics": [
"go",
...
],
...
"license": {
...
},
...
}
package main
import (
...
"github.com/google/go-github/v38/github"
)
func main() {
client := github.NewClient(&http.Client{})
ctx := context.Background()
repo, _, err := client.Repositories.Get(ctx, "golang", "go")
...
log.Println(string(r))
}
The GraphQL website (https://graphql.org) describes it as follows:
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
Normally, when using a REST API, getting different kinds of data means calling different endpoints and then assembling the collected data into one structure yourself. GraphQL makes this simple: you define what repository data you want, and it returns everything you requested as one single collection.
This will become clearer when you look at the sample application
provided. Open your terminal and run the sample inside chapter8/
graphql. Run it as follows:
You need to use the GitHub token you created previously in the section
“Setting Up Scorecard.” On a successful run, you will get the following (the
output will differ because the data is obtained from GitHub in real time,
which will have changed by the time you run this sample):
The output shows the information obtained from the http://github.com/golang/go repository: the first 10 issues, the first 10 comments, and the first 10 labels. This kind of information is very useful, and as you walk through the code you will see how easily it is obtained using the GraphQL API.
The main part of the GraphQL API is the query that the sample passes
to the GitHub endpoint, which looks like the following:
node {
title
}
}
}
commitComments(first: 10) {
totalCount
edges {
node {
author {
url
login
}
}
}
}
}
}
• createdAt
• forkCount
}
} `graphql:"commitComments(first: $commitcount)"`
} `graphql:"repository(owner: $owner, name: $name) "`
RateLimit struct {
Cost *int
}
}
The struct definition uses data types that are defined in the library (e.g., githubv4.String, githubv4.Int, etc.).
Once you have defined the query struct, you use the GraphQL library, in this case the open source library hosted at https://github.com/shurcooL/githubv4, as shown here:
func main() {
...
data := new(graphqlData)
vars := map[string]interface{}{
"owner": githubv4.String("golang"),
"name": githubv4.String("go"),
"labelcount": githubv4.Int(10),
"issuescount": githubv4.Int(10),
"commitcount": githubv4.Int(10),
}
if err := graphClient.Query(context.Background(), data,
vars); err != nil {
log.Fatalf(err.Error())
}
log.Println("Total number of fork : ", data.Repository.
ForkCount)
...
}
The code initializes the graphqlData struct that will be populated by the library with the information received from GitHub, and then it makes the call to GitHub using the graphClient.Query(..) function, passing in the newly created struct and the defined variables. The vars map contains the values that will be passed to GitHub as the parameters of the GraphQL query.
Once the .Query(..) function returns successfully, you can use the
returned data populated inside the data variable and print it out to the
console.
In the next section, you will look at how to use GitHub Explorer to work
with GraphQL.
GitHub Explorer
GitHub Explorer is a web-based tool provided by GitHub to allow
developers to query GitHub repositories for information. The tool is
available from https://docs.github.com/en/graphql/overview/
explorer. You must sign in with your GitHub account before using the
tool. Once access has been granted, you will see Explorer, as shown in
Figure 8-13.
Once you are logged in, try the following GraphQL query and click the run button.
{
repository(owner: "golang", name: "go") {
createdAt
diskUsage
name
}
}
You will get the following response:
{
"data": {
"repository": {
"createdAt": "2014-08-19T04:33:40Z",
"diskUsage": 310019,
"name": "go"
}
}
}
Explorer provides quick hints about what data you can add to the query: create a new line inside the query and hit Alt+Enter, and it will display a scrollable tooltip like the one in Figure 8-14.
For more reading on the different data that can be extracted using
GraphQL, refer to the queries documentation at https://docs.github.
com/en/graphql/reference/queries.
Summary
In this chapter, you looked at an open source project called Scorecard
that provides security metrics for projects hosted on GitHub. The project
measures the security of a project on a scale of 0-10 and this can also
be used for projects stored locally. The major benefit of the tool is the
public availability of data for projects that have been scanned by the tool.
This data is useful for developers because it gives them information and
insights on the security metrics of projects they are planning to use.
You also looked at how the tool works and learned how to use the
GitHub API to extract repository information to perform predefined
security checks.
You learned in detail about the two available GitHub APIs, REST and GraphQL. You looked at sample code to understand how to use each of these APIs to extract information from a GitHub repository. Finally, you explored GitHub Explorer to understand how to construct GraphQL queries for performing query operations on GitHub.
CHAPTER 9
Simple Networking
In this chapter, you will learn how to write networking code using Go. You
will understand how to write client and server code for the TCP and UDP
protocols. You will also look at writing a network server that can process
requests concurrently using goroutines. By the end of the chapter, you will
know how to do the following:
Source Code
The source code for this chapter is available from the https://github.
com/Apress/Software-Development-Go repository.
TCP Networking
In this section, you will explore creating TCP applications using the
standard Go network library. The code that you will write is both TCP
client and server.
TCP Client
Let’s start by writing a TCP client that connects to a particular HTTP
server, in this case google.com, and prints out the response from the
server. The code can be found inside the chapter9/tcp/simple directory.
Run it as follows:
go run main.go
When the code runs, it will try to connect to the google.com server
and print out the web page returned to the console, as shown in the
output here:
HTTP/1.0 200 OK
Date: Sun, 05 Dec 2021 10:27:46 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See g.co/p3phelp for
more info."
Server: gws
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
Set-Cookie: 1P_JAR=2021-12-05-10; expires=Tue, 04-Jan-2022
10:27:46 GMT; path=/; domain=.google.com; Secure
Set-Cookie:
...
Accept-Ranges: none
Vary: Accept-Encoding
<!doctype html>
...
The app uses the net package from the standard library and it uses a
TCP connection specified in the following code:
package main
...
const (
host = "google.com"
port = "80"
)
func main() {
t := net.JoinHostPort(host, port)
...
}
func main() {
...
...
func main() {
...
connReader := bufio.NewReader(conn)
scanner := bufio.NewScanner(connReader)
for scanner.Scan() {
fmt.Printf("%s\n", scanner.Text())
}
Now that you understand how to write a TCP client, in the next section
you will learn how to write a TCP server.
TCP Server
In this section, you will write a TCP server that listens to port 3333 on
your local machine. The server will print out what it received and send a
response back. The code is inside the tcp/server directory, and it can be
run as follows:
go run main.go
nc localhost 3333
Once connected, enter any text and press Enter. You will get a response
back. The following is an example. I typed in This is a test and it came back
with a response of Message received of length : 15.
This is a test
Message received of length : 15
Let's take a look at the code. The first thing you will look at is how the code waits and listens on port 3333, as shown in the following code snippet:
func main() {
t := net.JoinHostPort("localhost", "3333")
l, err := net.Listen("tcp", t)
...
for {
conn, err := l.Accept()
if err != nil {
log.Println("Error accepting: ", err.Error())
os.Exit(1)
}
go handleRequest(conn)
}
}
The code uses the Accept function of the Listener object, which is
returned when calling the net.Listen(..) function. The Accept function
waits until it receives a connection.
When the client is connected successfully, the code proceeds by calling
the handleRequest function in a separate goroutine. Having requests
processed in a separate goroutine allows the application to process
requests concurrently.
The handling of the request and the sending of the response is taken
care of inside the handleRequest function, as shown in the following
snippet:
The code reads the data sent by the client using the Read(..) function
of the connection and writes the response back using the Write(..)
function of the same connection.
Because the code uses a goroutine, the TCP server is able to process
multiple client requests without any blocking issues.
UDP Networking
In this section, you will look at writing network applications using the UDP
protocol.
UDP Client
In this section, you will write a simple UDP application that communicates with a quote-of-the-day (qotd) server, which returns a string quote that the app prints out to the console. The following link provides more information about the qotd protocol and the available public servers: www.gkbrk.com/wiki/qotd_protocol/. The sample code connects to the server djxms.net, which listens on port 17.
The code can be found inside the chapter9/udp/simple directory, and
it can be run as follows:
go run main.go
Every time you run the application you will get different quotes. In my
case, one was the following:
The library ensures that the provided domain is valid by performing a DNS lookup. On encountering an error, it returns a non-nil value in the err variable.
In this section, you learned how to connect to a UDP server using the standard library. In the next section, you will learn how to write a UDP server.
UDP Server
In this section, you will explore further and write a UDP server using the
standard library. The server listens on port 3000 and prints out what is
sent by the client. The code can be found inside the chapter9/udp/server
directory, and it can be run as follows:
go run main.go
nc -u localhost 3000
Once the nc tool runs, enter any text and you will see it printed in the
server’s terminal. Here is an example of how it looked on my machine:
Let’s explore how the code works. The following snippet sets up the
UDP server using the net.ListenUDP function:
...
func main() {
conn, err := net.ListenUDP("udp", &net.UDPAddr{
Port: 3000,
IP: net.ParseIP("0.0.0.0"),
})
...
}
The function call returns a UDPConn struct that is used to read and write
to the client. After the code successfully creates a UDP server connection,
it starts listening to read data from it, as shown here:
...
func main() {
...
for {
message := make([]byte, 512)
l, u, err := conn.ReadFromUDP(message[:])
...
log.Printf("Received: %s from %s\n", data, u)
}
}
Concurrent Servers
In the previous section, you wrote a UDP server but one of the things that
is lacking is its ability to process multiple UDP client requests. Writing a
UDP server that can process multiple requests is different from normal
TCP. The way to structure the application is to spin off multiple goroutines
to listen on the same connection and let each goroutine take care of
processing the request. The code can be found inside the udp/concurrent
directory. Let’s take a look at what it is doing differently compared to the
previous UDP server implementation.
The following snippet shows the code spinning off multiple goroutines
to listen to the UDP connection:
...
func main() {
addr := net.UDPAddr{
Port: 3333,
}
connection, err := net.ListenUDP("udp", &addr)
...
for i := 0; i < runtime.NumCPU(); i++ {
...
go listen(id, connection, quit)
}
...
}
...
}
...
}
Load Testing
In this section, you will look at using load testing to test the network server
that you wrote in the previous sections. You will be using an open source
load testing tool called fortio. which can be downloaded from https://
github.com/fortio/fortio; for this book, use version v1.21.1.
Using the load testing tool, you will see the timing difference between code that handles requests without goroutines and code that handles requests using goroutines. For this exercise, you will use the UDP servers inside the chapter9/udp/loadtesting directory: you will compare the UDP server that uses goroutines, inside the chapter9/udp/loadtesting/concurrent directory, with the UDP server that does not use goroutines, inside chapter9/udp/loadtesting/server.
The only difference between the code used for load testing and the code discussed in the previous section is the addition of the time.Sleep(..) function. This is added to simulate a process doing some work on the request before sending a response back. Here is the code:
func main() {
...
for {
...
// pretend the code is doing some request processing for 10 milliseconds
time.Sleep(10 * time.Millisecond)
...
}
}
The tool makes 200 calls to the server running locally on port 3333. You will see results something like the following:
...
00:00:44 I udprunner.go:223> Starting udp test for
udp://0.0.0.0:3333/ with 4 threads at 8.0 qps
Starting at 8 qps with 4 thread(s) [gomax 12] : exactly 200, 50
calls each (total 200 + 0)
...
Aggregated Function Time : count 200 avg 0.011425742 +/-
0.005649 min 0.010250676 max 0.054895756 sum 2.2851485
# range, mid point, percentile, count
>= 0.0102507 <= 0.011 , 0.0106253 , 94.50, 189
> 0.011 <= 0.012 , 0.0115 , 98.00, 7
> 0.045 <= 0.05 , 0.0475 , 99.00, 2
> 0.05 <= 0.0548958 , 0.0524479 , 100.00, 2
# target 50% 0.0106453
# target 75% 0.0108446
# target 90% 0.0109641
# target 99% 0.05
# target 99.9% 0.0544062
Sockets used: 200 (for perfect no error run, would be 4)
Total Bytes sent: 4800, received: 200
udp short read : 200 (100.0 %)
All done 200 calls (plus 0 warmup) 11.426 ms avg, 8.0 qps
The final result is that the average time it takes to process a request is 11.426 ms. Now let's compare this with the server code that does not use goroutines, which is inside the chapter9/udp/loadtesting/server directory. Once you run the UDP server, use the same command to run fortio. You will see results that look like the following:
...
00:00:07 I udprunner.go:223> Starting udp test for
udp://0.0.0.0:3000/ with 4 threads at 8.0 qps
Starting at 8 qps with 4 thread(s) [gomax 12] : exactly 200, 50
calls each (total 200 + 0)
...
Aggregated Function Time : count 200 avg 0.026354093 +/-
0.01187 min 0.010296825 max 0.054235708 sum 5.27081864
# range, mid point, percentile, count
>= 0.0102968 <= 0.011 , 0.0106484 , 24.50, 49
> 0.011 <= 0.012 , 0.0115 , 25.00, 1
> 0.02 <= 0.025 , 0.0225 , 50.00, 50
> 0.03 <= 0.035 , 0.0325 , 73.50, 47
> 0.035 <= 0.04 , 0.0375 , 74.00, 1
> 0.04 <= 0.045 , 0.0425 , 98.50, 49
> 0.045 <= 0.05 , 0.0475 , 99.00, 1
> 0.05 <= 0.0542357 , 0.0521179 , 100.00, 2
# target 50% 0.025
# target 75% 0.0402041
# target 90% 0.0432653
# target 99% 0.05
# target 99.9% 0.0538121
Sockets used: 200 (for perfect no error run, would be 4)
Total Bytes sent: 4800, received: 200
udp short read : 200 (100.0 %)
All done 200 calls (plus 0 warmup) 26.354 ms avg, 8.0 qps
Summary
In this chapter, you learned how to create network applications using TCP
and UDP. You learned how to write client and server for both protocols.
You learned how to write an application that can process multiple requests
concurrently using goroutines.
This is an important step to understand because it is the foundation
of how to write network applications that can process huge amounts of
traffic. This chapter is a stepping-stone for the upcoming chapter where
you will look at different styles of writing network applications in Linux.
CHAPTER 10
System Networking
In the previous chapter, you wrote TCP and UDP applications using the standard library. In this chapter, you will use this knowledge to build system network tools. The objective of writing these tools is to gain a better understanding of how easy it is to do so using the capabilities of the Go standard library. This shows that the standard library provides a lot of capability, enabling developers to build all kinds of network-related applications.
In this chapter, you will get a good understanding of the following:
Source Code
The source code for this chapter is available from the https://github.
com/Apress/Software-Development-Go repository.
Ping Utility
In this section, you will write an application that provides ping-like
functionality. The code can be found inside the chapter10/ping folder.
go build -o pinggoogle .
sudo ./pinggoogle
Code Walkthrough
You are going to dive into the sample code to understand how the whole thing works. The application starts off by calling the Ping() function to ping a single domain; in this example, it will ping the golang.org domain.
func main() {
addr := "golang.org"
dst, dur, err := Ping(addr)
Now that you have opened a local socket connection for ICMP and
resolved the IP address of the destination domain, the next step is to
initialize the ICMP packet and send it off to the destination, as shown in
the following code snippets:
const (
ICMPTypeEchoReply ICMPType = 0 // Echo Reply
ICMPTypeDestinationUnreachable ICMPType = 3 // Destination Unreachable
ICMPTypeRedirect ICMPType = 5 // Redirect
ICMPTypeEcho ICMPType = 8 // Echo
ICMPTypeRouterAdvertisement ICMPType = 9 // Router Advertisement
ICMPTypeRouterSolicitation ICMPType = 10 // Router Solicitation
ICMPTypeTimeExceeded ICMPType = 11 // Time Exceeded
ICMPTypeParameterProblem ICMPType = 12 // Parameter Problem
ICMPTypeTimestamp ICMPType = 13 // Timestamp
ICMPTypeTimestampReply ICMPType = 14 // Timestamp Reply
ICMPTypePhoturis ICMPType = 40 // Photuris
ICMPTypeExtendedEchoRequest ICMPType = 42 // Extended Echo Request
ICMPTypeExtendedEchoReply ICMPType = 43 // Extended Echo Reply
)
Once the type has been defined, the next field that needs to contain
information is the Body field. Here you use icmp.Echo, which will contain
echo requests:
type Echo struct {
	ID   int    // identifier
	Seq  int    // sequence number
	Data []byte // data
}
...
// Marshal the data
b, err := m.Marshal(nil)
if err != nil {
return dst, 0, err
}
...
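To make the marshaling step concrete, here is a minimal sketch (not the book's code) of what an echo request looks like on the wire. It builds the 8-byte ICMP header by hand with the standard library's encoding/binary package; the checksum function implements the RFC 792 ones' complement sum that the icmp package computes for you during Marshal:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// checksum computes the 16-bit ones' complement sum used by ICMP
// (RFC 792): sum the packet as big-endian 16-bit words and invert.
func checksum(b []byte) uint16 {
	var sum uint32
	for i := 0; i+1 < len(b); i += 2 {
		sum += uint32(binary.BigEndian.Uint16(b[i : i+2]))
	}
	if len(b)%2 == 1 {
		sum += uint32(b[len(b)-1]) << 8 // pad the trailing odd byte
	}
	for sum>>16 != 0 {
		sum = (sum >> 16) + (sum & 0xffff) // fold the carries back in
	}
	return ^uint16(sum)
}

// echoRequest builds a raw ICMP echo request: type 8, code 0,
// checksum, identifier, sequence number, then the payload.
func echoRequest(id, seq uint16, payload []byte) []byte {
	msg := make([]byte, 8+len(payload))
	msg[0] = 8 // ICMPTypeEcho
	msg[1] = 0 // code
	binary.BigEndian.PutUint16(msg[4:6], id)
	binary.BigEndian.PutUint16(msg[6:8], seq)
	copy(msg[8:], payload)
	binary.BigEndian.PutUint16(msg[2:4], checksum(msg))
	return msg
}

func main() {
	pkt := echoRequest(1, 1, []byte("HELLO"))
	fmt.Printf("packet: % x\n", pkt)
	// For a well-formed ICMP message, recomputing the checksum over
	// the entire packet (checksum field included) yields zero.
	fmt.Println("recomputed checksum:", checksum(pkt)) // prints 0
}
```

Recomputing the checksum over the finished message, checksum field included, yields zero, which is a quick way to verify that a packet is well formed.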
The last step is to read and parse the response message obtained from
the server, as shown here:
switch rm.Type {
case ipv4.ICMPTypeEchoReply:
return dst, duration, nil
default:
return dst, 0, fmt.Errorf("got %+v from %v; want echo reply",
rm, peer)
}
In this section, you learned to open and use local socket connections
to send and receive data using the ICMP support provided with the
standard library. You also learned how to parse and print the response
much as a ping utility normally does.
DNS Server
Using the knowledge from the previous chapter on writing a UDP server,
you will write a DNS server. The aim of this section is not to write a
full-blown DNS server, but rather to show how to use UDP to write one. The
DNS server is a DNS forwarder that uses other publicly available DNS
servers to perform the DNS lookup functionality; you can think of it as a
DNS server proxy.
./dns
You get the following message when the app starts up successfully:
The DNS server is now ready to serve DNS requests on port 8090.
To test the DNS server, use dig and point it at port 8090, for example:
dig @127.0.0.1 -p 8090 golang.org
You get DNS output from dig, something like the following:
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;golang.org. IN A
;; ANSWER SECTION:
golang.org. 294 IN A 142.250.71.81
Now that you have successfully run and used the DNS server, in the
next section you will look at how to write the code.
DNS Forwarder
In this section, you will write a DNS forwarder that is based on UDP. It
forwards the query to an external DNS server and uses the response to report
back to the client. In your code, you'll use Google's public DNS server
8.8.8.8 to perform the query.
The first thing the code will do is to create a local UDP server that
listens on port 8090, as shown here:
func main() {
dnsConfig := DNSConfig{
...
port: 8090,
}
Once it successfully opens port 8090, the next thing it will do is to open
a connection to the external DNS server and start the server.
func main() {
dnsConfig := DNSConfig{
dnsForwarder: "8.8.8.8:53",
...
}
...
dnsFwdConn, err := net.Dial("udp", dnsConfig.dnsForwarder)
...
dnsServer := dns.NewServer(conn, dns.NewUDPResolver(dnsFwdConn))
...
dnsServer.Start()
}
The local UDP server waits for incoming DNS requests. Once it
receives an incoming UDP request, it is processed by handleRequest().
You saw in the previous section that the way to read a UDP request is to
call the ReadFromUDP(..) function, as shown here:
...
}
The code successfully unpacks the data from the incoming request.
The next step is to send the same request to the DNS forwarder
and process the response to be forwarded back to the client. The
ResolveDNS(..) function sends the newly created dnsmessage.Message
struct to the DNS forwarder and processes the received response.
Answers []Resource
Authorities []Resource
Additionals []Resource
}
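The sample code leans on the dnsmessage package to unpack these fields. As an illustration of what that parsing involves, the following standalone sketch decodes just the fixed 12-byte header (RFC 1035) that starts every DNS message; the Header type here is hand-rolled for the example and is not the dnsmessage API:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Header mirrors the fixed 12-byte header at the start of every DNS
// message (RFC 1035, section 4.1.1). All fields are big-endian.
type Header struct {
	ID      uint16 // query identifier, echoed back in the response
	Flags   uint16 // QR, opcode, RD, RA, RCODE, ...
	QDCount uint16 // number of questions
	ANCount uint16 // number of answer records
	NSCount uint16 // number of authority records
	ARCount uint16 // number of additional records
}

func parseHeader(b []byte) (Header, error) {
	if len(b) < 12 {
		return Header{}, fmt.Errorf("message too short: %d bytes", len(b))
	}
	return Header{
		ID:      binary.BigEndian.Uint16(b[0:2]),
		Flags:   binary.BigEndian.Uint16(b[2:4]),
		QDCount: binary.BigEndian.Uint16(b[4:6]),
		ANCount: binary.BigEndian.Uint16(b[6:8]),
		NSCount: binary.BigEndian.Uint16(b[8:10]),
		ARCount: binary.BigEndian.Uint16(b[10:12]),
	}, nil
}

func main() {
	// A typical query header: ID 0xabcd, RD flag set, one question.
	q := []byte{0xab, 0xcd, 0x01, 0x00, 0x00, 0x01, 0, 0, 0, 0, 0, 0}
	h, _ := parseHeader(q)
	fmt.Printf("id=%#x questions=%d recursion-desired=%v\n",
		h.ID, h.QDCount, h.Flags&0x0100 != 0)
	// prints: id=0xabcd questions=1 recursion-desired=true
}
```

The forwarder relies on exactly this kind of structure: the ID lets it match a forwarded response back to the original client request.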
Figure 10-2 shows the response received from the DNS forwarder when
the bytes are unpacked. As you can see, the Answers field is populated with
the answer to the query.
Summary
In this chapter, you learned more about using ICMP and UDP. One of the
features of the IP stack is the ability to check the availability of a server
using the ICMP protocol, which you used to build a ping-like utility. You
also used UDP to write a DNS forwarder server that relies on the dnsmessage
package to process DNS requests and responses. You now have a better
understanding of the networking features the standard library provides, and
at the same time you have seen how versatile these libraries are in allowing
us to develop useful network tools.
CHAPTER 11
Google gopacket
In the previous chapter, you learned about building networking tools using
the Go standard library. In this chapter, you will go further and investigate
an open source network library from Google called gopacket.
The library source code can be found at https://github.com/google/
gopacket and the library documentation can be found at https://pkg.
go.dev/github.com/google/gopacket. The source branch that you will be
looking at in this chapter is the master branch.
gopacket provides low-level network packet manipulation that cannot
be found inside the standard library. It provides developers with a simple
API to manipulate different network layers’ information obtained from the
network interface. In this chapter, you will learn
Source Code
The source code for this chapter is available from the https://github.
com/Apress/Software-Development-Go repository.
gopacket
In this section, you will explore gopacket and learn about the main part of
the library to understand how it works. This library provides the capability
to write applications that need to capture and analyze network traffic. The
library does the heavy lifting of communicating with the kernel to obtain
all the network data, parsing it and making it available to applications.
gopacket uses libpcap, a packet capture library that has been part of the
Linux toolbox for a long time. More information can be
found at www.tcpdump.org/index.html.
The libpcap library provides functionality to grab network packets
from the network cards, which are in turn parsed and converted to the
relevant protocols so they can easily be used by applications. gopacket provides
two major types of data structures that applications can work with, namely
Packet and Layer, which will be explored more in detail next.
Layer
In this section, you will look at the Layer interface. This is the
main interface in the library for holding raw network
data. The interface looks like the following:
type Layer interface {
	LayerType() LayerType
	LayerContents() []byte
	LayerPayload() []byte
}
Each protocol implementing the interface registers a LayerType with the
library, as this excerpt shows:
import (
...
)
var (
	LayerTypeARP = gopacket.RegisterLayerType(10,
		gopacket.LayerTypeMetadata{Name: "ARP", Decoder: gopacket.DecodeFunc(decodeARP)})
	LayerTypeCiscoDiscovery = gopacket.RegisterLayerType(11,
		gopacket.LayerTypeMetadata{Name: "CiscoDiscovery", Decoder: gopacket.DecodeFunc(decodeCiscoDiscovery)})
	LayerTypeEthernetCTP = gopacket.RegisterLayerType(12,
		gopacket.LayerTypeMetadata{Name: "EthernetCTP", Decoder: gopacket.DecodeFunc(decodeEthernetCTP)})
	...
	LayerTypeIPv4 = gopacket.RegisterLayerType(20,
		gopacket.LayerTypeMetadata{Name: "IPv4", Decoder:
	...
)
Implementations of the Layer interface for the different protocols can be
found inside the layers directory, shown in Figure 11-1.
TCP Layer
Let’s take a look at an example of a TCP protocol implementation that can
be found inside the layers/tcp.go file. The TCP struct declaration that
contains the TCP protocol information is shown here:
The following code shows the function DecodeFromBytes that reads the
raw bytes and converts them into a TCP struct:
tcp.Seq = binary.BigEndian.Uint32(data[4:8])
tcp.Ack = binary.BigEndian.Uint32(data[8:12])
...
...
}
Going through each of the protocol source files, you will see the
implementation of the different protocols that are supported by the library.
Packet
Packet is the primary type that your application will be working with. The
data that has been read from the low-level libpcap library ends up here
in a form that is easier for the developer to understand. Let's take a look at
the Packet struct, which is defined inside the packet.go file:
The struct holds different functions that return the different types
of Layer that you looked at in the previous section. To understand a bit
better, let’s take a peek at the ApplicationLayer type that is returned by
the ApplicationLayer() function, which is defined inside the same file,
packet.go.
Using gopacket
In this section, you will look at examples of how to use gopacket. These
examples will give you ideas of how to use the library and also show its
capabilities in reading network protocols.
pcap
Let's take a moment to understand pcap, which stands for packet capture.
Linux has tools that allow a developer or sysadmin to perform network
troubleshooting, and one of those tools is a packet capture tool. Packet
capture tools allow Linux root users to capture network traffic on the
machine.
The traffic data can be saved into a file and later read back for analysis.
This kind of capability is very useful for auditing as well as security
and network troubleshooting in a cloud or local environment. In this
chapter, you will capture and analyze a pcap file.
Installing libpcap
The code relies on a Linux library called libpcap (www.tcpdump.org/
manpages/pcap.3pcap.html). This is the main library that helps
in performing network captures. Make sure you have the library installed
on your local Linux machine. On a Debian-based distro, you can install it
with:
sudo apt install libpcap-dev
Networking Sniffer
In this section, you will look at an example of a network sniffer
application built using the library. The sample application can be found inside
the chapter11/gopacket/sniffer folder. The sample code will sniff
your local network and print out the following:
• IPv4 information
• DNS information
• TCP information
• UDP information
Before running the application, make sure you change the following
line of code to use the correct network interface that exists in your
machine:
const (
iface = "enp7s0"
...
)
go build -o sniffer
sudo ./sniffer
Once the app runs, you will see output like the following:
Code Walkthrough
Let's take a look step by step at the different parts of the app to understand
how it uses gopacket. The following code shows the process of initializing
the library to sniff the network traffic using the specified network interface:
func main() {
f, _ := os.Create(fName)
...
handle, err := pcap.OpenLive(iface, sLen, true, -1)
if err != nil {
log.Fatal(err)
}
...
}
func main() {
f, _ := os.Create(fName)
...
pSource := gopacket.NewPacketSource(handle, handle.LinkType())
for packet := range pSource.Packets() {
printPacketInfo(packet)
...
}
}
...
}
./go-cp-analyzer -r <directory_to_test.pcap>/filename.pcap
+--------------------------------+----------------------+
| Packet Distribution | |
+--------------------------------+----------------------+
| <= 66 | 6474 |
| <= 128 | 5831 |
| <= 256 | 858 |
This shows that the raw captured files are compatible with other raw
network analyzers that are available.
User uid: 1000
User gid: 1000
-------------------------------------
....
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[guac-init] Auto start not set, application start on login
guacd[429]: INFO: Guacamole proxy daemon (guacd) version
1.1.0 started
guacd[429]: INFO: Listening on host 0.0.0.0, port 4822
Starting guacamole-lite websocket server
listening on *:3000
...
Open the test.pcap file by selecting File ➤ Open and you will see
a screen like Figure 11-3. Select the test.pcap file from the selection of
available files.
Wireshark will successfully read the test.pcap file and will open it, as
shown in Figure 11-4.
go build -o httponly .
Run the code as root, replacing <network_device> with your local
network device.
After a successful run, you will see output that looks like the following.
You can see that it only prints TCP traffic connecting to an external server
on port 80.
Let’s take a look at how the code uses BPF to filter the network capture.
The following snippet shows what you learned in the previous section:
how to perform packet capture using the gopacket OpenLive function:
if *fname != "" {
...
} else {
log.Printf("Starting capture on interface %q", *iface)
handle, err = pcap.OpenLive(*iface, int32(*snaplen), true,
pcap.BlockForever)
}
...
Next, the code calls the SetBPFFilter function to specify the network
filter that you want to apply.
func main() {
...
if err := handle.SetBPFFilter(*filter); err != nil {
log.Fatal(err)
}
...
}
The main job of the run() function is to assemble and parse the raw
bytes into a more readable format to print out, as shown here:
Summary
In this chapter, you learned about capturing raw network traffic using the
open source gopacket project. The library provides a lot of functionality
made available through its simple public API. You learned how to write
applications using the library and use the information provided in the
different structures.
You looked at BPF (Berkeley Packet Filter) and learned to use it inside
your code to filter network captures using gopacket. Using BPF allows
an application to process only the network capture that it is interested in
rather than spending time processing all incoming traffic. This makes it
easier to develop apps targeted for specific traffic.
CHAPTER 12
Epoll Library
Building an application that processes a huge amount of network
traffic requires a special way of handling connections in a
distributed or cloud environment. Applications running on Linux are
able to do this thanks to the scalable I/O event notification mechanism
that was introduced in kernel version 2.5.44. In this chapter, you will look at
epoll. According to the documentation at https://linux.die.net/
man/7/epoll,
The epoll API performs a similar task to poll: monitoring multiple file
descriptors to see if I/O is possible on any of them.
You will start by looking at what epoll is, then move on to writing a
simple application, and finish by looking at a Go epoll library: how it
works and how to use it in an application.
On completion of this chapter, you will understand the following:
Source Code
The source code for this chapter is available from the https://github.
com/Apress/Software-Development-Go repository.
Understanding epoll
In this section, you will start by looking at what epoll is all about from
a system perspective. When you open a socket in Linux, you are given a
file descriptor (or FD for short), which is a non-negative value. When the
user application wants to perform an I/O operation to the socket, it passes
the FD to the kernel. The epoll mechanism is event-driven, so the user
application is notified when an I/O operation happens.
As shown in Figure 12-1, epoll is actually a data structure inside Linux
that is provided to multiplex I/O operations on multiple file descriptors.
Linux provides system calls for user applications to register, modify, or
delete FDs from the data structure. Another thing to note is that epoll
is Linux-specific, which means applications that use it can run only on
Linux kernel-based operating systems.
The following are the system calls used by applications to work with
the data structure. In later sections, you will look closely at how to use
them in an application and also inside an epoll library:
• epoll_create1 creates a new epoll instance and returns a file descriptor referring to it.
• epoll_ctl adds, modifies, or removes file descriptors on the epoll instance.
• epoll_wait waits for I/O events, blocking until events are available or a timeout expires.
epoll in Golang
In this section, you will write a simple application that uses epoll. The
app is an echo server that receives connections and echoes back whatever
value is sent to it.
Run the code inside the chapter12/epolling/epollecho folder. Open
your terminal to run the following command:
go run main.go
Once the app runs, open another terminal and use the nc (netcat)
tool to connect to the application. Type something in the
console and press Enter. This will be sent to the server.
nc 127.0.0.1 9999
The sample app will respond by echoing back the string sent by the
client. Before diving into the code, let's take a look at how epoll is used in
an application.
Epoll Registration
As you can see in Figure 12-2, the application creates a listener on port
9999 to listen for incoming connections. When a client connects to this
port, the application spins off a goroutine to handle the client connection.
Now, let’s take a more detailed look at how the whole thing works
inside the app. The following snippet shows the application creating a
socket listener using the syscall.Socket system call and binding it to port
9999 using syscall.Bind:
...
fd, err := syscall.Socket(syscall.AF_INET, syscall.O_NONBLOCK|syscall.SOCK_STREAM, 0)
if err != nil {
fmt.Println("Socket err : ", err)
os.Exit(1)
}
defer syscall.Close(fd)
// listener
err = syscall.Listen(fd, 10)
...
...
...
epfd, e := syscall.EpollCreate1(0)
if e != nil {
...
}
...
Epoll Wait
The last step after registering is to call syscall.EpollWait to wait for an
incoming event from the kernel, which is wrapped inside a for {} loop as
shown in the following snippet. The -1 parameter passed as the timeout
makes the call block indefinitely until an event arrives.
for {
n, err := syscall.EpollWait(epfd, events[:], -1)
...
}
for {
n, err := syscall.EpollWait(epfd, events[:], -1)
...
// go through the events
for ev := 0; ev < n; ev++ {
...
}
}
The event received contains the event type generated by the system
and the file descriptor that it is for. This information is used by the code
to check for a new client connection. This is done by checking whether
the file descriptor it received is the same as the listener; if it is, then it will
accept the connection by calling syscall.Accept using the listener FD.
Once it gets a new FD for the client connection, that FD is also
registered by the code into epoll using EpollCtl with the EPOLL_CTL_ADD flag.
Once this completes, both the listener FD and the client connection FD are
registered inside epoll and the application can multiplex I/O operations for both.
for {
n, err := syscall.EpollWait(epfd, events[:], -1)
...
// go through the events
As a final step, when the code detects that the FD received from the
event is not the same as the listener FD, it will spin off a goroutine to
handle the connection, which will echo back data received from the client.
Epoll Library
You looked at what epoll is all about and created an app that uses it.
Writing an app that uses epoll requires writing a lot of repetitive code
that takes care of accepting connections, reading requests, registering file
descriptors, and more.
Using an open source library can help in writing better applications
because the library takes care of the heavy lifting required for epoll. In
this section, you will look at netpoll (http://github.com/cloudwego/
netpoll). You will create an application using the library and see how the
library takes care of epoll internally.
The code can be found inside the chapter12/epolling/netpoll
folder. It is an echo server that sends each request it receives back to
the user as the response.
import (
...
"github.com/cloudwego/netpoll"
)
func main() {
listener, err := netpoll.CreateListener("tcp", "127.0.0.1:8000")
if err != nil {
panic("Failure to create listener")
}
You can see that the code written using the netpoll library is easier to
read than the code that you looked at in the previous section. A lot of the
heavy lifting is performed by the library; it also provides more features and
stability when writing high-performance networking code. Let’s take a
look at how netpoll works behind the scenes. Figure 12-3 shows at a high
level the different components of netpoll.
The library creates more than one epoll instance, using the number of
CPUs as the total number of instances to create. Internally, it uses a
load balancing strategy to decide which epoll instance a file descriptor will be
registered to.
The library registers with an epoll instance when it receives a new connection
or when the netpoll server runs for the first time, and it decides which
instance to use with either a random or round-robin load balancing
mechanism, as shown in Figure 12-4. The load balancer type can be
modified in an app using the following function call:
netpoll.SetLoadBalance(netpoll.Random)
netpoll.SetLoadBalance(netpoll.RoundRobin)
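The idea behind these two strategies can be sketched without the library. The following is a simplified mirror of the concept, not netpoll's code: a balancer that assigns each new connection's FD to one of N pollers, either round-robin or at random:

```go
package main

import (
	"fmt"
	"math/rand"
	"runtime"
	"sync/atomic"
)

// pollerBalancer picks which of n epoll instances a new connection's
// FD should be registered to. netpoll offers the same two strategies;
// this is a simplified mirror of the idea, not the library's code.
type pollerBalancer struct {
	n    int
	next uint32 // round-robin counter, advanced atomically
}

func (b *pollerBalancer) roundRobin() int {
	return int(atomic.AddUint32(&b.next, 1)-1) % b.n
}

func (b *pollerBalancer) random() int {
	return rand.Intn(b.n)
}

func main() {
	// One poller per CPU, as netpoll does by default.
	b := &pollerBalancer{n: runtime.NumCPU()}
	fmt.Println("pollers:", b.n)
	for i := 0; i < 5; i++ {
		fmt.Printf("conn %d -> poller %d (round-robin), poller %d (random)\n",
			i, b.roundRobin(), b.random())
	}
}
```

Round-robin spreads connections evenly; random avoids coordination entirely at the cost of occasional imbalance.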
Summary
In this chapter, you looked at different ways of writing applications using
epoll. Using your previous learning from Chapter 2 about system calls,
you built an epoll-based application using the standard library. You
learned that designing and writing epoll network applications is different
from writing normal networking applications. You dove into an epoll library
and learned how to use it to write a network application, and you also looked
at how the library works internally.
CHAPTER 13
Vulnerability Scanner
The proliferation of cloud providers enables organizations to deploy
applications affordably at scale. Deploying applications at scale
is one thing, but securing applications and resources is another, and
this has become a headache for organizations everywhere. Security is a
big topic, and it covers a lot of different aspects. In this chapter, you will
look at one of the tools that help in identifying vulnerabilities in
infrastructure.
You are going to look at a tool for detecting vulnerabilities inside Linux.
The primary focus of the chapter is to understand how and where to use
this tool and also to take a closer look at the source code to understand
better how the tool works. In this chapter, you will learn
Source Code
The source code for this chapter is available from the https://github.
com/Apress/Software-Development-Go repository.
Vulnerability Scanners
Vulnerability scanners are tools used to search for and report known
vulnerabilities that exist in your IT infrastructure. Every organization
has an IT infrastructure that it manages in-house or in the cloud. Running in
this infrastructure are a variety of applications, networks, and other components,
which require constant supervision when it comes to security. Every day
we read news of new vulnerabilities uncovered or exploited that can cause
damage to organizations and sometimes to the wider community.
Tools like vulnerability scanners use a lot of interesting technology stacks
that are useful to learn from, and this is the intention of this chapter. You will
look at an open source project named Vuls (https://github.com/future-
architect/vuls), which is written in Go, and look at how it implements
some of the functionality it provides in Go. The objective is to apply this
knowledge in your own project or use it as a knowledge base to understand
how this kind of tool works. Please remember that this chapter is by no means
a go-to guide for installing or using Vuls or vulnerability scanners in general.
The reason for choosing Vuls for this chapter is the fact that the project
is heavily maintained and updated by the community and it has a high star
rating. The project uses a database of information from different sources
rather than relying on its own source, making it up to date in terms of
detecting vulnerabilities.
Some of the key features that Vuls provides are
In the next section, you will download the source code, compile it, and
use it to understand how it works.
Using Vuls
In this section, you will explore Vuls and do the following:
Make sure you have your GOPATH directory set up to the correct
folder where you want to store your Go modules (in my case, my GOPATH
points to /home/nanik/Gopath). Once the command has successfully
run, it downloads the source code inside the src/github.com/future-
architect/vuls directory inside GOPATH, like so:
...
drwxrwxr-x 2 nanik nanik 4096 Jun 26 15:48 detector
-rw-rw-r-- 1 nanik nanik 596 Jun 26 15:48 Dockerfile
-rw-rw-r-- 1 nanik nanik 55 Jun 26 15:48 .dockerignore
...
The code is all set and ready to be built. Use the make command to
build it.
make build
The compilation process starts and all the related modules are
downloaded. Once compilation completes, you get an executable file
called Vuls. Run the application as follows:
./vuls
Subcommands:
commands list all command names
flags describe all known top-level flags
help describe subcommands and their syntax
Running Scan
Vuls requires a configuration file in the .toml format. For this section, you
can use the configuration file found inside the chapter13 directory called
config.toml, which is as follows:
[servers.localhost]
host = "localhost"
port = "local"
Vuls runs with the configuration specified with the -config parameter
and stores the report inside the directory specified by the -results-dir
parameter. You get verbose output that looks like the following:
{
"jsonVersion": 4,
"lang": "",
"serverUUID": "",
"serverName": "192-168-1-3",
"family": "pop",
"release": "22.04",
...
"ipv4Addrs": [
"192.168.1.3"
],
"ipv6Addrs": [
...
],
"scannedAt": "2022-06-26T18:40:16.045650086+10:00",
"scanMode": "fast mode",
"...
"scannedVia": "remote",
"scannedIpv4Addrs": [
...
],
"scannedIpv6Addrs": [
...
],
"reportedAt": "0001-01-01T00:00:00Z",
"reportedVersion": "",
"reportedRevision": "",
"reportedBy": "",
"errors": [],
...
"release": "5.17.5-76051705-generic",
"version": "",
"rebootRequired": false
},
"packages": {
...
}
},
"config": {
"scan": {
"debug": true,
"logDir": "/var/log/vuls",
"logJSON": false,
"resultsDir": "/home/nanik/go/src/github.com/
future-architect/vuls/result",
"default": {},
"servers": {
"192-168-1-3": {
...
}
},
"cveDict": {
...
},
"ovalDict": {
...
},
...
},
"report": {
"logJSON": false,
...
}
}
}
In the next section, you will explore some of the features provided
by Vuls.
Port Scan
A port scan is an operation performed to determine which ports are
open in a network. A port is a number that an application picks to
listen on. For example, HTTP servers listen on port 80 while FTP servers
listen on port 21. A list of standard port numbers used across
different operating systems can be seen at www.iana.org/assignments/
service-names-port-numbers/service-names-port-numbers.xhtml.
Looking at Vuls source code (scanner/base.go), you can see the
following function that performs a network scan:
package scanner
import (
...
nmap "github.com/Ullaakut/nmap/v2"
)
listenIPPorts := []string{}
...
scanner, err := nmap.NewScanner(nmap.WithBinaryPath(portScanConf.ScannerBinPath))
...
return listenIPPorts, nil
}
The code uses the open source nmap library from github.com/
Ullaakut/nmap to perform the scanning operation. Before getting into the
details of how the library drives nmap, let's get an understanding
of what nmap is first. nmap is a command-line tool used
for network exploration and security auditing. It is used for gathering
real-time information about the network, detecting which ports are open in a
network environment, checking which IP addresses are active in the
network, and more.
Make sure you have the nmap tool installed on your local machine. If
you are using a Debian-based Linux distro, use the following command to
install it:
sudo apt install nmap
...
EXAMPLES:
nmap -v -A scanme.nmap.org
nmap -v -sn 192.168.0.0/16 10.0.0.0/8
nmap -v -iR 10000 -Pn -p 80
Let’s take a look at the sample code that is provided inside the
chapter13/nmap directory and run it as follows:
go run main.go
The application runs and scans your local machine for open ports. On
my machine, the output looks like the following:
Host "127.0.0.1":
Port 22/tcp open ssh
Port 631/tcp open ipp
Port 5432/tcp open postgresql
Nmap done: 1 hosts up scanned in 0.020000 seconds
The code detects three open ports, which are related to the ssh, ipp,
and postgresql applications. You will get different results depending on
what ports are open on your local machine.
The code snippet that uses the nmap library is as follows:
package main
import (
...
"github.com/Ullaakut/nmap/v2"
)
func main() {
...
...
The function uses the Go os/exec package to check for the existence
of the nmap tool. Once the library has been initialized successfully, it calls
the Run() function to perform the scan operation.
package main
import (
...
"github.com/Ullaakut/nmap/v2"
func main() {
...
...
}
	Args             string         `xml:"args,attr" json:"args"`
	ProfileName      string         `xml:"profile_name,attr" json:"profile_name"`
	Scanner          string         `xml:"scanner,attr" json:"scanner"`
	StartStr         string         `xml:"startstr,attr" json:"start_str"`
	Version          string         `xml:"version,attr" json:"version"`
	XMLOutputVersion string         `xml:"xmloutputversion,attr" json:"xml_output_version"`
	Debugging        Debugging      `xml:"debugging" json:"debugging"`
	Stats            Stats          `xml:"runstats" json:"run_stats"`
	ScanInfo         ScanInfo       `xml:"scaninfo" json:"scan_info"`
	Start            Timestamp      `xml:"start,attr" json:"start"`
	Verbose          Verbose        `xml:"verbose" json:"verbose"`
	Hosts            []Host         `xml:"host" json:"hosts"`
	PostScripts      []Script       `xml:"postscript>script" json:"post_scripts"`
	PreScripts       []Script       `xml:"prescript>script" json:"pre_scripts"`
	Targets          []Target       `xml:"target" json:"targets"`
	TaskBegin        []Task         `xml:"taskbegin" json:"task_begin"`
	TaskProgress     []TaskProgress `xml:"taskprogress" json:"task_progress"`
	TaskEnd          []Task         `xml:"taskend" json:"task_end"`
	NmapErrors       []string
	rawXML           []byte
}
The raw XML output from nmap that the library receives looks like
the following:
Exec
The next feature that is used quite often inside Vuls is executing an external
tool to perform some operation as part of the scanning process. Vuls uses
commands for getting network IP information, getting kernel information,
updating the package manager's index, and many other tasks.
The commands used differ across operating systems, but the
way they are run is the same, using the os/exec package.
Take a look at the sample app that is inside the chapter13/exec folder
and run the sample in your terminal as follows:
go run main.go
The sample app uses the os/exec package to execute commands and
print the output to the console. The following code snippet shows the
function that uses the os/exec package:
package main
import (
..
ex "os/exec"
)
func main() {
...
Run("ip link")
...
Run("noexist")
...
Run("uname -r")
}
/bin/sh -c ip link
The app captures the command output into a variable and prints it
to the console.
SQLite
In this section, you will learn how to use SQLite databases. In particular,
you will learn how to use the sqlite3 library to read and write databases.
SQLite is a lightweight and self-contained SQL database that allows
applications to read and store information. Applications use normal SQL
syntax to perform different kinds of data manipulation such as inserting,
updating, and deleting data. The lightweight and portable nature of
SQLite makes it an attractive proposition to use in a project that doesn't
require a centralized database. Mobile platforms such as Android ship with
SQLite so that applications can use it for local storage.
Internally, Vuls uses SQLite extensively for storing data that it
downloads from different sources. You will look at sample applications
using SQLite. Sample code for this section can be found inside the
chapter13/sqlite directory. Let’s run the sample application as follows
from your terminal:
go run main.go
The sample code creates a new database called local.db and creates
a new table called currencies. It also inserts some data into it and prints
the newly inserted data to the console.
The following snippet shows the code that initializes the database:
package main
import (
...
_ "github.com/mattn/go-sqlite3"
...
)
...
func main() {
...
dbHandle = InitDB(dbname)
...
}
The InitDB function creates the new database using sql.Open, passing
in sqlite3 as the parameter. The sqlite3 parameter is used as a reference
by the database/sql module to look up the appropriate driver. If
successful, it returns the sql.DB struct, which is stored inside the db variable.
The sql.DB struct is declared in the database/sql module as follows:
type DB struct {
...
connector driver.Connector
...
closed bool
...
stop func()
}
Once the database has been created successfully, the code creates
a table called currencies, which is performed by the following
InitTable(..) function:
...
The function executes the CREATE TABLE.. SQL command using the
db.Exec(..) function, which executes the query against a database
without returning any rows. The returned value is of type Result, which
is not used in the InitTable(..) function. The Result struct is declared
in the database/sql module.
func main() {
...
records := []Record{}
d := strconv.Itoa(i)
rec := Record{Id: d, Name: curNames[r]}
records = append(records, rec)
}
...
}
The code creates a slice of Record structs and populates it; the populated
slice is then passed as a parameter to the InsertData(..)
function as follows:
The function uses the INSERT INTO statement inside the Prepare(..)
function, which creates a prepared statement that can be executed in
isolation later. The SQL statement uses a parameter placeholder for each
value (the placeholder is marked by the ? symbol); the values are supplied
as arguments when executing with the Exec(..) function and are obtained
from the Id and Name fields of the Record struct.
Now that the data has been inserted into the table, the code completes
the execution by reading the data from the table and printing it out to the
console as follows:
The code uses the Scan(..) function to copy the fields from each row into
the values passed as parameters. In the code example, the fields are read
into item.Id and item.Name.
The number of parameters passed to Scan(..) must match the
number of fields read from the table. The Rows struct that is returned when
using the Query(..) function is defined inside the database/sql module.
Summary
In this chapter, you looked at an open source security project called Vuls,
which provides vulnerability scanning capability. You learned about Vuls
by checking out the code and performing a scan operation on your local
machine.
Vuls provides a lot of functionality. In learning how Vuls works,
you learned about port scanning, executing external command-line
applications from Go, and writing code that performs database operations
using SQLite.
CHAPTER 14
CrowdSec
In this chapter, you will look at an open source security tool called
CrowdSec (https://github.com/crowdsecurity/crowdsec). There are
a few reasons why this tool is interesting to study:
The chapter is broken down into the installation part and the learning
part. In the installation part, you will look at installing CrowdSec to
understand how it works. In the learning section, you will look deeply into
how CrowdSec implements something that you can learn from by looking
at sample code.
Source Code
The source code for this chapter is available from the https://github.com/Apress/Software-Development-Go repository.
CrowdSec Project
The documentation at https://doc.crowdsec.net/docs/intro explains
it nicely:
Using CrowdSec
I will not go through the complete installation process of CrowdSec.
Rather, I will cover the steps of a bare minimum installation that will
allow you to understand what you need for the section “Learning From
CrowdSec.” The objective of this installation is to get to a point to see the
community data that is collected by a central server replicated to a local
database.
Create an empty directory to do the following steps. In my local
installation, I created a new directory under /home/nanik/GolandPojects/
crowdsec. Follow these steps:
wget https://github.com/crowdsecurity/crowdsec/releases/download/v1.4.1/crowdsec-release.tgz
└── crowdsec-v1.4.1
├── cmd
├── config
├── plugins
├── test_env.ps1
├── test_env.sh
└── wizard.sh
./test_env.sh
Let the script run. It will take a bit of time because it’s downloading a
few things. You will see output that looks like the following:
nanik@nanik:~/GolandProjects/crowdsec/crowdsec-v1.4.1$ tree -L 2 ./tests/
./tests/
├── config
│ ├── acquis.yaml
│ ├── collections
│ ├── crowdsec-cli
│ ├── hub
...
│ ├── scenarios
│ └── simulation.yaml
├── crowdsec
├── cscli
├── data
│ ├── crowdsec.db
│ ├── GeoLite2-ASN.mmdb
│ └── GeoLite2-City.mmdb
├── dev.yaml
├── logs
└── plugins
├── notification-email
...
└── notification-splunk
crowdsec.db
CrowdSec stores data inside a SQLite database called crowdsec.db. The
database contains a number of tables, shown in Figure 14-1.
The test environment does not populate any data when the database
is created, so you need to set up your environment so that it will sync from
a central server. To do this, you need to register first with the CrowdSec
server using the cscli tool, as outlined in the doc at https://docs.crowdsec.net/docs/cscli/cscli_capi_register/. Open a terminal,
change to the tests directory, and execute the following command:
Using the cscli command-line tool, you register with the central server.
The file online_api_credentials.yaml is then populated with the registration
details, which look like the following:
url: https://api.crowdsec.net/
login: <login_details>
password: <password>
You are now ready to populate your database with the central server.
Use the following command:
./crowdsec -c ./dev.yaml
...
INFO[27-07-2022 16:16:45] Crowdsec v1.4.1-linux-e1954adc325baa9e3420c324caabd50b7074dd77
WARN[27-07-2022 16:16:45] prometheus is enabled, but the listen address is empty, using '127.0.0.1'
WARN[27-07-2022 16:16:45] prometheus is enabled, but the listen port is empty, using '6060'
INFO[27-07-2022 16:16:45] Loading prometheus collectors
INFO[27-07-2022 16:16:45] Loading CAPI pusher
INFO[27-07-2022 16:16:45] CrowdSec Local API listening on 127.0.0.1:8081
Notice the last log message that says added 8761 entries, which means
that it has added 8761 entries into your database. If you are not getting this
message, rerun the crowdsec command.
Looking into the decisions table, you will see the populated data, as
shown in Figure 14-2.
You have learned briefly how to set up CrowdSec and you have seen
the data it uses. In the next section, you will look at parts of CrowdSec that
are interesting. You will look at how certain things are implemented inside
CrowdSec and then look at a simpler code sample of how to do it.
go run main.go
func main() {
signalChan := make(chan os.Signal, 1)
signal.Notify(signalChan,
syscall.SIGHUP,
syscall.SIGTERM,
syscall.SIGINT)
...
go func() {
for {
s := <-signalChan
switch s {
case syscall.SIGHUP, syscall.SIGINT, syscall.SIGTERM:
...
}
}
}()
...
}
The code listens for all these signals to ensure that if any of them is
detected, it can shut itself down properly.
The signalChan variable is a channel that accepts os.Signal values, and it is
passed as a parameter when calling signal.Notify(). The goroutine takes
care of handling the signal received from the library in a for{} loop (step
2). Receiving a signal (step 6) means that there is an interruption, so the
code must take the necessary steps to start the shutdown process (step 7).
Now that the code is ready to receive the system event and knows
what it is supposed to do when it receives one, let's take a look at how other
modules/goroutines are informed about this. The sample code spawns two
goroutines, as shown here:
func main() {
...
wg.Add(2)
go loop100Times(stop, &wg)
go loop1000Times(stop, &wg)
wg.Wait()
log.Println("Complete!")
}
func main() {
...
go func() {
for {
...
switch s {
case syscall.SIGHUP, syscall.SIGINT, syscall.SIGTERM:
...
close(stop)
...
}
}
}()
...
}
The close(stop) function closes the channel. A closed channel can always
be received from, so any part of the application selecting on the channel
will detect the close and act on it. The checking of the stop
channel can be seen in the following code snippet:
The for loop keeps running, and on every iteration it checks the following:
• Is there any value to read from the stop channel? If there is,
the process must stop.
• Otherwise, it just prints to the console and increments the
counter.
package main
import (
...
)
func main() {
...
var wg sync.WaitGroup
...
wg.Add(2)
go loop100Times(stop, &wg)
go loop1000Times(stop, &wg)
wg.Wait()
log.Println("Complete!")
}
In Figure 14-4, the apiReady channel is the central part of the service
coordination when CrowdSec starts up. The diagram shows that the
apiServer.Run function sends a signal to the apiReady channel, which
allows the other service, servePrometheus, to run the server listening on
port 6060.
The following code snippet shows the StartRunSvc function running
servePrometheus as a goroutine and passing in the apiReady channel;
it also passes the same channel when the Serve function is called:
package main
import (
"os"
...
)
The apiReady channel is set only when the CrowdSec API server
has been run successfully, as shown in the following code snippet. The
serveAPIServer function spawns off another goroutine when calling
...
})
}
return nil
}
and serviceB. Open up a terminal, make sure you are in the
chapter14/services directory, and run the code as follows:
go run main.go
func main() {
serviceBDone := make(chan bool, 1)
alldone := make(chan bool, 1)
go serviceB(serviceBDone)
go serviceA(serviceBDone, alldone)
<-alldone
}
The sample app creates two channels. Let’s take a look at the
function of each channel:
//2nd service
func serviceA(serviceBDone chan bool, finish chan bool) {
<-serviceBDone
...
log.Println("..Done with serviceA")
finish <- true
}
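serviceB's body is not shown above, so here is a complete sketch of the whole program. The sleep simulates serviceB's startup work; the log messages are illustrative.

```go
package main

import (
	"log"
	"time"
)

// 1st service: does its startup work, then signals that it is done.
func serviceB(done chan bool) {
	log.Println("serviceB starting...")
	time.Sleep(100 * time.Millisecond) // simulated startup work
	log.Println("..Done with serviceB")
	done <- true
}

// 2nd service: blocks until serviceB is ready before doing its own work.
func serviceA(serviceBDone chan bool, finish chan bool) {
	<-serviceBDone
	log.Println("..Done with serviceA")
	finish <- true
}

func main() {
	serviceBDone := make(chan bool, 1)
	alldone := make(chan bool, 1)
	go serviceB(serviceBDone)
	go serviceA(serviceBDone, alldone)
	<-alldone
}
```

The receive at the top of serviceA is what enforces the startup order: serviceA cannot proceed until serviceB has sent on serviceBDone, mirroring how CrowdSec gates servePrometheus on the apiReady channel.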
GeoIP Database
CrowdSec uses a GeoIP database that contains geographical information
about IP addresses. This database is downloaded as part of setting up the test
environment discussed in the “Using CrowdSec” section.
In this section, you will look into this database and learn how to
read the data from the database. One of the use cases for this database
is the ability to build a security tool for your infrastructure to label each
incoming IP, which is useful to monitor and understand the incoming
traffic to your infrastructure. The GeoIP database comes from the following
website: https://dev.maxmind.com/geoip/geolite2-free-geolocation-data?lang=en#databases. Have a read through the website to get an
understanding of the licensing.
The sample code is inside the chapter14/geoip/city folder, but
before running it, you need to specify the location of the GeoIP database
that the code will use. If you followed the “Using CrowdSec” section, you
already have GeoLite2-City.mmdb inside the tests/data directory.
package main
...
func main() {
db, err := maxminddb.Open("/home/nanik/GolandProjects/cloudprogramminggo/chapter14/geoip/city/GeoLite2-City.mmdb")
...
}
Once the file location has been specified, open terminal and run the
sample as follows:
go run main.go
The code reads the database to get all IP addresses in the 2.0.0.0 IP
range and prints all the IP addresses found in that range along with other
country- and continent-related information. Let’s go through the code and
understand how it uses the database.
The data is stored in a single, efficiently packed file, so in order to read
the database, you must use another library: github.com/oschwald/maxminddb-golang.
The documentation for the library can be found at https://pkg.go.dev/github.com/oschwald/maxminddb-golang.
The library provides a function to convert the data into a struct. In the
sample code, you create your own struct to represent the data that will
be read.
package main
...
TimeZone string `json:"time_zone"`
} `json:"location"`
RegisteredCountry struct {
GeoNameID int `json:"geoname_id"`
IsoCode string `json:"iso_code"`
Names map[string]interface{} `json:"names"`
} `json:"registered_country"`
}
func main() {
...
}
package main
import (
...
)
...
func main() {
...
_, network, err := net.ParseCIDR("2.0.0.0/8")
...
for networks.Next() {
var rec interface{}
r := GeoCityRecord{}
ip, err := networks.Network(&rec)
...
}
package main
...
func main() {
...
for networks.Next() {
var rec interface{}
r := GeoCityRecord{}
...
j, _ := json.Marshal(rec)
Once the JSON has been unmarshalled back into the r variable, the code
prints the information to the console.
Summary
In this chapter, you not only looked at the crowdsourced nature of data
collection used by CrowdSec and how the community benefits from it, you
also learned how to use it in your application.
You learned how to use channels to inform applications when system
signals are sent by the operating system. You also looked at using channels
to handle service dependencies during startup. Lastly, you looked at how
to read a GeoIP database, which is useful to know when you want to use
the information in your infrastructure for logging or monitoring IP traffic
purposes.
CHAPTER 15
ANSI and UI
In this chapter, you will learn about writing command-line applications
that have a user interface (UI). You will look at adding text styling, such
as italic or bold text, text of a different color, a UI that uses a spinner, and
more. This kind of user interface is possible by using ANSI escape codes,
which contain code to do certain things in the terminal.
You will also look at an open source library that makes it easy to write a
user interface that takes care of all the heavy lifting of writing the different
ANSI escape codes to do fancy UI tricks. In this chapter, you will learn
about the following:
Source Code
The source code for this chapter is available from the https://github.com/Apress/Software-Development-Go repository.
Figure 15-1 shows the output that you will see on your screen.
Both Bash scripts use ANSI code to select color. For Figure 15-2, the
ANSI code is the following:
\e[38;5;228m
• \e: the escape character
• 38;5: ANSI code specifying a foreground color
• 228: the color code for bright yellow
In this section, you learned about ANSI codes and how to use them
to print text with different colors by writing Bash scripts. This lays the
foundation for the next section where you are going to use ANSI code to
write different kinds of terminal-based user interfaces inside Go.
ANSI-Based UI
In the previous section, you looked at ANSI codes and how to use them
in Bash. In this section, you are going to use the ANSI codes inside a Go
application. You will use ANSI code to set text color, style the text such as
italic, and more.
Color Table
Open your terminal and run the code inside the chapter15/ansi folder.
go run main.go
The code prints the text Aa combined with the foreground and
background colors. The color values are set using escape codes obtained
from the fg and bg variables, as shown in the following snippet:
...
for _, fg := range fgColors {
fmt.Printf("%2s ", fg)
...
if len(fg) > 0 {
...
fmt.Printf("\x1b[%sm Aa \x1b[0m", bg)
}
}
}
[31;40m Aa [0m
• [0m: Reset
In the next section, you will look at examples of how to use ANSI code
to format text on a screen.
Styling Text
ANSI codes are also available to style text, such as italic, superscript, and
more. Let’s take a look at the sample code inside the chapter15/textstyle
folder, which will print output like Figure 15-5.
package main
import "fmt"
const (
Underline = "\x1b[4m"
UnderlineOff = "\x1b[24m"
Italics = "\x1b[3m"
ItalicsOff = "\x1b[23m"
)
...
In this section, you used different ANSI codes to format text in the
console with different colors and formats. Going through the sample code,
it is obvious that writing a command-line application that uses ANSI codes
is quite laborious because you need to specify the different ANSI codes
that are required in the application.
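A runnable version of the textstyle idea, using the same on/off code pairs shown above (the main body here is an illustration, not the book's exact code):

```go
package main

import "fmt"

const (
	Underline    = "\x1b[4m"
	UnderlineOff = "\x1b[24m"
	Italics      = "\x1b[3m"
	ItalicsOff   = "\x1b[23m"
)

func main() {
	// Each style has a matching "off" code, so styles can be turned off
	// individually instead of resetting everything with \x1b[0m.
	fmt.Println(Underline + "underlined" + UnderlineOff + " plain " + Italics + "italic" + ItalicsOff)
}
```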
In the next section, you will look at some open source projects that take
care of the different aspects of command-line user interface development
to make writing code easier.
Gookit
This library provides a simple API for applications to print text in different
foreground and background colors. It also provides text styling such as
italics, superscript, etc. The following is the link to the library project:
https://github.com/gookit/color.
Run the sample code inside the chapter15/gookit folder as shown:
go run main.go
...
func main() {
color.Warn = &color.Theme{"warning", color.Style{color.BgDefault, color.FgWhite}}
...
color.Style{color.FgDefault, color.BgDefault, color.OpStrikethrough}.Println("Strikethrough style")
color.Style{color.FgDefault, color.BgDefault, color.OpBold}.Println("Bold style")
...
}
Calling color.Style.Println prints the text that you want using the
foreground and background colors specified. For example,
const (
FgBlack Color = iota + 30
FgRed
FgGreen
FgYellow
...
)
const (
FgDarkGray Color = iota + 90
FgLightRed
FgLightGreen
...
)
const (
BgBlack Color = iota + 40
BgRed
...
)
const (
BgDarkGray Color = iota + 100
BgLightRed
...
)
const (
OpReset Color = iota
OpBold
OpFuzzy
OpItalic
...
)
The library uses the same ANSI codes to format the color and text
styling as you saw in the previous section. The following code snippet is
from the file color.go:
const (
SettingTpl = "\x1b[%sm"
FullColorTpl = "\x1b[%sm%s\x1b[0m"
)
Spinner
This library provides progress indicators for command-line applications.
Progress indicators are mostly found in mobile applications or in graphical
user interfaces like browsers; they indicate to the user that the application
is processing the user's request. The library
project’s home is https://github.com/briandowns/spinner. Open your
terminal and run the code inside the chapter15/spinner folder as follows:
go run main.go
Figure 15-7 shows the output you will see when running the sample
code. It prints the words Processing request with a red bar moving back and
forth as the spinner.
func main() {
s := spinner.New(spinner.CharSets[35], 100*time.Millisecond)
s.Color("red")
The library renders the spinner to the screen by printing each character
in the specified array after a certain delay. Doing this gives the illusion
of animation when it is seen on the screen.
In Figure 15-8, you can see in the debugging window how the different
characters that will form the spinner are stored inside the Spinner struct,
allowing the library to render them individually. This way, when the library
renders the different characters, it looks like an animation.
go func() {
for {
for i := 0; i < len(s.chars); i++ {
select {
...
default:
...
if runtime.GOOS == "windows" {
...
} else {
outColor = fmt.Sprintf("\r%s%s%s", s.Prefix,
s.color(s.chars[i]), s.Suffix)
}
...
fmt.Fprint(s.Writer, outColor)
...
time.Sleep(delay)
}
}
}
}()
}
The function fires off a goroutine and endlessly loops the animation on
the screen until the stop() function is called by the application.
Summary
In this chapter, you learned about ANSI codes and how they are useful for
creating user interfaces in terminals. The available ANSI codes allow you to
write text in color and apply different formatting to the text printed on the
screen. You learned that the ANSI codes can be used inside a Bash script
and inside Go code.
You explored deeper into the usage of ANSI codes by looking
at different open source libraries that provide richer user interface
functionality for terminal-based applications. The libraries you looked
at provide text-based formatting such as color and styles and progress
indicators.
CHAPTER 16
TUI Framework
You saw in Chapter 15 that ANSI provides a variety of codes that can be
used to develop text-based user interfaces. You also saw
examples of using ANSI codes and learned what the different codes mean.
There are a number of user interface libraries for Go that take care of user
interface operations, thereby making development easier and faster. In this
chapter, you will look at these libraries and explore in detail how they work
internally.
In this chapter, you will look at two libraries. The first library is a simple
library called uiprogress that allows an application to create a text-based
progress bar. The other is called bubbletea and it is a more comprehensive
library that allows an application to create different kinds of text-based UIs
such as text input, boxes, spinners, and more.
By the end of this chapter, you will learn the following:
uiprogress
In this section, you will look at the uiprogress library, which is hosted at
https://github.com/gosuri/uiprogress. The library provides a progress
bar user interface, as shown in Figure 16-1. The application uses the
library to create a progress bar as a feedback mechanism to show that an
operation is currently in progress.
Check the project out from GitHub to your local environment and
run the sample application that is provided inside the example/simple
directory.
go run main.go
func main() {
uiprogress.Start() // start rendering
bar := uiprogress.AddBar(100) // Add a new bar
for bar.Incr() {
time.Sleep(time.Millisecond * 20)
}
}
Code Flow
You will use this sample application as the basis to do a walk-through of
the library. Figure 16-3 shows how the application interacts with the library
and shows what is actually happening behind the scenes inside the library.
p.mtx.Lock()
interval := p.RefreshInterval
p.mtx.Unlock()
select {
case <-time.After(interval):
p.print()
case <-p.tdone:
p.print()
close(p.tdone)
return
}
}
}
Updating Progress
Upon expiry of the 10-millisecond refresh interval, the library updates each of
the registered progress bars using the print() function running in the
background. The code snippet for the print() function is as
follows:
The print() function loops through the Bars slice and calls the
String() function, which in turn calls the Bytes() function. The Bytes()
function performs calculations to get the correct value for the progress bar
and prints this with a suffix and prefix.
pb := buf.Bytes()
if completedWidth > 0 && completedWidth < b.Width {
pb[completedWidth-1] = b.Head
}
...
return pb
}
PrependElapsed() prefixes the progress bar with the time it has taken
to complete so far.
func main() {
...
for bar.Incr() {
time.Sleep(time.Millisecond * 20)
}
}
The code loops as long as bar.Incr() returns true and sleeps for
20 milliseconds before incrementing again.
From your code perspective, the library takes care of updating and
managing the progress bar, allowing your application to focus on its main
task. All the application needs to do is just inform the library about the new
value of the bar by calling the Incr() or Decr() function.
In the next section, you will look at a more comprehensive library that
provides a better user interface for an application.
Bubbletea
In the previous section, you saw the uiprogress progress bar library and
looked at how it works internally. In this section, you will take a look at
another user interface framework called bubbletea. The code can be
checked out from https://github.com/charmbracelet/bubbletea.
Run the sample application inside the examples/tui-daemon-combo
folder as follows:
go run main.go
In the next few sections, you will use the tui-daemon-combo sample
code to work out how the code flows inside the library.
Using bubbletea is quite straightforward, as shown here:
func main() {
...
p := tea.NewProgram(newModel(), opts...)
if err := p.Start(); err != nil {
fmt.Println("Error starting Bubble Tea program:", err)
os.Exit(1)
}
}
Now that you have defined the different functions that will be called by the
library when constructing and updating the UI, let’s look at how
each of these functions is used by the library.
Init
The Init() function is the first function called by bubbletea after calling
the Start() function. You saw that Init() must return a Cmd type, which is
declared as the function type shown here:
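In the bubbletea source, Cmd is a function that returns a Msg, and Msg is simply an empty interface. The following runnable sketch mirrors those declarations locally (tickMsg and tick are illustrative names, not from the library):

```go
package main

import "fmt"

// Msg and Cmd mirror bubbletea's declarations: Msg is an empty
// interface, and Cmd is a function that performs some work and
// returns a message when it is complete.
type Msg interface{}
type Cmd func() Msg

// tickMsg is an example of a concrete message a Cmd might return.
type tickMsg string

func tick() Msg { return tickMsg("tick") }

func main() {
	var cmd Cmd = tick
	msg := cmd()
	fmt.Println(msg)
}
```

Because Cmd is just a function value, the library can run commands on its own goroutines and feed the returned messages back into Update.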
Update
The Update function is called to update the state of the user interface. In
the sample app, it is defined as follows:
...
m.spinner, cmd = m.spinner.Update(msg)
...
case processFinishedMsg:
...
m.results = append(m.results[1:], res)
...
default:
return m, nil
}
}
View
The last function, View(), is called by the library to update the user
interface. The application is given the freedom to update the user interface
as it sees fit. This flexibility allows the application to render a user interface
that suits its needs.
This does not mean that the application needs to know how to draw
the user interface; that is taken care of by the functions available for each
UI component. Here is the View() function:
...
if m.quitting {
s += "\n"
}
return indent.String(s, 1)
}
The app combines all the user interfaces that it needs to display to
the user by extracting the different values from the different variables.
For example, it extracts the results array values to show them to the user. The
results array is populated in the Update function when it receives the
processFinishedMsg message type.
The function returns a string containing the user interface that will be
rendered by the library to the terminal.
Figure 16-7 shows, at a high level, the different goroutines that are
spun off by the library to take care of the different parts of the user
interface, such as keyboard and mouse input, terminal resizing,
and more.
The architecture resembles a pub/sub model, where the central goroutine
processes all the different messages and calls the relevant functions
internally to perform the operations.
Summary
In this chapter, you looked at two different terminal-based user interface
frameworks that provide APIs for developers to build command-line user
interfaces. You looked at sample applications of how to use the frameworks
to build simple command-line user interfaces.
You looked at the internals of the frameworks to understand how they
work. Knowing this gives you better insight into how to troubleshoot issues
when using these kinds of frameworks. And understanding the complexity
of these frameworks helps you build your own asynchronous applications.
CHAPTER 17
systemd
In this chapter, you will look at systemd, what it is, and how to write Go
applications to interact with it. systemd is an important piece of software
inside the Linux system, and it is too big to be covered entirely in this
chapter.
You will look at an open source systemd Go library that is available and
how to use it to access systemd. In this chapter, you will do the following:
Source Code
The source code for this chapter is available from the https://github.com/Apress/Software-Development-Go repository.
systemd
systemd is a suite of applications used in Linux systems to get them
up and running. It does more than just start the core Linux system;
it also starts a number of programs such as the network stack, user logins,
the logging server, and more. It uses socket and D-Bus activation to start
services and background applications on demand.
D-Bus stands for Desktop Bus. It is a specification that is used for an
inter-process communication mechanism, allowing different processes to
communicate with one another on the same machine. Implementation of
D-Bus consists of server components and a client library. For systemd, the
implementation is known as sd-bus, and in a later section, you will look at
using the D-Bus client library to communicate with the server component.
Socket activation is a mechanism whereby systemd listens on a network port
or Unix socket. When a connection arrives from an external source, systemd
starts the associated server application. This is useful when a
resource-hungry application needs to run only when it is needed, rather
than from the moment the Linux system starts up.
systemd Units
Files that are used by systemd are called units. A unit is a standard way to
represent resources managed by systemd. System-related systemd unit
files can be found inside /lib/systemd/system, which looks like this:
...
-rw-r--r-- 1 root root  389 Nov 18  2021 apt-daily-upgrade.service
-rw-r--r-- 1 root root  184 Nov 18  2021 apt-daily-upgrade.timer
lrwxrwxrwx 1 root root   14 Apr 25 23:23 autovt@.service -> getty@.service
-rw-r--r-- 1 root root 1044 Jul  7  2021 avahi-daemon.service
-rw-r--r-- 1 root root  870 Jul  7  2021 avahi-daemon.socket
-rw-r--r-- 1 root root  927 Apr 25 23:23 basic.target
-rw-r--r-- 1 root root 1159 Apr 18  2020 binfmt-support.service
...
lrwxrwxrwx 1 root root 40 Mar  4 04:53 dbus-org.freedesktop.ModemManager1.service -> /lib/systemd/system/ModemManager.service
lrwxrwxrwx 1 root root 53 Mar  4 04:51 dbus-org.freedesktop.nm-dispatcher.service -> /lib/systemd/system/NetworkManager-dispatcher.service
You have seen what systemd is and what it is used for. In the next section, you
will use the systemctl tool to inspect the services managed by systemd.
systemctl
systemctl is the main tool used to communicate with the systemd instance
running on your local machine. Type the following command in your
terminal:
systemctl
Without any parameters, it will list all the services that are currently
registered with systemd, as shown in Figure 17-1.
Let’s take a peek at one of the services currently running on your local
machine: systemd-journald.service, the systemd logging service. Open
your terminal and use the following command:
TriggeredBy: • systemd-journald-dev-log.socket
• systemd-journald-audit.socket
• systemd-journald.socket
Docs: man:systemd-journald.service(8)
man:journald.conf(5)
Main PID: 370 (systemd-journal)
Status: "Processing requests..."
Tasks: 1 (limit: 9294)
Memory: 54.2M
CPU: 3.263s
CGroup: /system.slice/systemd-journald.service
└─370 /lib/systemd/systemd-journald
The output shows information about the service such as the amount of
memory the service is using, the process ID (PID), location of the .service
file, and whether the service is active or not.
To stop a service, use the command systemctl stop. As an example, let's
stop cups.service (the service that provides printing support in Linux).
First, use the following command in your terminal to check its status:

systemctl status cups.service
TriggeredBy: • cups.socket
• cups.path
Docs: man:cupsd(8)
Main PID: 39757 (cupsd)
Status: "Scheduler is running..."
Tasks: 1 (limit: 9294)
Memory: 2.8M
CPU: 51ms
CGroup: /system.slice/cups.service
└─39757 /usr/sbin/cupsd -l
Now stop the service:

systemctl stop cups.service

If you check the status again using the same systemctl status
cups.service command, the output will show that the service is inactive.
Using systemctl allows you to inspect the status of the services
registered with systemd. In the next section, you will write a simple
server application and control it using systemctl.
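The actual chapter17/httpservice source is in the book's repository and may differ, but a minimal stand-in for such a server could look like the following sketch; port 8111 is the port the chapter uses:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// greeting is split out so the handler's response is easy to test.
func greeting() string {
	return "Hello from httpservice"
}

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, greeting())
}

func main() {
	http.HandleFunc("/", handler)
	// Port 8111 matches the URL used later in this section.
	log.Fatal(http.ListenAndServe(":8111", nil))
}
```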
go run main.go

Once you have verified that the application works, create the executable
that you will run as a systemd service. Make sure you are in the
chapter17/httpservice directory and compile the application using the
following command:
go build -o httpservice
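To run the binary as a systemd service, it needs a unit file. The exact unit used in the chapter is shown in the book's figures; a typical sketch, with assumed paths and description that you should adjust to where you placed the httpservice binary, might look like this:

```ini
[Unit]
Description=Sample Go HTTP service
After=network.target

[Service]
ExecStart=/usr/local/bin/httpservice
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Copy it to /etc/systemd/system/httpservice.service, then run systemctl daemon-reload followed by systemctl enable --now httpservice; the enable step is what makes the service start at boot.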
Now you can access the application by pointing your browser to
http://localhost:8111.
You have successfully deployed your sample app, and it is configured to
start when you boot up your machine. In the next section, you will look
at using a Go library to write systemd-aware applications.
go-systemd Library
You learned earlier in this chapter that systemd exposes its
functionality over D-Bus, and a client library lets applications
interact with systemd through that interface. The client library you are
going to look at is for Go applications and is called go-systemd. The
library can be found at https://github.com/coreos/go-systemd.
For this chapter, you will look at code samples that use the library to
write to the journal, list the services available on the local machine,
and query machines.
Querying Services
The sample code for this section can be found inside the chapter17/
listservices directory. It queries systemd for all registered services,
similar to what systemctl list-units does.
Open your terminal and make sure you are inside the chapter17/
listservices directory. Build the application as follows:
go build -o listservices
sudo ./listservices
...
Name : sys-module-fuse.device, LoadState : loaded, ActiveState : active, Substate : plugged
import (
    ...
)

func main() {
    ...
    for _, j := range js {
        fmt.Println(fmt.Sprintf("Name : %s, LoadState : %s, ActiveState : %s, Substate : %s",
            j.Name, j.LoadState, j.ActiveState, j.SubState))
    }
    c.Close()
}
The library takes care of the heavy lifting: connecting to systemd,
sending requests, and converting the responses into a format that it
passes back to the application.
Journal
Another example you will look at uses the library to write log messages
to the journal, systemd's logging service. To read the log, you can use
the journalctl command-line tool:
journalctl -r
The output looks like the following on my local machine (it will look
different on yours):
...
Jun 25 00:06:43 nanik sshd[2567]: pam_unix(sshd:session): session opened for user nanik(uid=1000) by (uid=0)
...
Jun 25 00:00:32 nanik kernel: audit: type=1400 audit(1656079232.440:30): apparmor="DENIED" operation="capable" profile="/usr/sbin/cups-browsed" pid=2527 comm="cups-browsed" capability=23 c>
Jun 25 00:00:32 nanik audit[2527]: AVC apparmor="DENIED" operation="capable" profile="/usr/sbin/cups-browsed" pid=2527 comm="cups-browsed" capability=23 capname="sys_nice"
Jun 25 00:00:32 nanik systemd[1]: Finished Rotate log files.
...
The -r parameter shows the latest log messages at the top. Now that you
know how to view the journal, let's run the sample application that
writes log messages into it.
Open a terminal and make sure you are inside the chapter17/journal
directory. Run the sample using the following command:
go run main.go
package main
import (
j "github.com/coreos/go-systemd/v22/journal"
)
func main() {
j.Print(j.PriErr, "This log message is from Go application")
}
The Print(..) function prints the message This log message is from
Go application with the error priority. This is normally printed in red
when you view it using journalctl. The following is a list of the different
priorities available from the library:
const (
PriEmerg Priority = iota
PriAlert
PriCrit
PriErr
PriWarning
PriNotice
PriInfo
PriDebug
)
The following priorities are displayed in red: PriErr, PriCrit,
PriAlert, and PriEmerg. PriNotice and PriWarning are highlighted, and
PriDebug is shown in a lighter grey. One interesting priority is
PriEmerg, which broadcasts the log message to all open terminals on the
local machine.
In the next section, you will look at an advanced feature of systemd,
which is registering and running a machine or container.
Machines
One advanced feature that systemd provides is the ability to run virtual
machines or containers on the local machine. This feature is not
available by default; extra services must be installed and configured
before you can use it. The feature is provided by a package called
systemd-container. Let's look at what this package is all about.
The systemd-container package contains a number of tools,
particularly the tool called systemd-nspawn. This tool is similar to chroot
(which I discussed in Chapter 4) but provides more advanced features
such as virtualizing the file system hierarchy, process tree, and various IPC
subsystems. Basically, it allows you to run a lightweight container with its
own rootfs.
The following steps walk you through installing and configuring the
package.
└─2744 /lib/systemd/systemd-machined
If this approach does not work on your Linux system, download the image
manually using the following command:
wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-root.tar.gz
Let’s check to make sure that the image has been downloaded
successfully by using the following command:
machinectl list-images
1 images listed.
Finally, now that you have the image downloaded and stored locally, you
can run it with machinectl.
go run main.go
package main
import (
m "github.com/coreos/go-systemd/v22/machine1"
...
)
func main() {
conn, err := m.New()
...
s, err := conn.ListImages()
...
Summary
In this chapter, you learned about systemd and its functions in the Linux
operating system. You explored the different tools that are available to
allow you to interact with systemd. You looked at Go code samples that
show how to interact with systemd using the go-systemd library.
go-systemd provides a range of capabilities for interacting with
systemd. One of the advanced features you looked at was interacting with
the systemd-machined service, which provides virtual machine and
container registration.
CHAPTER 18
cadvisor
In this chapter, you will look at an open source project called cAdvisor,
which stands for Container Advisor. The complete source code can be
found at https://github.com/google/cadvisor. This chapter uses
version v0.39.3 of the project. The project is used to collect resource usage
and performance data on running containers. cAdvisor supports Docker
containers, and this is specifically what you are going to look at in this
chapter.
The reason for choosing this project is to explore further the topics we
discussed in previous chapters, such as
• Using cgroups
Source Code
The source code for this chapter is available from the
https://github.com/Apress/Software-Development-Go repository.
Running cAdvisor
This section walks through how to check out the cAdvisor source code and
run it locally. Let's start by checking out the code using the following
command:

git clone https://github.com/google/cadvisor
Build the project by changing into the cmd directory and running the
following command:
go build -o cadvisor
You will get an executable file called cadvisor. Let’s run the project
using the following command to print out the different parameters it
can accept:
./cadvisor --help
-add_dir_header
I will not go through all of the parameters cAdvisor accepts; you are
just going to use the default values it assigns. cAdvisor requires root
access, so run it as follows:
sudo ./cadvisor -v 9
To see the containers that are running locally, click the Docker
Containers link on the main page. You will see a container UI like the
one shown in Figure 18-2. My local machine has a Postgres container
running, which is why a Postgres container appears; you will see
whatever containers are running on your own machine.
In the next section, you will explore further the cAdvisor UI and
concepts that are related to the project.
Click the system.slice link and you will see something like
Figure 18-4, which shows the different services running on the local
machine.
Figure 18-5 shows gauges of the percentage of memory and disk usage.
cAdvisor also shows the different processes that are currently running
in your system. Figure 18-6 shows information about the process name,
CPU usage, memory usage, running time, and other information.
After clicking the Docker Containers link, you will be shown the list
of containers that you can look into. In my case, as shown in Figure 18-8,
there is a Postgres container currently running on my local machine.
Clicking the Postgres container will show the different metrics related
to the container, as shown in Figure 18-9.
In the next section, you will dive into the internals of cAdvisor and
learn how it is able to do all these things in the code.
Architecture
In this section and the next, you will look at the internals of cAdvisor and how
the different components work. cAdvisor supports different containers, but for
this chapter you will focus on the code that is relevant to Docker only. Let’s take
a look at the high-level component view of cAdvisor shown in Figure 18-10.
In the next few sections, you will look at different parts of cAdvisor and
how they work.
Initialization
Like any other Go application, the entry point of cAdvisor is main.go.
func main() {
...
    memoryStorage, err := NewMemoryStorage()
    if err != nil {
        klog.Fatalf("Failed to initialize storage driver: %s", err)
    }
...
resourceManager, err := manager.New(memoryStorage, sysFs,
housekeepingConfig, includedMetrics, &collectorHttpClient,
strings.Split(*rawCgroupPrefixWhiteList, ","), *perfEvents)
...
cadvisorhttp.RegisterPrometheusHandler(mux, resourceManager,
*prometheusEndpoint, containerLabelFunc, includedMetrics)
...
rootMux := http.NewServeMux()
...
}
There are two kinds of HTTP handlers initialized by cAdvisor: API-based
HTTP handlers used by the web user interface, and metric HTTP handlers
that report metric information in raw format. The following snippet
shows the main handler registration, which registers the different paths
that are made available (inside cmd/internal/http/handlers.go):
}
mux.Handle("/", http.RedirectHandler(urlBasePrefix+pages.ContainersPage, http.StatusTemporaryRedirect))
...
return nil
}
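The registration pattern — API routes under a base prefix plus a redirect from / to the containers page — can be sketched with the standard library alone. The paths and payload below are illustrative, not cAdvisor's actual routing table:

```go
package main

import (
	"fmt"
	"net/http"
)

// apiResponse is a stand-in payload; cAdvisor's real API handlers
// serialize container stats, but the registration shape is the same.
func apiResponse() string {
	return `{"status":"ok"}`
}

// newMux mirrors the shape of cAdvisor's handler registration: API
// endpoints under a base prefix, plus a redirect from "/" to the main
// UI page.
func newMux(urlBasePrefix string) *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc(urlBasePrefix+"/api/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, apiResponse())
	})
	mux.Handle("/", http.RedirectHandler(urlBasePrefix+"/containers/", http.StatusTemporaryRedirect))
	return mux
}

func main() {
	// cAdvisor serves on :8080 by default; this sketch does the same.
	http.ListenAndServe(":8080", newMux(""))
}
```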
The API handlers expose the /api path. To test this handler, make sure
cAdvisor is running, then open your browser and enter the URL
http://localhost:8080/api/v1.0/containers. You will see something like
Figure 18-11.
Manager
Manager is the main component of cAdvisor. It takes care of the
initialization, maintenance, and reporting of the different metrics for
the containers it manages. The Manager interface is declared as follows:
Start() error
Stop() error
GetContainerInfo(containerName string, query *info.ContainerInfoRequest) (*info.ContainerInfo, error)
GetContainerInfoV2(containerName string, options v2.RequestOptions) (map[string]v2.ContainerInfo, error)
SubcontainersInfo(containerName string, query *info.ContainerInfoRequest) ([]*info.ContainerInfo, error)
AllDockerContainers(query *info.ContainerInfoRequest) (map[string]info.ContainerInfo, error)
DockerContainer(dockerName string, query *info.ContainerInfoRequest) (info.ContainerInfo, error)
GetContainerSpec(containerName string, options v2.RequestOptions) (map[string]v2.ContainerSpec, error)
GetDerivedStats(containerName string, options v2.RequestOptions) (map[string]v2.DerivedStats, error)
GetRequestedContainersInfo(containerName string, options v2.RequestOptions) (map[string]*info.ContainerInfo, error)
Exists(containerName string) bool
GetMachineInfo() (*info.MachineInfo, error)
GetVersionInfo() (*info.VersionInfo, error)
GetFsInfoByFsUUID(uuid string) (v2.FsInfo, error)
GetDirFsInfo(dir string) (v2.FsInfo, error)
GetFsInfo(label string) ([]v2.FsInfo, error)
GetProcessList(containerName string, options v2.RequestOptions) ([]v2.ProcessInfo, error)
WatchForEvents(request *events.Request) (*events.EventChannel, error)
GetPastEvents(request *events.Request) ([]*info.Event, error)
CloseEventChannel(watchID int)
package docker
import (
...
)
return nil
}
}
...
return nil
}
err := m.detectSubcontainers("/")
...
return nil
}
Monitoring Filesystem
cAdvisor uses the inotify API provided by the Linux kernel
(https://linux.die.net/man/7/inotify). This API allows applications to
monitor file system events, such as files being created or deleted.
Figure 18-13 shows how cAdvisor uses inotify events.
In the previous section, you learned that cAdvisor listens for events on
/sys/fs/cgroup and its subdirectories. This is how cAdvisor knows when
Docker containers are created or deleted. Let's take a look at how it
uses inotify for this purpose.
The code uses an inotify library to listen for events coming in from the
kernel, and a goroutine to process them. This goroutine is created as
part of the watcher start-up, as the following snippet shows:
go func() {
for {
select {
case event := <-w.watcher.Event():
err := w.processEvent(event, events)
if err != nil {
...
}
case err := <-w.watcher.Error():
...
case <-w.stopWatcher:
err := w.watcher.Close()
...
}
}
}()
return nil
}
369
Chapter 18 cadvisor
...
switch eventType {
case watcher.ContainerAdd:
    alreadyWatched, err := w.watchDirectory(events, event.Name, containerName)
    ...
case watcher.ContainerDelete:
    // Container was deleted, stop watching for it.
    lastWatched, err := w.watcher.RemoveWatch(containerName, event.Name)
    ...
default:
    return fmt.Errorf("unknown event type %v", eventType)
}
...
    EventType:   eventType,
    Name:        containerName,
    WatchSource: watcher.Raw,
}
return nil
}
The primary code that collects machine information lives in
machine/info.go; among other sources, it parses /proc/meminfo, which
looks like the following:
MemTotal: 16078860 kB
MemFree: 698260 kB
...
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 901628 kB
DirectMap2M: 15566848 kB
DirectMap1G: 0 kB
...
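cAdvisor's memory figure ultimately comes from parsing lines like these. A small stand-in parser, not cAdvisor's actual implementation, that pulls MemTotal out of /proc/meminfo:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// memTotalKB extracts the MemTotal value (in kB) from meminfo content.
func memTotalKB(meminfo string) (uint64, error) {
	sc := bufio.NewScanner(strings.NewReader(meminfo))
	for sc.Scan() {
		// Each line looks like: ["MemTotal:", "16078860", "kB"]
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "MemTotal:" {
			return strconv.ParseUint(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("MemTotal not found")
}

func main() {
	data, err := os.ReadFile("/proc/meminfo")
	if err != nil {
		fmt.Println(err)
		return
	}
	kb, err := memTotalKB(string(data))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("MemTotal: %d kB\n", kb)
}
```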
Now let's look at how cAdvisor reads information from the /sys
directory. The function GetNetworkDevices(..) (utils/sysinfo/
sysinfo.go) shown in the following snippet calls another function to get
the information from /sys/class/net.
const (
...
netDir = "/sys/class/net"
...
)
...
}
return dirs, nil
}
The function reads the directory entries and extracts the device
information from them.
Client Library
In the repository inside the chapter18 folder, there are examples of how
to use the cAdvisor client library to communicate with cAdvisor. The
examples show how to use the client library to get container information,
event streaming from cAdvisor, and so on.
Summary
In this chapter, you learned about installing and running cAdvisor to
monitor metrics of your local machine and Docker containers. The tool
provides a lot of information that shows the performance of the different
containers that are running on a machine. This chapter discussed how
cAdvisor collects metric information for containers and local machines
using the knowledge you learned in previous chapters.
Index
A
Abstract syntax tree (AST)
    built-in module, 113
    code stages, 111, 112
    data structure, 130
    definition, 111
    function and filters, 116
    vs. Go code, 112, 113
    modules, 115
    sample code, 116
        inspection, 116–118
        parsing file, 119–121
    structure, 113, 114
    use cases, 115
Accept function, 168
ANSI-based UI
    color table, 296–299
    color text output, 297
    foreground and background mapping, 299
    style text, 299, 300
ANSI codes, 296, 300, 306
ANSI escape code
    bash output, 295
    bash script, 294–296
    code description, 296
    color output, 295
    in-band signaling, 294
    terminal-based applications, 294
API-based handlers, 360
API-based HTTP handlers, 359
apiReady channel, 281, 282
Application Armor (AppArmor), 29, 31
ApplicationLayer() function, 205, 211
ast.BasicLint, 118, 119
ast.File, 116, 117, 121
ast.FuncDecl function, 121
ast.Ident, 118, 119
ast.Inspect(..) function, 117–120
ast.Node, 114, 118, 128

B
Berkeley Packet Filter (BPF), 217–222
BigQuery, 132
Bubbletea
    application functions, 314
    centralized process, messages, 321
    initialization, internal execution flow, 318
L
libpcap, 200, 206
libseccomp
    command, 91
    configureSeccomp() function, 93, 94
    libseccomp-golang library, 92
    multiple-team environment, 92
    sample application, 91

M
machinectl command-line tool, 342
Machines, 340–342, 344
main() function, 358
Manager, 361, 363–367, 371
Match(..) function, 128, 130
Memory information, 38, 39
Mini Root Filesystem, 59, 60
Monitoring filesystem, 368–371
S
Sampler struct, 39
SampleSetChan, 39
Scorecard
    BigQuery, 132
    execution, 137, 138
    high-level flow, 139–144
    open source security tool, 131
    openssf project, 132, 133
    project analysis, 131
    public dataset accessing, 132
    security metrics, 131, 159
    setting up, 133–137
sd-bus, 326
seccomp
    command, 90
    Docker containers, 90
    installing, package manager, 90
    libseccomp, 91–95
    Linux, 89
    restriction, 90
Security scorecard, 137
serveAPIServer function, 282
ServeHTTP function, 102
servePrometheus function, 282
SetBPFFilter function, 219
setup-ns, 86
setup-veth, 86
setupVirtualEthOnHost(..) function, 83
sh command, 63
SIGHUP, 276
SIGINT, 276
SignalChan variable, 276
SIGTERM, 276
Socket activation, 326
Spinner, 303–306
spinner.Start() function, 305
SQLite database, 270
Standard library, 22
StartRunSvc function, 281
StartSampling function, 39
Statfs_t struct declaration, 19
Statx function system, 12
Stop variable, 277
sync.WaitGroup, 279
/sys directory, 374
    AppArmor, 29, 30
    virtual filesystem, 29
syscall package
    application, 16, 17
    checking disk space, 18, 19
    definition, 16
    functionalities, 16
    webserver, 20–22
syscall.Accept, 22
syscall.Bind, 21, 228
syscall.Socket system, 21, 228
syscall.SOMAXCONN, 22
syscall.Statfs function, 19
System call
    codes, 10
    definition, 4
    in Go, 10–13
    Go library, 4
    high level, 4
    in Linux vs. Darwin, 9
    operating systems, 4