Nicolas Modrzyk
Contributed by David Li, Jun Akiyama and
Tony Broyez
Go Crazy: A Fun Projects-based Approach to Golang Programming
Nicolas Modrzyk
Tokyo-to, Suginami-ku, Japan
About the Author
Nicolas Modrzyk acts as the CTO of Karabiner Software, a
successful consulting company located in the never-asleep
Tokyo, with its mix of ancestral culture and eco-friendly,
future-oriented dynamic.
He is an active contributor to the open-source
community in various domains, including imaging, ML, AI,
and cloud computing. As an engineer and a leader, Nico
has been involved in designing large-scale applications,
managing mammoth-sized clusters of servers, sometimes
using handwritten software, and enabling world-class
leaders by pushing international boundaries.
Nico ardently focuses on making life simple. (And we all
know how difficult that is!)
He loves pushing people to challenge themselves and go beyond their comfort zones.
To learn other cultures and explore different world views, he has been living around
the planet in various countries, including France, Ireland, Japan, China, Korea, India,
and the United States. You can talk to Nico in French, English, and Japanese, and you can
get along with him in Spanish and Chinese.
Nico is the author of a few programming books, available on Amazon. He recently
picked up the saxophone to honor his grandfather and his uncle, in the hope of matching
their skill with a brass instrument.
He will be ready for a jazzy jam session whenever you are.
About the Technical Reviewer
David Li is the executive director of Shenzhen Open
Innovation Lab, which facilitates the collaboration between
global smart hardware entrepreneurs and the Shenzhen
Open Innovation ecosystem. Before SZOIL, he co-founded
XinCheJian, the first hackerspace in China to promote
the hacker/maker culture and open-source hardware. He
co-founded Hacked Matter, a research hub on the maker
movement and open innovation. He also co-founded Maker
Collier, an AI company focusing on motion and sports
recognition and analysis.
Acknowledgments
All the involved authors—Jun, Tony, David—as well as the technical reviewers, Mathieu
and David, of this book have gone the extra mile to match the deadlines and bring the
writing and code samples to a top-class level.
My two strong daughters, Mei and Manon—you always keep me focused and in line
with my goals.
Psy Mom, French Chef Dad, Little Bro, Artful Sis—I thank you for your love every day,
your support, and all the ideas we share together.
My partner at Karabiner, Chris Mitchell—we’ve been working together for ten years,
and I think we both made tremendous efforts to make the planet a better place. Also,
the whole Karabiner people, at work now or busy making babies, we make a pretty
impressive world team.
Abe-san—who did not participate directly in the making of this book, but we wrote
our first computer book together, and without a first one, and without his trust, I would
not be here to even talk about it.
Kanaru-san—without your Iranian lifestyle and your life changing vision, I would
probably be a monk.
Marshall—without your world encompassing vision, I could have been focusing on
the bigger picture.
Ogier—without your summertime raclette and life-long friendship, I would probably
have been 5 kilos skinnier.
Jumpei—without your strong focus on music, I could not have played in all those
beautiful Tokyo live stages. And welcome Rei-chan!
Gryffin and Melissa—I could not have survived this without your hard work
and trust.
And of course, Marcel le chat—my open-source project on imaging would not be the
same without your feline cuteness.
Introduction
On a sunny drive on the busy roads of Tokyo, over the rainbow bridge and facing the
ocean, my daughter Mei and I are having one of these philosophical talks.
Among the slew of questions she had ready for me, like "what is work for?", she was
telling me about her need to have someone monitor her and give her deadlines. While
at the time of this writing, she’s barely 20 and hasn’t started a full-blown professional
career yet, she is right in the sense that the need to have deadlines and a purpose is at the
core of many adults’ professional lives.
At the very root of a school system, you are being told what to complete, and by what
date. You do not have input regarding the what or the when. A regular office worker is
told to finish their tasks by the fifth of next month, for example, and some authors are
told to finish three chapters by the end of the month.
That de facto need of what to do and by when happens very early in your career.
I am in favor of looking at things from a different angle. You should set your own
deadlines, and you should be in control of those deadlines. You have a goal, you set
milestones to achieve that goal, and you work on walking that path to that goal.
You want to live your own life and reach your own goals, not someone else’s.
Although I am critical about many of his actions, Elon Musk does not have someone
telling him when to land a rocket on Mars. He has his own schedule. He owns his
schedule. He owns his life.
This is a book on how to own your life again. More precisely, it is about how Go, the
programming language, can help you get your time back, manage it in line with your dreams,
and own your life again.
I discovered the Go programming language a few years back. At that time, to be
honest, I was more of a Clojure-loving propaganda evangelist. Anything I developed or
touched had to be in Clojure. A deployment script, a web app, a dynamically generated
API around some custom datasets, image and video processing, or applying the latest
Computer Vision algorithm in real time—it did not matter. It greatly helped my career. I
would go even further and say, my life.
How can a programming language help make your life better, you might ask? A
programming language is at first a language, and as such its first goal is to communicate.
We tend to think that a programming language’s only goal is to deal with a computer, but
we deal with computers because we want to communicate something to other people.
Take a simple email, for example. You use a computer to write an email because it
takes less time to reach its recipient, but the goal of an email is still to convey a message
to another person.
Now let’s say you have a lot to communicate, or you want to communicate something
to many people, but with that simple personal touch that makes all the difference
between your email being ignored and it being read and acted upon.
You don’t have much time. In life in general, but also to realize a task. You can use a
computer to help you with that task and save time.
Nowadays, one of the best programming languages to put in your toolbox is GoLang.
It includes all the important concepts of Clojure that I love in a programming
language, but it's also in the top ten of the TIOBE index, meaning you can find a few
more programmers to help you do your job.
Don’t get me wrong, there are other great languages, but there are many things that
GoLang gets absolutely right:
–– It is simple
–– It is concise
–– It’s easy to reuse bits of code from one project to the other
–– It is cloud-ready
This programming book will take you on the path to Ikigai, finding joy in life through
purpose.
CHAPTER 1
Go to the Basics
The goal of this first chapter is to write a ChatGPT client in Go. You’ve probably heard
about ChatGPT. It is an AI-trained chatbot that generates text according to questions
you ask it.
To get to this point, you will run basic Go programs and get used to the language.
Then you will put things together into a ChatGPT client.
But you first need to set up your code editor.
First Steps
As with any new skill, you need a basic setup where you feel comfortable practicing
and trying new things. While Go, the language, makes writing code easier, GoLand, the
editor, makes writing Go easier.
To kick-start this chapter, you learn how to use GoLand as your editor for writing Go.
Once you have created a new project, a blank project window will be available.
The left side of the window shows your project file, and the right side shows your
code editor (which, at this stage, is empty). See Figure 1-2.
You can right-click in the Project Files tab and create a new Go file, as shown in
Figure 1-3.
1. The green arrow allows you to simply click and run your code. You
also get an arrow when you have test cases. You will learn about
that in a few pages.
2. Try copying and pasting this line into the main() function:
5. You can click most of your code and navigate to the corresponding
section in the Go packages, whether it’s part of the core language
or an external library.
Your first code snippet will do just that—display the Go version of your current
installation. See Listing 1-1.
package main
import (
"fmt"
"runtime"
)
func main() {
fmt.Printf("Go version: %s\n", runtime.Version())
}
5. The one and only function is called main, and that is the entry
function. It is called first when running the program.
You can also ask the execution to not suspend when reaching a specific breakpoint
(see Figure 1-8) and just log the variables that are accessible to the debugger.
While writing code, I recommend using GoLand debugging mode most, if not all, the
time. That way, you avoid unnecessary logging statements in the program and can focus
on the business logic that really matters, not the logging mess.
You now know the basics to run/debug a program, so next you review basic Go
concepts that you will use to write a ChatGPT client.
package main

import (
"bufio"
"fmt"
"log"
"os"
)
func main() {
for true {
fmt.Print("What is your name ? > ")
reader := bufio.NewReader(os.Stdin)
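        // The rest of this listing was cut by the page break; presumably the
        // name is read from standard input and echoed back, along these lines:
        name, err := reader.ReadString('\n')
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("Hello %s", name)
    }
}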
The for loop uses true as the condition of the loop continuity check. I put it there to
make it obvious what the condition is, but it can be removed altogether.
package main
import (
"bufio"
"fmt"
"os"
)
func main() {
file, _ := os.OpenFile("hello.txt", os.O_RDONLY, 0666)
defer file.Close()
reader := bufio.NewReader(file)
for {
line, err := reader.ReadString('\n')
fmt.Printf("> %s", line)
if err != nil {
return
}
}
}
package main
import (
"fmt"
)
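// The struct definition was dropped during extraction; presumably:
type Message struct {
    Hello string
}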
func main() {
h := Message{Hello: "world"}
fmt.Printf("%s\n", h)
}
{world}
The output could be slightly more useful if you could print out the fields as well as
the actual data. There are two ways to do this.
One way is to use +v in the formatting part of the fmt.Printf formatting and print
call. All the fields in the struct will then be printed, as shown in Listing 1-5.
package main
import (
"fmt"
)
func main() {
h := Message{Hello: "world"}
fmt.Printf("%+v\n", h)
}
{Hello:world}
Another way, and one that is often used to send and receive custom-defined structs
via HTTP, is to marshal the object to the universal JSON format.
This is a very custom way to print or parse data. Golang makes it very easy to achieve
this, using the encoding/json package included in the core libraries.
The use of this core library is shown in Listing 1-6.
import (
"encoding/json"
"fmt"
)
func main() {
h := Message{Hello: "world"}
AsString, _ := json.Marshal(h)
fmt.Printf("%s\n", AsString)
}
This code will print a more detailed version of the custom data:
{"Hello":"world"}
Note the quotes around “Message” and “world”, which were not present when using
simple standard formatting to string.
Important Note If a field name in your custom struct does not start with a
capital letter, the field will not be marshalled and thus not printed. This happens
both when using the standard toString marshalling and the other marshalling
techniques. Starting a field with a lowercase character indicates that the field is
not to be exported.
While the struct contains the ignored field, that field will not be exported when using
JSON marshaling because it starts with a lowercase letter.
In Golang, you can also specify metadata on fields of structs using what is called a
tag line.
This tag line is used for different things. One common use is to format the output
of the fields in JSON. That tag line can also be used to format data for persistence to
database, for example.
You write a tag line by adding a specific directive after the field’s type, using
backquotes, as shown in Listing 1-7.
package main
import (
"encoding/json"
"fmt"
)
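// The struct definition was dropped during extraction; given the output
// below, it presumably renames the field via a JSON tag line:
type Hello struct {
    Message string `json:"hellooo"`
}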
func main() {
h := Hello{Message: "world"}
b, _ := json.Marshal(h)
fmt.Printf("%s\n", string(b))
}
{"hellooo":"world"}
package main
import (
"encoding/json"
"io/ioutil"
)
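// The type definitions were dropped during extraction; given the JSON output
// below, they presumably look like this:
type Salary struct {
    Basic float64
}

type Employee struct {
    FirstName     string
    LastName      string
    Email         string
    Age           int
    MonthlySalary []Salary
}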
func main() {
data := Employee{
FirstName: "Nicolas",
LastName: "Modrzyk",
Email: "hellonico at gmail.com",
Age: 43,
MonthlySalary: []Salary{{Basic: 15000.00}, {Basic: 16000.00},
{Basic: 17000.00}},
}
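    // The rest of main was cut by the page break; presumably the struct is
    // marshalled with indentation and written to the my_salary.json file that
    // the next listings read back:
    b, _ := json.MarshalIndent(data, "", "    ")
    _ = ioutil.WriteFile("my_salary.json", b, 0644)
}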
{
"FirstName": "Nicolas",
"LastName": "Modrzyk",
"Email": "hellonico at gmail.com",
"Age": 43,
"MonthlySalary": [
{
"Basic": 15000
},
{
"Basic": 16000
},
{
"Basic": 17000
}
]
}
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
)
func main() {
jsonFile, _ := os.Open("my_salary.json")
byteValue, _ := ioutil.ReadAll(jsonFile)
var employee Employee
_ = json.Unmarshal(byteValue, &employee)
fmt.Printf("%+v", employee)
}
Remember that you can pretty-print the content by reverting to JSON, as shown in
Listing 1-11.
func main() {
jsonFile, _ := os.Open("my_salary.json")
byteValue, _ := ioutil.ReadAll(jsonFile)
var employee Employee
_ = json.Unmarshal(byteValue, &employee)
//fmt.Printf("%+v", employee)
json, _ := json.MarshalIndent(employee, "", " ")
fmt.Println(string(json))
}
package main
import (
"fmt"
"os"
)
func main() {
programName, questions := os.Args[0], os.Args[1:]
fmt.Printf("Starting:%s", programName)
if len(questions) == 0 {
fmt.Printf("Usage:%s <question1> <question2> ...", programName)
} else {
for i, question := range questions {
fmt.Printf("Question [%d] > %s\n", i, question)
}
}
}
For more advanced parsing, you use flag (https://pkg.go.dev/flag), but I won’t
review this now.
Figure 1-11. The place to go when looking for libraries: the pkg.go.dev website
Then enter dotenv, the library you need for this example (see Figure 1-12).
The code that uses the godotenv library, the first one in the list, is shown in
Listing 1-13.
package main
import (
"fmt"
"github.com/joho/godotenv"
"os"
)
func main() {
godotenv.Load()
s3Bucket := os.Getenv("S3_BUCKET")
secretKey := os.Getenv("SECRET_KEY")
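    // The print statements were cut during extraction; given the output
    // below, presumably:
    fmt.Printf("S3_BUCKET: %s\n", s3Bucket)
    fmt.Printf("SECRET_KEY: %s\n", secretKey)
}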
S3_BUCKET: s3prod
SECRET_KEY: secretprod
When you write, copy, or open Listing 1-13 in GoLand, the library will not be found
because it has not been downloaded yet (see Figure 1-13).
In the editor, the import statement at the top of the file will be highlighted in red, and
you can right-click or press Option+Enter to get GoLand to retrieve the library for you.
The go.mod file will then be filled in with the necessary information, as shown in
Listing 1-14.
module listing-14
go 1.18
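// the require line added by GoLand; the exact version depends on what is
// resolved at download time
require github.com/joho/godotenv v1.5.1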
Note that you can of course add the library manually in the go.mod file.
Once the library is correctly downloaded and added to the project, running Listing 1-13
gives the output shown earlier, with the values loaded from the .env file.
This code is loading fake keys to access S3 buckets, but some very similar code will
be used for loading the API key for ChatGPT.
package main
import (
"fmt"
"time"
)
func printNumbers() {
for i := 0; i < 10; i++ {
time.Sleep(100 * time.Millisecond)
fmt.Printf("%d", i)
}
}
func main() {
go printNumbers()
printNumbers()
}
package main
import (
"fmt"
"time"
)
func main() {
c := make(chan int)
go printNumbers(c)
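    // The rest of Listing 1-16 was cut during extraction; presumably main
    // reads from the channel until the Go routine closes it:
    for v := range c {
        fmt.Printf("%d", v)
    }
}

// A sketch of the elided printNumbers variant: it now sends the numbers to
// the channel instead of printing them, and closes the channel when done.
func printNumbers(c chan int) {
    for i := 0; i < 10; i++ {
        time.Sleep(100 * time.Millisecond)
        c <- i
    }
    close(c)
}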
Listing 1-16 uses a Go channel to convey data between the main function and the Go
routine.
A Go channel is a mechanism for communication between Go routines. It is a typed
conduit that allows Go routines to send and receive values of a specified type, safely and
concurrently. Channels provide a way for Go routines to communicate and synchronize
their execution, without the need for locks or other synchronization mechanisms.
A channel is created using the make function and can be passed as an argument to
Go routines, allowing multiple routines to communicate. Channels can be unbuffered or
buffered. Unbuffered channels allow a single value to be sent at a time, whereas buffered
channels allow multiple values to be stored in a buffer. The same <- operator sends and
receives values to and from a channel.
Channels are an important tool for concurrent programming in Go, and they provide
a way to structure and coordinate the behavior of Go routines, making it easier to build
concurrent systems that are correct and efficient.
The for loop in the main thread reads values passed via the channel until the Go
routines close the channel and there is nothing more to read.
Note that you can tweak the values as they are being read out of the channel, using a
switch block (see Listing 1-17).
package main
import (
"fmt"
"time"
)
func main() {
c := make(chan int)
go printNumbers(c)
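    // The loop and the explicit cases were cut by the page break; presumably
    // each value read from the channel is matched against literal cases first:
    for v := range c {
        switch v {
        case 0:
            fmt.Println("Received zero")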
default:
fmt.Println("Received other value")
}
}
}
Note that you can also apply computations on values before the cases. For example,
Listing 1-18 determines whether the value from the channel is even or odd.
func main() {
c := make(chan int)
go printNumbers(c)
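    // The rest of Listing 1-18 was cut during extraction; a sketch of
    // computing on the value before the cases to report even or odd:
    for v := range c {
        switch v % 2 {
        case 0:
            fmt.Printf("%d is even\n", v)
        default:
            fmt.Printf("%d is odd\n", v)
        }
    }
}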
package main
import (
"fmt"
"os"
"time"
)
func main() {
ch := make(chan string)
go func() {
time.Sleep(1 * time.Second)
ch <- fmt.Sprintf("hello")
}()
go func() {
time.Sleep(2 * time.Second)
ch <- fmt.Sprintf("world")
}()
for {
select {
case v := <-ch:
fmt.Printf("%s\n", v)
case <-time.After(3 * time.Second):
fmt.Println("waited 3 seconds")
os.Exit(0)
}
}
}
hello
world
waited 3 seconds
Try changing one of the two Go routines' sleep times to a value greater than 3
seconds. That Go routine will not have time to send its message to the channel before the
time.After case kicks in, and the select block will then go into the os.Exit branch,
which exits the program.
Using Go Contexts
Go routines are typically used in conjunction with another Go feature: contexts. A
context in Go is an interface used to carry deadlines, cancellations, and other request-
scoped values across API boundaries and between processes. Contexts help manage the
flow of data, metadata, and control signals between independent parts of a distributed
application, ensuring that they all share a common understanding of the request they
are serving.
Contexts are used to store and propagate request-scoped values, such as
authentication credentials, and to propagate information about the lifetime of a request
to the parts of the system that need to know about it.
Contexts are created using the context.WithCancel, context.WithDeadline, and
context.WithTimeout functions, and they are typically passed as the first argument to
various function calls, including for example HTTP handlers.
Listing 1-20 shows the use of contexts.
package main
import (
"context"
"fmt"
"time"
)
func main() {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel() // release the context's resources when main returns
go func() {
time.Sleep(2 * time.Second)
fmt.Println("Task finished")
}()
select {
case <- ctx.Done():
fmt.Println("Context Done")
err := ctx.Err()
if err != nil {
fmt.Printf("err: %s", err)
}
}
}
In Listing 1-20, the deadline for the context will be reached first, and since there are
no other channel operations involved, the output will be as follows:
Task finished
Context Done
err: context deadline exceeded
In this first example, the context had time to reach its deadline. The second example
asks the context to be cancelled from within the Go routine, using the cancel callback
provided when creating the context via WithTimeout (see Listing 1-21).
package main
import (
"context"
"fmt"
"time"
)
func main() {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
go func() {
time.Sleep(2 * time.Second)
fmt.Println("Task finished")
cancel()
}()
select {
case <-ctx.Done():
fmt.Println("Context Done")
err := ctx.Err()
if err != nil {
fmt.Printf("err: %s", err)
}
}
}
In this case, the output is similar to before, but the context has been forcefully
cancelled, so the message received from calling Err() will be different:
Task finished
Context Done
err: context canceled
The usual way to use contexts is to have a common parent context for executing a group
of tasks: each task receives the shared parent context, plus a sub-context containing
data specific to that task.
For example, Listing 1-22 creates a parent context with a deadline and two sub-
contexts, each with some custom data passed via ctx.Value.
The tasks themselves, defined as base functions via func, are each spawned via Go
routines.
package main
import (
"context"
"fmt"
"time"
)
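// The task function's definition was cut by the page break; a plausible
// reconstruction (the name doTask is assumed):
func doTask(ctx context.Context) {
    i := 0
    for {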
select {
case <-ctx.Done():
fmt.Println("Context done")
return
default:
i++
fmt.Printf("Running [%s]...%d\n", ctx.Value("hello"), i)
time.Sleep(500 * time.Millisecond)
}
}
}
func main() {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
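    // The sub-contexts and the task spawning were cut during extraction;
    // presumably each task receives the parent context enriched with a value:
    go doTask(context.WithValue(ctx, "hello", "world"))
    go doTask(context.WithValue(ctx, "hello", "nico"))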
<-ctx.Done()
}
Running [nico]...1
Running [world]...1
Running [world]...2
Running [nico]...2
Running [nico]...3
...
You have now seen many important Go concepts and its basic coding usage. In
fact, you have seen enough to be able to move forward and put the ChatGPT example
together.
5. Send the prompt questions and receive the responses via client.Completion.
You have not learned about the go-gpt3 library yet, but its flow looks like what you
have seen so far. First, you need to get an API key.
Then click the Create New Secret Key button, as shown in Figure 1-15. Note that the
key is only shown once; after that, you have to create a new key.
I also suggest setting up billing. It may sound scary to pay for more things, but after
a full day of usage writing this chapter, I was at less than $0.05 of billable usage (see
Figure 1-16).
First Request
In this first example, you create a simple struct to prepare the message to send to
ChatGPT and directly load the API key from the code.
Finally, it’s time to ask some serious life questions. What about “how many cups of
coffee can the author drink per day?” See the basics for chatting in Listing 1-23.
package main
import (
"context"
"fmt"
"github.com/PullRequestInc/go-gpt3"
)
func main() {
apiKey := "..."
ctx := context.Background()
client := gpt3.NewClient(apiKey)
request := gpt3.CompletionRequest{
Prompt: []string{"How many coffees should I drink per day?"},
}
resp, err := client.Completion(ctx, request)
if err != nil {
fmt.Printf("%s\n", err)
} else {
fmt.Printf("Answer:\n %s\n", resp.Choices[0].Text)
}
}
Individual metabolism can vary considerably, but for most men, between
three and five cups seems to be the upper "safe" limit. Women have smaller
metabolisms, so in general have room for fewer cups.
It is all about balance. Coffee is not, and cannot, be a substitute for a
healthy diet, exercise and overall lifestyle. A more typical cup of coffee
is about 80 to 100 calories, but some specialty versions can pack in 400
calories or more.
Not too bad for a first try at ChatGPT coding. Here are the next steps:
2. Write a prompt.
// Whether to stream back results or not. Don't set this value in the
// request yourself, as it will be overridden depending on whether you use
// the CompletionStream or Completion methods.
Stream bool `json:"stream,omitempty"`
}
In practice, you will mostly use the following two extra parameters:
• MaxTokens *int: Indicates how long (or short) the answer should be.
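• Temperature *float32: Controls how deterministic the answer is; per the code and .env file below, a value of 0 gives the most predictable answers.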
Listing 1-25 also refactors the loading of the API key using godotenv. You will load
the parameters for the request from an .env file.
package main
import (
"bufio"
"context"
"fmt"
"github.com/PullRequestInc/go-gpt3"
"github.com/joho/godotenv"
"log"
"os"
"strconv"
)
func main() {
godotenv.Load()
apiKey := os.Getenv("API_KEY")
if apiKey == "" {
log.Fatalln("Missing API KEY")
}
ctx := context.Background()
client := gpt3.NewClient(apiKey)
for true {
fmt.Print("\n\n> ")
reader := bufio.NewReader(os.Stdin)
line, err := reader.ReadString('\n')
if err != nil {
log.Fatal(err)
}
// fmt.Printf("read line: %s-\n", line)
complete(ctx, client, line)
}
}
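// The signature of makeRequest was cut by the page break; it is inferred
// here from the call in complete() below:
func makeRequest(question string) gpt3.CompletionRequest {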
maxToken, _ := strconv.Atoi(os.Getenv("MAX_TOKEN"))
temperature, _ := strconv.ParseFloat(os.Getenv("TEMPERATURE"), 32)
questions := []string{question}
return gpt3.CompletionRequest{
Prompt: questions,
MaxTokens: gpt3.IntPtr(maxToken),
Temperature: gpt3.Float32Ptr(float32(temperature)),
}
}
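// The signature of complete was also cut; it is inferred from its call in main:
func complete(ctx context.Context, client gpt3.Client, question string) {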
request := makeRequest(question)
resp, _ := client.Completion(ctx, request)
fmt.Print(resp.Choices[0].Text)
}
API_KEY=...
MAX_TOKEN=100
TEMPERATURE=0.6
If you try a few times with different temperature values, you will see that a
temperature of 0 will lead to a more predictable answer. Time to try.
...
for true {
fmt.Print("\n\n> ")
reader := bufio.NewReader(os.Stdin)
line, err := reader.ReadString('\n')
if err != nil {
log.Fatal(err)
}
// fmt.Printf("read line: %s-\n", line)
complete(ctx, client, line)
}
...
Running the code, I asked ChatGPT if “I should go to bed” because my writing has
been going quite late tonight again, and it quite amusingly answered no!
Take a look at this output, generated using the loop prompt just created:
Why?
Because that's when the best TV shows come on.
What?
That's right.
The best TV shows come on at 1am.
What are you talking about?
I'm talking about the best TV shows.
What are the best TV shows?
The best TV shows are the ones that come on at 1
package main
import (
"context"
"fmt"
"github.com/PullRequestInc/go-gpt3"
"github.com/joho/godotenv"
"log"
"os"
)
func main() {
godotenv.Load()
apiKey := os.Getenv("API_KEY")
if apiKey == "" {
log.Fatalln("Missing API KEY")
}
ctx := context.Background()
client := gpt3.NewClient(apiKey)
request := gpt3.CompletionRequest{
Prompt: []string{"How many cups of coffee should I drink per day?"},
MaxTokens: gpt3.IntPtr(100),
}
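    // The streaming call was cut during extraction; with go-gpt3 it presumably
    // looks like this (the callback signature is an assumption):
    err := client.CompletionStream(ctx, request, func(resp *gpt3.CompletionResponse) {
        fmt.Print(resp.Choices[0].Text)
    })
    if err != nil {
        log.Fatalln(err)
    }
}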
And just like the browser-based version, you can see the output from ChatGPT being
streamed out to the standard output.
package main
import (
"context"
"fmt"
"github.com/PullRequestInc/go-gpt3"
"github.com/joho/godotenv"
"log"
"os"
)
func main() {
godotenv.Load()
apiKey := os.Getenv("API_KEY")
if apiKey == "" {
log.Fatalln("Missing API KEY")
}
ctx := context.Background()
client := gpt3.NewClient(apiKey)
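    // The call listing the available engines was cut; with go-gpt3 it
    // presumably looks like this (field names are assumptions):
    engines, err := client.Engines(ctx)
    if err != nil {
        log.Fatalln(err)
    }
    for _, engine := range engines.Data {
        fmt.Println(engine.ID)
    }
}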
Once you have the name of the engine you want to use, you can replace the calls to
client.Completion or client.CompletionStream with client.CompletionWithEngine
or client.CompletionStreamWithEngine, respectively. Listing 1-30 shows this in action.
text-davinci-003: The most capable GPT-3 model. Can do any task the other models can do, often with higher quality, longer output, and better instruction-following. Also supports inserting completions within text. (4,000 tokens, training data up to Jun 2021)
text-curie-001: Very capable, but faster and cheaper than Davinci. (2,048 tokens, training data up to Oct 2019)
text-babbage-001: Capable of straightforward tasks, very fast, and cheaper. (2,048 tokens, training data up to Oct 2019)
text-ada-001: Capable of very simple tasks, usually the fastest model in the GPT-3 series, and cheapest.
Note that the davinci model gives the best answers, but as specified in the OpenAI
online page, the other models are cheaper, so they may be worth giving a try, depending
on the questions you need to ask.
Summary
In this first chapter, you learned about many basic Go programming techniques:
• Running and debugging a program
Finally, you put it all together to create a simple ChatGPT client using the go-gpt3
Go library.
I encourage you to try a few questions. To finish this chapter, I will ask just one more
question to ChatGPT: “What are GoLang’s best features?”
Pros: speed
runs on any platform
runs on any OS
good package management
good community
good documentation
good OSS
good MVC framework
fast compilation
incremental compilation/caching/reloading
best concurrency library
easy to debug
easy to package
It’s a bit early in the book, but already at this stage, I do hope you agree.
CHAPTER 2
Write a Tested HTTP Image Generator API
One of the core reasons to use GoLang is that its energy consumption is well below many
other mainstream languages.
Comparing Go performance to Java performance is like comparing apples to
tomatoes. You can indeed write very efficient and fast Java; it’s a pretty close call between
the two. However, Go uses approximately one fifth of the memory that Java uses “most of
the time.”1
gin (66731 / 7255 / 642, updated 2023-02-21): Gin is an HTTP web framework written in Go (GoLang). It features a Martini-like API with much better performance, up to 40 times faster. If you need smashing performance, get yourself some Gin.
beego (29429 / 5555 / 18, updated 2023-02-07): beego is an open-source, high-performance web framework for the Go programming language.
echo (25038 / 2107 / 64, updated 2023-02-24): High-performance, minimalist Go web framework.
fiber (24855 / 1266 / 35, updated 2023-02-25): ⚡ Express-inspired web framework written in Go.
kit (24623 / 2384 / 39, updated 2023-01-02): A standard library for microservices.
All those frameworks are actively maintained. You can try them out after finishing
this chapter, to compare them and decide which you like best.
This chapter focuses on the most famous/fastest/maddest of them all, Gin (see
https://gin-gonic.com).
In the first example, you create a Gin router that responds on the root route, /. That
route will answer any HTTP GET request with a friendly hello message, as shown in
Listing 2-1.
package main
import (
"github.com/gin-gonic/gin"
)
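// The body of Listing 2-1 did not survive extraction; a minimal sketch of a
// Gin router answering GET / with a friendly message (the text is assumed):
func main() {
    r := gin.Default()
    r.GET("/", func(c *gin.Context) {
        c.String(200, "Hello, crazy Go world!")
    })
    r.Run()
}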
1. You first create a new router instance, which will be able to handle
the different HTTP requests coming to the API.
2. Then you define a route for a GET request coming to /, which is the
root of the routing of the HTTP server, and so it will be accessible at
the default URL of http://localhost:8080/.
When you run this listing, the Gin server will start and you will get a bit of friendly
output, where, among other things, the routes defined in the router are displayed. You
can see this in Listing 2-2.
...
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in
production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
Also note the default settings. Useful output about the route usage is shown in the
logs when accessing the / route via a browser or a curl request. Listing 2-3 shows the
output when the root route / is accessed.
Building on this tremendous success, you can define a parameter in the route. You
do this by using a colon followed by a variable name in the string that
describes the route. That parameter is then used in the output of the route.
The parameter is retrieved via the Context.Param call, as shown in Listing 2-4.
package main
import (
"fmt"
"github.com/gin-gonic/gin"
)
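// The body of Listing 2-4 did not survive extraction; a sketch of a route
// with a :name parameter on the root route:
func main() {
    r := gin.Default()
    r.GET("/:name", func(c *gin.Context) {
        name := c.Param("name")
        c.String(200, fmt.Sprintf("Hello %s!", name))
    })
    r.Run()
}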
Note Setting the parameter :name directly on the root route makes it harder to
define other endpoints.
As you may have noticed, it’s harder to add routes if the parameter is directly on the
root route, so let’s define a group of routes using the Group function from the router.
This time, the routes are grouped. To make this easier to read, you can group them in
a block defined by curly braces {} (although you do not have to).
Listing 2-5 shows you how to define the /user route group, including one GET route
called /hello/:name.
package main
import (
"fmt"
"github.com/gin-gonic/gin"
)
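// The body of Listing 2-5 did not survive extraction; a sketch of the /user
// group with its GET /hello/:name route:
func main() {
    r := gin.Default()
    userRoute := r.Group("/user")
    {
        userRoute.GET("/hello/:name", func(c *gin.Context) {
            name := c.Param("name")
            c.String(200, fmt.Sprintf("Hello %s!", name))
        })
    }
    r.Run()
}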
This code uses a GET request to retrieve some data. Normally, you would use a POST
request to retrieve a batch of parameters and update the internal data.
In Gin, and in many other places in Go coding, it is handy to bind the contents of the
POST request data from a JSON structure.
The contents of the body are bound to a custom type defined as a struct using
the BindJSON function. The route returns a JSON struct, which is built using the JSON
function on the context object, just as you returned simple text in the first example (see
Listing 2-6).
package main
import (
"fmt"
"github.com/gin-gonic/gin"
"net/http"
)
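// The first half of Listing 2-6 fell on the page break; a plausible
// reconstruction of the POST route binding the JSON body (the /send path
// is an assumption):
func router() *gin.Engine {
    r := gin.Default()
    userRoute := r.Group("/user")
    {
        userRoute.POST("/send", func(c *gin.Context) {
            var body Message
            if err := c.BindJSON(&body); err != nil {
                fmt.Println(err)
                return
            }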
fmt.Println(body)
c.JSON(http.StatusAccepted, &body)
})
}
return r
}
func main() {
router().Run()
}
You now define the Message struct with the extra metadata:
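// The struct did not survive extraction; a plausible definition, given the
// email validation discussed below (field names are assumptions):
type Message struct {
    Name  string `json:"name"`
    Email string `json:"email" binding:"required,email"`
}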
If you did a POST without a proper email in the JSON content, the validation would
fail, and the error-handling branch would log a message on the server.
Go has a very extensive set of validation rules available for free. The full list is
available at https://github.com/go-playground/validator#baked-in-validations.
For convenience, some of the more useful validation tags are listed in Table 2-2.
email: Make sure the field is in email format (does not check the validity of the email itself)
gte=10,lte=1000: When binding to an integer, validate the range of the value
max=255: Maximum length of a string
min=18: Minimum length of a string
oneof=married single: One in a set of values (here, married or single)
time_format:"2006-01-02": Useful for defining dates
ltefield=OtherDate (with time_format:"2006-01-02"): Make sure the date comes before the other date, defined as OtherDate in the same struct
gte=1,lte=100,gtfield=GraduationAge: You can use gtfield to say the current field is greater than another field. Here, we expect age to be greater than GraduationAge
startswith=MAC,len=9: Make sure the string is of length 9 and starts with MAC
uppercase / lowercase: Make sure the field is uppercase or lowercase only
alphanum / alpha: Only accept English letters and numerals
contains=key: Make sure the string contains a key
endswith=.: Make sure the string ends with a period (.)
An often-requested feature is to allow the end user to send data over HTTP. This is
done using a file upload POST route, where the file is simply retrieved using FormFile.
The new route definition is shown in Listing 2-7.
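    // Only the SaveUploadedFile line of Listing 2-7 survived extraction; a
    // plausible reconstruction of the surrounding route (the form field name
    // "file" matches the curl command shown next):
    userRoute.POST("/upload", func(c *gin.Context) {
        file, _ := c.FormFile("file")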
c.SaveUploadedFile(file, "/tmp/tempfile")
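        c.String(200, fmt.Sprintf("'%s' uploaded", file.Filename))
    })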
To confirm this route behaves as expected, you can upload a file using Curl with a
multipart upload, as shown in Listing 2-8.
curl \
-XPOST http://localhost:8080/user/upload \
-H "Content-Type: multipart/form-data" \
-F "file=@<path-to-a-local-file>"
As per the code in Listing 2-7, the file will be saved in a temporary file, which you can
check. See Listing 2-9.
Archive: /tmp/tempfile
Length Date Time Name
--------- ---------- ----- ----
1048 02-28-2023 10:39 hello.go
--------- -------
1048 1 file
This concludes the brief introduction to the Gin framework. You should have enough
HTTP knowledge to build a synchronous API. Eventually the image generation should be
asynchronous, so let’s jump to another common development topic, queueing jobs.
Figure 2-1 shows how things work with a single clerk handling the ticketing process.
Figure 2-2 shows how things can go faster when three clerks are at the ticket counter.
Now consider how a basic job-dispatching process would work without using
queues. In this case, jobs are dispatched using Go routines, and their result is sent back
using a string channel. Channels are a key feature of Go’s support of asynchronous
computation.
Each dispatched job returns its computed result, which is a simple string made from
the input integer i, to the channel. The main thread then loops several times, equal to the
number of dispatched jobs, and reads the values from the channel.
This basic setup is shown in Listing 2-10.
package main
import (
"fmt"
"math/rand"
"time"
)
func main() {
taskN := 100
rets := make(chan string, taskN)
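    // The dispatch loop was cut during extraction; a sketch of spawning the
    // jobs as Go routines and collecting their results from the channel:
    for i := 0; i < taskN; i++ {
        go func(i int) {
            sleepSomeTime()
            rets <- fmt.Sprintf("job %d done", i)
        }(i)
    }
    for i := 0; i < taskN; i++ {
        fmt.Println(<-rets)
    }
}

// sleepSomeTime slows each job down by a random amount so the dispatching
// is easier to observe.
func sleepSomeTime() {
    time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
}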
The sleepSomeTime function simply makes the job longer and slows the process
overall, so there is more time to see how things are working.
While Go channels are powerful abstract constructs for passing messages, queues
make it easier to distribute the load on multiple machines or servers. Queues also
make it easier to stop all processing that is closely related and so they are often used for
batch work.
Let’s reimplement the same exercise of job dispatching by using a queue. This
example uses the third-party library golang-queue (https://github.com/golang-queue/queue).
You must:
5. As before, loop the main thread and wait for the messages on the
channel.
There are many other aspects to queue creation, like timeouts, logging, and metrics.
They are left out of this chapter to make things easier to grasp.
Listing 2-11 is the implementation of the same dispatching job, but this time using
go-queue.
package main
import (
"context"
"fmt"
"math/rand"
"time"
"github.com/golang-queue/queue"
)
func main() {
taskN := 100
rets := make(chan string, taskN)
q := queue.NewPool(5)
defer q.Release()
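    // The submission of the tasks was cut during extraction; with golang-queue,
    // each job is submitted as a task function and the results are collected
    // from the channel as before:
    for i := 0; i < taskN; i++ {
        i := i
        if err := q.QueueTask(func(ctx context.Context) error {
            sleepSomeTime()
            rets <- fmt.Sprintf("job %d done", i)
            return nil
        }); err != nil {
            fmt.Println(err)
        }
    }
    for i := 0; i < taskN; i++ {
        fmt.Println(<-rets)
    }
}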
Next in your discovery of working with queues is to get ready to dispatch jobs on
remote queues.
You do this by creating a custom jobData type to handle the passing data. You also
define a custom function to handle the job, marshalling the message in and out after
doing the processing in bytes.
This example also uses the sleepSomeTime function to simulate some heavy, time-
consuming processing. If your computer is slow, or if you’re working on a Raspberry Pi
or other low-powered device, this may not be necessary.
The message is marshalled using JSON, as shown in Listing 2-12.
Listing 2-12. Using a Custom Data Type to Transfer Data Between Worker and
Dispatcher
package main
import (
"context"
"encoding/json"
"fmt"
"math/rand"
"time"
"github.com/golang-queue/queue"
"github.com/golang-queue/queue/core"
)
func main() {
rand.Seed(time.Now().Unix())
taskN := 100
rets := make(chan string, taskN)
defer q.Release()
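Several pieces of Listing 2-12 did not survive extraction: the jobData type, the pool creation that the defer above refers to, and the task submission. A minimal sketch of what they presumably look like with golang-queue (field names are assumptions):

type jobData struct {
    Message string `json:"message"`
}

// Bytes makes jobData satisfy core.QueuedMessage so it can be queued.
func (j *jobData) Bytes() []byte {
    b, _ := json.Marshal(j)
    return b
}

// Inside main, the pool's handler unmarshals each payload, simulates the
// work, and reports the result on the channel:
q := queue.NewPool(
    5,
    queue.WithFn(func(ctx context.Context, m core.QueuedMessage) error {
        var v jobData
        if err := json.Unmarshal(m.Bytes(), &v); err != nil {
            return err
        }
        sleepSomeTime()
        rets <- v.Message
        return nil
    }),
)

for i := 0; i < taskN; i++ {
    if err := q.Queue(&jobData{Message: fmt.Sprintf("handle job %d", i)}); err != nil {
        fmt.Println(err)
    }
}
for i := 0; i < taskN; i++ {
    fmt.Println(<-rets)
}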
You are almost done learning about queues. The last example is a little bit more
involved.
This time, the goal is to run the queue itself outside the main Go program. This
example uses a tool that works very well with go-queue, called nsqd (see
https://nsq.io/deployment/installing.html). It is also written in Go and is therefore energy
efficient, while still providing high performance.
Once you have installed the packages, you need to start three different daemons.
The first daemon shares metadata for the queue setup process and is nsqlookupd, as
shown in Figure 2-3.
The second daemon is the actual queue worker, which is started with this command:
nsqd --lookupd-tcp-address=localhost:4160
Finally, a nice-to-have daemon is an admin web UI for the queue. It’s not entirely
necessary, but it’s nice to navigate along the queue setup and see the messages being
processed in real time. Listing 2-13 shows how to start the admin UI.
The third daemon starts with some logs, as shown in Figure 2-5.
You can then see what is happening on the admin UI, as shown in Figure 2-6.
Note that if you install the support for bash files in GoLand, you can run all the
scripts from within the IDE, with the overall view shown in Figure 2-7.
Figure 2-7. Starting the queue scripts from within the IDE
The necessary daemons are all started up, so let’s go back and update the Go code.
This new distributed code is almost the same as Listing 2-12, except for the queue
definition, where you distribute the messages via the newly defined NSQ worker.
The message distribution is shown in Listing 2-14.
func main() {
...
w := nsq.NewWorker(
nsq.WithAddr("127.0.0.1:4150"),
nsq.WithTopic("crazy"),
nsq.WithChannel("go"),
nsq.WithMaxInFlight(10),
nsq.WithRunFunc(func(ctx context.Context, m core.QueuedMessage) error {
var v *jobData
if err := json.Unmarshal(m.Bytes(), &v); err != nil {
return err
}
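            // The rest of the handler and of main were cut during extraction;
            // presumably the handler finishes like the previous listings:
            sleepSomeTime()
            rets <- v.Message
            return nil
        }),
    )
    // ... the queueing and collecting of results is unchanged from Listing 2-12
}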
In the web UI, if you open the related topic “crazy” and the channel Go, you will see
the messages being queued and processed, with information being collected in real time
(see Figure 2-8).
Later, you will use this to distribute jobs, using queues to asynchronously process the
image generation.
Talking about image generation, let’s move to the visually exciting part of this
chapter.
Image Generators
There are several image generator libraries available in Go. Table 2-3 lists a few of the
options.
This chapter uses the generativeart library as the source of image generation.
The generativeart library implements a large set of art algorithms (many taken
from Generative Art1), most relying on pseudo-random elements. This means when you
call the same generator function with the same parameters, it will not generate the exact
same picture.
Listing 2-16 shows the first art experiment, using the NewColorCircle2 algorithm.
package main
import (
"github.com/jdxyw/generativeart"
"github.com/jdxyw/generativeart/arts"
"github.com/jdxyw/generativeart/common"
"math/rand"
"time"
)
func main() {
rand.Seed(time.Now().Unix())
c := generativeart.NewCanva(600, 400)
c.SetBackground(common.NavajoWhite)
c.FillBackground()
c.SetLineWidth(1.0)
c.SetLineColor(common.Orange)
c.Draw(arts.NewColorCircle2(30))
c.ToPNG("circle.png")
}
If the random algorithm is not seeded properly, you end up generating the same
pictures, which is not the goal here.
Usually, a good source of randomness is the UNIX time, which is the number of
seconds elapsed since January 1, 1970. This is a de facto source of randomness for many
programs that do not need to be ultra-secure. This is done using the
rand.Seed(time.Now().Unix()) call.
Executing the code in Listing 2-16 produces generated art like the image in
Figure 2-9.
Among other settings, you can change the color schema, as shown in Listing 2-17.
c.SetColorSchema([]color.RGBA{
common.White,
common.Tomato,
common.Azure,
common.Mintcream,
})
Or you can use the color schema from the generativearts website, which adds a
very Kandinsky-like mood to any sketches (see Listing 2-18).
c.SetColorSchema([]color.RGBA{
{0xCF, 0x2B, 0x34, 0xFF},
{0xF0, 0x8F, 0x46, 0xFF},
{0xF0, 0xC1, 0x29, 0xFF},
{0x19, 0x6E, 0x94, 0xFF},
{0x35, 0x3A, 0x57, 0xFF},
})
Executing the main code with this color schema produces images similar to the one
in Figure 2-11.
It’s tiresome to have to write and run programs each time, which is why you are
working toward this HTTP API.
The next task toward this goal is to implement a few other samples from the library,
generate images, group common settings together, and call each algorithm from keys
of a map.
package main
import (
"fmt"
"github.com/jdxyw/generativeart"
"github.com/jdxyw/generativeart/arts"
"github.com/jdxyw/generativeart/common"
"image/color"
"math/rand"
"time"
)
func main() {
drawMany(DRAWINGS)
}
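// The DRAWINGS map, the drawMany helper, and the opening of drawOne were cut
// by the page break; a plausible reconstruction (only the circle generator
// shown earlier is registered here; Engine is the generator type the
// library's Draw method accepts):
var DRAWINGS = map[string]generativeart.Engine{
    "circle": arts.NewColorCircle2(30),
    // ... the other generators from the arts package are registered the same way
}

// drawMany renders one image per registered generator.
func drawMany(drawings map[string]generativeart.Engine) {
    for name := range drawings {
        fmt.Println("generating", name)
        drawOne(name)
    }
}

// drawOne groups the common canvas settings and renders a single generator.
func drawOne(art string) {
    c := generativeart.NewCanva(600, 400)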
c.SetBackground(common.NavajoWhite)
c.FillBackground()
c.SetLineWidth(1.0)
c.SetLineColor(common.Orange)
c.Draw(DRAWINGS[art])
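    // The end of drawOne was cut by the page break; the canvas is presumably
    // written out to a PNG named after the generator key:
    c.ToPNG(fmt.Sprintf("%s.png", art))
}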
Since each generator is stored under a key in the map, you can use a route parameter
to select that key and retrieve the actual art generator.
The next step is to re-use the drawOne function and call it from a Gin route. Listing 2-20
does just that, gluing the Gin router and the image generator code together.
One thing you have not seen before is how to specify the HTTP Content-Type header;
you set it to image/png along with the generated image.
package main
import (
"github.com/gin-gonic/gin"
"gocrazy/chapter-02/final-00/drawing"
)
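// The opening of the router fell on the page break; a plausible reconstruction
// (DrawOne is assumed to be the exported drawing helper, returning the path
// of the freshly generated PNG):
func router() *gin.Engine {
    r := gin.Default()
    imageRoute := r.Group("/image")
    {
        imageRoute.GET("/circle", func(c *gin.Context) {
            c.Header("Content-Type", "image/png")
            c.File(drawing.DrawOne("circle"))
        })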
}
return r
}
func main() {
router().Run()
}
When you run this program, the server starts, and the command line shows the newly
created circles and some logs as you access the URL via a browser (see Listing 2-21).
Note that as you refresh the page, a new circles image is generated each time.
The next step is to plug in the map of drawings string->Engine as a parameter in the
Gin route. The updated route is shown in Listing 2-22.
You can call the different engines directly by using their names from the
DRAWINGS map.
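The code of Listing 2-22 did not survive extraction; the parameterized route presumably looks like the following sketch (handler details assumed):

imageRoute.GET("/:generator", func(c *gin.Context) {
    generator := c.Param("generator")
    c.Header("Content-Type", "image/png")
    c.File(drawing.DrawOne(generator))
})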
As it might be cumbersome to remember the name of the generators each time, you
can template an index page to make this easier.
3. You use the HTML function from the Gin context to return HTML.
4. The HTML function takes the status, a template name, and a map
of values to use in the template.
5. Note that, to create a slice of keys from the DRAWINGS map, you use
the golang.org/x/exp/maps library, which is a set of extra features
not included in the core Go language (but is still quite useful).
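For the HTML function to find the template, the router also needs to load the template files; with Gin this is typically done right after creating the router, for example:

r := gin.Default()
r.LoadHTMLGlob("templates/*.tmpl")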
package main
import (
"github.com/gin-gonic/gin"
"gocrazy/chapter-02/final-02/drawing"
"golang.org/x/exp/maps"
"net/http"
)
// ...
listRoute := r.Group("/list")
{
listRoute.GET("/simple", func(c *gin.Context) {
c.HTML(http.StatusOK, "simple.tmpl", gin.H{
"keys": maps.Keys(drawing.DRAWINGS),
})
})
}
return r
}
func main() {
router().Run()
}
The first template is quite basic, to give you an idea as to how things are assembled.
The template code uses the logic-less template style from Mustache.
1. range iterates over the slice “keys” defined in the gin.H map from
Listing 2-22.
<body>
{{range .keys}}
<p><a href="/image/{{.}}">{{.}}</a></p>
{{end}}
</body>
Starting the Gin server with the new route will allow you to access the list, as shown
in Figure 2-13.
And of course, clicking one of the links leads to a newly generated image.
It would be nice to include the styling bootstrap framework here, to make the list a
little bit more beautiful.
You can create a table with preview images for each generator in the list. To do this,
you can use the bootstrap starter template from:
https://getbootstrap.com/docs/4.0/getting-started/introduction/#starter-template
Then you replace the body of the template with the code in Listing 2-25.
<body>
<table class="table">
<thead>
<tr>
<th scope="col">Generator</th>
<th scope="col">Preview</th>
</tr>
</thead>
<tbody>
{{range .keys}}
<tr>
<td><a href="/image/{{.}}">{{.}}</a></td>
<td><a href="/image/{{.}}"><img style="width: 100px;height:86px"
src="/image/{{.}}"></img></a></td>
</tr>
{{end}}
</tbody>
</table>
</body>
I leave it to you to add a new route that uses this bootstrap template or to look at the
companion samples of this book.
Accessing the new route produces something more exciting, as shown in Figure 2-14.
2. Before returning the ID and the URL, the route will post a message
to the image-generating queue.
3. This route has completed its job, so any new request to the same
route will return a new ID and start a new image-generation job.
5. Since the image generation is too fast here again, you’ll add some
sleep time to the example.
6. Once the sleep time has elapsed and the image has been
generated, the path to the temporary image is stored in
the synchronized map. This way, you can shortcut the
communication using channels in the previous queue example.
7. The other route reads from the synchronized map and, if the
id->path key value pair is found, it returns the image from the
path. Otherwise, it returns the static image file. (This includes a
Cache-Control header to make sure the temporary image used
before the image is not cached in the browser and the new image
is properly loaded when it’s found.)
package main
import (
...
)
var sm sync.Map
rand.Seed(time.Now().Unix())
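    // The queue construction and the body of its handler were cut during
    // extraction; presumably each queued job is unmarshalled, the image is
    // generated, and the resulting path is stored in the synchronized map
    // (shown here with the in-process pool; the NSQ worker from Listing 2-14
    // works the same way, and DrawOne is the assumed exported drawing helper):
    q := queue.NewPool(
        5,
        queue.WithFn(func(ctx context.Context, m core.QueuedMessage) error {
            var v jobData
            if err := json.Unmarshal(m.Bytes(), &v); err != nil {
                return err
            }
            time.Sleep(3 * time.Second) // give the temporary image time to show
            path := drawing.DrawOne(v.Generator)
            sm.Store(v.Id, path)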
return nil
}))
...
newRoute := r.Group("/new")
{
newRoute.GET("/load/:id", func(c *gin.Context) {
id := c.Param("id")
path, ok := sm.Load(id)
if ok {
fmt.Printf("Found %s for id: %s\n", path, id)
c.Header("Content-Type", "image/png")
c.File(fmt.Sprintf("%s", path.(string)))
} else {
fmt.Printf("Path not found for id: %s\n", id)
c.Header("Content-Type", "image/jpg")
c.Header("Cache-Control", "no-cache")
c.File("static/loading.jpg")
}
})
newRoute.GET("/:generator", func(c *gin.Context) {
generator := c.Param("generator")
newJob := jobData{
Id: strconv.Itoa(rand.Int()),
Generator: generator,
}
q.Queue(&newJob)
res := map[string]string{"id": newJob.Id, "url": "http://" +
c.Request.Host + "/new/load/" + newJob.Id}
c.JSON(200, res)
})
}
return r
}
Once this example starts the new server, you can access the new route via:
http://localhost:8080/new/:generator
For example:
http://localhost:8080/new/janus
The route returns a JSON message containing the ID of the generated image, and for
convenience, the URL to retrieve the image itself (see Figure 2-15).
Figures 2-16 and 2-17 shows the content when accessing the indicated
URL. Figure 2-16 shows the temporary picture when the proper image has not been
generated yet, and Figure 2-17 shows the proper image.
Figure 2-16. Temporary image before the image has been generated
Figure 2-17. The same URL, but this time the image has been generated
Note As a simple exercise, try to quickly create a list of the already generated
images—an overview list like the one in the previous example.
Now that the API server is fully ready, it would be nice if you could run some
regression testing to verify that the queuing process works all the time.
package main
import "github.com/gin-gonic/gin"
func main() {
r := setupRouter()
r.Run(":8080")
}
With all this in mind, Listing 2-28 shows how to test this Gin ping route.
package main
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/assert"
)
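// The test function's opening was cut by the page break; presumably:
func TestPingRoute(t *testing.T) {
    router := setupRouter()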
w := httptest.NewRecorder()
req, _ := http.NewRequest("GET", "/ping", nil)
router.ServeHTTP(w, req)
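    // The assertions check the status code and the body returned by /ping
    // (the "pong" body is an assumption):
    assert.Equal(t, 200, w.Code)
    assert.Equal(t, "pong", w.Body.String())
}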
To run the test in GoLand, you can use the green arrows in the Editor. You can run
each test one by one, or run all the tests in the current file (see Figure 2-18).
If you force the test to fail, for example by updating the expected code to be 400
instead of the actual 200, the test will fail with some useful output (see Figure 2-19).
The idea is to write tests that are easy to maintain and update and to be able to
determine the whys and whens of a failing test. Figure 2-20 pinpoints exactly why the test
is failing.
Once you fix the cause of the failure, you can choose to rerun only the failing tests
(see Figure 2-21).
That was it for the basics of testing a Gin route. Now you learn how to apply the same
technique to your freshly created image HTTP API.
Listing 2-29. Testing the JSON Message Returned from the Image API
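// The opening of the test did not survive extraction; presumably (the test
// name is assumed):
func TestNewImageRoute(t *testing.T) {
    router := router()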
w := httptest.NewRecorder()
req, _ := http.NewRequest("GET", "/new/janus", nil)
router.ServeHTTP(w, req)
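    // A sketch of the assertions: the route answers with a JSON document
    // containing the id and the url of the image being generated.
    assert.Equal(t, 200, w.Code)
    assert.Contains(t, w.Body.String(), "\"id\"")
    assert.Contains(t, w.Body.String(), "\"url\"")
}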
Running this test should give you a pass, as shown in Figure 2-23.
The next test you write verifies the job being dispatched via the queue. Here
are the things you need to do:
1. Send a request to /new/generator.
4. Send the request a first time and determine if the image returned
is the temporary image (remember it was of type image/jpg so it’s
a different type than the generated PNG images).
5. Wait three seconds (modify the wait time to not wait too
long here).
7. This time, the image is in the map and the Content-Type should
be image/png.
Listing 2-30 is simply an expansion of Listing 2-29, with the new extra steps included.
w := httptest.NewRecorder()
req, _ := http.NewRequest("GET", "/new/janus", nil)
router.ServeHTTP(w, req)
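    // The intermediate steps were cut during extraction; presumably the id is
    // extracted from the JSON answer and the image URL is requested once,
    // expecting the temporary image (served as image/jpg):
    var res map[string]string
    _ = json.Unmarshal(w.Body.Bytes(), &res)

    w2 := httptest.NewRecorder()
    req2, _ := http.NewRequest("GET", "/new/load/"+res["id"], nil)
    router.ServeHTTP(w2, req2)
    assert.Equal(t, "image/jpg", w2.Header().Get("Content-Type"))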
time.Sleep(3 * time.Second)
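    // After the wait, the generated image should be in the map and served
    // with the PNG content type:
    w3 := httptest.NewRecorder()
    req3, _ := http.NewRequest("GET", "/new/load/"+res["id"], nil)
    router.ServeHTTP(w3, req3)
    assert.Equal(t, "image/png", w3.Header().Get("Content-Type"))
}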
Running the test should result in a pass, as shown in Figure 2-24. Otherwise, it’s time
to analyze the failure.
Summary
This concludes Chapter 2, where you learned how to write an asynchronous API for
image generation and how to write and run tests for it.
After completing this chapter and going through all the code examples, you should
know how to:
• Create a simple API using Gin routing techniques.
CHAPTER 3
Writing the Basics for a 2D Game in Go
I’ve always been on the side of using a game engine instead of other UI tools to create
lively interfaces for interacting between systems, people, or both.
There have been a lot of new 2D gaming interfaces since the pandemic, be it
multiplayer games, like AmongUs (www.innersloth.com/games/among-us/), or
communication tools like Gather.town (www.gather.town/), WorkAdventure
(https://workadventu.re/), and the hobby-like SkyOffice (https://skyoffice.netlify.app/).
We have tried many of these here, in our Japan workplace, to enhance team
collaboration. Many of those solutions come with a hefty price tag as
the number of users increases, so why not try developing your own? Or what about
simply finding your own voice while developing a simple 2D game?
This is the goal of this chapter. Although you won’t see and implement a full game,
you will learn the technical basis for one, which should give you enough inspiration to
keep going.
The first iteration of a tile set-based game was achieved with Galaxian, Namco’s
answer to Space Invaders. Galaxian had much better graphics and they loaded much
faster too.
When you want to display a graphic in a 2D game, you load or draw graphics into
a framebuffer, which is part of the available memory location that is used for graphic
rendering. Before the tile sets era, developers had to either draw each sprite directly in
the framebuffer or load a file per character. This was usually slow, and the number of
files you could use was largely restricted. The Galaxian developers engineered a way to
load one file once, with all the different tiles required for a proper character animation,
and only display or use a portion of that file.
The obvious advantage is that you could display and animate more beautifully
drawn characters, while at the same time limiting hardware access to them, a key way to
make games faster when resources were limited.
The process of loading and animating using a sprite sheet or tile sheet is how you are
going to develop the simple game in this chapter.
There are a few steps you need to do to get ready to use raylib-go, depending on the
machine you are using. They are specified on the project's GitHub page (see
https://github.com/gen2brain/raylib-go#requirements).
For the usual mainstream operating systems, the instructions are repeated here:
macOS
On macOS you need Xcode or Command Line Tools for Xcode.
Windows
On Windows you need a C compiler, like Mingw-w64 or TDM-GCC. You can also
build the binary in an MSYS2 shell.
On other *nix systems, it's a matter of installing a few extra packages, notably libmesa3d
(see https://www.mesa3d.org/).
Game Setup
Once you have installed the required libraries, it is time to start with a simple example
straight from the raylib-go front page. The first example simply opens a gaming window
and writes some text to it.
As usual, GoLand will do the project setup and dependencies for you. Therefore, in
a new folder, and with a new Go file in GoLand, copy and paste the code from GitHub.
Listing 3-1 shows the code.
package main
import "github.com/gen2brain/raylib-go/raylib"
func main() {
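    // the rest of the example, as found on the raylib-go README, is reproduced
    // here for convenience; the window title and text are from that example
    // and are only illustrative
    rl.InitWindow(800, 450, "raylib [core] example - basic window")
    rl.SetTargetFPS(60)

    for !rl.WindowShouldClose() {
        rl.BeginDrawing()
        rl.ClearBackground(rl.RayWhite)
        rl.DrawText("Congrats! You created your first window!", 190, 200, 20, rl.LightGray)
        rl.EndDrawing()
    }

    rl.CloseWindow()
}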
7. CloseWindow is used when you press the Esc key to finish the game
loop and quit the game.
If you execute the program, you’ll see the window in Figure 3-1.
You could run some basic examples and learn from them, but let’s first see if using
ChatGPT can help you get up to speed faster here.
There are a few things you’ll want to correct in that script. Notably, a few things do
not compile right away, as shown in Figure 3-2.
1. To load a font to use in the game, you need to provide the font.ttf
file in the same folder. The Size function does not exist, so you
need to use BaseSize to check whether the font has been loaded
properly. A free font has been provided in the samples, but you
can of course find and download a font you like.
3. The loaded font was not used. To use a custom font, you need to
use DrawTextEx instead of DrawText.
This is a nice update from the first example that you hand-coded earlier, and it includes
the changes just described.
package main

import (
    "fmt"
    "time"

    rl "github.com/gen2brain/raylib-go/raylib"
)

const (
    screenWidth  = 800
    screenHeight = 480
    fontSize     = 36
)

func main() {
    // the window has to be opened before loading fonts or drawing;
    // the title string here is illustrative
    rl.InitWindow(screenWidth, screenHeight, "raylib-go clock")
    rl.SetTargetFPS(60)
    font := rl.LoadFont("font.ttf")
    if font.BaseSize == 0 {
        fmt.Println("Failed to load font")
        return
    }
    for !rl.WindowShouldClose() {
        rl.BeginDrawing()
        rl.ClearBackground(rl.LightGray)
        // draw the current time with the custom font;
        // the exact text and position are illustrative
        rl.DrawTextEx(font, time.Now().Format("15:04:05"),
            rl.NewVector2(250, 200), fontSize, 2, rl.DarkGray)
        rl.EndDrawing()
        time.Sleep(time.Second)
    }
    rl.UnloadFont(font)
    rl.CloseWindow()
}
The data is displayed in real time, so you have effectively gained some nice coding
knowledge there.
In the next section, you learn how to do something slightly harder—create a small
hangman game.
Hangman Game
This time, you’ll ask ChatGPT to generate another simple game using raylib-go—a
hangman game. Again, ChatGPT gets close.
The original prompt was the following:
The fixed generated code of the main function is shown in Listing 3-3.
func main() {
if len(words) == 0 {
fmt.Println("Failed to load word list")
return
}
}
rl.EndDrawing()
}
rl.CloseWindow()
}
Note Note the use of the continue trick a few times, in order for ChatGPT to
generate the full code. This is a limitation of the Web interface for now.
The full code listing is found in the samples that come with this book. The resulting
game is shown in Figure 3-4.
There were many minor problems with the originally AI-generated code, or things
you should get used to changing when asking ChatGPT to generate code that uses raylib-go:
–– It was missing the random seeding, so the game had the same order
for the words (see the short sketch after this list).
–– The font loading part was easy to fix, and it was a nice example of
how to use fonts in raylib-go.
–– The way ChatGPT randomly creates function names when it’s not
happy with what it knows was fun. You can tell ChatGPT that those
functions do not exist and it will give you a Pinocchio-like reason as
to why it put them in the code.
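As an illustration of the first point, a minimal fix for the missing seeding, using the global math/rand source (from Go 1.20 onward the global source is seeded automatically, so this only matters on older toolchains):

import (
    "math/rand"
    "time"
)

func init() {
    // seed once so every run shuffles the word list differently
    rand.Seed(time.Now().UnixNano())
}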
Now that we know the good, the bad, and the evil of an AI code-generation tool, it’s
time to start creating your own little game of a character moving on a 2D board.
This section was inspired by the lovely YouTube tutorial: “Making an Animal Crossing
type game for beginners - Go & Raylib,” by Avery.
You will create a good working base to develop a small tile-based game, called
Moyashi. The character will move on a map generated from tile sets, with music. At the
end, you will also get some ideas on what to implement next.
The game is called Moyashi, Japanese for Sprout, which is the name of the assets
package it uses.
The game will eventually look like Figure 3-5.
Step 8: Load one tile for the map and draw it.
The examples in the chapter use the same numbering as these steps, so it’s easy to
follow along. Let’s get started.
1. An init phase
b. An input-handling function.
Listing 3-4 expands from the original raylib-go simple example and adds the basic
game loop structure.
package main

import rl "github.com/gen2brain/raylib-go/raylib"

const (
    screenWidth  = 800
    screenHeight = 450
)

var (
    running         = true
    backgroundColor = rl.Black
)

func init() {
    rl.SetConfigFlags(rl.FlagVsyncHint)
    rl.InitWindow(screenWidth, screenHeight, "Moyashi")
    rl.SetExitKey(0)
    rl.SetTargetFPS(60)
}

func update() {
    running = !rl.WindowShouldClose()
}

// input is empty for now; key handling is added later in the chapter
func input() {
}

func quit() {
    rl.CloseWindow()
}

func render() {
    rl.BeginDrawing()
    rl.ClearBackground(backgroundColor)
    drawScene()
    rl.EndDrawing()
}

func drawScene() {
    rl.DrawText("Moyashi", 190, 200, 20, rl.LightGray)
}

func main() {
    for running {
        input()
        update()
        render()
    }
    quit()
}
This does nothing new compared to the original example; executing the program again
shows the Moyashi window, as shown in Figure 3-6.
Now that the basics are in place, you learn how to add some simple graphics to
this game.
Loading Textures
Just like Avery’s example, you can get the graphics from:
https://cupnooble.itch.io/sprout-lands-asset-pack
There is a free version and a paid version. For the sake of those $2, you might also
want to tip when downloading.
When you have downloaded the assets, put them in an assets folder inside your
project files. Your folder setup should mirror the setup shown in Figure 3-7.
The first file you use is the Grass.png file, so first make sure Grass.png exists:
assets/Tilesets/Grass.png
Now what you want to do is load that .png file and display it onscreen, in place of
drawing the text.
For reference, you can look at either or both of these:
• There are a few different ways to draw textures. For now the example
uses rl.DrawTexture, which takes a texture, a location, and a
background color.
• Let’s not forget to unload the texture when finishing the game, using
rl.UnloadTexture.
var (
running = true
backgroundColor = rl.NewColor(147, 211, 196, 255)
grassSprite rl.Texture2D
)
func init() {
// ...
grassSprite = rl.LoadTexture("assets/Tilesets/Grass.png")
}
// ...
func quit() {
rl.UnloadTexture(grassSprite)
rl.CloseWindow()
}
// ...
func drawScene() {
rl.DrawTexture(grassSprite, 100, 50, rl.White)
}
The first sprite sheet has been loaded and displayed, so now you’ll load a texture for
your main player.
The sprite sheet for the game character is shown in Figure 3-10.
Note that GoLand will show the overall size of the picture, and by simple division,
you can work out that each single frame to use for Moyashi is 48x48 pixels.
The next example loads the texture and displays Moyashi. The updated parts of the
code to reach that goal are shown in Listing 3-6.
var (
// ...
playerSprite rl.Texture2D
playerSrc rl.Rectangle
playerDest rl.Rectangle
)
func init() {
    // ...
    playerSprite = rl.LoadTexture("assets/Characters/Spritesheet.png")
    // one 48x48 frame as the source, and a destination rectangle onscreen;
    // the destination position and size below are illustrative values
    playerSrc = rl.NewRectangle(0, 0, 48, 48)
    playerDest = rl.NewRectangle(200, 200, 100, 100)
}

func quit() {
    rl.UnloadTexture(grassSprite)
    rl.UnloadTexture(playerSprite)
    rl.CloseWindow()
}

func drawScene() {
    rl.DrawTexture(grassSprite, 100, 50, rl.White)
    location := rl.NewVector2(100, -100)
    rl.DrawTexturePro(playerSprite, playerSrc, playerDest, location, 0, rl.White)
}
2. You can use DrawTexturePro this time to use the two rectangles
and the adjusted location of the sprite.
Running the program will give you an extended version of the first game window,
with the same green grass and Moyashi displayed on top of it (see Figure 3-11).
Before moving on to using inputs, try to change the value of the source rectangle and
load another Moyashi frame. For example, try rl.NewRectangle(96, 0, 48, 48) for
Moyashi. The result is shown in Figure 3-12.
Or, use rl.NewRectangle(0, 96, 48, 48) for Moyashi, as shown in Figure 3-13.
Once you’ve had enough fun, you can move on to the next section, where you learn
how to move Moyashi using key inputs.
In the input function, the code determines if a key is pressed using rl.IsKeyDown.
When a key is pressed, the code updates the playerDest.X and playerDest.Y values used
by the display function, which updates the location of Moyashi on the canvas.
Only the input function is shown in Listing 3-7; the rest of the code remains
the same.
func input() {
if rl.IsKeyDown(rl.KeyW) || rl.IsKeyDown(rl.KeyUp) {
playerDest.Y -= playerSpeed
}
if rl.IsKeyDown(rl.KeyS) || rl.IsKeyDown(rl.KeyDown) {
playerDest.Y += playerSpeed
}
if rl.IsKeyDown(rl.KeyA) || rl.IsKeyDown(rl.KeyLeft) {
playerDest.X -= playerSpeed
}
if rl.IsKeyDown(rl.KeyD) || rl.IsKeyDown(rl.KeyRight) {
playerDest.X += playerSpeed
}
}
playerSpeed is a numeric value. It can be either a const defined at the top of the file,
or a float32 var that’s set in the init function (here, playerSpeed = 3).
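For example, as a var (float32 keeps it compatible with the rectangle fields it modifies):

var playerSpeed float32 = 3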
Figures 3-16 and 3-17 show the updated location of Moyashi on the canvas.
Figure 3-17. Moyashi’s location after pressing the right arrow key a few times
Moyashi can now move, so now you’ll see how to keep the walk in tempo by adding
some game music.
The Free Music Archive has free audio files to get you started. For example, you can
search for nature-inspired tracks:
https://freemusicarchive.org/search?adv=1&quicksearch=nature%20&&
Also, pond5 has a few limited samples that are pretty good:
https://www.pond5.com/search?kw=game+walk+in+the+nature&media=music
Since we are about indie gaming in this chapter, IndieGameMusic should also be
of help:
https://www.indiegamemusic.com/
Wherever you decide to download the music from, place the downloaded music file
in the assets/music folder and update the file path in the code listing.
The new code for loading and playing music is shown in Listing 3-8.
var (
// ...
musicPaused = false
music rl.Music
)
func init() {
    // ...
    rl.InitAudioDevice()
    music = rl.LoadMusicStream("assets/music/cartoon-whistling-walk-loop.mp3")
    rl.PlayMusicStream(music)
}

func update() {
    running = !rl.WindowShouldClose()
    rl.UpdateMusicStream(music)
}
func quit() {
rl.UnloadMusicStream(music)
// ...
}
There is no screenshot for music! Too bad. This would be a perfect feature for a
book—the music changes depending on the chapter you are reading.
Instead, Listing 3-9 stops/resumes the music using the corresponding raylib-go
functions. Note that the rest of the game continues to render as usual, even when the
music stops.
func update() {
running = !rl.WindowShouldClose()
rl.UpdateMusicStream(music)
if musicPaused {
rl.PauseMusicStream(music)
} else {
rl.ResumeMusicStream(music)
}
}
func input() {
// ...
if rl.IsKeyDown(rl.KeyM) {
musicPaused = !musicPaused
}
}
Moyashi now has a nice tempo to walk around. In the next section, you learn how to
add a camera to follow Moyashi’s moves.
Game Camera
Up to now, the game has used the X and Y coordinates in the map. As a reminder,
coordinates go from left to right for X, and from top to bottom for Y.
Moyashi’s location is set using X and Y, but how you view the whole land can be set
up with the camera object, rl.Camera2D.
By default, you set the camera so that even when Moyashi changes location, it will
be displayed in the center of the screen. The perception of movement is achieved by
moving the camera around the map.
The camera is set up in the init function, and then you update the target of the
camera to be where Moyashi is in the update function, as shown in Listing 3-10.
var (
// ...
cam rl.Camera2D
)
func init() {
    // ...
    // center the camera on the screen and aim it at Moyashi;
    // the offset and zoom values here are illustrative
    cam = rl.NewCamera2D(
        rl.NewVector2(screenWidth/2, screenHeight/2),
        rl.NewVector2(playerDest.X-playerDest.Width/2, playerDest.Y-playerDest.Height/2),
        0.0, 1.0)
}

func update() {
    // ...
    cam.Target = rl.NewVector2(playerDest.X-playerDest.Width/2,
        playerDest.Y-playerDest.Height/2)
}
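For the camera to actually affect what is drawn, the scene also has to be rendered inside a 2D camera mode; a minimal sketch of the corresponding render function (everything else stays as before) is:

func render() {
    rl.BeginDrawing()
    rl.ClearBackground(backgroundColor)
    rl.BeginMode2D(cam) // everything drawn from here on is transformed by the camera
    drawScene()
    rl.EndMode2D()
    rl.EndDrawing()
}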
Now, when you move Moyashi using the key inputs you defined earlier, the camera
(or the screen) will move, but Moyashi will stay in the center, as shown in Figures 3-18
and 3-19.
For some pure fun, you can set the camera to rotate or zoom, depending on the new
input keys in the input function, as shown in Listing 3-11.
Listing 3-11. Code to Rotate and Set the Zoom Factor for the Camera
func input() {
// ...
if rl.IsKeyDown(rl.KeyZ) {
cam.Rotation = cam.Rotation + 1
}
if rl.IsKeyDown(rl.KeyX) {
cam.Rotation = cam.Rotation - 1
}
if rl.IsKeyDown(rl.KeyC) {
cam.Zoom = cam.Zoom + 0.1
}
if rl.IsKeyDown(rl.KeyV) {
cam.Zoom = cam.Zoom - 0.1
}
}
After running the new code again, try pressing the Z, X, C, and V keys to see how the
map and Moyashi rotate and zoom on demand. The effect is shown in Figure 3-20.
Nice! You now know how to play with the camera settings and maybe even have
some flashbacks of Mode 7 on the Super Famicom.
Note You could set up multiple cameras and switch from one to the other. This is
especially useful when you have multiple players on the same screen.
The next section gives Moyashi more movement by creating an animation from the
different frames found in the sprite sheet.
Animate Sprites
Consider Moyashi’s sprite sheet again, as shown in Figure 3-21.
• Down
• Up
• Left
• Right
The game is running at 60 frames per second, so this code will change the frame for
Moyashi every six game frames, or ten times per second. You can of course decide to
make that faster or slower.
To help create the animation, you will add a few vars to the var section:
var (
    // ...
    playerSrc    rl.Rectangle
    playerDest   rl.Rectangle
    playerMoving bool
    playerDir    PlayerDirection
    playerFrame  int
    frameCount   int
    // ...
)

// the direction enum: a named type plus a const block using iota
type PlayerDirection int

const (
    Down PlayerDirection = iota
    Up
    Left
    Right
)

// ...
Note that the enum is using a combination of type and const. The const block uses iota,
a Go identifier that automatically assigns an incrementing number to each value of the
enum. Also note that you set Down first in the enum to reflect Moyashi’s frame order in
the sprite sheet directly.
The code then updates the input function. When a movement key is pressed, it
marks the player as moving (playerMoving = true) and sets the playerDir to one of the
values of the PlayerDirection enum. This is shown in Listing 3-13.
func input() {
if rl.IsKeyDown(rl.KeyW) || rl.IsKeyDown(rl.KeyUp) {
playerMoving = true
playerDir = Up
}
if rl.IsKeyDown(rl.KeyS) || rl.IsKeyDown(rl.KeyDown) {
playerMoving = true
playerDir = Down
}
if rl.IsKeyDown(rl.KeyA) || rl.IsKeyDown(rl.KeyLeft) {
playerMoving = true
playerDir = Left
}
if rl.IsKeyDown(rl.KeyD) || rl.IsKeyDown(rl.KeyRight) {
playerMoving = true
playerDir = Right
}
// ...
}
The update function updates the location of player on the map according to the key
pressed. The playerFrame value is updated according to a certain number of elapsed
game frames.
Lastly, the important part of all those variables is to properly set the X and Y of
playerSrc, which is done via playerFrame and playerDir. This is again according to
Figure 3-21 and is reflected in Listing 3-14.
func update() {
running = !rl.WindowShouldClose()
if playerMoving {
if playerDir == Up {
playerDest.Y -= playerSpeed
}
if playerDir == Down {
playerDest.Y += playerSpeed
}
if playerDir == Left {
playerDest.X -= playerSpeed
}
if playerDir == Right {
playerDest.X += playerSpeed
}
if frameCount%6 == 1 {
playerFrame++
}
}
frameCount++
if playerFrame > 3 {
playerFrame = 0
}
playerSrc.X = playerSrc.Width * float32(playerFrame)
playerSrc.Y = playerSrc.Height * float32(playerDir)
// ...
playerMoving = false
}
// ...
If all goes well, you will see an animated Moyashi running in the fields! Table 3-1
shows the X and Y values of playerSrc according to the animation frame.
There would be a similar table for Down, Up, Left, and Right. One thing that is
missing though is an animation for when the sprite is idle.
Those two updates are the only ones in the whole code listing, and they are shown in
Listing 3-15.
func update() {
// ...
playerFrame++
}
// switch between frame 0 and frame 1
// when Moyashi is not moving
if !playerMoving && playerFrame > 1 {
playerFrame = 0
}
// ...
}
Running the game will show Moyashi switching frames (see Figures 3-22 and 3-23).
Now it’s time to get Moyashi to walk on a proper patch of grass, with houses
and fences.
Let’s say for a start that the map you want to show onscreen is 5x5—five squares for
the width and five squares for the height. This would look like Figure 3-24.
Figure 3-26 shows the contents of the sprite sheet for grass.png again.
The file is 160x128, and each tile is 16x16: 10 tiles horizontally and 8 tiles vertically.
Each internal value of the map will be just like the player frame, which is a square in this
picture. The source location will have an X between 0 and 160 (0, 16, 32 … 144), and
a Y between 0 and 128 (0, 16, 32 … 112).
The internal array will represent the location on the target map, so from Figure 3-25
again, you get the following array:
[0 0] [0 1] [0 2] [0 3] [0 4] [1 0] [1 1] ... [4 4]
You’ll first create a random map with values between 0 and 80. The loadMap code is
shown in Listing 3-16.
var (
// ...
tileDest rl.Rectangle
tileSrc rl.Rectangle
tileMap []int
mapW, mapH int
// ...
)
func loadMap() {
mapW, mapH = 10, 10
tileMap = make([]int, mapW*mapH)
for i := 0; i < len(tileMap); i++ {
tileMap[i] = rand.Intn(80)
}
}
func init() {
// ...
loadMap()
}
After loading the map, if you choose to debug or print the value of tileMap, it will
look like Listing 3-17, which is an array of random numbers between 0 and 80.
tileMap: [1 47 7 59 1 38 25 60 56 20 54 31 2 49 8 74 11 5 37 66 15 26 8 18
47 27 47 8 70 55 21 8 27 31 69 76 57 71 45 66 13 50 74 3 33 67 78 4 79 73
37 41 69 39 40 65 8 58 23 75 51 30 45 76 26 68 41 2 63 66 43 56 42 38 7 54
57 23 76 20 63 73 17 13 41 59 73 3 51 2 78 56 66 67 20 23 72 3 45 78]
Now that you have the internal representation, it’s simply a matter of drawing things
onscreen. This is done using the drawScene function.
The location onscreen, represented by tileDest.X and tileDest.Y, is easily
computed with division and remainder (/ and %) on the tile’s index in tileMap versus the
map width, mapW (remember Figure 3-25).
Assuming only the grass sprite sheet is used for now, you need to find the proper
square location in the Grass.png file. You apply the same % and / to the tile value, which is
between 0 and 80 (remember Figure 3-26).
The code for drawScene is shown in Listing 3-18.
func drawScene() {
}
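Listing 3-18 is only sketched above; following the logic just described, a minimal version of its body (assuming tileSrc and tileDest were initialized as 16x16 rectangles, for example in loadMap, and that Grass.png is 10 tiles wide) could look like this:

func drawScene() {
    for i := 0; i < len(tileMap); i++ {
        // where this tile lands on the screen: column and row from the index
        tileDest.X = tileDest.Width * float32(i%mapW)
        tileDest.Y = tileDest.Height * float32(i/mapW)
        // which 16x16 square of Grass.png the random value points to
        tileSrc.X = tileSrc.Width * float32(tileMap[i]%10)
        tileSrc.Y = tileSrc.Height * float32(tileMap[i]/10)
        rl.DrawTexturePro(grassSprite, tileSrc, tileDest,
            rl.NewVector2(0, 0), 0, rl.White)
    }
    // ... then draw Moyashi on top, as before
}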
The image in Figure 3-27 is obtained from loading a 10x10 map to make it
easier to see.
Next, you learn to load the map from a file and use more sprite sheets for the map.
Instead of using a single sprite sheet, you can use multiple files for the other
sprite sheets (see Figure 3-28).
You will declare variables and preload all of these textures in the init function
exactly as you did for the grass texture.
In the var section, you’ll declare all the textures to be loaded. Then you’ll add a
temporary texture named tex.
You’ll then have a srcMap, which knows which sprite sheet to use for which square,
just like you have a tileMap that says which index to load from that sheet.
These are just internal representation details. There are better ways to do this, but this
will do for now (see Listing 3-19).
var (
// ...
fencedSprite rl.Texture2D
grassSprite rl.Texture2D
hillSprite rl.Texture2D
houseSprite rl.Texture2D
tilledSprite rl.Texture2D
waterSprite rl.Texture2D
tex rl.Texture2D
// ...
tileDest rl.Rectangle
tileSrc rl.Rectangle
tileMap []int
srcMap []string
mapW, mapH int
// ...
)
The world.map file contains, in order, the WIDTH and HEIGHT, the MAP of Int, and the
MAP of String:

5 5
1 1 1 1 1
1 8 1 1 1
1 2 3 3 1
1 4 1 11 1
1 1 1 1 1
g g g g g
g g g g g
g g g g g
g g g g g
g g l l w
Each value of the first map goes to tileMap, and the second map indicates which
sprite sheet to use:
• G for grass
• L for hill
• F for fence
• H for house
• W for water
• T for tilled
The new loadMap takes a filename and pushes values in tileMap and srcMap (see
Listing 3-21).
if mapW == -1 {
mapW = m
} else if mapH == -1 {
mapH = m
} else if i < mapW*mapH+2 {
tileMap = append(tileMap, m)
} else {
srcMap = append(srcMap, sliced[i])
}
}
}
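The opening of that function is not shown above; a minimal, self-contained sketch of a complete version (assuming the whitespace-separated file format shown earlier, and the os, strings, strconv, and fmt imports) could be:

func loadMap(mapFile string) {
    data, err := os.ReadFile(mapFile)
    if err != nil {
        fmt.Println("Failed to load map:", err)
        return
    }
    // every whitespace-separated token: width, height, the int map, the string map
    sliced := strings.Fields(string(data))
    mapW, mapH = -1, -1
    for i := 0; i < len(sliced); i++ {
        m, _ := strconv.Atoi(sliced[i])
        if mapW == -1 {
            mapW = m
        } else if mapH == -1 {
            mapH = m
        } else if i < mapW*mapH+2 {
            tileMap = append(tileMap, m)
        } else {
            srcMap = append(srcMap, sliced[i])
        }
    }
}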
Call this updated loadMap function from the init function of the game. You can load
the textures at the same time (see Listing 3-22).
func init() {
//...
fencedSprite = rl.LoadTexture("assets/Tilesets/Fences.png")
grassSprite = rl.LoadTexture("assets/Tilesets/Grass.png")
hillSprite = rl.LoadTexture("assets/Tilesets/Hills.png")
houseSprite = rl.LoadTexture("assets/Tilesets/House.png")
tilledSprite = rl.LoadTexture("assets/Tilesets/Tilled.png")
waterSprite = rl.LoadTexture("assets/Tilesets/Water.png")
loadMap("world.map")
//...
}
The final code for drawScene uses the tex “target texture” to determine which texture
to draw the sprite from, passing it as a parameter to DrawTexturePro.
tileSrc.X and tileSrc.Y are computed just like in Listing 3-18, this time using the
width and height of tex, which holds the dimensions of the full picture.
See the resulting drawScene in Listing 3-23.
Listing 3-23. Drawing Tiles of the World Map Depending on the Internal
Representation
func drawScene() {
    for i := 0; i < len(tileMap); i++ {
        // pick the sprite sheet for this tile from the string map
        switch srcMap[i] {
        case "g":
            tex = grassSprite
        case "l":
            tex = hillSprite
        case "f":
            tex = fencedSprite
        case "h":
            tex = houseSprite
        case "w":
            tex = waterSprite
        case "t":
            tex = tilledSprite
        default:
            tex = grassSprite
        }
        // ... compute tileSrc and tileDest as before, then draw from tex
    }
    // ...
}
As opposed to the previous example, where you were loading random values in the
map, this time the map is statically loaded from the file and the render properly shows
the map on the canvas. See Figure 3-29.
• Set the config flag in the init function to full screen mode
loadMap("world.map")
}
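The end of that init listing is visible just above; a fuller sketch of the change (FlagFullscreenMode is raylib-go’s full-screen config flag; the rest of init stays as in the earlier listings):

func init() {
    rl.SetConfigFlags(rl.FlagVsyncHint | rl.FlagFullscreenMode)
    rl.InitWindow(screenWidth, screenHeight, "Moyashi")
    // ...
    loadMap("world.map")
}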
Note that there are two ways to set the camera zoom: one you saw earlier, setting
the zoom factor directly on cam.Zoom; the other is passing it as the last parameter when
you create the camera itself.
If you look at the example, you find an updated map file in world.map. Moyashi looks
happier to run in full screen mode over the entire land. See Figure 3-30.
Summary
This concludes Chapter 3. You now know how to prepare a basic loop for a 2D game in
Go using raylib-go.
To build on this, try the following:
• Display a score and increase it as the player collects and gives food to
the animals.
CHAPTER 4
Trend Follower for Blockchain Trading
Before you start this chapter, the disclaimer. This is not financial advice. It’s
simply an introduction to the world in which technology meets the financial markets.
Hopefully a guide to things that you should think about before taking on the financial
markets. A collection of notes to help this preparation, if you will. And hopefully a bit of
entertainment as well.
Smart individuals are efficient and productive. Smart and rich people are efficient,
productive, and never sleep. The world of trading is no exception. In fact, the image of
Wall Street bankers in expensive suits looking tired is not just a cliché, it is a thing. The
problem is that this, more often than not, ends up having a costly effect on mental and
physical health.
These problems are exacerbated by the fact that most products can now be traded 24 hours
a day. Cryptocurrencies, for instance, never sleep. Not even on weekends.
Fortunately, some savvy smart individuals have, in recent years, figured out a
few quirks that manifest themselves as patterns. If you spend enough time reading
newsletters or listening to financial news, there is a very high probability you will come
across the words “history doesn’t repeat itself, but it rhymes.” While the booms and busts
in financial centers around the world are different each time, the logic when recounting
each “event” seems to have an uncanny similarity. Add to this the fact that human beings
are creatures of habit, who are driven by fear and greed. This provides a nice ecosystem
for financial industry experts to derive models based on cycles and probability of events
occurring in the free market economy.
This of course, is only half the equation. Given the 24/7 nature of the markets today,
simply modeling cycles in markets would create a lot of zombies walking around trading
floors or people looking like Neo in the matrix. This being said, if you could code like
Neo, there is a place for the autonomous trading bots in the financial markets.
Even simplistic bots with a high tendency to make errors of judgement often perform
better than human beings sitting between a keyboard and chair monitoring markets 24/7
and trading in a state of delirium.
Bots do not need sleep; they do not have a family to care for; and they certainly do
not get ulcers. If modeled correctly and built using a fine-tuned process, your daily task
could simply become a routine: 1) Check available funds to trade (make sure to top it
up from time to time), 2) Make sure the bots are running, and, 3) take the profits when a
good trade happens.
With a well-oiled machine and simple tools with processes, anyone who can code
can focus on finding a cool job, no need to be burnt out (ever). Let the side hustle
continue to earn more money with limited or almost no time invested. Simply focus on
the more important things in life…
The trick is to “follow the smart money.” In other words, let the financial great minds
be the trail blazers and simply find a way to follow them into the booms.
The goal of this chapter is to cover a few of the simplified examples so you can set
things up to do just that—“follow the smart money.”
Add to that the power of networks and the ability for billions of people to leverage
this tool, and it becomes clear that, by reducing the complexities of barter and facilitating
trade, money has played a crucial role in the development of human societies, enabling
the growth of large-scale economies and the expansion of global civilization.
Hopefully everyone agrees, the world needs money. If it weren’t for the invention
of money, you would have to carry a lot of very heavy things with you on a daily basis in
order to purchase things.
importance of understanding the distinctions between these two facets of the economy,
as striking a balance between them is vital for sustainable economic growth and overall
societal prosperity.
Why is this important? It ties into the aforementioned cycles—the boom and bust
of the economies. You may recall some of the historical events that were caused by
these booms and busts: the early 2000s boom, coined “the dotcom bubble,” and 2008,
which marked the end of “the subprime mortgage” era and drove the world into a global financial
crisis. In the last few years, the world has seen a pandemic, the shortest recession in
history, global central banks going into quantitative easing, and inflation. These cycles
are extremely powerful drivers of the economy that is now hyper-financialized and
accessible to all.
Market Efficiency
When shopping online, would you blindly buy an article from the top link on Google? Or
would you do a quick search and compare prices to save your hard-earned dollars? That
process can be abstracted away from the Internet into everyday life, into businesses of all
shapes and sizes, even at the level of governments and global organizations.
The financial market, often epitomized by Wall Street, is not an isolated entity but
rather a complex system intertwined with the real economy, represented by Main Street.
Its efficiency, or lack thereof, has profound implications for the broader economy.
This information includes data about companies, economies, political
developments, and even investor sentiments, effectively bridging the gap between Main
Street and Wall Street.
Now, let’s delve into the relationship between Wall Street and Main Street. Wall
Street, or the financial economy, is supposed to serve Main Street, the real economy,
by efficiently allocating capital and managing risk. In an efficient market, Wall Street’s
activities directly support the growth and prosperity of Main Street. However, when the
financial economy becomes inefficient, or when it becomes too detached from the real
economy, as was the case leading up to the 2008 financial crisis, the consequences can
be dire for Main Street.
The distance between Wall Street and Main Street should be a close, symbiotic one.
But when Wall Street becomes an entity unto itself, the disconnect can lead to financial
instability, economic recessions, and severe socio-economic consequences.
Imagine tracking two arbitrary cryptos—A (in blue) and B (in orange)—as shown in
Figure 4-1. The trader takes on a position initially with trade “1” and sells the position
to get out of it toward the mid part of the image. While stock A goes downward, stock
B is breaking out. That is when the trader can take on a new position on stock B with
trade “2” while stock A is either on a downward move or consolidating. Just as the trader
is getting out of trade “2,” stock A is ready to break out and the trader takes on another
position with trade “3”.
Tracking two cryptos concurrently is a tough enough job to do. There are a lot of
moving parts and you should expect more than one indicator to follow on any particular
stock. While it is possible to draw superimposed charts and track more than one stock at
a time, it quickly becomes overwhelming.
Figure 4-2. Optical illusions caused by auto adjusting gains/losses calibrating for
multiple charts
Figure 4-2 is tracking the S&P 500 and comparing it to GOLD, WTI, TLT (bonds),
VIX, and DXY (the lines are a graphical representation to make a point and inexact). The
image has been zoomed out to fit six months. Since the products trade independently
and have various magnitudes of change, they can only be expressed in terms of
percentage moves. As the DXY is the least volatile product, it almost looks like a flat line
(in blue).
Figure 4-3. DXY chart on its own (pulled out of Figure 4-2)
Figure 4-3 shows the DXY on its own over the same time period. If not monitored
carefully, many opportunities can be missed when traded manually.
It is important to note that most traders will have numerous stocks and positions to
track at any one point in time so that they have a portfolio that is diversified enough to protect
their money. There is always an option to have numerous traders babysit a portfolio, but
that comes with a big price tag.
Last, but not least, all traders need to be reminded that it is a hyper-connected world
out there. The global markets have seen a tremendous amount of volatility increase.
Some macroeconomists attribute this phenomenon to the online businesses like
Robinhood and passive funds. Modern technology stacks have made investing easy. This
has opened the gate for many investors to “tweak” positions almost in real-time. There
is also a view that this phenomenon is caused by prominent figures leveraging social
networking services platforms to incentivize markets to move drastically.
Nobody is certain of course, and there are many potential reasons for these modern
market “features,” but one thing is for sure—the world is not going back to trading with
phone lines like in the 80s.
This is when you might assume this trading thing can only be done efficiently by
machines. Well, that is actually not true. You can easily do without automated trading
machines. In fact, there are numerous prominent traders around the world who trade only
with spreadsheets. Yes, they are rather large spreadsheets, but the point is that it is doable.
It is also extremely important to note that these traders have a wealth of knowledge
and experience. Most have the ability to see trends before they become anything the
world catches onto and execute everything six months to a year ahead of the rest.
But wait a minute, how many books must you read? How often do you need to read
up on events and restack information? What is the catch?
The catch is that most of these prominent traders have been trading for decades, and
as many attest, have made very costly mistakes in the past. That is the “experience” and
as most will confess, they went without a life for a long while.
Charts
A picture tells a thousand words. Once a trader develops an “eye” for charts, a quick
glance can provide crucial insights into the market tendencies at any given time. Some
are so good they are said to be able to predict future prices just looking at the charts for a
few seconds. In order to trade, debug, monitor, and be effective and efficient, charts are a
necessary tool.
Charts are so important to traders and investors that there was a time in history
when they were manually drawn. This is inconceivable for anyone actively trading today.
We should be thankful for the invention of machines and charting frameworks. Today
most of this work is automated.
Data
Data is the lifeblood of modern investment management. Analyzing vast datasets is the
only efficient approach to identify trends, understand market dynamics, and uncover
hidden opportunities that others may overlook.
Data was, until very recently, very expensive to obtain. The Bloomberg terminal
was already available in the early 2000s, and it featured historical data for most of the
financial instruments around the world. Popular products were even available real time
on the traders’ desktops, even directly into their spreadsheets. Believe it or not, it was
impressive back in the early 2000s.
These days, a lot of the historical data can be found for free. Some of the data
vendors even aggregate the data for their users and provide it free of charge. Depending on
the products and the timeframes, even real-time data is available for free.
These days, the landscape has completely changed. First, expensive bespoke
hardware is no longer needed. Most of the information can easily be accessed on a
phone or tablet. It is of paramount importance for all to formulate thoughts and see how
others model the world of finance in order to understand and hopefully expect some
market moves, at least at the macro level. It is a difficult task, and it often rewards those
who pay attention to the brilliant minds in finance.
Strategy
In order to model certain market events, strategies are essential in trading. This also
provides the trader with an interchangeable tool to leverage through a number of market
cycles that tend to “rhyme,” or repeat in similar ways.
If you pay enough attention to the news and data, these cycles and similarities often
turn into patterns that can be modeled. Starting from the macroeconomy, you can see
trends and tell-tale signs where shifts happen. Diving deeper into these periods where
shifts take place, and you start seeing events that are out of the ordinary. These can be
modeled into signals to execute actions on.
Backtesting
Backtesting is a critical component of any successful trading strategy. It involves testing a
trading idea or model on historical market data to evaluate its viability and performance.
By analyzing how a strategy would have fared in the past, traders can gain valuable
insights into its potential effectiveness in the future. Once more than one strategy is
implemented, backtesting also provides an opportunity to test strategies against each
other in understanding the performance against a known benchmark.
Real-Time Trading
Real-time trading simulation, often referred to as paper trading or virtual trading, is a
crucial component in the development and implementation of an automated trading
strategy. By simulating trades using real-time market data, but without risking actual
capital, traders can gain invaluable insights into the performance of their strategy under
current market conditions. This chapter explores the importance of a real-time trading
simulation in building confidence, managing risk, and ultimately, ensuring that traders
can sleep comfortably at night, knowing that their automated trading system is well-
equipped to navigate the unpredictable world of financial markets.
The Recipe
The perfect recipe most probably does not exist. At the very least, it will vary quite a bit
depending on the individual’s preferences and tendencies.
That being said, it could be a bit closer to reality than a dream. If you have an
Internet connection and a laptop, of course.
Before dreaming about days at the beach, you need to prepare. Preparation makes
a world of difference in the trading world. Especially so for an autonomous trading bot
if you value your sleep. Failing to build the right framework and process and blindly
trusting a bot with your money is akin to asking a teenager with a fresh driver’s license to
drive a Ferrari. Not going to go well.
If done correctly, with a clear roadmap, a systematic approach, adequate risk
management, performance evaluation of the strategies, thorough testing and debugging,
and religiously performing every step, even a teenager can drive a Ferrari… One
would hope.
Macroeconomic Tendencies
There are over 630,000 listed stocks in the global equity markets, tens of thousands of
fixed income products, thousands of commodities products, and an unknown amount of
real estate products.
While it could be exhilarating and fascinating to track events across all products
available globally, it is both detrimental to your health and cost prohibitive.
The first place to start is to understand macro trends. Figure 4-4 shows possibly
one of the simplest yet effective “cheat sheets” that cuts down 75 percent or more of
the noise.
More importantly, this is a practice that can save a lot of money (in terms of compute
cost) and time. As mentioned, there are a vast number of assets around the world still
to cover.
Knowing the macro market trends enables traders to narrow down on the number of
products to run tests with and, most importantly, allows you to pick out the applicable
strategies far more easily.
Timeframe
There are mixed thoughts in identifying the timeframe. Many traders in the past have
used more than one timeframe at the same time to compute entry and exit points.
Others approximate by using multiple moving averages over varying periods. Note that
there seems to be a golden rule to follow—entry and exit points must be in the same
timeframe.
Typically trading timeframes vary from microseconds, to hours, to days, to months,
with everything in between.
Having gone through the process, you should come out with a clear understanding of
the timeframe with the highest probability to succeed.
Risk Management
Risk management is the art of knowing how much and how often. A solid process allows
traders to identify potential risks and uncertainties in the trading environment and
implement mechanisms to manage them.
Even the most successful strategies in the world will experience drawdowns. It makes
no sense to run an aggressive strategy if the bot loses your entire capital in minutes.
On the other hand, there may be investors with a lot of money in the bank account
looking to run an ultra-high risk strategy, knowing something the market participants do
not know.
Building risk management into the process ensures that a trader can run an
autonomous trading bot through changing market conditions and adapt and manage
risks effectively.
Performance Evaluation
A well-built process includes methods for evaluating the trading bot’s performance and
adjusting its strategies accordingly. Most importantly, a trader should always be able to
identify timing at which a strategy needs to be replaced or parameters adjusted to meet
the risk management criteria. Continuous evaluation and improvement are necessary to
ensure that the bot remains competitive and profitable over time.
Follow the necessary steps and incorporate regulatory requirements into the process
to make sure the strategies and operations clear the necessary regulatory requirements,
if applicable.
Security
Perhaps the most important catch phrase in the crypto-verse is, “Not your key, not your
money.” Everyone should be mindful of security, and more importantly, where the
cryptos are stored. A lot of investors have lost money with centralized exchanges that
went out of business. Be sure to practice safe key management.
Note The trad-fi (or traditional finance) world is far more regulated than the
crypto-verse. However, it is not without risk. Ensure adequate security measures
are taken.
Building Confidence
Trading is as much about psychology as it is about strategy and analysis. Backtesting and
real-time testing can play a crucial role in building a trader’s confidence in their chosen
approach.
In order to build confidence, strategies need to manage the day-to-day trading risks
and violent market moves. Confidence levels will be boosted if backtesting incorporates
black swan events—rare but significant market occurrences that can have a dramatic
impact on a strategy’s performance. Examples in the recent past can easily be found on
Google. Most data vendors will have data going back (if lucky) to 2007/8. By preparing
for these extreme scenarios, traders can develop more robust strategies that can
withstand the unpredictable nature of financial markets.
The same cannot be done so easily for real-time testing. By observing how a strategy
performs under live market conditions, traders can gain a deeper understanding of its
strengths and weaknesses. This firsthand experience can be instrumental in fostering
a sense of conviction in the strategy, which is crucial for maintaining discipline and
consistency in the face of market uncertainty.
Note Testing is not the same as real money trading. As modern exchanges often
offer fractional shares (cryptos of course allow trading of fractions of coins), you
should at least try trading small amounts.
Figure 4-5 shows the trading screen from Binance, a popular cryptocurrency
centralized exchange.
Figure 4-6 shows the trading screen from dY/dX, a decentralized cryptocurrency
exchange.
Some may prefer to trade on a single platform that offers as many datasets as
possible, especially if they trade across asset classes (equities, bonds, cryptos, real
estate). There are a number of charting tools available today that offer direct connections
to the brokers. For instance, TradingView has connectivity to a number of brokers (see
Figure 4-7).
Traditional finance exchanges will have very similar feature sets. Bloomberg and
Reuters have a lot more built-in messaging services, built-in ChatGPT, and the list goes on.
As these tools are often prohibitively expensive in the mainstream traditional finance
space, this chapter focuses on cryptocurrencies.
Brokers
You can have the best autonomous trading engine in the world, but without a broker
to execute trades through, there is no money to be made. Several online brokers and
exchanges exist today. The decision process (on which to use) is a somewhat difficult
topic to cover. The choice will largely depend on the prices offered, of course, residence
of the account holder, fees that are applicable to perform trades, and taxation laws, to
name a few. Note that fees and prices on offer can change quite drastically. Be sure to
compare prices.
It is worth noting that autonomous trading bots have become increasingly popular
in the world of cryptocurrencies. Trading is supported 24/7 in most exchanges without
the need for constant monitoring. However, in order to use these bots effectively,
traders need access to reliable and trustworthy brokers with stable APIs. The exchanges
themselves may be stable, but there have been blockchain outages that halted trading.
Even large market cap currencies like Solana are subject to outages.
Cloud Infrastructure
If you are a stay-at-home person who rarely ventures out and you have a stable Internet
connection with a stable electric grid, you can opt to set up a server at home. Some have
even started running containers on Raspberry Pis.
That being said, a cheap virtual machine on the cloud will make a world of sense,
especially taking into account the fact that a family member could at some point unplug
the server to free up the electric socket for a vacuum cleaner, a toy, and so on.
A lot can happen in the markets in the span of an hour, so it makes sense to keep a
machine protected from such elements and safely trade 24/7.
If your strategy does not involve too many concurrent calculations, you can easily get
away with a few dollars per month.
On a side note, leveraging a cloud virtual machine to run Proof of Stake instances
may be a good idea. Connectivity and network bandwidth are critically important to
avoid penalties. (Note, Proof of Stake is not covered in this chapter.)
Version control and a CI/CD pipeline help you code efficiently. This chapter is not
going to delve into good practices, but you should certainly explore the use of a Git
account, at the very least to make sure your code is saved.
Logs, logs, logs. Being able to perform forensic analysis is critically important.
Keeping a massive database to query at any point in time is the absolute ideal, but most
of the time, unless you’re attempting a high frequency strategy, you can use Google
workspace tools. You may find a good middle ground in the use of Google sheets.
It is true that logs are probably best stored elsewhere. However, tracking snapshots
of what the algorithm sees at any point in time (in table form) can pay dividends as you
encounter issues in production. The use of Google sheets via APIs comes in handy when
dumping large timeseries tables and may save you a lot of time, especially when running
virtual machines on the cloud and being able to monitor things on the go.
Lastly, Docker containers: git pull, build, and run. As easy as one, two, three, and it saves a
lot of time. Do it!
Cooking
There is probably a valid reason why financial professionals like to use “cooking” when
describing market actions. If you spend enough time in financial media, terms like “cooking the
curve” or “cooking the books” might come across your screen.
Other than the colorful terminology used by the financial media, knowing what
ingredients to use, proper setting of the heating power, and the meticulous control over
the process ensures the result on the table is a success.
Backtesting
Backtesting is a critical component of any successful trading strategy. Once enough time
is spent listening to the financial media or analysts from big banks, a very simplistic
model can be derived quite easily. Take the 50-day and 200-day moving averages, for
example: a vast majority of financial professionals will come across these two numbers in
their respective analysis. It is therefore safe to assume they are indicators
that are being monitored industry wide.
But how do you go from an idea that seems to repeat and is being discussed often on
TV and podcasts to a functioning trading model?
It involves testing a trading idea or model on historical market data to evaluate its
viability and performance. By analyzing how a strategy would have fared in the past,
traders can gain valuable insights into its potential effectiveness in the future. Once
more than one strategy is implemented, backtesting also provides an opportunity to
test strategies against each other in understanding the performance against a known
benchmark.
Data
In order to start any kind of testing, once a strategy idea is born, it is time to think
about the data. Note the critical pieces—data will be required in many different forms
to properly test a strategy. First, let’s walk through the most common features that
traders use.
• Ticker or symbol. It is safe to say most will trade more than one
financial instrument. It’s best to be able to support one or more
products in data ingest and in the backtesting framework.
• Just as important is the time interval. Most data vendors will have
hourly, daily, and weekly timeframes. In case the data source does
not cater to this, a strategy might need further down-sampling, or in
some cases, higher frequency data.
For ease of use (debuggability by simply loading into a Google spreadsheet), let’s go
with a CSV file.
reader := csv.NewReader(file)
records, err := reader.ReadAll()
if err != nil {
log.Fatal(err)
}
if dateIndex == -1 {
log.Fatal("The 'datetime' column was not found in the CSV file.")
}
if closeIndex == -1 {
log.Fatal("The 'close' column was not found in the CSV file.")
}
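The beginning of that snippet is not shown; a minimal, self-contained sketch of such a CSV loader (the file layout, column names, and the Candle struct are assumptions for illustration) could look like this:

import (
    "encoding/csv"
    "log"
    "os"
    "strconv"
)

// Candle is a hypothetical record holding the two columns the snippet looks for.
type Candle struct {
    Datetime string
    Close    float64
}

func loadCSV(path string) []Candle {
    file, err := os.Open(path)
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    reader := csv.NewReader(file)
    records, err := reader.ReadAll()
    if err != nil {
        log.Fatal(err)
    }

    // locate the columns from the header row
    dateIndex, closeIndex := -1, -1
    for i, name := range records[0] {
        switch name {
        case "datetime":
            dateIndex = i
        case "close":
            closeIndex = i
        }
    }
    if dateIndex == -1 {
        log.Fatal("The 'datetime' column was not found in the CSV file.")
    }
    if closeIndex == -1 {
        log.Fatal("The 'close' column was not found in the CSV file.")
    }

    var candles []Candle
    for _, row := range records[1:] {
        c, err := strconv.ParseFloat(row[closeIndex], 64)
        if err != nil {
            continue // skip malformed rows
        }
        candles = append(candles, Candle{Datetime: row[dateIndex], Close: c})
    }
    return candles
}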
Indicators
There are an unknowable number of indicators available in the trading world today.
Unfortunately, there is not a winning combo or recipe that you can use to generate all
necessary indicators. At the very least, the attempt might be a costly affair.
This being said, once you have a strategy in mind, you should have a number of
indicators to model the entry and exit signals. A good timesaver here is to modularize the
indicator portion of the code away from the strategy and signal generators so as to make
them interchangeable.
Before the strategy portion of the chapter, let’s delve into some of the indicators that
are going to be used and reasons behind them.
Levels
Trading on levels involves identifying key price points, known as support and resistance
levels, where the price of an asset is more likely to change direction. By understanding
the advantages and disadvantages or risks associated with trading on levels, you can
enhance your trading strategies and make more informed decisions.
Figure 4-8 shows a resistance level. The stock repeatedly hits the resistance and
eventually loses momentum and reverses downward.
Figure 4-9 shows an example of a support level, where the financial instrument
consolidates, bouncing on the green support level to eventually reverse upward.
Advantages include:
• Clear entry and exit points: Trading on levels provides traders with
clear entry and exit points. When the price of an asset reaches a key
level, traders can make decisions based on whether the price is likely
to break through the level or reverse course. This helps minimize
guesswork and enhance trading efficiency.
Disadvantages/risks include:
Trading on levels is a valuable technique that can help investors identify potential
trend reversals and breakouts. By understanding the advantages and disadvantages
or risks associated with trading on levels, you can make more informed decisions
and enhance your trading strategies. Like any other trading approach, it’s essential to
continually learn, adapt, and refine your strategies to achieve consistent success in the
markets.
A few positive side-effects include the fact that you can also work out a “trend” or
tendency of the recent past, and when the price action largely exceeds the trend, this
could signal a strong move for the asset.
Another nice side-effect of moving averages is that they can be applied to a lot of
different indicators. Price, of course, but this technique can be applied to volumes and
other momentum indicators as well.
It is worth noting that averages also introduce lag. Naturally, the longer the period,
the longer the lag. For shorter periods, this is not so much of a concern, but longer
periods may introduce significant delays.
• Detect trend reversals: RSI can help traders spot potential trend
reversals by identifying bullish and bearish divergences. A bullish
divergence occurs when the price of an asset forms a lower low,
while the RSI forms a higher low, indicating a potential upward trend
reversal. On the other hand, a bearish divergence occurs when the
price forms a higher high, while the RSI forms a lower high, signaling
a potential downward trend reversal.
• Confirm trend strength: The RSI can also be used to gauge the
strength of a trend. Generally, an RSI reading above 50 indicates a
bullish trend, while a reading below 50 suggests a bearish trend.
It is essential to remember that the RSI should not be used as a standalone indicator.
Combining the RSI with other technical analysis tools, such as support and resistance
levels, moving averages, and chart patterns, can provide a more comprehensive
understanding of the market and improve the effectiveness of trading decisions.
The benefits of the indicator are hopefully self-explanatory. It is not without pitfalls,
however. If the entire world of finance reacted according to this indicator, all traders
and investors would become instant millionaires via the use of the RSI. The problem is
that the “overbought” and “oversold” levels accepted by the markets, “70” and “30”
respectively, work most of the time, but when they do not, they can lead an investor into
troubled waters. A financial instrument can hit the RSI level of “30,” but happily continue
going down past the “30” or “oversold” level. As you would expect, the reverse also applies,
where the asset may continue climbing past the “70” or “overbought” level and keep
climbing aggressively.
It cannot be stressed enough. Finance is a world of probabilities and should not be
taken as absolutes.
Additional Notes
There are a number of statistics libraries available. It is worth noting that some
finance-oriented libraries are freely available and can be real timesavers.
TA-Lib is one such library, widely used by trading desks and quant teams in the
financial world.
Sample Code
Exchange Connectivity (Listing 4-1)
Listing 4-1. Exchange Connectivity and Obtain Data for the Indicators
package main

import (
    "context"
    "fmt"
    "io/ioutil"
    "os"
    "strconv"
    "strings"

    // note the use of the module below, list is trimmed
    "github.com/adshao/go-binance/v2"
)

func getClient() *binance.Client {
    fileName := fmt.Sprintf("%s/.config/binance/binance.key", os.Getenv("HOME"))
    file, _ := ioutil.ReadFile(fileName)
    lines := strings.Split(string(file), "\n")
    binanceAPIKey := lines[0]
    binanceSecretKey := lines[1]
    return binance.NewClient(binanceAPIKey, binanceSecretKey)
}

func obtainDataFromBinanceExchange(client *binance.Client, symbol string,
    interval string) ([]float64, error) {
    closingPrices := []float64{}
    // Fetch klines data for the specified symbol and interval
    klines, err := client.NewKlinesService().Symbol(symbol).
        Interval(interval).Do(context.Background())
    if err != nil {
        return closingPrices, err
    }
    // Keep the closing price of each kline (the fields are strings)
    for _, k := range klines {
        price, err := strconv.ParseFloat(k.Close, 64)
        if err != nil {
            return closingPrices, err
        }
        closingPrices = append(closingPrices, price)
    }
    return closingPrices, nil
}
Listing 4-2. Building Indicators for Use in Building Entry and Exit Signals
The Strategy
Let’s put the industry favorite into code. The Golden Cross is a widely recognized
technical indicator that signals a potential bullish trend reversal in the financial markets.
This classic trading strategy occurs when a short-term moving average, typically the
50-day, crosses above a longer-term moving average, such as the 200-day. The Golden
Cross is revered for its simplicity and effectiveness in identifying trend reversals, making
it an ideal starting point for both novice and experienced traders.
While the strategy is praised for its ability to provide clear entry and exit signals, it
also has its drawbacks, such as generating false signals or being susceptible to lagging
effects. Nonetheless, the Golden Cross remains a popular and practical strategy for
traders to begin their foray into technical analysis and develop a solid foundation for
more advanced trading techniques. See Listing 4-3.
package main

// GoldenCrossStrategy lists the operations used by the backtester.
// (The interface name is reconstructed; the original listing is trimmed.)
type GoldenCrossStrategy interface {
    SetShortLookback(shortLookback int)
    SetLongLookback(longLookback int)
    ShouldEnterGoldenCrossMarket(data []MarketData, me_index int) bool
    ShouldExitGoldenCrossMarket(data []MarketData, me_index int) bool
}
return false
}
return false
}
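The bodies of the entry and exit checks are trimmed above, down to their final return statements. As a minimal sketch under stated assumptions (the MarketData and GoldenCross types below are illustrative placeholders, and simpleMovingAverage is the helper sketched earlier in this chapter), the entry check boils down to detecting the short average crossing above the long one:

    // MarketData and GoldenCross are assumed shapes for this sketch only.
    type MarketData struct{ Close float64 }

    type GoldenCross struct{ shortLookback, longLookback int }

    func (s *GoldenCross) ShouldEnterGoldenCrossMarket(data []MarketData, meIndex int) bool {
        if meIndex < s.longLookback {
            return false
        }
        closes := make([]float64, len(data))
        for i, d := range data {
            closes[i] = d.Close
        }
        // Golden cross: the short average was at or below the long average
        // on the previous bar and is above it on the current bar.
        shortPrev := simpleMovingAverage(closes, s.shortLookback, meIndex-1)
        longPrev := simpleMovingAverage(closes, s.longLookback, meIndex-1)
        shortNow := simpleMovingAverage(closes, s.shortLookback, meIndex)
        longNow := simpleMovingAverage(closes, s.longLookback, meIndex)
        return shortPrev <= longPrev && shortNow > longNow
    }

The exit check is the mirror image: the short average crossing back below the long one (the so-called Death Cross).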
Performance Evaluation
Statistics are a trader’s friend. Simply put, it is a game of probabilities and there are no
absolutes. The use of mathematics has a place in trading, so certain situations can be
abstracted and understood easily. Moreover, it makes comparing the performance of one
strategy against another much easier.
As ever, it is important to follow the great minds in the industry and market
participants. Many will make research-based observations that are different, but there
are a select few statistical analysis tools that are used repeatedly.
Stats
A trader or investor must be well-informed about the intricacies of investment strategies
and their evaluation. A keen understanding of these metrics is essential for making
informed decisions about the performance of the strategy.
By understanding and applying these statistical evaluation metrics, investors and
fund managers can assess the performance and risk associated with their financial
strategies and make informed decisions about their investments.
PnL
Profit and loss (PnL) is a key performance metric used to evaluate the success of a
trading strategy. It represents the net gains or losses resulting from the trades executed
by a strategy over a specified period. By analyzing PnL during backtesting and real-
time testing, traders can gain valuable insights into the effectiveness of their strategy
and make data-driven decisions to optimize its performance. This section explores the
importance of PnL in both backtesting and real-time testing and discusses how it can
help traders refine their approach for greater success in the financial markets.
PnL in Backtesting
During the backtesting process, PnL is used to assess the historical performance of
a trading strategy. By calculating the net profit or loss that the strategy would have
generated based on historical market data, traders can determine whether the strategy
has been profitable in the past and gauge its potential for success in the future.
Analyzing PnL during backtesting can also help traders identify areas for
improvement in their strategy. For example, a consistently negative PnL may indicate
that the strategy’s entry or exit signals need to be refined, or that the risk management
parameters, such as stop-loss orders or position sizing, need to be adjusted.
Furthermore, by comparing the PnL of different strategies or variations of the same
strategy, traders can make informed decisions about which approach is likely to yield the
best results in live trading.
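As a trivial illustration (the Trade type and its fields are hypothetical, not taken from the chapter's listings), the net PnL of a set of closed trades is just the sum of the per-trade differences between exit and entry:

    // Trade is a hypothetical closed trade used only for this illustration.
    type Trade struct {
        EntryPrice float64
        ExitPrice  float64
        Quantity   float64
        Short      bool
    }

    // netPnL sums the profit or loss of every closed trade.
    func netPnL(trades []Trade) float64 {
        total := 0.0
        for _, t := range trades {
            pnl := (t.ExitPrice - t.EntryPrice) * t.Quantity
            if t.Short {
                pnl = -pnl // a short trade profits when the price falls
            }
            total += pnl
        }
        return total
    }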
Hit Rate
Hit rate, also known as the win rate, is a key performance metric used to evaluate the
effectiveness of a trading strategy. It represents the percentage of trades that result in
a profit relative to the total number of trades executed. While a high hit rate may seem
desirable at first glance, it is essential to understand that an excessively high hit rate
can have a detrimental impact on PnL in certain circumstances. This section explores
the role of hit rate in trading strategy evaluation and discusses the potential negative
consequences of an overly high hit rate on PnL.
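One simple way to see this is through trade expectancy, an illustrative back-of-the-envelope calculation: expectancy per trade = hit rate × average win − (1 − hit rate) × average loss. A strategy that wins 90 percent of its trades but makes only $1 per winner while losing $20 per loser has an expectancy of 0.9 × 1 − 0.1 × 20 = −1.1, so despite the impressive hit rate, the PnL bleeds away over time.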
Sharpe Ratio
The Sharpe ratio is a widely-used performance metric in finance that measures the
risk-adjusted return of an investment or trading strategy. It is calculated by dividing the
difference between the strategy’s average return and the risk-free rate by the standard
deviation of the returns, which represents the strategy’s volatility. A Sharpe ratio of at
least 1 is often considered preferable, as it signifies that the strategy generates an excess
return that is equal to or greater than its level of risk. This section briefly explains why a
Sharpe ratio of at least 1 is desirable in trading strategy evaluation.
Risk-Adjusted Performance
A Sharpe ratio of at least 1 indicates that a trading strategy is generating returns that
adequately compensate for the level of risk taken. This is important because it suggests
that the strategy is not only generating profits, but doing so in a way that accounts for
the inherent risks associated with trading. A Sharpe ratio of less than 1 implies that the
strategy’s returns are not commensurate with the level of risk, which may signal the need
for adjustments to the strategy or risk management measures.
Portfolio Diversification
A trading strategy with a Sharpe ratio of at least 1 is more likely to contribute positively
to a diversified portfolio. When combined with other uncorrelated strategies or
assets, a strategy with a higher Sharpe ratio can help improve the overall risk-adjusted
performance of a portfolio, enhancing returns while reducing overall volatility.
Potential Pitfalls
A Sharpe ratio of at least 1 is preferable in trading strategy evaluation because it signifies
that the strategy generates returns that are commensurate with its level of risk. This risk-
adjusted performance metric allows traders to easily compare different strategies and
make informed decisions about where to allocate capital. As you continue to develop
your GoLang-based Golden Cross trading tool, striving for a Sharpe ratio of at least 1
will be essential in ensuring that your strategy offers an attractive risk-reward profile and
contributes positively to a diversified trading portfolio. See Listing 4-4.
So far so good, but the ratio of 1 and above still does not guarantee success. In
fact, there are numerous cases where an insanely high ratio can be deceiving and hide
potential risks. Be sure to use Sharpe ratios in combination with other stats.
package main
import (
"math"
)
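The body of the listing is trimmed in this extract. A minimal sketch of the calculation, assuming a slice of periodic returns and a risk-free rate expressed per period, could look like this:

    // sharpeRatio divides the average excess return by the standard
    // deviation of the returns; it returns 0 when there is not enough data.
    func sharpeRatio(returns []float64, riskFreeRate float64) float64 {
        if len(returns) < 2 {
            return 0
        }
        mean := 0.0
        for _, r := range returns {
            mean += r
        }
        mean /= float64(len(returns))

        variance := 0.0
        for _, r := range returns {
            variance += (r - mean) * (r - mean)
        }
        variance /= float64(len(returns) - 1)

        stdDev := math.Sqrt(variance)
        if stdDev == 0 {
            return 0
        }
        return (mean - riskFreeRate) / stdDev
    }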
MAR Ratio
The MAR (Managed Account Reports) ratio, also known as the Calmar ratio, is a
performance metric used to evaluate the risk-adjusted return of investment strategies,
particularly in the context of managed futures accounts and hedge funds. It is calculated
by dividing the annualized rate of return by the maximum drawdown experienced by the
strategy over a specified period. The MAR ratio provides a useful means of comparing
different trading strategies, as it takes into account both return and risk in a single
metric. This section briefly explains the MAR ratio and its benefits for comparing multiple
strategies against one another.
Risk-Adjusted Performance
Like the Sharpe ratio, the MAR ratio measures the risk-adjusted performance of a
trading strategy. However, instead of using standard deviation as a measure of risk,
the MAR ratio uses maximum drawdown. This provides a different perspective on risk
management and allows traders to compare strategies based on their ability to generate
returns while minimizing drawdowns.
Comparison of Strategies
The MAR ratio is a valuable tool for comparing different trading strategies or investment
opportunities, as it provides a single metric that accounts for both return and drawdown
risk. By comparing the MAR ratios of various strategies, traders can identify which
approach offers the most attractive risk-reward profile and make informed decisions
about where to allocate their capital.
Potential Pitfalls
The MAR ratio is a powerful performance metric that offers several benefits for
comparing trading strategies. By emphasizing drawdown risk and providing a measure
of risk-adjusted performance, the MAR ratio allows traders to make informed decisions
about where to allocate capital and which strategies offer the most attractive risk-reward
profiles.
Considering the MAR ratio as part of your performance evaluation process is
essential for ensuring that your strategy effectively manages risk while generating
consistent returns. See Listing 4-5.
Even with the combination of all of the statistical analysis and tools, risks are still
there. Don’t forget that markets are a kind of living organism, so constant periodic
retesting of strategies helps ensure that you are attuned to the changes in the behavior of
the markets.
package main
import (
"math"
)
return marRatio
}
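Only the tail of the listing survives above. A minimal sketch, assuming the annualized return is computed elsewhere and the equity curve is available as a slice, might look like this:

    // maxDrawdown returns the largest peak-to-trough decline of an equity
    // curve, expressed as a fraction of the peak.
    func maxDrawdown(equity []float64) float64 {
        peak, maxDD := 0.0, 0.0
        for _, v := range equity {
            if v > peak {
                peak = v
            }
            if peak > 0 {
                if dd := (peak - v) / peak; dd > maxDD {
                    maxDD = dd
                }
            }
        }
        return maxDD
    }

    // marRatio divides the annualized return by the maximum drawdown.
    func marRatio(annualizedReturn float64, equity []float64) float64 {
        dd := maxDrawdown(equity)
        if dd == 0 {
            return 0
        }
        marRatio := annualizedReturn / dd
        return marRatio
    }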
Benchmark Comparison
Comparisons against a benchmark may potentially be the most important aspect of
trading. Everything moves, very quickly at times. Traders often like to know what to
compare against.
This is where you go back to objectives that were introduced in the process section
of the chapter earlier. Having an objective for the strategy, or strategies in some cases,
provides the trader or investor with the analogous North Star or some sort of benchmark
to constantly compare to.
For instance, when trading equities, you might choose the Golden Cross strategy (not
particularly performant as a strategy, but one that many traders' eyes are on) as the
baseline for the new strategy being worked on, comparing the statistics of the two
strategies (PnL, Sharpe ratio, and MAR ratio) against one another.
Others may simply use thresholds on the statistics; for example, some trading desks
do not even look at strategies with a Sharpe ratio below 1.0.
Timeframe
There is a reason that timing and timeframes matter most in any financial transaction. It
is also the aspect in the performance evaluation of a strategy where a trader or investor
might spend the most amount of time.
More often than not, you will find that a given strategy performs well only over certain
timeframes. There is a multitude of reasons behind this phenomenon, from wars, to
global pandemics, to financial turmoil. One thing that a number of economists have
observed, and that is pertinent to this chapter, is that the growing participation of
automated trading algorithms has accelerated financial cycles, which seems to have
increased the amplitude of volatility across the globe.
This phenomenon is also more noticeable in some asset classes. FX and
cryptocurrencies seem to be affected much more than traditional finance favorites, like
the bond markets.
In short, the same strategy can work and cease to work for the same asset in different
market conditions; the same strategy across a number of assets may work better or
worse, depending on the particularities in an asset class.
Potential Pitfalls
Backtesting is an indispensable tool for traders looking to achieve success in the world of
trading. By providing a means to refine trading strategies, manage risk, build confidence,
and enhance discipline and consistency, backtesting serves as the foundation upon
which successful trading careers are built.
However, it is essential to be aware of the limitations of backtesting, such as
overfitting, data limitations, and curve-fitting, to ensure that results are interpreted
with caution. By understanding both the pros and cons of backtesting, traders can
make informed decisions about their strategies and set realistic expectations for their
performance in live markets.
package main
import (
"fmt"
)
func main() {
Listing 4-7. The Output of Running the Code from Listing 4-6
$ ./runStrategy
Strategy: Ema100
PnL: 14419.56
Sharpe Ratio: 1.29
MAR Ratio: 0.00
Trades saved to ema_trades.csv
Strategy: GoldenCross
PnL: 0.00
Sharpe Ratio: 0.00
MAR Ratio: 0.00
Trades saved to golden_cross_trades.csv
Hidden Difficulties
After a particular strategy or a number of strategies go through the grueling test phase
and subsequent improvement cycle(s), there might still be unexpected behaviors that
are difficult to see. This is the primary reason that it is important to re-run backtesting
over the time period that real-time testing was performed. There are numerous reasons
for this, but the two main ones are slippage and forward look bias.
Slippage can be caused by numerous factors, but it often refers to the difference
between the price at the point of order and the price at the time of its fill. More often
than not, a trader or investor will see an initial loss. Slippage is hard to quantify and
model; it is therefore an important aspect of testing to continuously go back and monitor
the difference between backtesting and real-time testing in order to quantify or set
expectations.
If a strategy performs similarly in backtesting and remains fairly consistent in
real-time trading, the testing framework and the strategy may be sound. If, however,
backtesting and real-time testing consistently seem to perform differently where
backtesting trades are being profitable and real-time trades are not, it is time to revisit
the strategy and make sure no forward look bias is introduced.
Forensic Analysis
When strategies work over numerous indicators and concurrent calculations, it may be
difficult to track what is happening simply by looking at trades and charts.
You might need to consider running the calculations under a more granular
timeframe in order to be able to catch the nature of slippage or forward look bias. Having
a charting tool with real-time data may offer insight as to where the issue is hiding.
Note that a trader may be lucky and find that slippage ends up helping. You should not
count on this, however, as the markets have a tendency to go the other way.
Table 4-1 shows a snippet of the CSV file called ema_trades.csv. In this case, hourly
data is used as an interval. Note that the time recorded goes down to second granularity.
In this case, only the trade execution times are recorded. As most exchanges offer the
order details report, it is important to track down the exact time of the fill (note a trade
may be partially filled over multiple fills that sum up to the order amount). The finer the
granularity in time, the easier it is to track slippage and other unexpected results.
Table 4-1. List of Trades Executed by the Backtests from Listing 4-6
Datetime Indicator Price Quantity Position Length
Table 4-2 shows a snippet of the CSV file containing all indicators. The important point
here is that a trader might need a continuous timeseries with all of the indicators present
alongside each other to actively track progress against the real-time charts mentioned earlier.
Potential Pitfalls
Real-time trading simulation is an indispensable tool for ensuring the success of
an automated trading strategy. By building confidence, identifying potential issues,
managing risk, and ensuring system stability, real-time trading simulation provides
traders with the peace of mind they need to sleep comfortably at night.
The importance of a fine-tuned, real-time trading simulation cannot be overstated. It
serves as a critical step in validating your strategy and ensuring that you are well-
prepared to navigate the complex and ever-changing landscape of financial markets with
confidence and ease.
As ever, having a wonderfully positive PnL experience in paper trading means almost
zero in the trading world. See Listing 4-8.
client := getClient()
select {
case <-ctx.Done():
fmt.Println("has just been canceled")
default:
time.Sleep(100 * time.Millisecond)
runStrategy(client, symbol, interval, capital)
}
positionOpen := false
entryPrice := 0.0
counter := 0
profit := 0.0
}
return false
} else {
    order, err := client.NewCreateOrderService().Symbol(symbol).
        Side(binance.SideTypeBuy).Type(binance.OrderTypeMarket).
        QuoteOrderQty(strconv.FormatFloat(capital, 'f', 2, 64)).
        Do(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    // Note: OrderID is numeric in go-binance, hence %d.
    log.Printf("Market buy order %d executed at price %s\n", order.OrderID, order.Price)
}
inPosition = true
} else {
    order, err := client.NewCreateOrderService().Symbol(symbol).
        Side(binance.SideTypeSell).Type(binance.OrderTypeMarket).
        QuoteOrderQty(strconv.FormatFloat(capital, 'f', 2, 64)).
        Do(context.Background())
if err != nil {
log.Fatal(err)
}
inPosition = false
Dinner Is Served
First things first—did the best performing strategy out of all the ones that were
backtested and forward tested beat the market?
This seems to be one of the most important questions on traders' minds. The reason is
simple: if your money (trading in and out of the markets and leveraging the best-performing
strategy at hand) is not beating a simple buy-and-hold of the S&P 500, the effort is a failure.
Depending on the financial instrument you are trading, that benchmark can be
S&P500, the ten-year treasury, or BTCUSD. This requires a little bit of thinking in making
sure you are comparing apples to apples.
Also worthy of note: if you're trading at high frequency (say, anything under four-minute
intervals), you could be subject to the phenomenon of "up days" and "down days." For
instance, the Federal Reserve's FOMC meetings can move markets drastically depending
on the forward guidance given by the chair. You need to make sure a long enough set of
historical data is used at all times and ensure such market movements do not overly bias
the analysis one way or another.
It is therefore not uncommon to find successful traders that opt for consistency
rather than maximum returns. Since markets never go up in a straight line nor go down
in a straight line, coupled with the fact everything is cyclical, there is a good chance
that favorable conditions will be back at some point. Having a highly consistent and
profitable strategy is also a very good approach.
The same curve can be constructed for cryptocurrencies; see Figure 4-12.
It is therefore possible to adapt your strategy and build in implicit risk by choosing coins
further out on the curve to maximize returns, if you can withstand the gut checks.
Aside from choosing which coins to trade based on their level of risk, derivative markets
are also available for some of the coins. Derivatives are probably better suited for more
advanced traders. In futures markets, however, you can take on long and short positions
to maximize winnings in both the up and down turns of the cycles, with added leverage.
In options markets, you can take on bigger risks and leverage while only risking a premium.
Machine Learning
As mentioned earlier, the sheer number of assets in the world today makes it difficult or
costly to cover all assets concurrently. Trading a select few instruments is tiring enough,
as you have seen in the previous sections. Attempting to extend the practice beyond the
select few instruments requires automation with smarts.
This chapter has dealt mainly with making life easier and better for those who love to
code (in GoLang, preferably). The "smarts" part is where it normally takes some
fuzzy logic and experience around the markets. That said, the world has seen massive
progress with machine learning. It is safe to say that it has become a thing in the world of
trading. While varied results and comments are coming from those who love to code in
the financial industry, it is prevalent and growing. It is safe to assume there are probably
billions invested today to bring ML algorithms to the trading world and teach them as
much as possible. If hardware resources allow, just imagine having three million
Warren Buffetts working for you 24 hours a day, 7 days a week.
Although this chapter does not include the knowledge and experience from any
members of Berkshire Hathaway, let’s see what can be observed in the world where
machine learning meets GoLang. A brief search on the frameworks yields results that are
not very promising. A very large community seems to exist around most of the well-
known Python frameworks. For GoLang, however, the top search results from Google do
not demonstrate the fact that there is a vibrant community behind them.
Further digging seems to point toward the direction of Python for any machine
learning frameworks. Machine learning algorithms do not gain much from faster
compilation or better performance on the CPU. As 99.9 percent of the runtime is spent
on GPUs, most ML developers do not seem to mind doing their work in Python. In
short, the GoLang communities around ML frameworks remain very small and are not
maintained often.
Running a quick search on Google or YouTube will return a plethora of links and
videos, with trading PnL that will get you dreaming of Lambos. Before putting real
money to work, think about placing those million-dollar strategies in the framework
you've learned in this chapter. With creative use of ChatGPT, you should be able to test
the strategies easily. There must be a good reason that the ML superpowers have not yet
turned into mega-hedge funds.
It might be a matter of time however…
Dessert!
The Exponential Age Is Here
Raoul Pal, the CEO of Real Vision and a staunch advocate of digital currencies, rightly
highlights the exponential age. In essence, it refers to a period of immense technological
evolution that we’re currently living in, where innovation isn’t linear but exponential,
leading to rapid changes in various sectors, particularly in finance and technology.
Cryptocurrencies are evolving, and they no longer simply store value. They are
programmable platforms that allow developers to build decentralized applications
(dApps) on top of their networks.
As per Metcalfe’s Law, the value of a network is proportional to the square of
the number of connected users of the system. Cryptocurrencies have recently seen
significant adoption, not just among individual users, but also among institutional
investors and businesses. This widespread adoption is a testament to their potential and
is likely to drive their growth in the coming years.
Do note that there are some easy-to-use tools with plugins for GoLang to run on a
Jupyter notebook. Figure 4-13 shows one example, called gophernotes.
Listing 4-9 shows the sample charting code and Figure 4-14 shows a resultant chart.
import (
"encoding/csv"
"image/color"
"io"
"log"
"os"
"strconv"
"time"
"gonum.org/v1/plot"
"gonum.org/v1/plot/plotter"
"gonum.org/v1/plot/vg"
)
reader := csv.NewReader(file)
// assuming first row is header
reader.Read()
data := []OHLC{}
for {
row, err := reader.Read()
if err == io.EOF {
break
}
if err != nil {
return nil, err
}
}{
    {openPoints, "Open", color.RGBA{R: 0, G: 0, B: 255, A: 255}},     // blue
    {highPoints, "High", color.RGBA{R: 0, G: 255, B: 0, A: 255}},     // green
    {lowPoints, "Low", color.RGBA{R: 255, G: 0, B: 0, A: 255}},       // red
    {closePoints, "Close", color.RGBA{R: 255, G: 165, B: 0, A: 255}}, // orange
}
displayPlot(p)
}
func main() {
data, err := readData("/tmp/ETHUSD_hourlies.csv")
if err != nil {
log.Fatal(err)
}
plotData(data)
}
main()
Realistically, the only complaint would be the lack of a large community contributing
tools and millions of charting examples. In the world of finance, it is probably safe to
say some form of model or strategy you want to employ has already been implemented.
If somebody has already paved the way, it is often a big shortcut and there are literally
thousands of hours’ worth of explanations on YouTube and other media platforms.
This should not scare anybody, however. The missing libraries can probably be
overcome by writing thin wrappers against existing APIs (thank you, ChatGPT), or
perhaps by transcribing them completely if they are simple enough. It might very well
be a matter of time before the world gives GoLang its due attention and a healthy
community of contributors emerges.
If I may, I would like to finish with a simple little note—the language is simply a
pleasure to work with.
Appendix
Finance Jargon
Glossary
Amortization: The process of spreading the cost of an intangible asset over its useful life.
Ask: The price at which a seller is willing to sell an asset or a security.
Asset allocation: The strategy of dividing an investment portfolio among different
asset classes, such as stocks, bonds, and cash.
Bear market: A market condition characterized by falling prices and pessimism
among investors and traders.
Bid: The price at which a buyer is willing to buy an asset or a security.
Bonds: Debt securities that represent a loan from an investor to a borrower, such
as a government or a corporation. Bonds pay periodic interest and return the principal
amount at maturity.
Breakout: A trading strategy that involves buying or selling an asset or a security
when its price moves beyond a certain level of resistance or support, indicating a change
in trend or momentum.
Broker: An intermediary who facilitates the buying and selling of assets or securities
between buyers and sellers, usually for a commission or a fee.
Bull market: A market condition characterized by rising prices and optimism among
investors and traders.
Capital expenditure: Money spent by a business to acquire or improve long-term
assets, such as equipment or buildings.
Compound interest: Interest that is calculated on both the initial principal and the
accumulated interest of a loan or investment.
Correction: A temporary decline in the price or value of an asset or a security after a
period of rise or overvaluation.
Credit default swap: A financial contract that transfers the risk of default from a debt
issuer to another party, who agrees to pay the debt in case of default in exchange for a
periodic fee.
Day trading: The practice of buying and selling assets or securities the same trading
day, closing all positions before the market closes.
Dividend: A portion of a company’s profits that is distributed to its shareholders.
Net present value: The difference between the present value of an investment’s cash
inflows and outflows, used to evaluate its profitability and feasibility.
Opportunity cost: The value of the next best alternative that is forgone as a result of
making a decision.
Portfolio: A collection of investments held by an individual or an organization.
Quantitative easing: A monetary policy tool that involves the central bank buying
large amounts of government bonds or other securities to increase the money supply
and lower interest rates.
Rally: A sustained increase in the price or value of an asset or a security after a
period of decline or consolidation.
Resistance: A price level at which an asset or a security faces difficulty in rising
above due to selling pressure.
Return on equity: A measure of a company’s profitability, calculated by dividing its
net income by its shareholders’ equity.
Scalping: A trading strategy that involves taking small profits from frequent trades
over a short period of time, exploiting minor price movements and high leverage.
Securities and Exchange Commission: The U.S. federal agency that regulates the
securities markets and protects investors from fraud and abuse.
Short selling: The practice of selling an asset or a security that a trader does not own,
hoping to buy it back later at a lower price and profit from the price difference.
Slippage: The difference between the expected price of an order and the actual price
at which it is executed, which can be caused by market volatility, low liquidity, or delays
in execution.
Spread: The difference between the bid and ask prices of an asset or a security,
which reflects the liquidity and competitiveness of the market.
Stop order: An order to buy or sell an asset or a security when its price reaches a
certain level, which can be used to protect profits or limit losses.
Support: A price level at which an asset or a security faces difficulty in falling below
due to buying pressure.
Swing trading: The practice of buying and selling assets or securities over a period of
several days or weeks, taking advantage of short-term price fluctuations.
Technical analysis: The study of past price movements and patterns to predict
future price movements and trends of assets or securities, using various tools and
indicators such as charts, moving averages, trend lines, and so on.
Time value of money: The concept that money available today is worth more than
the same amount in the future, due to its potential earning capacity.
Trend: The general direction of the price movement of an asset or a security over
time, which can be upward (bullish), downward (bearish), or sideways (range-bound).
Underwriting: The process of evaluating the risk and profitability of a loan,
insurance policy, or security issue, and setting its terms and conditions accordingly.
Volatility: The degree of variation in the price or value of an asset or a market over
time, often measured by standard deviation or beta.
One-liner
Trading one-liners are witty or humorous remarks related to trading or the markets. They
can be used to lighten the mood, poke fun at oneself or others, or make a point. Here are
some examples of trading one-liners:
• I’m not a great investor. I’m just good at not losing money. —
George Soros
• The four most dangerous words in investing are “this time it’s
different.” —Sir John Templeton
• How do you make a small fortune in the stock market? Start with a
large one. —Anonymous
• The trend is your friend until the end when it bends. —Ed Seykota
• There are two types of traders: those who admit they don’t know
what they’re doing and those who lie about it. —Anonymous
• The only thing standing between you and your goal is the bullshit
story you keep telling yourself as to why you can’t achieve it. —
Jordan Belfort
• The stock market is a device for transferring money from the ignorant
to the informed. —Andre Kostolany
Some call it “a hack for everything.” A study in trend fatigue. A lot of famous fund
managers have used this indicator in conjunction with daily and weekly timeframes.
Extra Statistics
Several statistical evaluation metrics are commonly used to assess the effectiveness of
these strategies. The following are not used in this chapter, but remain relevant and of
interest to many leading economists around the world:
• Sortino ratio: Similar to the Sharpe ratio, the Sortino ratio also
measures risk-adjusted performance. However, it only considers
downside risk by using the downside deviation instead of the
standard deviation. This ratio is particularly useful for investors who
are more concerned about downside risk.
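As a generic illustration (not one of the chapter's listings), the only change compared with the Sharpe calculation is the denominator, which is built from the negative returns only:

    // sortinoRatio divides the average excess return by the downside
    // deviation, computed from the negative returns only.
    func sortinoRatio(returns []float64, riskFreeRate float64) float64 {
        if len(returns) == 0 {
            return 0
        }
        mean, downside := 0.0, 0.0
        for _, r := range returns {
            mean += r
            if r < 0 {
                downside += r * r
            }
        }
        mean /= float64(len(returns))
        downsideDev := math.Sqrt(downside / float64(len(returns)))
        if downsideDev == 0 {
            return 0
        }
        return (mean - riskFreeRate) / downsideDev
    }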
CHAPTER 5
Writing a Kubernetes
Operator to Run
EVM-Compatible
Blockchains
With the rising popularity of blockchain technology, there is an increasing need for
efficient ways to deploy and manage blockchain networks.
• Kubectl (https://kubernetes.io/docs/tasks/tools/) is a
command-line tool that interacts with a Kubernetes cluster (or
Minikube in this case).
As usual, be sure to install the relevant versions for your operating system. The
instructions in this chapter are executed from a MacBook with an Apple ARM chip.
Additionally, a few other tools can greatly improve your day-to-day experience with
Kubernetes, and I highly recommend you install them:
First verify that the components were correctly installed. Make sure that Docker is
running, then start Minikube by typing minikube start in your terminal.
You should see something like Figure 5-1.
Resources Overview
From a certain perspective, Kubernetes can be viewed as an API that lets you manipulate
a collection of resources. These resources are grouped into logical categories:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
In this YAML definition, two fields are slightly more meaningful than the others:
• The kind field indicates the specific Kubernetes resource you want to
provision. Here, it is a simple Pod.
The following code uses kubectl to create the pod in the local Minikube cluster:
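The exact snippet is not reproduced in this extract; assuming the definition above is saved as nginx-pod.yaml, it amounts to:

    kubectl apply -f nginx-pod.yaml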
When you use a framework, several aspects are abstracted and simplified, as
illustrated in Figure 5-5.
This provides some benefits. However, as is often the case when using a framework,
it is not necessarily obvious what the basic blocks are that actually constitute the real
thing behind the framework, namely the Kubernetes operator.
This section breaks things down so you can see the basic blocks. That way, when
you’re using the facilities of Operator-SDK, you understand what you are doing.
At a foundational level, an operator has two parts:
• A controller
f:spec:
  f:containers:
    f:args: {}
    f:image: {}
    f:imagePullPolicy: {}
    f:name: {}
    f:ports:
      .: {}
      k:{"containerPort":8443,"protocol":"TCP"}:
        .: {}
        f:containerPort: {}
        f:name: {}
        f:protocol: {}
Controller
The ability to submit a blockchain resource payload to the cluster is the essential first
part of making an operator. But that’s not very helpful until Kubernetes knows what to do
with it. That’s why you have controllers.
A controller is essentially a program running within a pod that listens to events
broadcasted by Kubernetes within the cluster and takes the necessary actions.
The events include the creation, update, or deletion of resources. The resulting
actions include creating native resources (like deployment, service, or configmap) to
reflect the state described by the resource definition. For that, the controller will use the
Kubernetes API, which can be programmed using the Go SDK.
The process of reflecting in the cluster the state described in the specification
is called the reconciliation loop. Technically, a controller's Reconcile function is
continuously called, and it is the controller's job to bring the current state to the desired
state described in the resource definition.
If the current state does not match the desired state, the controller takes the
necessary actions. For instance, if the desired state is to have three pod replicas and only
two are running, the controller will ask Kubernetes to create another pod.
--domain is used as the prefix of the API group your custom resources will be
created in and --repo is necessary since scaffolded files require a valid module path.
--plugins=go/v5-alpha is required only if your local environment is Apple Silicon.
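The scaffolding command itself is not shown in this extract. Based on the flags just described, and on the API group (learn.gocrazy.com) and module path (github.com/gocrazy/blockchain-operator) used later in the chapter, it would look roughly like this, with the --plugins flag mentioned above appended on Apple Silicon:

    operator-sdk init --domain gocrazy.com --repo github.com/gocrazy/blockchain-operator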
The directory structure shown in Figure 5-6 will be generated.
At this point, you only have generic boilerplate code, which consists essentially
of a manager (defined in cmd/main.go), config YAML files, and the project’s utilities
(makefile, dockerfile, go.mod).
I suggest you take a quick look at the cmd/main.go file to see how the manager is
created. Again, there is nothing specific to your needs in this main.go file at this stage.
This is just a matter of getting familiar with the code.
Creating an API
Now is the time to start defining the new kind of resources you want to manage with your
operator: the Blockchain kind.
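The command used for this step is not reproduced in this extract; with Operator-SDK it would be along these lines (the group, version, and kind are inferred from the generated file paths):

    operator-sdk create api --group learn --version v1alpha1 --kind Blockchain --resource --controller

This scaffolds the following two files: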
• api/v1alpha1/blockchain_types.go
• internal/controller/blockchain_controller.go
The first file is used to implement the spec of the Blockchain custom resource. This
is where you define the fields and types that compose a resource of kind Blockchain, as
shown in Listing 5-3.
The second file is where you implement the controller’s reconciliation logic
mentioned earlier. This implementation will take place in the Reconcile function, as
shown in Listing 5-4.
• make docker-build will build the Docker image for your controller
based on the Dockerfile provided at the root of the project.
• make install will install your CRD into the cluster, along with the
required RBAC resources.
There are also commands to tear down the resources, like uninstall and undeploy.
Each command can be run independently. However, during development, it is very
likely that you will need to run most of them in sequence.
Personally, I find it convenient to add a new command called update to the makefile,
which will execute the other commands in the desired order.
If you agree, just add a new update entry alongside the undeploy entry (see Listing 5-5).
.PHONY: update
update: manifests generate docker-build docker-push install deploy
	kubectl rollout restart deployment blockchain-operator-controller-manager -n blockchain-operator-system
You will notice that I also added a kubectl rollout restart command to restart
the controller every time a new image is pushed to Docker Hub.
The reason is that, for simplicity, it's easier to tag the controller image with latest
instead of bumping the VERSION field in the makefile each time you build a new image.
As a result, Kubernetes will not restart the controller automatically when a new image is
pushed, so a forced restart is necessary for you to see the changes.
Finally, you need to tell Operator-SDK where to push the controller-built image. You
need to update the IMG field in the makefile by referencing your Docker Hub account:
IMG ?= <dockerhub-account>/blockchain-operator:latest
It’s time to give the workflow a try. Run make update once. If everything was set up
correctly, you should be able to see the result in Lens, as depicted in Figures 5-7 and 5-8.
Let’s take a look at the target state you need to reconcile. To guide you in the
implementation of the reconciliation logic, refer to Listing 5-6 for a StatefulSet
definition.
Listing 5-6. The StatefulSet State that the Blockchain Operator Will Reconcile
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ethereum-goerli
  namespace: ethereum
  labels:
    app: ethereum-goerli
spec:
  serviceName: ethereum-goerli
  replicas: 1
  selector:
    matchLabels:
      app: ethereum-goerli
  template:
    metadata:
      labels:
        app: ethereum-goerli
    spec:
      containers:
      - name: client
        command: ['geth']
        args:
        - '--goerli'
        - '--syncmode=light'
        - '--datadir=data'
        - '--cache=128'
        image: ethereum/client-go:stable
        imagePullPolicy: Always
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        ports:
        - containerPort: 30303
          name: p2p
          protocol: TCP
        - containerPort: 8545
          name: rpc
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
kind: StatefulSet
StatefulSets are like deployments, but with one key difference. They are associated
with a storage resource. However, when deleting a pod replica managed by the
StatefulSet, the associated storage resource is not automatically deleted (it needs to be
deleted manually if required).
Another difference is that the pod names use an index that is bound to the number of
replicas so that the names are deterministic.
The next section in the listing is:
metadata:
  name: ethereum-goerli
  namespace: ethereum
  labels:
    app: ethereum-goerli
As the name suggests, the code intends to run the Ethereum Goerli testnet.
Furthermore, the resource will be scheduled to run in a dedicated namespace (called
ethereum).
The next section specifies how many replicas you want to create. This is a piece of
information that you expose in the Blockchain spec.
replicas: 1
Then, the selector field lets you define some key-value labels that can be used by
other resources to select the group of pods that will be managed by your StatefulSet.
selector:
  matchLabels:
    app: ethereum-goerli
For instance, you can use them to expose the ethereum-goerli pods over the
network via a service.
The following section specifies the command, args, and image fields that tell
Kubernetes which software application and version you want to run and how you want
to start the container.
command: ['geth']
args:
- '--goerli'
- '--syncmode=light'
- '--cache=128'
image: ethereum/client-go:stable
The resources section defines how much CPU and memory should be allocated to
the designated container. You expose this detail in your CRD.
resources:
  limits:
    cpu: "500m"
    memory: 1Gi
  requests:
    cpu: "500m"
    memory: 1Gi
The following section, ports, specifies which ports should be exposed by the
container running in the pod. You can also make it possible to configure this detail in the
Blockchain CRD.
ports:
- containerPort: 30303
  name: p2p
  protocol: TCP
- containerPort: 8545
  name: rpc
  protocol: TCP
Then, the volumeMounts field allows you to specify one or more volumes to be
mounted into the container running in the pod.
volumeMounts:
- name: data
  mountPath: /data
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: standard
    resources:
      requests:
        storage: 1Gi
To understand this last part, you can think of it in these terms: Upon creation,
the StatefulSet will submit a claim for storage to Kubernetes using a specific
storageClassName. This claim will be satisfied once Kubernetes creates a
persistent volume.
This example references the standard storage class, which is preinstalled when you
install Minikube. This class uses a default directory on your machine to persist the data
written by the container. This allows you to run your blockchain client on your local
machine and mimic what would happen in a real Kubernetes cluster (see Figure 5-9).
Now update your BlockchainSpec struct and capture some of the details that your
controller will use. Get rid of the boilerplate code and make it look like Listing 5-7.
• The Replicas field is an int32 pointer. As you will see, this the type
expected by the StatefulSetSpec, as defined in the v1 package of the
Kubernetes Go SDK.
• The image field is a string that points to the actual Docker image that
the main container in your StatefulSet will run.
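Listing 5-7 itself is not reproduced in this extract. Based on the fields described above and on the values the controller reads later in the chapter, a minimal sketch of the spec could look like this (field names are inferred, and the JSON tags match the sample YAML shown later):

    // BlockchainSpec defines the desired state of a Blockchain resource.
    type BlockchainSpec struct {
        // Replicas is the number of blockchain client pods to run.
        Replicas *int32 `json:"replicas,omitempty"`
        // Image is the Docker image run by the main container.
        Image string `json:"image,omitempty"`
        // Command is the entrypoint used to start the blockchain client.
        Command []string `json:"command,omitempty"`
        // ClientArgs holds the arguments passed to the client process.
        ClientArgs []string `json:"client-args,omitempty"`
    }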
Now to confirm that you are able to read the values from a custom Blockchain
resource submitted to Kubernetes, update the BlockchainReconciler::Reconcile
function to simply read those values.
Just remove the boilerplate code generated by Operator-SDK and update the
function, as shown in Listing 5-8.
log.SetPrefix("BlockchainReconciler")
blockchain := &learnv1alpha1.Blockchain{}
• Once the code holds a blockchain instance, it prints the values of the
different fields.
• If the field is of primitive type (like the image field, which is of type
string), the code simply prints its value from Blockchain.spec.
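A minimal sketch of that read-and-log logic (the error handling and log calls here are illustrative, not the book's exact Listing 5-8):

    func (r *BlockchainReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        log.SetPrefix("BlockchainReconciler")
        blockchain := &learnv1alpha1.Blockchain{}
        // Fetch the Blockchain custom resource that triggered this event.
        if err := r.Get(ctx, req.NamespacedName, blockchain); err != nil {
            return ctrl.Result{}, client.IgnoreNotFound(err)
        }
        // Print the values read from the custom resource spec.
        log.Println("image:", blockchain.Spec.Image)
        log.Println("command:", blockchain.Spec.Command)
        log.Println("client-args:", blockchain.Spec.ClientArgs)
        if blockchain.Spec.Replicas != nil {
            log.Println("replicas:", *blockchain.Spec.Replicas)
        }
        return ctrl.Result{}, nil
    }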
Okay, so you have updated the BlockchainSpec struct and the controller
reconciliation logic. It is time to run make update to regenerate the manifests and
controller image, deploy a new CRD for your custom resource, and restart the controller.
Wait until the controller manager has restarted and is ready (in Lens) and then test
the flow by submitting a sample Blockchain custom resource to the cluster.
Under config/samples/, you should have a learn_v1alpha1_blockchain.yaml
sample file generated by Operator-SDK. Make it look like Listing 5-9.
apiVersion: learn.gocrazy.com/v1alpha1
kind: Blockchain
metadata:
  labels:
    app.kubernetes.io/name: blockchain
    app.kubernetes.io/instance: blockchain-sample
    app.kubernetes.io/part-of: blockchain-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: blockchain-operator
  name: blockchain-sample
  namespace: ethereum
spec:
  replicas: 1
  image: ethereum/client-go:stable
  command: ['geth']
  client-args:
  - '--goerli'
  - '--syncmode=light'
  - '--cache=128'
  - '--datadir=data'
As you can see in the definition, this blockchain-sample resource should be created
under the ethereum namespace.
You don’t have this namespace yet in your Minikube cluster. You can create it from
the command line using kubectl, as follows:
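The command is not shown in this extract; it amounts to:

    kubectl create namespace ethereum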
The remaining part of the sample test resource provides the relevant details for the
number of replicas, the image to run, the command to invoke, and the arguments to be
passed to the Geth process (see Figure 5-10).
Figure 5-10. The resources will be deployed under the ethereum namespace
Now you can create the blockchain-sample resource in the cluster by invoking
kubectl from the command line:
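The command is not reproduced here; given the sample file mentioned above, it would be:

    kubectl apply -f config/samples/learn_v1alpha1_blockchain.yaml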
Doing this will store the custom resource in the Minikube cluster, as shown in
Figure 5-11.
From there, navigate to the Pods tab and check the log of the manager container for
the blockchain-operator-controller-manager pod.
You should see log output like the one shown in Figure 5-10.
The controller has effectively been notified of the existence of the blockchain-
sample custom resource and can fetch all the relevant details about it. That basically
means the points are connected and that the flow is working properly.
This is good news, because you no longer have to worry about it. Rather, you can
focus on iteratively improving the blockchain resource spec and reconciliation logic (see
Figure 5-12).
To reach that goal, the first thing you need to ensure is that your blockchain
controller has the right to manage (that is, create/read/update/delete) StatefulSet
resources.
Using Operator-SDK, you grant these permissions by adding +kubebuilder
annotations to the code.
Update the annotations that precede the BlockchainReconciler main Reconcile
function so that they look like the ones in Listing 5-10.
//+kubebuilder:rbac:groups=learn.gocrazy.com,resources=blockchains,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=learn.gocrazy.com,resources=blockchains/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=learn.gocrazy.com,resources=blockchains/finalizers,verbs=update
//+kubebuilder:rbac:groups=apps,resources=statefulsets,verbs=get;list;watch;create;update;patch;delete
Note that the last annotation grants the controller the rights to manage
statefulsets resources.
To learn more about RBAC and annotations, refer to https://kubebyexample.
com/learning-paths/operator-framework/operator-sdk-go/rbac-operator-
authorization.
Now you can implement the reconciliation logic to handle StatefulSet. You need to
update the Reconcile function, as shown in Listing 5-11.
The ReconcileStatefulSet function is not implemented yet. Listing 5-12 adds it.
    Spec: v1.PersistentVolumeClaimSpec{
        AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
        StorageClassName: pointer.String("standard"),
        Resources: v1.ResourceRequirements{
            Requests: v1.ResourceList{
                v1.ResourceStorage: apiResource.MustParse(fmt.Sprintf("%dGi", 1)),
            },
        },
    },
}

sts := &appsv1.StatefulSet{
    ObjectMeta: metav1.ObjectMeta{
        Name:      b.Name,
        Namespace: b.Namespace,
    },
    Spec: appsv1.StatefulSetSpec{
        Replicas: b.Spec.Replicas,
        Selector: &metav1.LabelSelector{
            MatchLabels: b.ObjectMeta.Labels,
        },
        Template: corev1.PodTemplateSpec{
            ObjectMeta: metav1.ObjectMeta{
                Labels: b.ObjectMeta.Labels,
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Image:           b.Spec.Image,
                    ImagePullPolicy: "Always",
                    Name:            "app",
                    Command:         b.Spec.Command,
                    Args:            b.Spec.ClientArgs,
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: 30303,
                        Name:          "p2p",
                        Protocol:      "TCP",
                    }, {
                        ContainerPort: 8545,
                        Name:          "api",
                        Protocol:      "TCP",
                    }},
                    Resources: *reqs,
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "data",
                        MountPath: "/data",
                    }},
                }},
            },
        },
        VolumeClaimTemplates: []v1.PersistentVolumeClaim{
            pvc,
        },
    },
}
Listing 5-13. Importing the Required Dependency from the Kubernetes Go SDK
import (
    "context"
    "fmt"
    "log"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/errors"
    apiResource "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/utils/pointer"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"

    learnv1alpha1 "github.com/gocrazy/blockchain-operator/api/v1alpha1"
)
Note that these values are actually too low to properly run
a blockchain client and they are used only for the sake of
illustration.
You can also execute a shell on the pod and run a few commands. Note that the Geth
files are found in the data directory that is mounted to your container.
If you execute the du -h /data command at regular intervals, you should see that
the disk usage under /data/geth/lightchaindata keeps increasing as new blocks are
produced and stored (see Figure 5-14).
Finally, browse to the Persistent Volume Claim and Persistent Volume tabs and
observe the child resources have been created there (see Figures 5-15 and 5-16).
spec:
  replicas: 1
  image: ethereum/client-go:stable
  command: ['geth']
  client-args:
  - '--goerli'
  - '--syncmode=light'
  - '--cache=128'
  - '--datadir=data'
  - '--http'
  - '--http.api=eth,net,web3'
  - '--log.debug=true'
Adding these configs will enable the HTTP server for the eth, net, and web3
namespaces of the Geth json-rpc API. The API will be available on the default
port, 8545.
Now, since your operator controller does not support updates yet (you will add this
feature in the coming sections), you need to restart from a clean slate, before submitting
your changes.
Go ahead and delete the blockchain-sample custom resource using Lens. This
should automatically remove the child resources: the StatefulSet and its pod.
However, the persistent volume claim resource won't be deleted automatically.
You need to delete it manually.
Next, apply the custom resource again using the following:
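That is, presumably, the same kubectl apply as before:

    kubectl apply -f config/samples/learn_v1alpha1_blockchain.yaml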
Using Port-Forward
The blockchain-sample-0 pod should be back in action. To reach out to its json-rpc
API, you can use a convenient Kubernetes feature known as port-forwarding (see
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-
access-application-cluster/).
As mentioned earlier, the client API is available on port 8545. You will map the local
8545 port to the same port on the pod by opening another terminal:
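The command is not shown in this extract; it would be along these lines:

    kubectl port-forward -n ethereum pod/blockchain-sample-0 8545:8545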
From now on, you should be able to reach the json-rpc API by sending curl requests
to http://localhost:8545. You can try it using the snippet in Listing 5-15.
curl http://localhost:8545/ \
-X POST \
-H "Content-Type: application/json" \
--data '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}'
You should get a response similar to this one (the value will be different of course):
{"jsonrpc":"2.0","id":1,"result":"0x73dcd0"}
I encourage you to explore the json-rpc API. You will take another look at it when
implementing health checks for your blockchain pods in the coming sections.
For now, let’s go back to the controller and tidy up the loose ends.
Now you need to update the ReconcileStatefulSet function and make the
necessary changes to account for the newly added fields.
After the pvc definition, add the logic to set default values for the CPU and memory
(see Listing 5-17).
if b.Spec.Memory == "" {
	b.Spec.Memory = "1Gi"
}
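A matching default for the CPU follows the same pattern; the "500m" value here is only an illustrative assumption, not the book's exact choice:

if b.Spec.Cpu == "" {
	b.Spec.Cpu = "500m"
}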
reqs := &v1.ResourceRequirements{
Limits: v1.ResourceList{
"cpu": apiResource.MustParse(b.Spec.Cpu),
"memory": apiResource.MustParse(b.Spec.Memory),
},
Requests: v1.ResourceList{
"cpu": apiResource.MustParse(b.Spec.Cpu),
"memory": apiResource.MustParse(b.Spec.Memory),
},
}
Similarly, you need to add logic to handle a default value for the API port (see
Listing 5-19).
if b.Spec.ApiPort == 0 {
b.Spec.ApiPort = 8545
}
That should be it. Run make update again to deploy the updated CRD and the
controller.
Next, navigate to the Pods tab in Lens, click the blockchain-sample-0 pod, and then click the pencil icon in the window that opens to show the pod specification. Visually confirm that the correct config is applied to that pod (see Figure 5-17).
...
} else if err != nil {
log.Println("Failed to get StatefulSet", err)
return reconcile.Result{}, err
}
As you see, for each property you expose via the Blockchain CRD, you want to have a
dedicated reconcile function that receives a pointer to the learnv1alpha1.Blockchain
struct and to the existing StatefulSet.
Listing 5-21 shows the implementation of the reconcileReplicas function.
1. The code reads the replicas value from the deployed custom
resource.
2. The code reads the replicas value from the existing
StatefulSet.
3. If the two values are different, the code updates the replicas value
for the StatefulSet.
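Following those three steps, a minimal sketch of such a function might look like this; the receiver and field names follow the chapter's conventions, but treat them as assumptions rather than the book's exact listing:

func (r *BlockchainReconciler) reconcileReplicas(blockchain *learnv1alpha1.Blockchain,
	sts *appsv1.StatefulSet) (reconcile.Result, error) {
	// 1. Read the replicas value from the deployed custom resource
	// (assuming Spec.Replicas is an int32, matching *sts.Spec.Replicas).
	specReplicas := blockchain.Spec.Replicas
	// 2. Read the replicas value from the existing StatefulSet.
	stsReplicas := *sts.Spec.Replicas
	// 3. If the two values differ, update the StatefulSet.
	if specReplicas != stsReplicas {
		sts.Spec.Replicas = &specReplicas
		if err := r.Client.Update(context.TODO(), sts); err != nil {
			log.Println("Failed to update StatefulSet Replicas", err,
				"Namespace", sts.Namespace, "Name", sts.Name)
			return reconcile.Result{}, err
		}
	}
	return reconcile.Result{Requeue: true}, nil
}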
At a macro level, the logic is the same for each reconcile* function: read the relevant value from the deployed custom resource, read the corresponding value from the running StatefulSet, and compare the two. If there is a mismatch, the code requests an update to the StatefulSet.
Go ahead and implement the reconcileImage function (see Listing 5-22).
As you can see, the function follows the same logic as before, but this time you read
the Image value set on the first (single) container managed by the StatefulSet.
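Under the same assumptions as before, a sketch of reconcileImage might read:

func (r *BlockchainReconciler) reconcileImage(blockchain *learnv1alpha1.Blockchain,
	sts *appsv1.StatefulSet) (reconcile.Result, error) {
	specImage := blockchain.Spec.Image
	// Read the Image value from the single container managed by the StatefulSet.
	stsImage := sts.Spec.Template.Spec.Containers[0].Image
	if specImage != stsImage {
		sts.Spec.Template.Spec.Containers[0].Image = specImage
		if err := r.Client.Update(context.TODO(), sts); err != nil {
			log.Println("Failed to update StatefulSet Image", err,
				"Namespace", sts.Namespace, "Name", sts.Name)
			return reconcile.Result{}, err
		}
	}
	return reconcile.Result{Requeue: true}, nil
}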
The next two update functions—reconcileArgs and reconcileCommand—are just
slightly more complicated, as the values you need to compare are slices.
To help with the comparison, you’ll use a short compareSlices helper function.
This function compares two string slices and reports whether they are equal. You will
parameterize this function with a withPreOrdering bool parameter to indicate whether
the order of the elements also matters when doing the comparison.
Here it is in Listing 5-23.
// Sort the slices so that their elements are in the same order
if withPreOrdering {
sort.Strings(s1)
sort.Strings(s2)
}
3. Then the code loops over the elements and compares them.
4. The code returns true if it reaches the end of the function (which
means all the comparisons succeeded).
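Putting the sorting fragment and the steps above together, a complete version of the helper might look like this sketch; copying the inputs before sorting is my own precaution so the caller's slices are not reordered as a side effect:

func compareSlices(a, b []string, withPreOrdering bool) bool {
	if len(a) != len(b) {
		return false
	}
	// Work on copies so the original slices are left untouched.
	s1 := append([]string{}, a...)
	s2 := append([]string{}, b...)
	// Sort the slices so that their elements are in the same order.
	if withPreOrdering {
		sort.Strings(s1)
		sort.Strings(s2)
	}
	// Compare element by element.
	for i := range s1 {
		if s1[i] != s2[i] {
			return false
		}
	}
	return true
}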
Thanks to this helper function, you can now implement reconcileArgs and
reconcileCommand easily.
For the former one, you use compareSlices with preOrdering=true because the
order of the arguments passed to the client software does not matter and should not
cause the pods to restart.
By ordering the elements before comparing them, you ensure that the elements are
compared one-to-one (see Listing 5-24).
if !argsEquals {
sts.Spec.Template.Spec.Containers[0].Args = specClientArgs
err := r.Client.Update(context.TODO(), sts)
if err != nil {
log.Println("Failed to update StatefulSet ClientArgs", err,
"Namespace", sts.Namespace, "Name", sts.Name)
return reconcile.Result{}, err
}
}
return reconcile.Result{Requeue: true}, nil
}
Finally, you will implement the reconcileResources function. You will not update
Cpu and Memory independently, but rather consider that any mismatch of Cpu or Memory
with respect to the custom resource spec should trigger a reconciliation (see Listing 5-26).
specCpu := blockchain.Spec.Cpu
stsContainerResources := sts.Spec.Template.Spec.Containers[0].Resources
stsResourceRequestCpu := stsContainerResources.Requests.Cpu().String()
specMemory := blockchain.Spec.Memory
stsResourceRequestMemory := stsContainerResources.Requests.Memory().String()
reqs := &v1.ResourceRequirements{
Limits: v1.ResourceList{
"cpu": apiResource.MustParse(specCpu),
"memory": apiResource.MustParse(specMemory),
},
Requests: v1.ResourceList{
"cpu": apiResource.MustParse(specCpu),
"memory": apiResource.MustParse(specMemory),
},
}
sts.Spec.Template.Spec.Containers[0].Resources = *reqs
err := r.Client.Update(context.TODO(), sts)
if err != nil {
log.Println("Failed to update StatefulSet Resources", err,
"Namespace", sts.Namespace, "Name", sts.Name)
return reconcile.Result{}, err
}
}
return reconcile.Result{Requeue: true}, nil
}
You only have the reconcileContainerPorts function left to implement. I leave this
one for you to implement as an exercise! The logic is the same as the other functions. The
only detail to consider more carefully is how you select the container port to compare
when reading from the StatefulSet, since you are exposing two ports (api and p2p).
Once this is done, go ahead and run make update to redeploy the controller.
Then, edit the spec in the learn_v1alpha1_blockchain.yaml file, try changing the
number of replicas or the Cpu values for instance, and submit the modified sample
resource to the cluster via kubectl.
You can observe the changes in Lens, as shown in Figure 5-18.
Well done! With the implementation of the update logic, your blockchain operator is
becoming more mature.
Of course, there are many things you could do to improve the blockchain spec and
the reconciliation logic. I can only encourage you to continue on this path and improve
the code you’ve added to blockchain_controller.go.
It’s now time to turn your attention to the topic of health checks.
There are two types of probes: readiness probes and liveness probes. The former
determines whether the container is ready to serve traffic while the latter determines
whether the container is still alive and functioning correctly.
Health checks involve periodically probing the container to verify that the
application is running as expected.
In practice, this is achieved by running another container (another application)
alongside the main container within a single pod and having this second container (also
referred to as a “sidecar” container) send requests to the main container and check its
responses.
If the responses are satisfactory, the sidecar container returns HTTP OK to the readiness probe and the entire pod is marked healthy.
Any other response code will notify Kubernetes that the pod is not healthy and
should not be used to serve traffic.
Note that the sidecar container can reach the main container using the loopback address (localhost), given that both containers run alongside each other within the same pod.
Since the health-check container is an autonomous application, you deploy it independently of the blockchain operator. For that purpose, you create a distinct mini sub-project to develop it. To bootstrap it, you can ask ChatGPT with a prompt along these lines:
Can you generate a template code for a minimal go http server listening on
port 8080. The server implements a single API endpoint called "readiness".
package main
import (
"log"
"net/http"
)
func main() {
http.HandleFunc("/readiness", readinessHandler)
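	// The remainder of the file might look like this (a sketch; the minimal
	// handler below simply answers OK and is refined with a real check later
	// in the chapter):
	log.Fatal(http.ListenAndServe(":8080", nil))
}

func readinessHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("OK"))
}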
This is all you need to start. Save this snippet of code in a main.go file. You will also
add a minimal Dockerfile to build the health-check image (see Listing 5-28).
WORKDIR /workspace
Like you did for the blockchain operator, you will need access to a Docker registry to
publish and serve the health-check Docker image.
Be sure to create a registry for the health-check application and name it
blockchain-health-checks.
For future reference, use the following commands to build and publish the
health-check Docker image to your registry:
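A typical pair of commands would look like this; the Docker Hub account placeholder matches the one used later in the probe configuration:

docker build -t <your-dockerhub-account>/blockchain-health-checks:latest .
docker push <your-dockerhub-account>/blockchain-health-checks:latest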
Listing 5-29. HTTP Request and Comparison with the AI-Generated Result
package main
import (
"context"
"fmt"
"log"
"math/big"
"github.com/ethereum/go-ethereum/ethclient"
)
func main() {
// Connect to the Ethereum node
client, err := ethclient.Dial("http://localhost:8545")
if err != nil {
log.Fatal(err)
}
This is a pretty good starting point for the purposes here. Just one thing is incorrect
and does not compile. That is peerCount.Cmp(minPeers), because peerCount is a uint64
and does not have a Cmp method.
You will now incorporate this code as part of the existing readinessHandler function
and make a few modifications.
Listing 5-30 shows the updated code of the readinessHandler function.
w.WriteHeader(http.StatusOK)
w.Write([]byte("OK"))
}
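To make the flow concrete, a complete handler along those lines might look like the following sketch. The ethclient.Dial call against localhost comes from the earlier listing; the PeerCount call and the minPeers threshold are assumptions rather than the book's exact code:

func readinessHandler(w http.ResponseWriter, r *http.Request) {
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		http.Error(w, err.Error(), http.StatusServiceUnavailable)
		return
	}
	defer client.Close()

	// peerCount is a plain uint64, so a direct comparison replaces the
	// non-compiling Cmp call from the generated code.
	const minPeers uint64 = 1
	peerCount, err := client.PeerCount(context.Background())
	if err != nil || peerCount < minPeers {
		http.Error(w, fmt.Sprintf("not ready, peers: %d", peerCount),
			http.StatusServiceUnavailable)
		return
	}

	w.WriteHeader(http.StatusOK)
	w.Write([]byte("OK"))
}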
Also, don’t forget to update the import statement and make sure to include github.com/ethereum/go-ethereum/ethclient, as shown in Listing 5-31.
And that’s it. You are done with the health-checks application.
As an exercise, I suggest the following improvements:
Make sure to build the image and host it in a public Docker registry.
You will now use it to configure a readiness probe in the Blockchain operator.
{
Image: "<your-dockerhub-account>/blockchain-health-
checks:latest",
ImagePullPolicy: "Always",
Name: "health-checks",
Ports: []corev1.ContainerPort{{
ContainerPort: 8080,
Name: "readiness",
Protocol: "TCP",
}},
ReadinessProbe: &v1.Probe{
ProbeHandler: v1.ProbeHandler{
HTTPGet: &v1.HTTPGetAction{
Path: "/readiness",
Port: intstr.IntOrString{
IntVal: 8080,
},
},
},
PeriodSeconds: 5,
SuccessThreshold: 3,
FailureThreshold: 3,
},
}
Note that this code uses the ReadinessProbe field of the new container to configure a
readiness probe that will be executed every five seconds.
Give the workflow a try. Run make update again and check the state in Lens.
Here is what you can observe:
Figure 5-20. All containers are healthy and the pod can handle traffic
On that note, this chapter comes to a close. There are plenty of features you could incorporate into your operator to make it more powerful. I suggest a few here:
3. Expose an API in the Blockchain spec to let the user of your CRD
choose which class of storage they want to use.
Kubernetes is a fantastic piece of machinery and I cannot wait to see what you build
with it.
Summary
This chapter reviewed and implemented a lot of concepts:
CHAPTER 6
Go Beyond: Connecting to C for a Performance Boost
One of the features of Go that hasn’t been covered much so far is its out-of-the-box
integration with other native languages, like C, or even more native, like metal for GPU
programming on macOS-based machines.
This means you’ll step a little out of your comfort zone, especially leaving behind all those batteries-included memory safety nets provided by Go, but you also get to do more, and do it differently.
As specified in the official documentation, using C is often not the best choice, and
maybe having a server running in another language or simply writing a new version of
your favorite algorithm is the best route.
But you may be short on time or have a proven library with proper C bindings, yet
you want the Go code and its build framework to handle clean interfacing.
So here goes—this chapter covers C, C++, and metal code integration with your Go
code, so you can achieve anything from image computations to simple GPU computing.
C is for Change
To improve is to change; to be perfect is to change often.
—Winston Churchill
Cgo is the GoLang core library that enables you to create Go programs calling properly
interfaced C code.
To use Cgo, you basically write normal Go code that imports a pseudo-package
called C. The Go code can then refer directly to C functions and types such as C.int,
variables such as C.stdout, or functions such as C.putchar.
Calling C
This first example prints a statement on the output. It simply calls the C code directly
from your main Go function.
You should still be in the GoLang editor, and the C code is inlined in the hello.go
file, as shown in Listing 6-1.
package main
//#include<stdio.h>
//void inC() {
// printf("Once we accept our limits, we go beyond them!\n");
//}
import "C"
func main() {
C.inC()
}
See how the C code is written in the Go file, each line prepended with //? The C code uses printf from the stdio core C library, so the stdio.h header is included at the top of the inline C code.
There is nothing extra to set up and the executed code indeed prints the quote on the
standard output.
Note the use of the Go import C, which tells Go which part of the code is coming from
C and should be resolved after compiling the inline C code.
#include<stdio.h>
void inCFile() {
printf("Once we accept our limits, we go beyond them!\n");
}
The Go code stays in its own hello.go file and calls the C code from the separate file. Note that there is no reference to the filename of the C code; as long as the code is in the same folder, Cgo resolves things automatically, as shown in Listing 6-3.
package main
/*
void inCFile();
*/
import "C"
func main() {
C.inCFile()
}
Note that the signature of the C function is still included in the Go code. You will see later in the chapter how to have a separate C header file to improve the integration.
#include<stdio.h>
#include "_cgo_export.h"
void inCFile() {
printf("Once we accept our limits, we go beyond them!\n");
callFromC();
}
package main
/*
#include<stdio.h>
void inCFile();
*/
import "C"
import "fmt"
func main() {
fmt.Println("GO: I am about to call C.")
C.inCFile()
}
//export callFromC
func callFromC() {
fmt.Println("GO: C is calling me...")
}
Executing the code compiled from Listings 6-4 and 6-5 gives you the output of
Listing 6-6, where you can see statements coming from the C and Go code.
Passing Parameters
You have been coding mostly without parameters so far. Let’s see how things work when
passing some strings to the C code. Listing 6-7 shows how the C code receives strings via
char pointers, char*.
#include<stdio.h>
#include "_cgo_export.h"
You must be extra careful when writing the Go code because C strings take memory,
and this memory allocation and deallocation is not handled by the Go garbage collector.
You have to free memory allocated to C constructs manually after using it.
Apart from memory management, Listing 6-8 also shows how to convert a Go string to a C string, pass it to the C function, and convert the returned C string back into a Go string.
package main
/*
#include<stdio.h>
#include <stdlib.h>
char* inCFile(char *str);
*/
import "C"
import (
"fmt"
"unsafe"
)
func main() {
cstr := C.CString("Go string!")
defer C.free(unsafe.Pointer(cstr))
cString := C.inCFile(cstr)
gostr := C.GoString(cString)
fmt.Println("Received string from C: " + gostr)
}
Compiling and executing this new code gives the output in Listing 6-9.
Make sure you see and understand that the defer call frees the memory allocated for
the C string, using a pointer reference.
#include "hello.h"
#include <stdio.h>
The C code itself is quite succinct; you use sprintf to format a string, and the output
of the formatting is a char* pointer. The returned value of the C code is the size of the
string located at the char pointer location.
Listing 6-12 shows the Go code, where C.malloc prepares a pointer to a string
(a char*), and C.GoBytes retrieves the string from the pointer and the size of the
returned value of the C call.
import (
"fmt"
"unsafe"
)
func main() {
name := C.CString("Einstein used to say: Once we accept our limits, we go beyond them.")
defer C.free(unsafe.Pointer(name))
This is a lot of extra work just for a simple string, but you should now understand
how to handle pointers to strings back and forth between C and Go.
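For reference, the middle of that listing follows roughly this pattern; the buffer size and the exact signature of the C function are not shown in the surrounding text, so treat them as assumptions:

	// Allocate a C-side buffer that the C code will fill in
	// (512 bytes is an arbitrary, assumed size).
	buffer := (*C.char)(C.malloc(C.size_t(512)))
	defer C.free(unsafe.Pointer(buffer))

	// Assuming inCFile takes the input string plus an output buffer and
	// returns the length of the string it wrote.
	size := C.inCFile(name, buffer)

	// Copy exactly size bytes back into Go-managed memory.
	gostr := string(C.GoBytes(unsafe.Pointer(buffer), size))
	fmt.Println(gostr)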
struct Greetings {
const char *name;
const char *quote;
};
The C code simply prints a string using data from the C struct (see Listing 6-14).
#include "hello.h"
#include <stdio.h>
The Go code is more involved, but it uses the same pieces you have seen up to now
in this chapter:
package main
func main() {
name := C.CString("Einstein")
defer C.free(unsafe.Pointer(name))
g := C.struct_Greetings{
name: name,
quote: quote,
}
Unfortunately, there is no easy way to call and use a Go struct from C. Your best bet is
to copy fields back and forth between Go and C.
Note The Cgo preprocessing seems to get confused when using separate .h and
.c files, but putting it all together in one file allows for using the typedef struct,
instead of struct, which makes for slightly cleaner Go code.
package main
// #include <stdio.h>
// #include <stdlib.h>
// #include <string.h>
// typedef struct {
func main() {
name := C.CString("Einstein")
defer C.free(unsafe.Pointer(name))
g := C.Greetings{
name: name,
quote: quote,
}
This is not a surprise anymore: in a few pages, you will be dealing with writing code
and implementing squares and averages and other statistical functions running on the
GPU threads natively.
To prepare for this, you’ll have a little adventure computing squares in Go via C. This
exercise takes you directly to calling C library functions from Go.
Listing 6-17 shows how you can call sqrt, the square root function, from the core C
math library.
package main
/*
#include <math.h>
*/
import "C"
import "fmt"
func main() {
number := 32.0
result := float64(C.sqrt(C.double(number)))
fmt.Printf("Square root of %.2f = %.2f\n", number, result)
}
// Output:
// Square root of 32.00 = 5.66
As observed, there is a need to convert between C and Go types, using C.double and float64, but the code is quite concise and clear. The output is inline in the comments of the listing, and as you can see, computing the square root of a single float is, well, fast.
Using the same C math library, and still in preparation for the later GPU code, you’ll now compute the power of 2 of each element of an array. Here again you use the math.h library, this time from within the C code, where the main algorithm is written. Then you’ll retrieve the array values in Go. Each power of 2 is computed in place, so you can simply reuse the array created in Go (see Listing 6-18).
Listing 6-18. Computing Powers of Two Using the C Math Library on a Go Array
package main
/*
#cgo CFLAGS: -g -Wall
#include <stdlib.h>
#include <math.h>
import "C"
import (
"fmt"
"unsafe"
)
func main() {
array := []float64{2.0, 3.0, 4.0, 5.0}
squareArray(array)
fmt.Printf("squared array:%v", array)
}
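The squareArray wrapper is the piece that hands the Go slice to C; a sketch might look like this. The C function name and its signature are assumptions, since the inline C part of the listing is not reproduced here:

func squareArray(a []float64) {
	// Pass a pointer to the first element plus the length;
	// the C side squares each value in place.
	C.square((*C.double)(unsafe.Pointer(&a[0])), C.int(len(a)))
}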
You can also enjoy a bit of speed. Look how fast the C code can handle a 1M-item array, using the updated main function from Listing 6-19.
func main() {
const size = 1000000
array := make([]float64, size)
for i := 0; i < size; i++ {
array[i] = float64(i)
}
squareArray(array)
fmt.Printf("Last item of squared array:%f", array[size-1])
}
That new listing builds an array of 1M values and then prints out the last squared value to give you an idea of the speed. Here again, the last statement prints out almost instantly. But enough C exercises; let’s move on to something more exciting: applying an image transformation using a library coded in C.
ImageMagick on OSX
Let’s go over the OSX version first, and then you will see the similarities and differences
when running the same code on Linux and Raspberry Pi. See Listing 6-20.
package main
/*
#cgo CFLAGS: -g -Wall -I/opt/homebrew/include/ImageMagick-7
#cgo LDFLAGS: -L/opt/homebrew/lib -lMagickWand-7.Q16HDRI -lMagickCore-7.Q16HDRI
#include <MagickCore/magick-baseconfig.h>
#include <MagickWand/MagickWand.h>
*/
import "C"
func main() {
inputFile := C.CString("../gopher3.jpeg")
outputFile := C.CString("../gopher3-sepia.jpeg")
ratio := C.double(0.98)
• MagickCore/magick-baseconfig.h
• MagickWand/MagickWand.h
• You silently used them before, now you actively require them
• Settings for CFLAGS: This is where you find the header files for the library. This depends on the package installer used to install ImageMagick, here Homebrew on OSX.
• #include <MagickCore/magick-baseconfig.h>
• #include <MagickWand/MagickWand.h>
#include <MagickWand/MagickWand.h>
MagickWriteImage(wand, outputFile);
wand = DestroyMagickWand(wand);
MagickWandTerminus();
}
• Sets the sepia tone. The magic value 65536 is the base for the image, which you multiply by a value between 0 and 2; for sepia, values mostly fall between 0.8 and 1.2.
• Writes the result to the output file.
• Cleans up.
The installation steps for OSX are briefly shown in Listing 6-22.
#!/bin/bash
brew install imagemagick
With a sepia ratio setting of 0.86, you would get something like the output of Figure 6-1, quite a deep rendering.
ImageMagick on Linux
Unfortunately, the packaging for ImageMagick, and any C library, is platform dependent.
So, the headers and flags for Linux are different, as shown in Listing 6-23.
ImageMagick on Raspberry Pi
Again, those headers change when running on Raspberry Pi, as shown in Listing 6-24.
//on raspberry pi
#cgo CFLAGS: -g -Wall -I/usr/include/ImageMagick-6 -I/usr/include/arm-linux-gnueabihf/ImageMagick-6
#cgo LDFLAGS: -lMagickWand-6.Q16
#include <magick/magick-baseconfig.h>
#include <wand/MagickWand.h>
That being said, the code runs fast on each platform and Matisse can be turned to
sepia on each of the different OSes.
The Apple Silicon MacBook Air comes with multiple GPU cores (eight on the basic M1 model), and the code presented in this section makes it easy to use all those cores for custom computations.
The first example is taken almost directly from the GPU library’s own examples: adding two arrays together within GPU code.
#include <metal_stdlib>
using namespace metal;
float a = input[input_index];
float b = input[input_index+1];
output[output_index] = a + b;
}
This first metal example is taken almost directly from the GPU library examples:
https://github.com/a-h/gpu/blob/main/examples/add/add.metal
The Go code itself has some specificities:
• The code then creates an input and an output of custom sizes and
types via gpu.NewMatrix
Listing 6-26 shows the contents of the Go part calling the metal code.
package main
import (
_ "embed"
"fmt"
"github.com/a-h/gpu"
)
//go:embed add.metal
var source string
func main() {
// Compilation has to be done once.
gpu.Compile(source)
count := 100000000
input := gpu.NewMatrix[float32](2, count, 1)
z := input.D - 1
for y := 0; y < input.H; y++ {
for x := 0; x < input.W; x++ {
input.Set(x, y, z, float32(y))
}
}
output := gpu.NewMatrix[float32](1, input.H, 1)
// Run code on GPU, includes copying the matrix to the GPU.
gpu.Run(input, output)
Running this code, you can confirm the GPU activity it generates by opening the GPU view in the Activity Monitor of OSX, typing Command+4, or using Window ➤ GPU History (see Figure 6-2).
You can see the GPU threads in action in real time when running the Go code (see
Figure 6-3).
To visualize what is happening, not on the GPU directly but in the values it computes, you need to prep the code for plotting data. You will use plotter and add helper functions to format the inputs (and outputs), as shown in Listing 6-27.
package metal
import (
"fmt"
"github.com/a-h/gpu"
"gonum.org/v1/plot"
"gonum.org/v1/plot/plotter"
"gonum.org/v1/plot/vg"
"image/color"
)
p := plot.New()
p.Title.Text = title
p.X.Label.Text = "i"
p.Y.Label.Text = "value"
l, _ := plotter.NewLine(input)
l.LineStyle.Width = vg.Points(1)
l.LineStyle.Color = color.RGBA{R: 255, A: 255}
p.Add(l)
package metal
import (
"encoding/csv"
"os"
"strconv"
)
reader := csv.NewReader(file)
data, _ := reader.ReadAll()
Bridging Listings 6-27 and 6-28, you can now write a simple Go script that uses those
two helper functions and generates a graph from open quotes. See Listing 6-29.
package main
import (
_ "embed"
"github.com/hellonico/libgpu/pkg/metal"
)
func main() {
opens, _ := metal.GetOpensCloses("sample-data/ETHUSD_hourlies.p.csv")
This time the output is slightly more visual, and you can see the quotes graph in the
simple opens.png file in Figure 6-4.
But let’s go back for more and start processing data on the GPU and use the helper
functions to plot the diverse outputs.
• Do the processing
Only the GPU/metal code will really change, so the Go code is included only once in
the book and not repeated after.
This Go code iteration is indeed quite generic and builds on the first GPU example
that you had for adding values. It does the following:
• Embeds the metal code as a string using the nicely named embed.
• Compiles the embedded metal code for the GPU kernel via gpu.
Compile.
• Loads data from the sample CSV code to the input matrix.
The full code is shown in Listing 6-30, but make sure you adapt it, depending on the
metal code you are working on.
Listing 6-30. Go Code to Call GPU Processing and Plot the Output Matrix
package main
import (
_ "embed"
"fmt"
"github.com/a-h/gpu"
"github.com/hellonico/libgpu/pkg/metal"
)
//go:embed normalize.metal
var source string
func main() {
gpu.Compile(source)
opens, _ := metal.GetOpensCloses("sample-data/ETHUSD_hourlies.p.csv")
gpu.Run(input, output)
metal.PlotMe("Normalized", metal.MatrixToXY(output))
As usual, you can then adapt the code proposed to your needs:
• Does a simple loop over all the input values and recomputes the average each time.
Plotting the generated output from the code gives you the two-day moving average, as shown in Figure 6-5.
For the 100-day moving average, you get the image in Figure 6-6. The line graph in Figure 6-6 is smoother, as it should be for a 100-day moving average.
It uses a sequential C loop to compute each of the output values, while they could be computed in parallel by different GPU threads.
Let’s take another approach: specify which index of the output matrix you are working on in the kernel code, and then limit the moving average computation loop, thus allowing more threads to run in parallel (see Listing 6-32).
The custom plotting code turns the output into a visual plot. Figure 6-7 shows the
Moving Average for a ten-day window size.
Normalized Set
This exercise adds an extra level of difficulty. Say you want to compute the normalized
set, from the input, so that all values of the input set are between 0 and 1. As you may
know, to achieve this, you need to first compute the sum of squares.
Obviously, you could recompute that sum every single time, given how fast the GPUs
are, but let’s try to be subtle and create a cache value of the sum of squares (and also the
normalization factor and the variance.)
You need to synchronize that value among the GPU threads while still keeping your
parallel processing ability. Listing 6-33 shows how it is done.
Listing 6-33. Normalized Set with Cache and Synchronization Between Threads
int input_index =
idx(gid.x, gid.y, gid.z,p->w_in, p->h_in, p->d_in);
int output_index =
idx(0, gid.y, 0,p->w_out, p->h_out, p->d_out);
// variance
float variance = normalizationFactor / sumOfSquares;
The resulting set from the Go code can be turned into a graph again, and Figure 6-8
shows the values of the normalized set.
• -1 means the sets are probably negatively correlated. (If one set moves one way, the other set moves in the other direction.)
• 1 means the sets move in the same direction and are positively correlated.
The Pearson coefficient itself is a single value, but the moving version computes the Pearson coefficient at each point in time of the input set.
The AI came pretty close. After a few fixes, you get the code from Listing 6-34.
sumX += x;
sumY += y;
sumXY += x * y;
sumX2 += x * x;
sumY2 += y * y;
}
Without going into too many details, you can see that the code is using a flat
sequential loop, so an exercise for you would be to write metal code (or prompt
ChatGPT) to maximize parallelism.
That being said, the values are correctly generated. The values for the moving
Pearson coefficient are shown in Figure 6-9.
Sepia Gopher
This last example, which ends your GPU voyage, takes you back to changing color like
you did for Matisse and the ImageMagick library.
The metal code is the code from the examples of the GPU library with custom
sepia values.
At this stage, you should be very proficient at reading metal code. One small note here is that the kernel only does one pass per four values of the 1D matrix, since the pixels (of four values each) are encoded one after the other, sequentially. See Listing 6-35.
uint3 gid[[thread_position_in_grid]]) {
// Only process once per pixel of data (4 uint8_t)
if(gid.x % 4 != 0) {
return;
}
uint8_t r = input[input_index+0];
uint8_t g = input[input_index+1];
uint8_t b = input[input_index+2];
uint8_t a = input[input_index+3];
If the plan doesn’t work, change the plan, not the goal.
—Anonymous
As well as on GitHub:
https://github.com/arrieta/golang-cpp-basic-example/
These tricks use a custom library containing a C wrapper around the C++ code, and
then call that C wrapper from Go and execute the needed code.
Again, a bit extreme, not really documented, but working crazy well enough that it is
worth being presented in this book.
Figure 6-12 shows the folder structure and the files required for this C++ example to
work properly.
• bridge.cpp: Contains the C++ code calling the OpenCV C++ code
• Makefile: The magic glue to compile and link this custom library
#pragma once
#ifdef __cplusplus
extern "C" {
#endif
int callopencv();
#ifdef __cplusplus
} // extern "C"
#endif
Then onto the C++ code that calls the OpenCV functions, as shown in Listing 6-37.
int callopencv() {
// Load the image
cv::Mat image = cv::imread("input.jpeg");
if (image.empty()) {
printf("Failed to load the image.\n");
return -1;
}
return 0;
}
The OpenCV code has again been generated by ChatGPT with the following prompt:
In opencv C++ code, using opencv operations on mat, show how to load and
turn an image into sepia.
Apart from fixing the wrong headers, the generated code in Listing 6-37 required almost no modifications.
The makefile is the most involved part of this section. It assumes you are using
clang++ and that the OpenCV library has been installed using Homebrew (brew
install opencv). Then the makefile compiles and creates the shared library called
libmyopencv.so.
Listing 6-38 includes opencv_highgui, which is not required for this example, but
you may find it useful for other usual OpenCV tasks.
Listing 6-38. Makefile to Compile the Custom Shared Library and Go Code
.PHONY: all
all: main
myopencv.so:
	/usr/bin/clang++ -o libmyopencv.so *.cpp -std=c++20 -O3 -Wall -Wextra -fPIC -shared -I/opt/homebrew/include/opencv4 -L/opt/homebrew/lib -lopencv_core -lopencv_imgcodecs -lopencv_imgproc -lopencv_highgui
main: myopencv.so
go build callopencv.go
Make (pun intended) sure the paths to the include and library folders are correct. Those shown here are for OSX, so you will need to update them for Linux and other platforms.
Finally, the simple Go code is shown in Listing 6-39.
package main
func main() {
C.callopencv()
}
The only new part of the Go code is that it loads the custom library and calls the
function defined in the bridge.h header file.
Also note the inclusion of the current folder to locate the library generated by the
make call.
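Putting the pieces together, the complete callopencv.go might look like the following sketch; the -L. flag points at the current folder as mentioned above, but the exact cgo directives in the book’s listing may differ:

package main

/*
#cgo LDFLAGS: -L. -lmyopencv
#include "bridge.h"
*/
import "C"

func main() {
	// Delegate all the OpenCV work to the C wrapper in the shared library.
	C.callopencv()
}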
Calling make shows the libmyopencv.so library (from the C and C++ code) and the
callopencv binary (from the Go code) files that have been generated (see Figure 6-13).
Provided you execute the command from the same folder, you get a newly generated output.jpg file, with a yellowed, but happy, gopher (see Figure 6-14).
The next step is to tweak the kernel values used for the OpenCV transformation in
the bridge.cpp file. Then you can turn the gopher blue or red.
The other examples from the https://github.com/arrieta/golang-cpp-basic-
example/ GitHub repository are worth looking at, especially the goroutines folder,
which uses Go routines to run CPU-heavy tasks and proves that Go handles the load and
the scheduling between the tasks very well.
The irony is that the actual story starts from the moment we think that
everything has ended.
—Aaliya Mallick
Summary
So, here you are, at the end of this longer-than-expected chapter. At this point, you should have a complete understanding of how to:
• Install and call C-based libraries from Go code and ensure that no
memory leaks are unintentionally created.
• Process datasets using metal on Apple GPUs.
• Have some new coding ideas by calling C++ from Go and using
OpenCV to perform fast image processing.
CHAPTER 7
Alef from Plan 9
Then took the other, as just as fair,
And having perhaps the better claim,
Because it was grassy and wanted wear;
Though as for that the passing there
Had worn them really about the same,
—“The Road Not Taken” by Robert Frost
Computer science is a discipline filled with intricate narratives and complexities despite
its relatively short history. The evolution of operating systems and programming
languages follows a linear trajectory in the textbooks, with disruptive innovations
occasionally surfacing along the timeline. Unfortunately, many alternative systems
that failed to gain mainstream acceptance are often overlooked and forgotten. Yet,
hidden within these off-path branches lie invaluable lessons and insights waiting to be
rediscovered.
During my early days as a computer science student in the 1990s, I had the privilege
of experiencing firsthand the computer science renaissance. It was a period teeming
with innovation, where operating systems, computer architectures, and programming
languages bloomed like wildflowers. In this odd chapter, as I reflect upon my fond
memories of that era, I find great pleasure in recounting the journey shaped by the
offbeat systems of NeXT, Plan 9, and Alef.
The NeXT computer, a technological marvel ahead of its time, played a pivotal
role in my exploration of unconventional computing platforms. It was a machine that
embodied Steve Jobs’ vision, offering advanced features and a powerful development
environment. While the world was fixated on mainstream choices, I found solace in the
unique capabilities of the NeXT computer. Its sleek design, innovative object-oriented
programming model, and unparalleled multimedia capabilities ignited my passion for
seeking unconventional paths. While NeXT failed to go mainstream, NeXTSTEP found a
home in Mac OSX after Apple acquired NeXT in 1996.
I stumbled upon Plan 9 in a newsletter article that mentioned that Bell Labs had
ported Plan 9, an operating system that dared to challenge the established norms of
distributed computing. Developed by the brilliant minds at Bell Labs, Plan 9 envisioned
a future where seamless communication and collaboration between machines and
users would be the norm. Its revolutionary 9P protocol and distributed filesystem model
blurred the boundaries between local and remote systems. Plan 9 was designed to be the
successor to UNIX, the OS born in Bell Labs in the 1970s. Many of the original developers
of UNIX are on the Plan 9 team and saw it as a chance to “fix” UNIX. They took the ideas
they had for UNIX to their logical conclusion. As Ken Thompson said jokingly, “I’d
remember to spell ‘create’ correctly this time.”
The experience with Plan 9 was nothing short of awe-inspiring, fueling my curiosity
and expanding my understanding of distributed systems. At the heart of Plan 9 lay
Alef, a programming language specifically designed for system programming tasks.
Alef drew inspiration from many languages, merging the best features of Pascal, C, and
Concurrent Euclid. With its emphasis on concurrent programming and interprocess
communication, Alef shattered the conventional notions of sequential execution
that had dominated my studies. Its concise syntax and expressive power could tackle
complex system-level challenges easily. Through Alef, I delved into the realm of
lightweight processes and built distributed applications that harnessed the full potential
of Plan 9. Although Alef disappeared after the second edition of Plan 9, its influence lived
on in Limbo in the Inferno OS and in the Go programming language.
As I wrote this unconventional chapter for a book centered on Go, a language with
a great Plan 9 and Alef heritage, I went on a nostalgic journey, with memories of the
intellectual fervor and the boundless sense of possibility that permeated the computer
science landscape of the 1990s. Ideas and concepts from those vibrant days, brimming
with a myriad of operating systems, diverse computer architectures, and a cornucopia
of programming languages, are now resurfacing as new systems are developed. Though
these alternative paths may not have attained widespread acceptance, their impact and
the lessons they offer remain undeniably significant and should not be overlooked.
In a world where dominant narratives often overshadow unconventional ideas, it is
vital to remember the contributions of the offbeat systems. They remind us that progress
is not solely measured by market share or popular opinion, but by the depth of ideas
explored and the impact they have on shaping our collective knowledge.
The Plan 9 operating system’s /net directory treats all network resources as files.
This directory serves as a virtual filesystem that encapsulates a wealth of information
and functionalities related to networking. By representing network resources as files,
Plan 9 simplifies the management and interaction with the network, providing a
unified and consistent approach. Within the /net directory, one can discover files
representing network interfaces, connections, and services. These files enable users to
manipulate network settings, establish connections to remote machines, and perform
various network-related tasks using familiar file operations and tools. The file-based
representation of network resources in the /net directory exemplifies Plan 9’s elegant
design philosophy, fostering simplicity and uniformity throughout the operating system
(see Figure 7-1).
Everything is a file in Plan 9. Windows are viewed and interacted with as files in the
filesystem. This approach introduces a level of abstraction that allows for unified and
consistent handling of graphical user interfaces (GUI) and user interactions. Each window
is represented as a file, and operations such as reading, writing, and seeking can be
performed on these window files. By treating windows as files, Plan 9 provides a seamless
integration of GUI elements into the overall file-based paradigm of the operating system.
This design choice simplifies the development of GUI applications and enables efficient
communication and sharing of data between different windows and processes. As you can
see in Figure 7-2, five windows correspond to five files under /dev/wsys.
this idea to the design of a Web3 distributed computing framework could provide a
unified abstraction layer that allows for seamless and uniform interactions with diverse
resources and protocols.
Additionally, Plan 9’s network transparency and remote file access capabilities are
worth considering in the context of Web3. The ability to treat remote resources as if they
were local files greatly simplifies distributed computing and fosters collaboration across
different nodes and networks. By incorporating similar features into a Web3 framework,
you can enable decentralized applications to transparently access and utilize resources
across a network, promoting a more inclusive and interoperable ecosystem.
Go, the programming language with lineage traced back to Plan 9 and popularity
among Web3 developers, is an ideal platform to welcome back some of the visitors
from Plan 9.
Hello Tuple!
Tuples were a favored data structure in concurrent languages in the 1980s. Using
tuples in concurrent programming languages such as Alef stems from their ability
to encapsulate multiple values into a single entity, facilitating concise and efficient
handling of related data. With their immutability and support for heterogeneity, tuples
effectively organize and pass around data within concurrent programs. Instead of the
first-class language constructs, there are several projects, as shown in Listing 7-1.
void
main()
{
int a;
byte* str;
byte c;
(a, str, c) = func();
}
#include <alef.h>
void
receive(chan(byte*) c)
{
byte *s;
s = <-c;
print("%s\n", s);
terminate(nil);
}
void
main(void)
{
chan(byte*) c;
alloc c;
proc receive(c);
c <-= "Hello, World!";
terminate(nil);
}
Channels and processes are the cornerstones of Alef, establishing them as first-
class constructs within the language. Alef places significant emphasis on concurrent
programming, and channels and processes are the key components that enable effective
communication and synchronization between concurrent entities.
Channels serve as the primary means of communication and synchronization in
Alef. They provide a safe and efficient way for tasks and processes to exchange data
and coordinate their actions. Channels are created using the channel declaration syntax,
allowing programmers to define the type of data that can be transmitted. Sending and
receiving messages on channels occur through dedicated send-and-receive operations.
This design choice ensures explicit synchronization between concurrent entities,
promoting orderly communication and preventing race conditions.
The ability to declare and manipulate channels as first-class constructs grants Alef a
high degree of flexibility and expressiveness. Channels can be buffered, allowing them
to hold a limited number of messages, which introduces a level of decoupling between
senders and receivers. Buffered channels enable non-blocking operations when the
buffer is not full or empty, facilitating data flow between concurrent components
without unnecessary delays.
In Alef, processes are also treated as first-class constructs, elevating their significance in
concurrent programming. Processes are separate instances that execute concurrently, and
they communicate with each other using channels. This approach enables a higher level of
concurrency and encapsulation, as each process maintains its own set of tasks and
executes independently. The isolation of processes enhances reliability, security, and fault
tolerance by preventing unintended interference between concurrent entities.
Including processes as first-class constructs enables Alef to handle complex
concurrent scenarios more effectively. By organizing concurrent entities into distinct
processes, programmers can structure their code in a modular and hierarchical manner,
leading to better code organization and maintainability. Spawning and terminating
processes provide fine-grained control over concurrent execution, allowing for the
dynamic creation and destruction of concurrent units as needed.
Alef’s treatment of channels and processes as first-class constructs underpins
the language’s ability to handle concurrency effectively. Channels facilitate safe
communication and synchronization between concurrent components, while processes
enable the encapsulation and coordination of concurrent execution. By providing
dedicated support for these constructs, Alef empowers programmers to write concurrent
programs that are expressive, reliable, and scalable. The integration of channels and
processes as first-class entities showcases Alef’s commitment to providing a strong
foundation for concurrent programming.
The topic of OS-level threading and goroutines has been a subject of intense
discussion within the Go (GoLang) community, focusing on achieving effective
parallelism and concurrency. The debate surrounding processes versus threads remains
an ongoing issue in contemporary operating systems. In this context, the introduction of
user-space cooperative threads and OS-level threads adds complexity to the discussion.
Plan 9 recognized the challenges associated with processes and threads early on. It
identified the need for finer control over resources when creating new processes. The traditional fork() system call found in most operating systems corresponds to a restricted special case of Plan 9’s more general rfork(). This distinction allows for more precise management of resources and provides greater flexibility in controlling the behavior of new processes.
By incorporating rfork() as a fundamental mechanism, Plan 9 introduced a novel
approach to process creation and resource control. This approach became instrumental
in addressing the complexities surrounding parallelism and concurrency. Plan 9’s
innovative design and resource management mechanisms set the stage for exploring
and developing efficient and scalable concurrent programming models.
In the Go programming language, goroutines serve as lightweight concurrent units
of execution, allowing for highly concurrent and efficient code. Goroutines are not tied to
OS-level threads directly but are multiplexed onto a smaller number of threads managed
by the Go runtime. This approach mitigates the overhead associated with OS-level threads while still providing concurrency and parallelism in Go programs.
The consideration of process versus thread models, coupled with Plan 9’s insights
and Go’s goroutine model, showcases the ongoing evolution and exploration of
parallelism and concurrency in modern operating systems and programming languages.
By addressing the complexities early on and providing innovative solutions, Plan 9’s
influence can be seen in the design of Go, offering developers powerful tools to achieve
effective concurrency and parallelism while maintaining fine-grained control over
resources (see Figure 7-3).
void
kbdtask(chan(int) kbdc)
{
int r;
for(;;) {
r = <−kbdc;
/* process keyboard input */
}
}
void
mousetask(chan(Mevent) mc)
{
Mevent m;
for(;;) {
m = <−mc;
/* process mouse input */
}
}
void
main(void)
{
chan(int)[100] kbd;
chan(int) term;
chan(Mevent) mouse;
alloc kbd, mouse, term;
proc kbdproc(kbd, term), mouseproc(mouse, term);
task kbdtask(kbd), mousetask(mouse);
<−term; /* main thread blocks here */
postnote(PNPROC, mousepid, "kill");
postnote(PNPROC, kbdpid, "kill");
exits(nil);
}
b. Provide a name for the virtual machine and select the appropriate Type and
Version (e.g., Other and Other/Unknown, respectively) (see Figure 7-4).
c. Use the configuration for the drive, mouse, and display (see Figure 7-12).
11. Mount the filesystem. Use the defaults, as shown in Figure 7-21.
12. Install the distribution from the local media (see Figure 7-22).
15. Start copying the OS files with copydist (see Figure 7-25).
16. Install the OS. It’s going to take a while; go for a coffee break (see Figure 7-26).
20. Unmount the ISO disk by going to the Storage area (see
Figure 7-30).
b. You are welcomed to Plan 9 with the Rio window system. The
Terminal and the ACME editor are running by default (see
Figure 7-32).
23. Shut down the system by typing fshalt in the terminal window
(see Figure 7-33).
Perhaps, on your way home, someone will pass you in the dark, and you
will never know it... for they will be from outer space.
—Plan 9 from Outer Space
Index
A performance evaluation, 175–177
recipe
Alef programming language, 318
compliance, 153
Bell Labs, 323
confidence levels, 154
channels/processing, 324–326
macroeconomic tendencies, 151
proc/task, 326–329
performance evaluation, 153
programming languages, 323
preparation, 150
tuples, 324
risk management, 152
scalability/maintainability, 153
B security, 154
Backtesting, 149, 153–155, 160–161, structured approach, 150
175–176, 184, 185, 194 testing/debugging, 152, 153
BindJSON function, 53 timeframe, 152
BlockchainReconciler Get function, 234 trading strategies, 155
Blockchain trading secret sauce, 147–149
automate trading, 144–147 sharpe ratio, 178, 179
cooking success/failure, 183–187
backsetting, 160 taste, before serving meal, 187–193
code, 170–174 unique environment, 139
data, 161–163 Wall Street bankers, 140
discipline/consistency, 169
EMAs, 167, 168
features, 169, 170 C
indicators, 163–166 C
run bot, 174 calling, 272, 273
dessert, 197, 199, 201–203 code, 273, 274
dinner is served, 193, 194, 196 Go code, 274, 275
financial markets, 141–143 header file, 277, 278
GoLang, advantages, 210 ImageMagick, 284, 285
kitchen utensils, 155, 157–160 Linux, 288
MAR ratio, 180–182 OSX, 285–288
modeling cycles, 140 Raspberry Pi, 288, 289
C (cont.) E
OpenCV from Go, 309–314
Ethereum, 210, 214, 230, 244, 246
passing parameters, 275, 276
EVM-compatible blockchain
struct, Go, 278–283
networks, 214
Cgo, 272–274, 279, 280, 309
EVM-compatible blockchains, 214,
ChatGPT client
246, 265
AI-trained chatbot, 1
Exponential moving averages (EMAs), 167
API Keys, 34, 35
basic steps, 34
customize request, 38–40 F
debugging GoLand, 8, 10 Financial economy, 142–143
first request, 36, 37
GoLand, 2, 8, 10
loop prompt, 41 G
query/custom models, 43, 44 generativeart library, 68
run/debug program, 2, 5–7 generativearts algorithms, 71
streaming response, 42, 43 Gin framework, 48, 56, 76, 84, 85
Classic trading strategy, 172 godotenv library, 23
Cmp method, 263 Go function, 123, 272, 274
compareSlices helper GoLand, 2, 3, 8–10, 24, 64, 87, 95
function, 253–255 Go program
controllerutil.SetControllerReference API key, custom library, 21, 23, 24
function, 244 contexts, 30–33
Cryptocurrencies, 140, 159, 183, 195, custom data, 14–17
197, 198 Go channels, 25, 26, 28, 30
Custom resource definition (CRD), 217, Go routines, 25
220–221, 225–227, 248, 268 new project, 11, 12
program arguments, slicing, 20
read from file, 13
D read user input, 12, 13
davinci model, 45 structs, writing/reading, 17–20
Debugging, 8–10, 92, 150, 152–153, 158 techniques, 45
Decentralized applications (dApps), 197, Go programming language (GoLang),
210, 211, 214, 322 14–16, 47, 196–199, 210–211, 272,
dotenv library, 21–24 310, 327
drawOne function, 73 Go routines, 25–32, 57, 314
drawScene function, 130 GPU coding, OSX
Q
O Queues
OpenAI models, 44, 45 concert, 56
OpenCV functions, 311 custom data type, 61, 62
Operator-SDK framework, 219, 220, Go program’s output, 66
222–226, 233, 235, 238 go-queue, 59, 60
os.Args, 20 Go routines, 57, 58
IDE, 65
jobData type, 61
P JSON, 61
Paper trading, 149, 176, 187, 190–193 NSQ, 64, 65, 67
Pearson correlation coefficient, 304–306 nsqlookupd, 63
Plan 9 single clerk handling, 57
R T, U
Raspberry Pi, 48, 61, 159, 285, Tile set-based game
286, 288–289 ChatGPT
Real-time trading simulation, 149, 153, display date, real time, 97, 98, 100
187, 188, 190 hangman game, 100–103
reconcileCommand function, 253–255 framebuffer, 94
reconcileContainerPorts function, 257 game setup, 95, 96
reconcile function, 222, 224, 238, 251 Raylib, library, 94
reconcileResources function, 256 Tom Demark’s indicators, 208
ReconcileStatefulSet function, 239–242, Trading strategy, 149, 160, 172, 177–180,
248, 266 190, 195, 204
2D gaming interfaces
multiplayer games, 93
S tile set-based game, 94
Sharpe ratio, 178–180, 183, 209
Size function, 97
sleepSomeTime function, 58, 61 V, W, X, Y, Z
stdio core C library, 273 Virtual trading, 149, 187