Web Development with Go
Building Scalable Web Apps and RESTful Services
Shiju Varghese
I would like to dedicate this book to my parents, the late C.S. Varghese and Rosy Varghese. I would
like to thank them for their unconditional love and life struggles for the betterment of our lives.
I would like to dedicate this book to my lovely wife Rosmi and beautiful daughter Irene Rose.
Without their love and support, this book would not have been possible.
Finally, I would like to dedicate this book to my elder sister Shaijy and younger brother Shinto.
Shiju Varghese
Contents at a Glance

About the Author xv
About the Technical Reviewer xvii
Introduction xix
Chapter 2: Go Fundamentals 15
Chapter 6: HTTP Middleware 99

Contents

About the Author xv
About the Technical Reviewer xvii
Introduction xix

Chapter 1: Introducing Go
Go Ecosystem 4
Installing the Go Tools 4
Checking the Installation 6
Writing Go Programs 8
Writing a Hello World Program 8
Writing a Library 9
Testing Go Code 11
Using Go Playground 12
Using Go Mobile 13
Go as a Language for Web and Microservices 13
Summary 14

Chapter 2: Go Fundamentals 15
Packages 15
Package main 15
Package Alias 16
Function init 16
Using a Blank Identifier 17
Importing Packages 18
Install Third-Party Packages 18
Writing Packages 19
Go Tool 21
Formatting Go Code 22
Go Documentation 23
Error Handling 33
Summary 34

Chapter 3
Type Composition 40
Overriding Methods of Embedded Type 43
Working with Interfaces 44
Concurrency 50
Goroutines 50
GOMAXPROCS and Parallelism 53
Channels 54
Summary 58

Chapter 4
DefaultServeMux 66
http.Server Struct 67
Gorilla Mux 69
Building a RESTful API 70
Data Model and Data Store 72
Configuring the Multiplexer 73
Handler Functions for CRUD Operations 74
Summary 77

Chapter 5
Summary 97

Chapter 6: HTTP Middleware 99
Introduction to HTTP Middleware 99
Writing HTTP Middleware 100
How to Write HTTP Middleware 101
Writing a Logging Middleware 101
Summary 120

Chapter 7
Summary 139

Summary 249
Summary 282
References 283
Index 285
Introduction
Go, often referred to as Golang, is a general-purpose programming language that was developed at Google and announced in November 2009.
Several programming languages are available for writing different kinds of software systems, and some of them have existed for decades. Many mainstream programming languages evolve by adding more and more features with each new version; both C# and Java already provide a great many features in their language specifications.
At the same time, a lot of innovation is happening in computer hardware and IT infrastructure. Software systems are written in feature-rich programming languages, but those languages don't let us fully leverage the power of modern computers and IT infrastructures. We are still using programming languages that were created in the era of single-core machines to write applications for multicore machines.
Just like everything else, computer programming languages are evolving. Go is an evolutionary step: a simple, pragmatic language for writing software systems for modern computers and IT infrastructures. On the Go web site at https://golang.org/, Go is defined as follows:
Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.
Go is designed for solving real-world problems rather than expressing academic theories. It is a pragmatic programming language that ignores much of the programming language theory (PLT) that has evolved over the last three decades; instead, it provides a simple programming model for building efficient software systems, with first-class support for concurrency. Go's built-in concurrency gives you an exciting programming experience for writing highly efficient software systems. Every programming language has a design goal; Go is designed to be a simple programming language, and it excels as a simple and pragmatic one.
Go is the language of choice for building many innovative software systems, including Docker and Kubernetes, and many existing systems, such as the Parse MBaaS from Facebook, are being re-engineered in Go. I have assisted several organizations in adopting Go successfully, and the adoption process was extremely easy thanks to Go's simplicity and pragmatism. I am sure that you will be excited about Go when you develop real-world software systems with it.
Go is a general-purpose programming language that can be used to build a variety of software
systems, including networked servers, system-level applications, infrastructure tools, DevOps, native
mobile applications, graphics, the Internet of Things (IoT), and machine learning. Go can be used for
building native mobile applications, and I predict that Go will be a great choice for building native Android
applications in the near future.
Go is a great choice of language for building web applications and back-end APIs. I highly recommend Go for building massively scalable back-end RESTful APIs. I predict that Go will be the language of choice in the enterprise for building back-end RESTful APIs, which are the backbone of modern business applications in this era of mobility.
In this book, I assume that you have knowledge of at least one programming language and have some
experience in web programming. If you have prior knowledge of Go, it will help you follow along in this
book. If you are completely new to Go, I recommend the following tutorial before you start reading the book:
http://tour.golang.org/welcome/1.
When you go through the language fundamentals, I recommend accessing the following section of the
Go documentation: https://golang.org/doc/effective_go.html.
The primary focus of this book is web development using the Go programming language. Before diving into web development, the book quickly goes through language fundamentals and concurrency, but doesn't delve too deeply, especially regarding concurrency. You should spend some time exploring concurrency if you want to effectively leverage it in your real-world applications. I recommend the following resource for learning more about concurrency and parallelism: http://blog.golang.org/concurrency-is-not-parallelism.
This book explores various aspects of Go web programming, with a focus on providing practical code.
Chapter 9, Building RESTful Services, can help you to start developing real-world APIs in Go.
I have created a GitHub repository for this book at https://github.com/shijuvar/go-web. The repository provides the example code for the book and will include a few example applications to help you build real-world web applications.
Chapter 1
Introducing Go
Go, also referred to as Golang, is a general-purpose programming language developed by a team at Google and many contributors from the open source community (http://golang.org/contributors). The language was announced in November 2009, and the first stable version, Go 1, was released in March 2012. Go is an open source project that is distributed under a BSD-style license. The official web site of the Go project is available at http://golang.org/. It is a statically typed, natively compiled, garbage-collected, concurrent programming language that mostly belongs to the C family of languages in terms of basic syntax. Let's look at some of the features of Go to understand its design principles.
Although Go has fewer language features than many mainstream languages, its pragmatic design does not hurt productivity. A new Go programmer can quickly learn the language and easily start developing production-quality applications. Go simply ignores many language features of the last three decades and focuses on real-world practices instead of academics and programming language theory (PLT).
From a practical perspective, you might say that Go is an object-oriented programming (OOP) language. But Go's object-oriented approach is different from that of programming languages such as C++, Java, and C#. Go is not a full-fledged OOP language from an academic perspective. Unlike many existing OOP languages, Go does not support inheritance and does not even have a class keyword; it favors composition over inheritance through its simple type system. Go's interface type design shows its uniqueness when compared with other object-oriented programming languages.
Is Go an OOP language? The answer is both yes and no. The Go language includes all the batteries required for writing applications with an object-oriented approach, but it is not a complete OOP language because it lacks some traditional OOP features.
Note Programming language theory (PLT) is a branch of computer science that deals with the design,
implementation, analysis, characterization, and classification of programming languages and their individual
features.
In Go, concurrency is built into the language and is designed for writing high-performance concurrent applications for modern computers. Concurrency is one of the unique features of the Go language, and it is considered a major selling point. Go's concurrency is implemented using two unique features: goroutines and channels. A goroutine is a function that can run concurrently with other goroutines. It is a lightweight unit of execution: many goroutines are multiplexed onto a small number of operating system threads, which improves program performance and efficiency. The most important characteristic of a goroutine is that it is managed and executed by the Go runtime. Many programming languages provide support for writing concurrent programs, but that support is often limited to communication and synchronization among operating system threads, and it is frequently provided through a framework rather than as a built-in language feature, which imposes restrictions when concurrency is implemented in these languages.
Go provides channels that enable communication between goroutines and the synchronization of
their executions. With channels, you can send data from one goroutine to another. Channels also provide a
greater level of synchronization between goroutines and ensure that two goroutines are running in a known
state. Concurrency is a major reason for adopting Go as the language for building highly efficient software
systems with greater levels of performance.
Go as a General-Purpose Language
Different programming languages are used to develop different kinds of applications. C and C++ have been widely used for systems programming and for systems in which performance is very critical. At the same time, working with C and C++ affects the productivity of application development. Other programming languages, such as Ruby and Python, offer rapid application development that enables better productivity. Although the server-side JavaScript platform Node.js is good for building lightweight JSON APIs and real-time applications, it struggles when CPU-intensive programming tasks are executed. Yet another set of programming languages is used for building native mobile applications; languages such as Objective-C and Swift are restricted to mobile application development. In short, various programming languages are used for a variety of use cases, such as systems programming, distributed computing, web application development, enterprise applications, and mobile application development.
The greatest practical benefit of using Go is that it can be used to build a variety of applications, including
systems that require high performance, and also for rapid application development scenarios. Although Go
was initially designed as a systems programming language, it is also used for developing enterprise business
applications and powerful back-end servers. Go provides high performance while keeping high productivity
for application development, thanks to its minimalistic and pragmatic design. The Go ecosystem (which
includes Go tooling, the Go standard library, and the Go third-party library) provides essential tools and
libraries for building a variety of Go applications. The Go Mobile project adds support for building native
mobile applications for both Android and iOS platforms, enabling more opportunities with Go.
In the era of cloud computing, Go is a modern programming language that can be used to build
system-level applications; distributed applications; networking programs; games; web apps; RESTful
services; back-end servers; native mobile applications; and cloud-optimized, next-generation applications.
Go is the language of choice for many revolutionary, innovative systems such as Docker and Kubernetes, and a majority of tools in the software containerization ecosystem are written in Go.
Note Docker is a revolutionary software container platform, and Kubernetes is a container cluster manager.
Both are written in Go.
Go Ecosystem
Go is not just a simple programming language; it is also an ecosystem that provides essential tools and
features for writing a variety of efficient software systems. The Go ecosystem contains the following
components:
Go language
Go libraries
Go tooling
The Go language provides the essential syntax and features that allow you to write your programs. These programs leverage libraries for reusable pieces of functionality, and tooling for formatting code, compiling code, running tests, installing programs, and creating documentation.
Libraries play a key role in the Go ecosystem because Go is designed to be a modular programming
language for writing highly maintainable and composable applications. Libraries provide reusable pieces
of functionality distributed as packages. You can use packages in Go to write software components in a
modular and reusable manner to be shared across Go programs, and can easily maintain your applications.
The design philosophy of a Go application is to write small pieces of software components through
packages and then compose Go applications with these smaller packages. Libraries are available from the
standard library and third-party libraries. When you install Go packages from the standard library, they
are installed into the Go installation directory. When you install Go, the environment variable GOROOT
will be automatically added to your system for specifying the Go installation directory. The standard
library includes a large set of packages that provide a wide range of functionality for writing real-world applications. For example, "net/http", a package from the standard library, can be used to write powerful web applications and RESTful services.
Note For documentation about packages from the standard library, go to http://golang.org/pkg/.
If you need extra functionality not available from the Go standard library, you can leverage third-party libraries provided by the Go developer community, which is very enthusiastic about developing and sharing useful third-party Go packages. For example, if you want to work with the MongoDB database, you can leverage a third-party package called "mgo".
Go tooling is an important component in the Go ecosystem, which provides a number of
tooling-support services: building, testing, and installing Go programs; formatting Go code; creating
documentation; fetching and installing Go packages; and so on.
Figure 1-1 shows the installer packages and archived sources for Mac, Windows, and Linux platforms, which are listed on the download page of the Go web site. Go provides installers for both Mac and Windows OSs. A package installer is available for OS X that installs the Go distribution to /usr/local/go and puts the /usr/local/go/bin directory in the PATH environment variable.
Note The complete instructions for downloading and installing the Go tools are available at
http://golang.org/doc/install.
Go Workspace
Go programs must be kept in a directory hierarchy called a workspace, which is simply a root directory of
the Go programs.
A workspace contains three subdirectories at its root: src, which contains Go source files organized into packages; pkg, which contains compiled package objects; and bin, which contains executable programs.
When you start working with Go, the initial step is to set up a workspace in which Go programs reside. You must create a directory with these three subdirectories to set up the Go workspace. A Go developer writes Go programs as packages into the src directory. Go source files are organized into directories called packages, in which a single directory is used for a single package. You can write two types of packages in Go: packages that build into executable programs and packages that build into shared libraries.
The Go tool builds Go packages and installs the resulting binaries into the pkg directory if the package is a shared library, and into the bin directory if it is an executable program. So the pkg and bin directories are used for storing the output of the packages, based on the package type. Keep in mind that you can have multiple workspaces for your Go programs (although Go developers typically use a single workspace).
For example, if you have a GitHub account at github.com/user, it should be your base path. Let's say you write a package named "mypackage" at github.com/user; your code organization path will be %GOPATH%/src/github.com/user/mypackage. When you import this package into other programs, the path for importing the package will be github.com/user/mypackage. If you maintain the source only in your local system, you can write programs directly under the GOPATH src directory. Suppose that you write a package named mypackage on a local system; your code organization path will be %GOPATH%/src/mypackage, and the path for importing the package will be mypackage.
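To illustrate, a hypothetical workspace (the directory that GOPATH points to) might look like the following after a library package and an executable program have been installed; the package and program names here are only examples:

GOPATH/
    src/
        github.com/user/mypackage/     source files; import path "github.com/user/mypackage"
        github.com/user/hello/         source files for an executable program
    pkg/
        <os_arch>/github.com/user/mypackage.a     compiled package object (shared library)
    bin/
        hello                          installed executable binary

Here <os_arch> stands for the platform-specific directory, such as linux_amd64.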
Writing Go Programs
Once you create a workspace and set the GOPATH environment variable, you can start working with Go. Let's write a few simple programs in Go to get started, beginning with a Hello World program:
package main
import "fmt"
func main() {
fmt.Println("Hello, world")
}
Line 1: Go programs are organized as packages, and the package name here is specified as main. If you name a package main, it has a special meaning in Go: the resulting binary will be an executable program.
Line 2: The "fmt" package, which provides functionality for formatting and printing data, is imported from the standard library. The keyword import is used for importing packages.
Line 3: The keyword func is used to define a function. The function main will be the entry point of an executable program and will be executed when the application runs. The package main must have exactly one main function.
Line 4: The function Println, provided by the package fmt, prints the data. Note that the name of the Println function starts with an uppercase letter. In Go, all identifiers that start with an uppercase letter are exported to other packages, so they are available to be called from other packages.
Let's compile and run the sample program using the Go tool. Navigate to the package directory and then type the run command to run the program. Suppose that the location of the package directory is github.com/user/hello:
cd $GOPATH/src/github.com/user/hello
go run main.go
The preceding command simply prints the phrase Hello, world. The run command compiles the
source and runs the program. You can also use the build and install commands with the Go tool to build
and install Go programs that produce binary executables to be run later.
The build command compiles the package and puts the resulting binary into the package folder:
cd $GOPATH/src/github.com/user/hello
go build
The name of the resulting binary is the same as the directory name. If you write this program in a
directory named hello, the resulting binary will be hello (or hello.exe under Windows). After compiling
the source with the build command, you can run the program by typing the binary name.
The install command compiles the package and installs the resulting binary into the bin directory of
GOPATH:
cd $GOPATH/src/github.com/user/hello
go install
You can run this command from any location on your system:
go install github.com/user/hello
The name of the resulting binary is the same as the directory name. You can now run the program by
typing the binary from the bin directory of GOPATH:
$GOPATH/bin/hello
If you have added $GOPATH/bin to your PATH environment variable, just type the binary name from any
location on your system:
hello
Writing a Library
In Go, you can write two types of programs: executable programs and reusable libraries. The previous sample program was an executable program. Let's write a shared library to provide a reusable piece of code to other programs. Create a package directory at the location $GOPATH/src/github.com/shijuvar/go-web-book/chapter1/calc.
Listing 1-2 shows a simple package that provides the functionality for adding and subtracting two values.
Listing 1-2. Shared Library Program in Go
1 package calc
2
3 func Add(x, y int) int {
4     return x + y
5 }
6 func Subtract(x, y int) int {
7     return x - y
8 }
Line 1: The package name is specified as calc. The name of the package and the package directory must be the same.
Line 3: A function named Add is defined with the keyword func. The name of this function starts with an uppercase letter, so the Add function will be exported to other packages. If the name of a function starts with a lowercase letter, it is not exported, and its accessibility is limited to the same package. Unlike programming in C++, Java, and C#, you don't need to use private and public keywords to specify the accessibility of identifiers; you can see the simplicity of the Go language throughout its features. The Add function takes two integer parameters and returns an integer value.
Line 6: The Subtract function is similar to the Add function, but it subtracts one integer value from the other.
Let's build and install the package. Navigate to the package directory in the terminal window and type the following command:
go install
The install command compiles the source and installs the resulting binary into the pkg folder of GOPATH (see Figure 1-3). In the pkg directory, the calc package will be installed at the location github.com/shijuvar/go-web-book/chapter1/calc under the platform-specific directory.
Figure 1-3. Install command installing calc package into pkg folder
The install command behaves a bit differently depending on whether you are creating an executable program or a reusable library. When you run install for an executable program, the resulting binary is installed into the bin directory of GOPATH; for a library, it is installed into the pkg directory of GOPATH.
Now you can reuse this package from any program residing in the GOPATH. Code reusability in Go is very easy with packages. You have created your first library package. Let's reuse the package code from another executable program (see Listing 1-3).
Listing 1-3. Reusing the calc Package in a Go Program
1  package main
2
3  import (
4      "fmt"
5      "github.com/shijuvar/go-web-book/chapter1/calc"
6  )
7
8  func main() {
9      var x, y int = 10, 5
10     fmt.Println(calc.Add(x, y))
11     fmt.Println(calc.Subtract(x, y))
12 }
Line 1: Create an executable program.
Lines 3 to 6: These lines import the "fmt" package from the standard library and the "calc" package from your own library. You can use a single import statement to import multiple packages. Packages from the standard library are imported with short paths, such as "fmt". For your own packages, you must specify the full path when importing them. Using the full path for external packages avoids name conflicts among packages; you can use the same name for multiple external packages as long as their package paths differ.
Line 8: The function main is the entry point of the package main.
Line 9: Two variables, x and y, are declared with the int data type. Go uses the var keyword to declare variables, and you can declare multiple variables in a single statement. If you assign values to the variables along with the declaration, you can use a shorter statement:

x, y := 10, 5

When you use Go's short variable declaration, you don't need to specify the variable type, because the Go compiler can infer the type based on the value you assign to the variable. Go provides the productivity of a dynamically typed language while remaining a statically typed language. The Go compiler can also do type inference with the var statement:

var x, y = 10, 5
Lines 10 to 11: You call the exported functions of the calc package and reuse the functionality provided
by the library.
To run the program, type the following command from the program directory:
go run main.go
Testing Go Code
The Go ecosystem provides all the essential tools for developing Go applications, including the capability
for testing Go code without leveraging any external library or tool. The testing package from the standard
library provides the features for writing automated tests, and Go tooling provides support for running
automated tests. When you develop software systems, writing automated tests for application code is an
important practice to ensure quality and improve maintainability. If your code is covered by tests, you can
fearlessly refactor your application code.
Let's write some tests for the calc package created in the previous section. You create a source file with a name ending in _test.go, in which you write tests by adding functions that start with "Test" and take one argument of type *testing.T.
In the calc package directory, create a new source file named calc_test.go that contains the code
shown in Listing 1-4.
Listing 1-4. Testing the calc Package
package calc

import "testing"

func TestAdd(t *testing.T) {
    var result int
    result = Add(15, 10)
    if result != 25 {
        t.Error("Expected 25, got ", result)
    }
}

func TestSubtract(t *testing.T) {
    var result int
    result = Subtract(15, 10)
    if result != 5 {
        t.Error("Expected 5, got ", result)
    }
}

To run the tests, navigate to the package directory and type the go test command. You should see output similar to the following:

ok      github.com/shijuvar/go-web-book/chapter1/calc  0.310s
Using Go Playground
Go Playground is a tool that allows you to write and run Go programs from your web browser (see Figure 1-4).
By using this tool, you can write and run Go programs without having to install Go on your system.
Using Go Mobile
You already know that Go can be used as a general-purpose programming language for building a variety of applications. It can also be used for building native mobile applications for both Android and iOS. The Go Mobile project provides tools and libraries for building native mobile applications. It includes a command-line tool called gomobile to help you build these applications.
You can follow two development strategies to include Go into your mobile stack:
The first strategy is to use Go everywhere in your mobile project by using the packages and tools provided by Go Mobile; here you use Go to develop both the Android and iOS applications. In the second strategy, you reuse a Go library package from a mobile application without making significant changes to your existing application. With this strategy, you can share a common code base between the Android and iOS applications: you write the common functionality once in Go as a library package and plug it into the platform-specific code by invoking the Go package through bindings.
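As a rough sketch of what the two strategies look like on the command line (the application and library paths are placeholders, and the exact commands may vary between Go Mobile versions):

go get golang.org/x/mobile/cmd/gomobile
gomobile init

gomobile build github.com/user/goapp    # strategy 1: build an all-Go Android app
gomobile bind github.com/user/golib     # strategy 2: generate bindings for platform-specific code to call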
Note You can find out more about the Go Mobile project at https://github.com/golang/mobile.
Go as a Language for Web and Microservices
The simplicity of Go is also reflected in Go web programming, which enables a lot of developer productivity. When you build web-based systems in other programming languages, you may have to use a full-fledged web framework such as Rails for Ruby, Django for Python, or ASP.NET MVC for C#. In Go, lots of web frameworks are available as third-party packages, but even without using a full-fledged web framework, you can build highly scalable web systems by simply using built-in Go packages and a few lightweight libraries available as third-party packages.
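As a minimal sketch of what the built-in packages give you (this program is not taken from the book's repository), the following starts an HTTP server using only the standard library's net/http package:

package main

import (
    "fmt"
    "log"
    "net/http"
)

// index handles every request to "/" with a plain-text response
func index(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "Hello from a Go web server")
}

func main() {
    http.HandleFunc("/", index)                  // register the handler with the default multiplexer
    log.Fatal(http.ListenAndServe(":8080", nil)) // start the server on port 8080
}

The net/http package, the DefaultServeMux multiplexer, and third-party routers such as Gorilla Mux are covered in later chapters.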
Note In Chapter 9, you will learn how to build a scalable web API in Go without using a web framework.
As discussed earlier, a microservices architecture is made up of independently running services that communicate with each other, typically over RESTful APIs. Go is a great choice for building these RESTful services and is also becoming the language of choice for building the independent services (microservices) themselves, because of the simplicity of the language, its concurrency capability, its performance, and its capability for developing distributed applications.
Microservices architecture is a distributed application architecture, and Go is a great choice for building distributed systems. Some technologies, such as Node.js, are great for building lightweight RESTful APIs but struggle when they are used to build larger distributed applications. Go is a perfect choice for applications with a microservices architecture, in which Go can be used for all components of the application: the small services running as independent units, the RESTful services used to communicate among those independent services, and the message brokers used to communicate among them with asynchronous protocols such as AMQP.
Summary
Go is a modern, statically typed, natively compiled, garbage-collected programming language that allows
you to write high-performance applications while enabling a greater level of productivity with its simple
syntax and pragmatic design. In Go, concurrency is a built-in feature at the core language level that
allows you to write highly efficient software systems for modern computers. The Go ecosystem includes
the language, libraries, and tools that provide all the essential features for developing a wide variety of
applications. The Go Mobile project includes packages and tools for building native mobile applications for
Android and iOS. Go is a great programming language for building scalable, web-based, back-end systems
and microservices.
Chapter 2
Go Fundamentals
Chapter 1 furnished an overview of the Go programming language and discussed how it is different from
other programming languages. In this chapter, you will learn Go fundamentals for writing reusable code
using packages and how to work with arrays and collections. You will also learn Go language fundamentals
such as defer, panic, and recover, and about Go's unique error-handling capabilities.
Packages
For a Go developer, the design philosophy for developing applications is to develop reusable pieces of
smaller software components and build applications by composing these components. Go provides
modularity, composability, and code reusability through its package ecosystem. Go encourages you to write
maintainable and reusable pieces of code through packages that enable you to compose your applications
with these smaller packages. Go packages are a vital concept that allow you to achieve many of the Go design
principles. Like other features of Go, packages are designed with simplicity and pragmatism.
Go source files are organized into directories called packages, and the name of the package must be the name of the directory containing the Go source files. You organize Go source files with the .go extension into directories, in which the package name is the same for all source files that belong to a directory. Packages from the standard library live in the GOROOT directory, which is the Go installation directory. You write your own Go programs in the GOPATH directory as packages that are easily reusable from other packages.
Package main
In Go, you can write two types of programs: executable programs and shared libraries. When you write an executable program, you must give main as the package name: the package main tells the Go compiler that the package should compile as an executable program. The executable programs in Go are often referred to as commands in the official Go documentation. The entry point of an executable program is the main function of the package main. When you write packages as shared libraries, there is no main package or main function in the package.
Package Alias
When you write Go packages, you don't have to worry about package ambiguity; you can even use the same package names as those of the standard library. When you import your own packages from the GOPATH location, you refer to the full path of the package location to avoid package name ambiguity. You can use two packages with the same name from two different locations, but you must avoid name ambiguity when referencing them from your programs. A package alias helps you avoid name ambiguity when you reference multiple packages with the same name.
Listing 2-2 is an example program that uses the package alias to reference packages.
Listing 2-2. Using the Package Alias to Avoid Name Ambiguity
package main

import (
    mongo "lib/mongodb/db"
    mysql "lib/mysql/db"
)

func main() {
    mongo.Get() // calling a method of package "lib/mongodb/db"
    mysql.Get() // calling a method of package "lib/mysql/db"
}
Two packages are imported with the same name, db, but they are referenced with different aliases, and
their exported identifiers are accessed using an alias name.
Function init
When you write packages, you may need to provide some initialization logic for the packages, such as
initializing package variables, initializing database objects, and providing some bootstrapping logic for
the packages. The init function, which lets you provide initialization logic for a package, is executed automatically before program execution begins.
Listing 2-3 is an example program that uses the init function to initialize a database session object.
Listing 2-3. Using the init Function
package db

import (
    "gopkg.in/mgo.v2"
)

var Session *mgo.Session // database session object

func init() {
    // initialization code here: assign to the package-level Session
    // (a separate err variable is used so that plain assignment is possible)
    var err error
    Session, err = mgo.Dial("localhost")
    if err != nil {
        panic(err)
    }
}

func get() {
    // logic for get that uses the Session object
}

func add() {
    // logic for add that uses the Session object
}

func update() {
    // logic for update that uses the Session object
}

func delete() {
    // logic for delete that uses the Session object
}
In this code block, a MongoDB session object is created in the init function. When you import the package db into other packages, the init function, which contains the initialization logic for the package, is invoked at the beginning of the execution. Suppose that you reference the package db from a main package; the init function will be invoked before the main function executes.
Using a Blank Identifier
Listing 2-4. Using a Blank Identifier ( _ ) to Call Only the init Method
package main
import (
"fmt"
_ "lib/mongodb/db"
)
func main() {
//implementation here
}
In Listing 2-4, the db package is imported with the blank identifier (_) as its package alias. Here you want the init function of the package db to be invoked, but you do not use any other identifiers of the package.
Importing Packages
Go source files are organized into directories called packages, and packages make their code reusable by other packages. If you want to reuse a package's code in other shared libraries and executable programs, you must import that package into your programs. You import packages into your Go programs by using the keyword import. The import statement tells the Go compiler that you want to reference the code provided by that particular package. When you import a package into a program, you can use all the exported identifiers of the referenced package. If you want variables, constants, and functions to be usable by other programs, the names of those identifiers must start with an uppercase letter. See Listing 2-5.
Listing 2-5. An import Statement That Imports Multiple Packages
import (
"bytes"
"fmt"
"unicode"
)
In this listing, the packages bytes, fmt, and unicode are imported. The idiomatic way to import multiple
packages in Go is to write the import statements in an import block, as shown here.
When a package is imported, the Go compiler first searches the GOROOT directory and then looks in the GOPATH directory if it can't find the package in GOROOT. If the Go compiler can't find a package in either the GOROOT or GOPATH location, it will generate an error when you try to build your program.
Install Third-Party Packages
To install a third-party package, you use the go get command followed by the package's import path. The go get command fetches the package and its dependent packages recursively from the repository location. Once the package is fetched into the GOPATH, you can import and reuse it from all the programs located in the GOPATH location. In many other developer ecosystems, you have to install packages for each individual project separately; in Go, you import packages from a common location, the GOPATH, so you can appreciate the simplicity and pragmatism in many of the Go features, including the package ecosystem.
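For example, the mgo MongoDB driver mentioned in Chapter 1 can be fetched into the GOPATH with a single command (the import path shown is the one published by that project):

go get gopkg.in/mgo.v2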
Writing Packages
Let's write a sample package to reuse with other programs. Listing 2-6 is a simple package that swaps the case of characters from upper to lower or lower to upper.
Listing 2-6. Library Package
package strcon

import (
    "bytes"
    "unicode"
)

// SwapCase swaps each character's case from upper to lower or lower to upper.
func SwapCase(str string) string {
    buf := &bytes.Buffer{}
    for _, r := range str {
        if unicode.IsUpper(r) {
            buf.WriteRune(unicode.ToLower(r))
        } else {
            buf.WriteRune(unicode.ToUpper(r))
        }
    }
    return buf.String()
}
The package is named strcon. The idiomatic way to provide a package name is to give short and simple
lowercase names without underscores or mixed capital letters. The package names of the standard library
are a great reference for naming packages.
Let's build and install the package strcon to be used with other programs. The package provides a function named SwapCase that swaps the case of each character in a string from upper to lower or lower to upper. It reuses the packages bytes and unicode from the standard library to swap character case. Because the name of the SwapCase function starts with an uppercase letter, it will be exported to other programs when
this package is referenced. The SwapCase function iterates through the string and changes the case of each character:
for _, r := range str {
if unicode.IsUpper(r) {
buf.WriteRune(unicode.ToLower(r))
} else {
buf.WriteRune(unicode.ToUpper(r))
}
}
The keyword range allows you to iterate through arrays and collections. By iterating through a string value, you can extract each character as a value and swap its case. On the left side of the range clause, you can provide two variables for receiving the key (or index) and the value of each item in the collection. In this code block, only the value is needed to get each character; the key is not used in the program. In this context, you can use a blank identifier (_) to avoid a compiler error. It is common practice to use a blank identifier with range whenever you want to ignore the key or value variable on the left side. A short example follows.
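Here is a minimal sketch (the slice contents and variable names are only illustrative) showing both forms of the range clause:

package main

import "fmt"

func main() {
    langs := []string{"Go", "Ruby", "Python"}

    for i, v := range langs { // receive both the index and the value
        fmt.Println(i, v)
    }

    for _, v := range langs { // ignore the index with a blank identifier
        fmt.Println(v)
    }
}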
With the following command at the location of the package directory, build the package and install it on
the pkg subdirectory of GOPATH:
go install
Let's write a sample program to reuse the code of the strcon package (see Listing 2-7).
Listing 2-7. Reusing the strcon Package in main.go
package main

import (
    "fmt"
    "strcon"
)

func main() {
    s := strcon.SwapCase("Gopher")
    fmt.Println("Converted string is :", s)
}
We import the package strcon to reuse the code for swapping the character case in a string. Let's run the program by typing the following command in the terminal from the package directory:
go run main.go
You should see the following result when running the program:

Converted string is : gOPHER
Because the program in Listing 2-7 is written in package main, the Go build command generates an
executable binary into the package directory. The Go install command builds the package and installs the
resulting binary into the GOPATH bin subdirectory.
Go Tool
The Go tool is a very important component of the Go ecosystem. In the previous sections, you used the Go
tool to build and run Go programs. In the terminal, type the go command without any parameters to get
documentation on the commands provided by the Go tool.
Here is the documentation on Go commands:
Go is a tool for managing Go source code.
Usage:
go command [arguments]
The commands are:
build
clean
doc
env
fix
fmt
generate
get
install
list
run
test
tool
version
vet
Use "go help [topic]" for more information about that topic.
For documentation on any specific command type:
go help [command]
Here is the command for getting documentation for the install command:
go help install
Here is the documentation for the install command:
usage: go install [build flags] [packages]
Install compiles and installs the packages named by the import paths,
along with their dependencies.
For more about the build flags, see 'go help build'.
For more about specifying packages, see 'go help packages'.
See also: go build, go get, go clean.
Formatting Go Code
The Go tool provides the fmt command to format Go code. It is a good practice to format Go programs before committing source files into version control systems. The go fmt command applies a predefined style to the source code: it ensures the right placement of curly brackets, ensures the proper usage of tabs and spaces, and sorts package imports alphabetically. The go fmt command can be applied at the package level or to a specific source file, as shown below.
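For example (the file name is only illustrative):

go fmt             # format all source files of the package in the current directory
go fmt ./...       # format the package and every package below it
gofmt -w main.go   # format a single source file in place using the underlying gofmt tool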
Listing 2-8 shows the import block before applying go fmt.
Listing 2-8. import Package Block Before go fmt
import (
"log"
"net/http"
"encoding/json"
)
Listing 2-9 shows the import block after applying go fmt:
Listing 2-9. import Package Block After go fmt
import (
"encoding/json"
"log"
"net/http"
)
The import block is rearranged in alphabetical order after the go fmt command executes.
Note The idiomatic way of writing the import block is to list the standard library packages first in alphabetical order, followed by custom packages in alphabetical order, with one blank line between the standard library packages and the custom packages.
Go Documentation
Documentation is a huge part of making software accessible and maintainable. It must be well-written
and accurate, of course, but it must also be easy to write and maintain. Ideally, the documentation should
be coupled to the code so it evolves along with the code. The easier it is for programmers to produce good
documentation, the better the situation for everyone.
Go provides the godoc tool for documenting Go packages. It parses Go source code, including comments, and generates documentation as HTML or plain text; in short, the godoc tool generates the documentation from the comments included in the source files. If you want to access the documentation from the command prompt, type:
godoc [package]
For example, if you want to get documentation for the fmt package, type the following command in
the terminal:
godoc fmt
This command displays the fmt package documentation onto the terminal.
The godoc tool also provides browsable documentation on a web interface. To access the
documentation through a web-based interface, start the web server provided by the godoc tool. Type the
following command in the terminal:
godoc -http=:3000
This command starts a web server on port 3000, which allows you to access the documentation in a web browser. You can then easily navigate to the package documentation for both the standard library and the packages in the GOPATH location. See Figure 2-1.
Arrays
An array is a fixed-length data type that contains the sequence of elements of a single type. An array is
declared by specifying the data type and the length.
Listing 2-10 is a code block that declares an array.
Listing 2-10. Declaring an Integer Array of Five Elements
var x [5]int
An array x is declared for storing five elements of the int type, so the array x will be composed of five
integer elements.
Listing 2-11 is an example program that declares an array and assigns values.
Listing 2-11. Declaring an Array and Assigning Values
package main
import (
"fmt"
)
func main() {
var x [5]int
x[0] = 10
x[1] = 20
x[2] = 30
x[3] = 40
x[4] = 50
fmt.Println(x)
}
You should see the following output:
[10 20 30 40 50]
You can use an array literal to declare and initialize arrays, as shown in Listing 2-12.
Listing 2-12. Initializing an Array with an Array Literal
x := [5]int{10, 20, 30, 40, 50}
You can also initialize an array with a multiline statement (see Listing 2-13).
Listing 2-13. Array Declaration with a Multiline Statement
x := [5]int{
10,
20,
30,
40,
50,
}
Note that a comma has been added even after the last element, because Go requires it. This requirement has usability benefits, such as being able to easily remove or comment out any element from the initialization block without causing a syntax error.
When you declare arrays using an array literal, you can use ... instead of specifying the length. The
Go compiler can identify the length of the array, based on the elements you have specified in the array
declaration.
Listing 2-14 is a code block that declares and initializes an array with ....
Listing 2-14. Initializing an Array with ...
x := [...]int{10, 20, 30, 40, 50}
When arrays are initialized using an array literal, you can initialize values for specific elements.
Listing 2-15 is an example program that assigns values for a specific location.
Listing 2-15. Initializing Values for Specific Elements
package main
import "fmt"
func main() {
x := [5]int{2: 10, 4: 40}
fmt.Println(x)
}
You should see the following output:
[0 0 10 0 40]
In Listing 2-15, a value of 10 is assigned to the third element (index 2) and a value of 40 is assigned to the
fifth element (index 4).
Slices
A slice is a data structure that is very similar to an array, but its length is not fixed. It is an abstraction built on top of the array type that provides a more convenient way of working with collections. Unlike regular arrays, slices are dynamic: the length of a slice can change at a later stage as data grows or shrinks. Slices are very useful data structures when the number of elements to be stored in a collection can't be predicted.
When you develop applications in Go, you often see slices in the code. If you want to read a database table and put the data into a collection type, use a slice instead of an array, because you can't predict the length of the collection. Slices provide a built-in function called append, which can quickly append elements to a slice.
Listing 2-16 is a code block that declares a nil slice.
Listing 2-16. Declaring a Nil Slice
var x []int
A slice x is declared without specifying the length. It will create a nil slice of integers with a length of
zero. Because slices are dynamic arrays, you can modify their length later on.
There are several ways to create and initialize slices in Go: you can use the built-in function make or a
slice literal.
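As a brief sketch (the variable names are only illustrative), the common forms look like this:

x := make([]int, 3)        // length 3, capacity 3; elements initialized to 0
y := make([]int, 3, 5)     // length 3, capacity 5
z := []int{10, 20, 30}     // slice literal; length and capacity are 3
s := []int{4: 0}           // slice literal with an index: length 5, capacity 5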
The last of these creates a slice with a length of 5 and a capacity of 5; a zero value is provided for index 4.
You can create empty slices with a slice literal, as shown in Listing 2-21.
Listing 2-21. Creating an Empty Slice
x := []int{}
This code creates an empty slice with zero elements. Empty slices are useful when you want to return empty collections from functions.
Slice Functions
Go provides two built-in functions to easily work with slices: append and copy. The append function returns a new slice by taking an existing slice and appending the given elements to the end of it.
Listing 2-22 shows an example of append.
Listing 2-22. Slice with the append Function
package main
import "fmt"
func main() {
x := []int{10,20,30}
y := append(x, 40, 50)
fmt.Println(x, y)
}
You should see the following output:
[10 20 30] [10 20 30 40 50]
The copy function copies elements from a source slice into a destination slice.
Listing 2-23 shows an example of the copy function.
Listing 2-23. Slice with the copy Function
package main
import "fmt"
func main() {
x := []int{10, 20, 30}
y := make([]int, 2)
copy(y, x)
fmt.Println(x, y)
}
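The destination slice y has a length of 2, so copy copies only the first two elements of x. Running this program should print:

[10 20 30] [10 20]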
When you run a program that grows a slice with append while printing the slice, its length, and its capacity at each step, you should get output like the following:
[10 20]
Length is 2
Capacity is 5
[10 20 30 40 50]
Length is 5
Capacity is 5
[10 20 30 40 50]
Length is 6
Capacity is 12
[10 20 30 40 50 60]
In this output, the slice capacity gets increased to 12 when the append function is used for the
second time.
Iterating over the slice with range and printing each index and value produces output like this:

0 Value: 10
1 Value: 20
2 Value: 30
3 Value: 40
4 Value: 50
Maps
A map is a data structure that provides an unordered collection of key-value pairs. (A similar data structure in other programming languages is a hash table or dictionary.) Remember that a map is an unordered collection, so you can't predict the order of the data when you iterate over the collection.
There are several ways to create and initialize maps in Go. Similar to slices, the built-in function make or
the map literal can be used to create and initialize maps.
Listing 2-26 is an example program that creates and initializes a map and iterates over the collection.
Listing 2-26. Creating a Map and Iterating Over the Collection
package main

import "fmt"

func main() {
    dict := make(map[string]string)
    dict["go"] = "Golang"
    dict["cs"] = "CSharp"
    dict["rb"] = "Ruby"
    dict["py"] = "Python"
    dict["js"] = "JavaScript"
    for k, v := range dict {
        fmt.Printf("Key: %s Value: %s\n", k, v)
    }
}
A map named dict is declared, where the string type is specified for the key (type within the []
operator) and value:
dict := make(map[string]string)
Values are assigned to the map with the given key (here the key "go" is for the value "Golang"):
dict["go"] = "Golang"
Finally, iterate over the collection using range and print the key and value of each element in the collection:
for k, v := range dict {
fmt.Printf("Key: %s Value: %s\n", k, v)
}
You should see the following output:
Key: cs Value: CSharp
Key: rb Value: Ruby
Key: py Value: Python
Key: js Value: JavaScript
Key: go Value: Golang
Note The data order will vary every time because a map is an unordered collection.
You can access the value of an element from a map by providing the key (see Listing 2-27):
Listing 2-27. Accessing the Value of an Element from a Map
lan, ok := dict["go"]
When an element is accessed by providing a key, it will return two values: The first value is the result
(the value of the element); the second is a Boolean value that indicates whether the lookup was successful.
Go provides a convenient way to write this, as shown in Listing 2-28.
Listing 2-28. Accessing the Element Value from a Map in an Idiomatic Way
if lan, ok := dict["go"]; ok {
fmt.Println(lan, ok)
}
Defer
If you have used try/catch/finally blocks in a programming language such as C# or Java, you may have used the finally block to clean up resources that were allocated in the try block. The statements of a finally block run when the flow of control leaves the try statement; the finally block is invoked even when the flow of control goes to a catch block due to a handled exception. Using defer, you can implement such cleanup code in Go in a simpler way than with a finally block in other languages. Although you would primarily use defer for implementing cleanup code, it is not used only for that purpose; for example, by using defer in conjunction with recover, you can regain control from a panicking function.
A defer statement pushes a function call (or a code statement) onto a list. The list of saved function
calls is executed after the surrounding function returns. The last added functions are invoked first from the
list of deferred functions. Suppose you add function f1 first, then f2, and finally f3 onto the deferred list; the
order of the execution will be f3, f2, and then f1.
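A minimal sketch illustrating this last-in, first-out order:

package main

import "fmt"

func main() {
    defer fmt.Println("f1") // deferred first, runs last
    defer fmt.Println("f2")
    defer fmt.Println("f3") // deferred last, runs first
    fmt.Println("function body")
}

This program prints "function body" followed by f3, f2, and f1.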
Listing 2-29 is a code block that uses defer to clean up a database session object.
Listing 2-29. Defer Statements for Cleaning up Resources
session, err := mgo.Dial("localhost") //MongoDB Session object
defer session.Close()
c := session.DB("taskdb").C("categories")
//code statements using session object
This code block creates a session object for a MongoDB database. In the next line, the call session.Close() is added to the deferred list so that the resources of the database session object are cleaned up after the surrounding function returns. You can add any number of code statements and functions to the deferred list.
Panic
The panic function is a built-in function that lets you stop the normal flow of control and put a function into a panicking state. When you call panic from a function, the execution of that function stops, any deferred functions are executed normally, and the panic then propagates to the caller. Keep in mind that all deferred functions are executed before the execution stops. When developing applications, you will rarely call the panic function, because your responsibility is to provide proper error messages rather than stopping the normal control flow. But in some scenarios, you may need to call the panic function if there is no possibility of continuing the normal flow of control. For example, if you can't connect to a database server, it doesn't make any sense to continue executing the application.
Listing 2-30 is the code block that calls panic if there is an error while connecting to a database.
Listing 2-30. Using the panic Function to Panic a Function
session, err := mgo.Dial("localhost") // Create MongoDB Session object
if err != nil {
panic(err)
}
defer session.Close()
This code block tries to establish a connection to a MongoDB database and create a session object. You call panic if there is an error while establishing the connection to the database. It stops the execution, and the panic propagates to the caller.
Recover
The recover function is a built-in function, typically used inside deferred functions, that regains control of a panicking function. The recover function is useful only inside deferred functions, because deferred statements are the only way to execute something while a function is panicking.
Listing 2-31 is an example program that demonstrates panic recovery.
Listing 2-31. Recovering from a Panicking Function Using recover
package main

import "fmt"

func doPanic() {
    defer func() {
        if e := recover(); e != nil {
            fmt.Println("Recover with: ", e)
        }
    }()
    panic("Just panicking for the sake of demo")
    fmt.Println("This will never be called")
}

func main() {
    fmt.Println("Starting to panic")
    doPanic()
    fmt.Println("Program regains control after panic recover")
}
In the preceding program, the function doPanic is called from the main function. Inside the function
doPanic, an anonymous function has been added to the deferred list, in which recover is called to regain
control from the panicking function. For the sake of the demo, the panic function is called by providing
a string value. When a function is panicking, any deferred functions are executed. Because the recover
function is called inside the deferred function, control of the program execution is regained. When recover is
called, the value provided by the panic function is received.
Note Statements after the panic call in the doPanic function don't execute, but statements after the call to doPanic in the main function do execute, because control is regained from the panicking function.
You should see the following output:
Starting to panic
Recover with: Just panicking for the sake of demo
Program regains control after panic recover
Error Handling
Error handling in Go is different from that of other programming languages. Most programming languages use try/catch blocks to handle exceptions; in Go, a function can return multiple values, and by leveraging this feature, functions typically return a value of the built-in error type along with their other return values. The idiomatic way to return an error value is to provide it as the last return value. When you look at the standard library packages, you can see that many functions return an error value. So when you call functions from the standard library packages, you check whether the returned error value is nil; if a non-nil error value is returned, an error occurred. You can use the same approach for your own Go functions, returning multiple values from a function, including an error value.
Listing 2-32 is the code block that demonstrates error handling by calling the standard library function.
Listing 2-32. Error Handling in Go
f, err := os.Open("readme.ext")
if err != nil {
log.Fatal(err)
}
In this code block, the Open function of the os package is called to open a file. The function Open returns
two values: a File object and an error value. If the function returns a non-nil error value, there is an error,
and the file won't be opened. Here the error value is logged if an error occurred.
Listing 2-33 is a custom function that returns multiple values, including an error value.
Listing 2-33. Defining Functions with an Error Value
func GetById(id string) (models.Task, error) {
var task models.Task
// Implementation here
return task, nil // multiple return values
}
When you write functions that provide an error value, return a non-nil error value if an error has occurred
and nil otherwise. The caller function can then check whether the error value is nil; if the error value is not
nil, the function call resulted in an error.
Listing 2-34 is the code block that demonstrates how to call a function that provides an error value.
Listing 2-34. A caller Function Checks the Error Value
task, err := GetById("105")
if err != nil {
log.Fatal(err)
}
//Implementation here if error is nil
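Putting these pieces together, here is a small, self-contained sketch (the function name and error message are illustrative, not from the book) that defines a function returning an error value and checks it from the caller:
package main

import (
	"errors"
	"fmt"
	"log"
)

// divide returns an error value as its last return value,
// following the idiomatic Go error-handling style.
func divide(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	result, err := divide(10, 2)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Result:", result) // Result: 5

	// A call that produces a non-nil error value.
	if _, err := divide(10, 0); err != nil {
		fmt.Println("Error:", err) // Error: division by zero
	}
}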
Summary
This chapter discussed Go packages, which are an important feature of the Go ecosystem. Go provides
modularity, composability, and code reusability through its package ecosystem. Go source files are
organized into directories called packages. In Go, you can write two types of packages: a package main that
results in an executable program (often known as a command in Go documentation) and shared library
packages that allow code to be reused by other packages. You can give packages aliases to avoid name
ambiguity when referencing packages with the same name. A package's init function can be used to initialize
package variables and for other initialization logic. You don't need to explicitly call the init function; it is
executed automatically at the beginning of the execution.
The Go tool is a command-line tool that provides various commands for functionality such as
compiling, formatting, testing, and running Go code.
Go provides three types of data structures to manage collections of data: arrays, slices, and maps. An
array is a fixed-length data type that contains a sequence of elements of a single type. A slice is a dynamic
array that can grow or shrink as the data changes. Go provides two built-in functions for
manipulating slices: append and copy. A map is a data structure that provides an unordered collection of
key-value pairs.
Go provides the defer keyword for cleaning up resources. A defer statement pushes a function call
onto a list of deferred functions, which is executed after the surrounding function returns. The panic
function allows you to stop the normal flow of control and panic a function. The recover function, which
regains control of a panicking function, is useful only inside deferred functions.
Error handling in Go differs from that of most other programming languages. Because Go functions
can return multiple values, an error value can be returned from functions. So from caller functions, you
can easily check whether the function returns an error value and then provide code implementations
accordingly.
Chapter 3
Type Composition
Go's design philosophy is to be a simple language focused on real-world practices; it deliberately leaves out
many academic features to remain a minimalistic language. You saw the simplicity of Go's type system
in previous sections. The major decision about its type system is that, although it does not support inheritance,
it supports composition through type embedding. Go encourages you to favor composition over inheritance.
Note Composition is a design philosophy in which smaller components are combined into larger components.
The Person type was defined in the previous section. You can create bigger and more concrete types by
embedding the Person type. In Listing 3-11, two more types are created by embedding the Person type.
Listing 3-11. Type Embedding for Composition
type Admin struct {
Person //type embedding for composition
Roles []string
}
type Member struct {
Person //type embedding for composition
Skills []string
}
The Person type is embedded into Admin and Member types so that all Person fields and methods will be
available in these new types.
Let's create a sample program to understand the functionality of type embedding (see Listing 3-12).
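A minimal sketch of such a program (not the book's exact Listing 3-12; the names and values here are illustrative) could look like this:
package main

import (
	"fmt"
	"time"
)

type Person struct {
	FirstName, LastName string
	Dob                 time.Time
	Email, Location     string
}

func (p Person) PrintName() {
	fmt.Printf("\n%s %s\n", p.FirstName, p.LastName)
}

type Admin struct {
	Person // type embedding for composition
	Roles  []string
}

func main() {
	alex := Admin{
		Person: Person{
			FirstName: "Alex",
			LastName:  "John",
			Dob:       time.Date(1970, time.January, 10, 0, 0, 0, 0, time.UTC),
			Email:     "alex@example.com", // illustrative value
			Location:  "New York",
		},
		Roles: []string{"Manage Team", "Manage Tasks"},
	}
	// The embedded Person's fields and methods are promoted,
	// so they can be accessed directly on the Admin value.
	fmt.Println(alex.FirstName, alex.Location)
	alex.PrintName()
}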
The statement m.Person.PrintDetails() calls the PrintDetails method of the Person type.
Let's run the modified program with the code shown in Listing 3-14.
Listing 3-14. Running the Program with Method Overriding
alex := Admin{
Person{
"Alex",
"John",
time.Date(1970, time.January, 10, 0, 0, 0, 0, time.UTC),
"[email protected]",
"New York"},
[]string{"Manage Team", "Manage Tasks"},
}
shiju := Member{
Person{
"Shiju",
"Varghese",
time.Date(1979, time.February, 17, 0, 0, 0, 0, time.UTC),
"[email protected]",
"Kochi"},
[]string{"Go", "Docker", "Kubernetes"},
}
//call methods for alex
alex.PrintName()
alex.PrintDetails()
//call methods for shiju
shiju.PrintName()
shiju.PrintDetails()
You should see the following output:
Alex John
[Date of Birth: 1970-01-10 00:00:00 +0000 UTC, Email: [email protected], Location: New York ]
Admin Roles:
Manage Team
Manage Tasks
Shiju Varghese
[Date of Birth: 1979-02-17 00:00:00 +0000 UTC, Email: [email protected], Location: Kochi ]
Skills:
Go
Docker
Kubernetes
Listing 3-17. Example Program with Interface, Composition, and Method Overriding
package main
import (
"fmt"
"time"
)
type User interface {
PrintName()
PrintDetails()
}
type Person struct {
FirstName, LastName string
Dob time.Time
Email, Location string
}
//A person method
func (p Person) PrintName() {
fmt.Printf("\n%s %s\n", p.FirstName, p.LastName)
}
//A person method
func (p Person) PrintDetails() {
fmt.Printf("[Date of Birth: %s, Email: %s, Location: %s ]\n", p.Dob.String(),
p.Email, p.Location)
}
type Admin struct {
Person //type embedding for composition
Roles []string
}
//overrides PrintDetails
func (a Admin) PrintDetails() {
//Call person PrintDetails
a.Person.PrintDetails()
fmt.Println("Admin Roles:")
for _, v := range a.Roles {
fmt.Println(v)
}
}
type Member struct {
Person //type embedding for composition
Skills []string
}
//overrides PrintDetails
func (m Member) PrintDetails() {
//Call person PrintDetails
m.Person.PrintDetails()
fmt.Println("Skills:")
for _, v := range m.Skills {
fmt.Println(v)
}
}
type Team struct {
Name, Description string
Users []User
}
func (t Team) GetTeamDetails() {
fmt.Printf("Team: %s - %s\n", t.Name, t.Description)
fmt.Println("Details of the team members:")
for _, v := range t.Users {
v.PrintName()
v.PrintDetails()
}
}
func main() {
alex := Admin{
Person{
"Alex",
"John",
time.Date(1970, time.January, 10, 0, 0, 0, 0, time.UTC),
"[email protected]",
"New York"},
[]string{"Manage Team", "Manage Tasks"},
}
shiju := Member{
Person{
"Shiju",
"Varghese",
time.Date(1979, time.February, 17, 0, 0, 0, 0, time.UTC),
"[email protected]",
"Kochi"},
[]string{"Go", "Docker", "Kubernetes"},
}
chris := Member{
Person{
"Chris",
"Martin",
time.Date(1978, time.March, 15, 0, 0, 0, 0, time.UTC),
"[email protected]",
"Santa Clara"},
[]string{"Go", "Docker"},
}
team := Team{
"Go",
"Golang CoE",
[]User{alex, shiju, chris},
}
//get details of Team
team.GetTeamDetails()
}
You should see the following output:
Team: Go - Golang CoE
Concurrency
When larger applications are developed, multiple tasks might be needed to complete the program's work,
and larger programs are often composed of many smaller subprograms. When you develop these kinds of
applications, you can achieve performance improvements if you can execute these tasks and subprograms
concurrently. Let's say you are developing a web-based back-end API that many concurrent users access. If the
web server can execute these web requests concurrently, you can dramatically improve the performance and
efficiency of the system.
When you develop web applications and web APIs, managing a large set of concurrent users is really a
challenge. Go is designed to solve the challenges of modern programming and larger systems. It provides
support for concurrency at the core language level and implements concurrency directly into its language
and runtime. This helps you easily build high-performance systems.
Many programming environments provide concurrency support with the help of an extra library,
but not as a built-in feature of the core language. Concurrency is one of the major selling points of the Go
language, along with its simplicity and pragmatism. In Go, concurrency is implemented by using two unique
features: goroutines and channels.
Goroutines
In Go, a goroutine is the primary mechanism for running programs concurrently. Goroutines let you run
functions concurrently with other functions, and you can run a function as a goroutine to access the concurrency
capability of Go. When you create a function as a goroutine, it works as an independent task unit that runs
concurrently with other goroutines. In short, a goroutine is a lightweight thread managed by the Go runtime.
The most powerful aspect of Go's concurrency is that goroutines are fully managed by the Go runtime
rather than by the operating system. The Go runtime has a component called the scheduler that controls and
manages everything related to scheduling and running goroutines. Because the Go runtime has full control
over the concurrent tasks running as goroutines, it enables high performance and gives you better control
over your applications when you leverage the concurrency capabilities of Go.
When you run the program, you see the following output. It will vary each time because of the random
wait during the program execution:
Start Goroutines
Waiting To Finish
Count: 1 from A
Count: 1 from B
Count: 2 from B
Count: 2 from A
Count: 3 from B
Count: 3 from A
Count: 4 from A
Count: 5 from A
Count: 4 from B
Count: 6 from A
Count: 5 from B
Count: 7 from A
Count: 6 from B
Count: 7 from B
Count: 8 from A
Count: 8 from B
Count: 9 from B
Count: 9 from A
Count: 10 from A
Count: 10 from B
Terminating Program
A function named printCounts is created that is called two times as a goroutine:
//launch a goroutine with label "A"
go printCounts("A")
//launch a goroutine with label "B"
go printCounts("B")
To ensure that all goroutines finish executing before the program terminates, use WaitGroup, which is
provided by the sync package:
var wg sync.WaitGroup
In the main function, add a count of 2 into the WaitGroup for the two goroutines:
wg.Add(2)
Launch two goroutines by using the keyword go:
//launch a goroutine with label "A"
go printCounts("A")
//launch a goroutine with label "B"
go printCounts("B")
In the printCounts function, the values from 1 to 10 are printed. For the sake of the demo, the execution
is randomly delayed.
func printCounts(label string) {
// Schedule the call to WaitGroup's Done to tell we are done.
defer wg.Done()
// Randomly wait
for count := 1; count <= 10; count++ {
sleep := rand.Int63n(1000)
time.Sleep(time.Duration(sleep) * time.Millisecond)
fmt.Printf("Count: %d from %s\n", count, label)
}
}
At the beginning of the printCounts function, the Done method of the WaitGroup is scheduled to be
called, to tell the main program that the goroutine has finished. The call is scheduled using the keyword defer
(as discussed in Chapter 2), which allows you to schedule other functions to be called when the function
returns. In this example, the Done method is invoked when the goroutine finishes, which decrements the
WaitGroup counter so the main function can check whether any goroutine has yet to finish. In the main
function, the Wait method of WaitGroup is called, which checks the count of the WaitGroup and blocks the
program until it becomes zero. When the Done method of WaitGroup is called, the count is decremented by
one. At the beginning of the execution, a count of 2 is added. When Wait is called, it waits for the count to
reach zero, thereby ensuring that both goroutines finish before the program terminates. When the count
becomes zero, the program terminates:
wg.Wait()
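Assembled from the pieces above, a complete, runnable sketch along the lines of Listing 3-18 looks like this:
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// wg is used to wait for the program to finish.
var wg sync.WaitGroup

func printCounts(label string) {
	// Schedule the call to WaitGroup's Done to tell we are done.
	defer wg.Done()
	// Randomly wait between prints.
	for count := 1; count <= 10; count++ {
		sleep := rand.Int63n(1000)
		time.Sleep(time.Duration(sleep) * time.Millisecond)
		fmt.Printf("Count: %d from %s\n", count, label)
	}
}

func main() {
	// Add a count of two, one for each goroutine.
	wg.Add(2)
	fmt.Println("Start Goroutines")
	//launch a goroutine with label "A"
	go printCounts("A")
	//launch a goroutine with label "B"
	go printCounts("B")
	// Wait for the goroutines to finish.
	fmt.Println("Waiting To Finish")
	wg.Wait()
	fmt.Println("\nTerminating Program")
}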
You can control the GOMAXPROCS setting in your program based on the context of your applications.
Listing 3-19 is a code block that changes GOMAXPROCS from its default setting to one:
Listing 3-19. Explicitly Set GOMAXPROCS Setting
import "runtime"
// Set the value of GOMAXPROCS.
runtime.GOMAXPROCS(1)
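For example, to use one OS thread per logical CPU (a common setting, shown here as a short sketch rather than a listing from the book), you can query runtime.NumCPU:
import "runtime"

// Set GOMAXPROCS to the number of logical CPUs available.
runtime.GOMAXPROCS(runtime.NumCPU())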
Note Check out Rob Pike's excellent presentation, Concurrency Is Not Parallelism, to understand the
difference between concurrency and parallelism: www.youtube.com/watch?v=cN_DpYBzKso
Channels
Listing 3-18 created two goroutines that ran independently and didn't need to communicate with
each other. However, goroutines sometimes need to communicate with one another to send and receive
data, hence the need for synchronization among goroutines. In many programming environments,
communication among concurrent programs is complex or limited in features. Go lets goroutines
communicate using channels, which also enable synchronization of goroutine execution.
The built-in make function is used to declare a channel: the keyword chan is followed by the type of the
data to be exchanged over the channel:
count := make(chan int)
Here a channel of integer type is declared, so integer values will be passed through the channel. There are
two types of channels available for synchronizing goroutines:
Buffered channels
Unbuffered channels
Listing 3-20 is the code block that declares unbuffered and buffered channels.
Listing 3-20. Declaring Unbuffered and Buffered Channels
// Unbuffered channel of integers.
count := make(chan int)
// Buffered channel of integers for buffering up to 10 values.
count := make(chan int, 10)
When buffered channels are declared, the capacity of the channel to hold data must be specified. If you
try to send more values than its capacity, the send blocks until a receiver takes a value from the channel.
Unbuffered Channel
Unbuffered channels provide synchronous communication among goroutines, which ensures message
delivery among them. With unbuffered channels, message sending is permitted only if there is a
corresponding receiver that is ready to receive the messages. In this case, both sides of the channel have
to wait until the other side is ready to send or receive. With buffered channels, a limited number of messages
can be sent into the channel without a corresponding concurrent receiver; the messages remain in the
channel's buffer until they are received. Unlike with unbuffered channels, message delivery can't be
guaranteed with buffered channels.
The <- operator is used to send values into channels (see Listing 3-21).
Listing 3-21. Sending Values into a Channel
// Buffered channel of strings.
messages := make(chan string, 2)
// Send a message into the channel.
messages <- "Golang"
To receive messages from channels, the <- operator is used as a unary operator (see Listing 3-22).
Listing 3-22. Receiving Values from a Channel
// Receive a string value from the channel.
value := <-messages
Listing 3-23 is an example program that demonstrates how to communicate and synchronize data
among goroutines using unbuffered channels.
Listing 3-23. Example Program with an Unbuffered Channel
package main
import (
"fmt"
"sync"
)
// wg is used to wait for the program to finish.
var wg sync.WaitGroup
func main() {
count := make(chan int)
// Add a count of two, one for each goroutine.
wg.Add(2)
fmt.Println("Start Goroutines")
//launch a goroutine with label "A"
go printCounts("A", count)
//launch a goroutine with label "B"
go printCounts("B", count)
fmt.Println("Channel begin")
count <- 1
// Wait for the goroutines to finish.
fmt.Println("Waiting To Finish")
wg.Wait()
fmt.Println("\nTerminating Program")
}
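The printCounts function that completes this program is not shown above; a plausible sketch (an assumption about the missing part, not the book's exact code), in which the two goroutines pass the counter back and forth over the unbuffered channel until it exceeds 10, is:
func printCounts(label string, count chan int) {
	// Schedule the call to Done to tell main we are finished.
	defer wg.Done()
	for {
		// Receive the current value from the unbuffered channel.
		value, ok := <-count
		if !ok {
			// The channel was closed by the other goroutine; stop.
			return
		}
		if value > 10 {
			// Stop counting and close the channel so the other
			// goroutine can also exit.
			close(count)
			return
		}
		fmt.Printf("Count: %d from %s\n", value, label)
		// Send the incremented value back, handing the turn
		// to the other goroutine.
		count <- value + 1
	}
}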
Buffered Channels
An unbuffered channel provides a synchronous way of data communication among goroutines that ensures
guaranteed message delivery. A buffered channel is different from this approach. Unlike an unbuffered
channel, it is created by specifying the number of values it can contain. A buffered channel accepts up to
that number of values before a sender blocks, even if no receiver has taken any values yet.
Listing 3-24 is a basic example program that demonstrates a buffered channel.
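A minimal sketch demonstrating a buffered channel (not the book's exact Listing 3-24) follows:
package main

import "fmt"

func main() {
	// Buffered channel of strings that can hold up to three values.
	messages := make(chan string, 3)

	// These sends succeed immediately because the buffer has room,
	// even though no receiver is running yet.
	messages <- "Go"
	messages <- "Docker"
	messages <- "Kubernetes"
	close(messages)

	// Receive and print the buffered values.
	for msg := range messages {
		fmt.Println(msg)
	}
}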
Summary
Gos type system has two fundamental types: concrete types and interface types. You can create concrete
types by using built-in types such as bool, int, string, and float64. You can also create composite types
such as arrays, slices, maps and channels, and your own user-defined types.
Structs are used to create user-defined types in Go. Structs are analogous to classes in classical objectoriented languages, but the struct design is unique when compared with other languages. A struct is a
lightweight version of a class. Gos type system does not support inheritance. Instead, you can compose your
types using type embedding.
The interface type is a powerful feature; it enables you to provide lots of extensibility and composability
when you build software systems. Interface provides contracts to user-defined concrete types. In Go, you
dont need to explicitly implement interfaces into concreate types; you can implement interfaces into
concrete types by simply providing the implementation of methods into your concrete types based on the
definition of methods defined in the interface type.
Concurrency in Go is the capability to run functions concurrently with other functions. Concurrency
is a built-in feature of the Go language, and the Go runtime manages the execution of concurrent functions
using a scheduler. Concurrency in Go is implemented with two features: goroutines and channels. A
goroutine is a function that can run concurrently with other functions, working as an independent unit.
Channels are used to synchronize goroutines and to send and receive messages between them. In Go, you can
create two types of channels: buffered channels and unbuffered channels. Unbuffered channels block receivers of goroutines
until the data is available on a channel and block senders of goroutines until a receiver is available. Buffered
channels block a sender only when the buffer is filled to capacity.
In Chapters 2 and 3, you learned the basics of the Go programming language. From Chapter 4 onward,
you will learn about web programming in Go and how to develop web applications and RESTful services.
Chapter 4
net/http Package
When you think about building web applications and web APIs, or simply building HTTP servers in Go, the
most important package is net/http, which comes from the Go standard library and provides all essential
functionalities necessary for developing full-fledged web applications. The design philosophy of Go is to
develop bigger programs by composing small pieces of components. The net/http package provides a
greater level of composability and extensibility so you can easily replace or extend functionalities of the
standard library with your own package or a third-party package. In other programming environments
such as Ruby, you use a full-fledged web application framework such as Rails to develop web applications.
In Go, you can find many full-fledged web application frameworks such as Beego, Revel, and Martini.
But the idiomatic way of developing web applications in Go is to leverage the standard library packages as
the fundamental building blocks, along with other libraries (not frameworks) that are
compatible with the http package. For web development, net/http and html/template are the major
packages provided by the standard library. By simply using these two packages, you can build fully
functional web applications without leveraging any third-party packages.
Note You should start web development with standard library packages before diving into third-party
packages and frameworks so that you understand Gos web development ecosystem. If you start web
development with third-party packages and frameworks, you will miss many core fundamentals because these
frameworks provide lots of spoon-feeding functionality and syntactic sugar.
The http package provides implementations for HTTP clients and servers, including various structs and
functions for client and server implementations. Various functionalities of the http package will be explored
throughout this chapter.
The http package has two major components for processing HTTP requests:
ServeMux
Handler
ServeMux
The ServeMux is a multiplexor (or simply an HTTP request router) that compares incoming HTTP requests
against a list of predefined URI resources and then calls the associated handler for the resource requested by
the HTTP client.
Handler
The ServeMux provides a multiplexor and calls corresponding handlers for HTTP requests. Handlers are
responsible for writing response headers and bodies. In Go, any object can become a handler, thanks to Gos
excellent interface implementation provided by its type system. If any object satisfies the implementation of
the http.Handler interface, it can be a handler for serving HTTP requests.
Listing 4-1 shows the definition of the http.Handler interface.
Listing 4-1. http.Handler Interface
type Handler interface {
ServeHTTP(ResponseWriter, *Request)
}
The ServeHTTP method has two arguments: an http.ResponseWriter interface and a pointer to an
http.Request struct. The ResponseWriter interface writes response headers and bodies into the HTTP
response. You can use Request to extract information from the incoming HTTP requests. For example, if you
want to read querystring values, use the Request object.
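For example, any type with a ServeHTTP method of this signature can be registered as a handler; the type name and message below are illustrative, not from the book:
package main

import (
	"fmt"
	"net/http"
)

// helloHandler is a custom type; because it has a ServeHTTP method
// with the right signature, it satisfies the http.Handler interface.
type helloHandler struct {
	message string
}

func (h helloHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, h.message)
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/hello", helloHandler{message: "Hello from a custom handler"})
	http.ListenAndServe(":8080", mux)
}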
The http package provides several functions that implement the http.Handler interface and are used
as common handlers:
FileServer
NotFoundHandler
RedirectHandler
StripPrefix
TimeoutHandler
A static web site application is created in the GOPATH location with the folder structure specified in
Figure 4-2. The implementation of the static web server is written in the main.go source file, and the static
content is put into the public folder, which provides the content for the static web site.
Listing 4-2 shows the implementation in main.go that provides a static web server by serving the
contents of the public folder.
Listing 4-2. Static Web Server Using the FileServer Function
package main
import (
"net/http"
)
func main() {
mux := http.NewServeMux()
fs := http.FileServer(http.Dir("public"))
mux.Handle("/", fs)
http.ListenAndServe(":8080", mux)
}
In the main function, the http.NewServeMux function is called to create an empty ServeMux object. The
http.FileServer function is then called to create a new handler for serving the static contents of a public
folder in the web site. The ServeMux.Handle function is called to register the URL path "/" with the handler
created with the http.FileServer function. Finally, the http.ListenAndServe function is called to create
an HTTP server that starts listening at :8080 for incoming requests. The address and the ServeMux object are
passed into the ListenAndServe function.
Listing 4-3 shows the signature of the ListenAndServe function.
Listing 4-3. ListenAndServe Signature
func ListenAndServe(addr string, handler Handler) error
The ListenAndServe function listens on the TCP network address and then calls Serve with
http.Handler to handle requests on incoming connections. The second argument of the ListenAndServe
function is an http.Handler, but a ServeMux object was passed. The ServeMux type also has a ServeHTTP
method, which means that it satisfies the http.Handler interface so that a ServeMux object can be passed
as a second argument for the ListenAndServe function. Keep in mind that an instance of a ServeMux is an
implementation of the http.Handler interface. If you pass nil as the second argument for ListenAndServe,
a DefaultServeMux will be used for the http.Handler. DefaultServeMux is an instance of ServeMux, so it is
also a handler.
When you run the program, you can access the static web page about.html by navigating to
http://localhost:8080/about.html (see Figure 4-3). The about.html page was put into the public folder
for serving as static content.
http.HandlerFunc type
Instead of creating custom handler types by implementing the http.Handler interface, you can use the
http.HandlerFunc type to serve as an HTTP handler. You can convert any function into a HandlerFunc type
if the function has the signature func(http.ResponseWriter, *http.Request). The HandlerFunc type
works as an adapter that allows you to use normal functions as HTTP handlers. The HandlerFunc type has a
built-in method ServeHTTP(http.ResponseWriter, *http.Request), so it also satisfies the http.Handler
interface and can work as an HTTP handler.
Listing 4-6 is an example program that uses the HandlerFunc type to create HTTP handlers.
Listing 4-6. Using the HandlerFunc Type to Create Ordinary Functions as Handlers
package main
import (
"fmt"
"log"
"net/http"
)
This program works the same way as Listing 4-5. The messageHandler function returns an http.Handler.
Within the messageHandler function, an http.HandlerFunc is returned by wrapping an anonymous function
that has the signature func(http.ResponseWriter, *http.Request), so it satisfies http.Handler,
and the messageHandler function can return an http.Handler. As discussed in the previous section, the
http.HandlerFunc type is an implementation of http.Handler. Here a closure is formed over the variable
message by the function defined inside messageHandler. This approach is useful when you
are working on real-world applications; you can use it to provide application-level context values to
handler functions.
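A minimal sketch of this closure pattern (an approximation of the program being described, not a verbatim listing) looks like this:
package main

import (
	"fmt"
	"log"
	"net/http"
)

// messageHandler returns an http.Handler whose anonymous function
// closes over the message variable.
func messageHandler(message string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, message)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/welcome", messageHandler("Welcome to Go Web Development"))
	log.Println("Listening...")
	http.ListenAndServe(":8080", mux)
}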
ServeMux.HandleFunc Function
In the previous section, a normal function was converted into a HandlerFunc type and used as an HTTP
handler by registering it with ServeMux.Handle. Because ordinary functions are frequently used as HTTP
handlers in this way, the http package provides a shortcut method: ServeMux.HandleFunc. The HandleFunc
registers the handler function for the given pattern. (This is just a shortcut method for your convenience.)
It internally (inside the http package) converts the function into a HandlerFunc type and registers the handler
with the ServeMux.
Listing 4-8 is an example program that uses ServeMux.HandleFunc.
Listing 4-8. Using ServeMux.HandleFunc
package main
import (
"fmt"
"log"
"net/http"
)
func messageHandler(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Welcome to Go Web Development")
}
func main() {
mux := http.NewServeMux()
// Use the shortcut method ServeMux.HandleFunc
mux.HandleFunc("/welcome", messageHandler)
log.Println("Listening...")
http.ListenAndServe(":8080", mux)
}
When you run the program, navigate to http://localhost:8080/welcome for the output.
DefaultServeMux
In the example programs in this chapter, a ServeMux object was created by calling the function
http.NewServeMux. DefaultServeMux is the same kind of ServeMux object as in those programs: it is the
default ServeMux used by the http package when no other handler is provided, and it is instantiated when
the http package is used.
http.Server Struct
In previous examples, http.ListenAndServe was called to run HTTP servers, which does not allow you to
customize HTTP server configuration. The http package provides a struct named Server that enables you to
specify HTTP server configuration.
Listing 4-11 shows the Server struct.
type Server struct {
	Addr           string
	Handler        Handler
	ReadTimeout    time.Duration
	WriteTimeout   time.Duration
	MaxHeaderBytes int
	TLSConfig      *tls.Config
	TLSNextProto   map[string]func(*Server, *tls.Conn, Handler)
	ConnState      func(net.Conn, ConnState)
	ErrorLog       *log.Logger
}
This struct allows you to configure many values, including the error logger for the server, the maximum
duration before timing out reads of the request, the maximum duration before timing out writes of the
response, and the maximum size of request headers.
Listing 4-12 is an example program that uses the Server struct to customize server behavior.
Listing 4-12. Using the http.Server Struct
package main
import (
"fmt"
"log"
"net/http"
"time"
)
func messageHandler(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Welcome to Go Web Development")
}
func main() {
http.HandleFunc("/welcome", messageHandler)
server := &http.Server{
Addr:           ":8080",
ReadTimeout:    10 * time.Second,
WriteTimeout:   10 * time.Second,
MaxHeaderBytes: 1 << 20,
}
log.Println("Listening...")
server.ListenAndServe()
}
The server behavior is customized by creating a Server object and calling its ListenAndServe method.
In previous examples, the http.ListenAndServe function was used to start the
HTTP server. When the http.ListenAndServe function is called, it internally creates a Server type instance
and calls the ListenAndServe method.
Listing 4-13 is the implementation of http.ListenAndServe from Go source.
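In the Go standard library source of that era, http.ListenAndServe is essentially the following:
func ListenAndServe(addr string, handler Handler) error {
	server := &Server{Addr: addr, Handler: handler}
	return server.ListenAndServe()
}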
Gorilla Mux
The http.ServeMux is an HTTP request multiplexer that works well for most common scenarios. It was used
in the example programs as the request multiplexer. If you want more power for your request multiplexer,
you might consider a third-party routing package that is compatible with standard http.ServeMux. For
example, if you want to specify RESTful resources with proper HTTP endpoints and HTTP methods, it is
difficult to work with the standard http.ServeMux.
The mux package from the Gorilla web toolkit (github.com/gorilla/mux) is a powerful request router
that allows you to configure the multiplexer in your own way. This package is very useful when you build
RESTful services and it implements the http.Handler interface so it is compatible with the standard
http.ServeMux. With the mux package, requests can be matched based on URL host, path, path prefix,
schemes, header and query values, and HTTP methods. You can also use custom matchers and routes as
subrouters with this package.
To install the mux package, run the following command in the terminal:
go get github.com/gorilla/mux
Lets configure routes with the mux package (see Listing 4-14).
Listing 4-14. Routing with the mux Package
func main() {
r := mux.NewRouter().StrictSlash(false)
r.HandleFunc("/api/notes", GetNoteHandler).Methods("GET")
r.HandleFunc("/api/notes", PostNoteHandler).Methods("POST")
r.HandleFunc("/api/notes/{id}", PutNoteHandler).Methods("PUT")
r.HandleFunc("/api/notes/{id}", DeleteNoteHandler).Methods("DELETE")
server := &http.Server{
Addr:    ":8080",
Handler: r,
}
server.ListenAndServe()
}
Here a mux.Router object is created by calling the NewRouter function and then specifying the routes
for the resources. You can match with HTTP methods when specifying URI patterns, so it is useful when
building RESTful applications. Because the mux package implements the http.Handler interface, you can
easily work with the standard http package. It provides a lot of extensibility, so you can easily replace or
extend many of its functionalities with your own packages and third-party packages.
Unlike other web-programming ecosystems, the idiomatic way of web development in Go is to
use standard library packages and third-party packages if required to extend the capabilities of existing
functionalities. When you choose third-party packages, it is important to choose those that are compatible
with the standard library packages. The mux package is a great example of this approach: it is compatible
with the http package because it implements the http.Handler interface.
Note Representational State Transfer (REST): If you want to know more about REST, I recommend that
you read Martin Fowler's article, Richardson Maturity Model: Steps Toward the Glory of REST. Access it here:
http://martinfowler.com/articles/richardsonMaturityModel.html
note.CreatedOn = time.Now()
id++
k := strconv.Itoa(id)
noteStore[k] = note
j, err := json.Marshal(note)
if err != nil {
panic(err)
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
w.Write(j)
}
//HTTP Get - /api/notes
func GetNoteHandler(w http.ResponseWriter, r *http.Request) {
var notes []Note
for _, v := range noteStore {
notes = append(notes, v)
}
w.Header().Set("Content-Type", "application/json")
j, err := json.Marshal(notes)
if err != nil {
panic(err)
}
w.WriteHeader(http.StatusOK)
w.Write(j)
}
//HTTP Put - /api/notes/{id}
func PutNoteHandler(w http.ResponseWriter, r *http.Request) {
var err error
vars := mux.Vars(r)
k := vars["id"]
var noteToUpd Note
// Decode the incoming Note json
err = json.NewDecoder(r.Body).Decode(&noteToUpd)
if err != nil {
panic(err)
}
if note, ok := noteStore[k]; ok {
noteToUpd.CreatedOn = note.CreatedOn
//delete existing item and add the updated item
delete(noteStore, k)
noteStore[k] = noteToUpd
} else {
log.Printf("Could not find key of Note %s to delete", k)
}
w.WriteHeader(http.StatusNoContent)
}
Here the struct field names start with uppercase letters so that they are exported; struct tags are used to
encode these fields with lowercase names in the JSON representation.
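The Note struct being described would look something like the following sketch; the exact JSON tag names are an assumption based on this description:
type Note struct {
	Title       string    `json:"title"`
	Description string    `json:"description"`
	CreatedOn   time.Time `json:"createdon"`
}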
This sample does not use any database storage, so a map is used as the persistence store for the sake
of the demo. An integer variable id is used to generate a key for the map:
//Store for the Notes collection
var noteStore = make(map[string]Note)
//Variable to generate key for the map
var id int = 0
URI                HTTP Method    Handler Function
/api/notes         Get            GetNoteHandler
/api/notes         Post           PostNoteHandler
/api/notes/{id}    Put            PutNoteHandler
/api/notes/{id}    Delete         DeleteNoteHandler
Let's have a look at the handler function for HTTP Post for creating a new Note resource:
//HTTP Post - /api/notes
func PostNoteHandler(w http.ResponseWriter, r *http.Request) {
var note Note
// Decode the incoming Note json
err := json.NewDecoder(r.Body).Decode(&note)
if err != nil {
panic(err)
}
note.CreatedOn = time.Now()
id++
k := strconv.Itoa(id)
noteStore[k] = note
j, err := json.Marshal(note)
if err != nil {
panic(err)
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
w.Write(j)
}
A pointer to the http.Request object is used to get information about the HTTP request. Here the incoming
JSON data is accessed from Request.Body and decoded into the Note resource using the json package.
The NewDecoder function creates a Decoder object, and its Decode method decodes the JSON data into
the given type (the Note type in this example). The id variable is incremented to generate a key value for
the noteStore map. The string type is used as the key for the noteStore map, so the int value is converted
to a string using the strconv.Itoa function. The new Note resource is added into the noteStore map with
the key created from the id variable. Finally, the response for the newly created Note resource is sent back
to the HTTP client as JSON data with the appropriate response headers. json.Marshal is used to convert the
Note object into JSON data.
Figure 4-5 shows testing of HTTP Post for the resource "/api/notes". You see the newly created
resource in the body with the HTTP status code 201, which represents the HTTP status "Created".
if note, ok := noteStore[k]; ok {
noteToUpd.CreatedOn = note.CreatedOn
//delete existing item and add the updated item
delete(noteStore, k)
noteStore[k] = noteToUpd
} else {
log.Printf("Could not find key of Note %s to delete", k)
}
w.WriteHeader(http.StatusNoContent)
}
Similar to the HTTP Put operation for the Note resource, the route value of id is taken and the Note
object is removed from the noteStore map by using the key value from the route variable id:
//HTTP Delete - /api/notes/{id}
func DeleteNoteHandler(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
k := vars["id"]
// Remove from Store
if _, ok := noteStore[k]; ok {
//delete existing item
delete(noteStore, k)
} else {
log.Printf("Could not find key of Note %s to delete", k)
}
w.WriteHeader(http.StatusNoContent)
}
This example application demonstrated the fundamental concepts for building RESTful APIs in Go by
using its standard library package net/http and the mux third-party package.
Summary
Go is a great technology stack for building web-based, back-end systems; and is especially excellent for
building RESTful APIs. The net/http package from the standard library provides the fundamental blocks
for building web applications in Go. The Go philosophy discourages using bigger frameworks for building
web applications; it encourages the use of the net/http package as the fundamental block to use third-party
packages and your own packages to extend the functionalities provided by the net/http package.
The http package has two major components for processing HTTP requests: http.ServeMux and
http.Handler. http.ServeMux is a request multiplexor that compares incoming HTTP requests against a
list of predefined URI resources and calls the associated handler for the resource requested by HTTP clients.
Handlers are responsible for writing response headers and bodies. By using third-party packages such as
mux, you can extend the capabilities of http.ServeMux.
The final section of this chapter explored the fundamental pieces of web development in Go by showing
the development of a JSON-based RESTful API.
Chapter 5
text/template Package
Using templates is a great way to build dynamic content: you provide data at runtime to generate dynamic
content in a predefined format. The Go standard library package html/template allows you to build
dynamic HTML pages by combining static content with dynamic content, parsing the templates with the
data structure that is provided at runtime.
You will have a look at the standard library package text/template before diving into the html/
template package. The html/template package provides the same interface as the text/template package;
the only difference between the two is that the html/template package parses the template and generates
the output in HTML, and the text/template package generates the output in text format. You can start
with the text/template package to understand the syntax of Go templates and then easily move to
html/template without any syntactic difference. The text/template package allows you to build
data-driven templates for generating textual output.
Listing 5-1 is an example program that generates textual output with struct object fields.
Listing 5-1. Applying Struct Fields into a Template
package main
import (
"log"
"os"
"text/template"
)
type Note struct {
Title       string
Description string
}
const tmpl = `Note - Title: {{.Title}}, Description: {{.Description}}`
func main() {
//Create an instance of Note struct
note := Note{"text/templates", "Template generates textual output"}
//create a new template with a name
t := template.New("note")
//parse some content and generate a template
t, err := t.Parse(tmpl)
if err != nil {
log.Fatal("Parse: ", err)
return
}
//Applies a parsed template to the data of Note object
if err := t.Execute(os.Stdout, note); err != nil {
log.Fatal("Execute: ", err)
return
}
}
You should get the following output:
Note - Title: text/templates, Description: Template generates textual output
In the previous listing, a struct named Note is declared, and a template is declared as a string constant:
const tmpl = `Note - Title: {{.Title}}, Description: {{.Description}}`
In the template, the Title and Description fields of the Note struct are mapped so that the textual output
with the values of the Note object can be rendered when the template is executed. The template block {{ . }} is
a context-aware block that is evaluated based on the execution context. Here the Note object is provided
when the template is executed, so the names after the dot (.) map to the field names of the Note object.
A new template with the name "note" is created. The New function returns the type *Template:
t := template.New("note")
The Parse method parses a string into a template:
t, err := t.Parse(tmpl)
Here the template is parsed from a string that was declared as a constant. To parse
templates from template files, use the ParseFiles method of *Template:
func (t *Template) ParseFiles(filenames ...string) (*Template, error)
The ParseGlob method parses the template definitions in the files identified by the pattern. Here is a
sample for parsing all template definition files in a folder with the extension .tmpl:
t, err := template.ParseGlob("templates/*.tmpl")
The previous code block parses all template definitions in the folder templates if the files have an
extension of .tmpl.
The Execute method applies a parsed template to the specified data object (here a Note object) and
writes the output to an output writer. If an error occurs during template execution or while writing its
output, execution stops, but partial results may already have been written to the output writer:
err1 := t.Execute(os.Stdout, note)
Here's a summary of the steps for generating textual output using text/template:
1. Declare a template for mapping with a data object.
2. Create a template (*Template) by calling the template.New function.
3. Parse a string into a template by calling the Parse method.
4. Execute the parsed template with the specified data object to render the textual
content with the values of the data object.
In the previous program, a simple struct object was applied to the template for generating the output.
Let's have a look at how to apply a collection of objects to templates for generating the textual output.
Listing 5-2 is an example program that renders a text template with a collection object.
Listing 5-2. Applying a Slice of Objects into a Template
package main
import (
"log"
"os"
"text/template"
)
In this listing, a slice of Note structs is provided as the data object. Here the template definition
block {{.}} represents the collection object, and you can iterate through the collection using the action
{{range .}}. All control structure (if, with, or range) definitions must close with {{end}}.
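A self-contained sketch of this pattern (an approximation, not the book's exact listing) follows:
package main

import (
	"log"
	"os"
	"text/template"
)

type Note struct {
	Title       string
	Description string
}

// The range action iterates over the slice; each iteration sets
// the dot (.) to the current Note value.
const tmpl = `{{range .}}Note - Title: {{.Title}}, Description: {{.Description}}
{{end}}`

func main() {
	notes := []Note{
		{"text/templates", "Template generates textual output"},
		{"net/http", "Package for building HTTP servers"},
	}
	t := template.Must(template.New("notes").Parse(tmpl))
	// Apply the parsed template to the slice of Note values.
	if err := t.Execute(os.Stdout, notes); err != nil {
		log.Fatal("Execute: ", err)
	}
}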
Declaring Variables
Variables can be declared in template definitions that can be referenced in the template definitions for later
use. To declare a variable, use $variable inside the {{ }} block. Listing 5-4 is an example.
Listing 5-4. Declaring a Variable and Referencing it Later
{{ $note := "Sample Note"}}
{{ $note }}
Here a $note variable is declared and referenced later by simply specifying the variable name.
The {{ $note }} command prints the value of the $note variable.
When you declare a variable with a range action, the variable takes the successive elements of the
collection on each iteration. A range action can declare two variables, for a key and a value element,
separated by a comma (see Listing 5-5).
Listing 5-5. Declaring Variables with a range Action
{{range $key,$value := . }}
If you use a range action with a map, the $key variable is the store key of the map, and the $value
variable is the store value element of each iteration.
Using Pipes
When you work with Go templates, you can perform actions one after another by using pipes: each
pipeline's output becomes the input of the following pipe. Listing 5-6 shows an example.
Listing 5-6. Using a Pipe in a Template
{{ eq $a $b | printf "a and b are equal: %t" }}
Here the result of comparing the variables $a and $b is piped into printf, which reports whether they are equal.
if err != nil {
log.Fatal("Execute: ", err)
}
}
You should get the following output:
Hello, &lt;script&gt;alert(&#39;XSS Injection&#39;)&lt;/script&gt;!
The template securely encodes the output: it replaces the script block with escaped, harmless text.
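A self-contained sketch showing this escaping behavior (an approximation of the program being described) is:
package main

import (
	"html/template"
	"log"
	"os"
)

func main() {
	// html/template escapes the data it inserts into the output,
	// so the script block is rendered as harmless text.
	t := template.Must(template.New("hello").Parse("Hello, {{.}}!"))
	if err := t.Execute(os.Stdout, "<script>alert('XSS Injection')</script>"); err != nil {
		log.Fatal("Execute: ", err)
	}
}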
main function
Listing 5-10 shows the entry point of the program, in which the HTTP request multiplexer is configured and
the HTTP server is started.
Listing 5-10. Entry Point of the Program in main.go
//Entry point of the program
func main() {
r := mux.NewRouter().StrictSlash(false)
fs := http.FileServer(http.Dir("public"))
r.Handle("/public/", fs)
r.HandleFunc("/", getNotes)
r.HandleFunc("/notes/add", addNote)
r.HandleFunc("/notes/save", saveNote)
r.HandleFunc("/notes/edit/{id}", editNote)
r.HandleFunc("/notes/update/{id}", updateNote)
r.HandleFunc("/notes/delete/{id}", deleteNote)
server := &http.Server{
Addr:
":8080",
Handler: r,
}
log.Println("Listening...")
server.ListenAndServe()
}
Four template definition files are created to render the HTML views:
index.html: Template definition file for generating contents for the Index page.
add.html: Template definition file for generating contents for the Add page.
edit.html: Template definition file for generating contents for the Edit page.
base.html: A nested template definition file used for generating all pages of the web
application. You provide the appropriate content page for rendering each web page.
<p>
<a href="/notes/add" >Add Note</a>
</p>
<div>
<table border="1">
<tr>
<th>Title</th>
<th>Description</th>
<th>Created On</th>
<th>Actions</th>
</tr>
{{range $key,$value := . }}
<tr>
<td> {{$value.Title}}</td>
<td>{{$value.Description}}</td>
<td>{{$value.CreatedOn}}</td>
<td>
<a href="/notes/edit/{{$key}}" >Edit</a> |
<a href="/notes/delete/{{$key}}" >Delete</a>
</td>
</tr>
{{end}}
</table>
</div>
{{end}}
Named definitions of two templates are defined: head and body. Both will be used by the base template
defined in base.html. When the Index page is executed, the map object that contains the object of the Note
struct in each element is passed. The range action is used to iterate through the map object. In the range
action, two variables are declared for the key and value element that is referenced inside the range action.
A range action must be closed with an end action.
When users request the route "/", the Index page is rendered by calling the getNotes request handler in
main.go. Listing 5-15 shows the getNotes function.
Listing 5-15. Handler Function for Route / in main.go
func getNotes(w http.ResponseWriter, r *http.Request) {
renderTemplate(w, "index", "base", noteStore)
}
In the getNotes handler function, the renderTemplate helper function is called to render the Index
page. renderTemplate is given the http.ResponseWriter object as the io.Writer, the string "index" for
looking up the parsed template for the Index page, the string "base" for specifying the template definition to
be executed, and the map object as the data object to apply to the template definitions.
When templates["index"] is accessed, you get the compiled template that was parsed from the
template definition files index.html and base.html. The base template definition is then executed from
the parsed template; it is a nested template that takes the head and body template definitions defined in the
index.html file.
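The templates map and the renderTemplate helper are defined elsewhere in main.go; a plausible sketch (assuming the html/template and net/http imports and a templates folder holding the definition files) is:
import (
	"html/template"
	"net/http"
)

// templates holds the parsed templates, keyed by page name.
var templates map[string]*template.Template

func init() {
	templates = make(map[string]*template.Template)
	// Each page template is parsed together with the base template.
	templates["index"] = template.Must(template.ParseFiles("templates/index.html", "templates/base.html"))
	templates["add"] = template.Must(template.ParseFiles("templates/add.html", "templates/base.html"))
	templates["edit"] = template.Must(template.ParseFiles("templates/edit.html", "templates/base.html"))
}

// renderTemplate looks up the parsed template by name and executes the
// given template definition (here "base") with the supplied data.
func renderTemplate(w http.ResponseWriter, name string, tmplDef string, data interface{}) {
	tmpl, ok := templates[name]
	if !ok {
		http.Error(w, "The template does not exist.", http.StatusInternalServerError)
		return
	}
	if err := tmpl.ExecuteTemplate(w, tmplDef, data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}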
Figure 5-2 shows the Index page with the list of Note objects.
The Add page is used to add a new Note object, and you don't need to provide any data object to the
template definition, so nil is passed as the data object. The string "add" is provided as the key of the map
object for getting the parsed template, which was parsed by using the template definition files add.html and
base.html, for rendering the Add page.
Figure 5-3 shows the Add page that provides the user interface for creating a new Note.
The saveNote handler function parses the form values from the *http.Request object by calling the
ParseForm method; the form field values are then read by calling PostFormValue("element_name"). In
Listing 5-18, the values are read from the HTML form elements title and description. The value of id is
incremented to generate a key for the noteStore map object, and finally the newly added Note object is
added into the noteStore map object with the generated key. The request is then redirected to "/" so that
the Index page shows the newly added data.
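A plausible sketch of the saveNote handler described here (assuming the Note fields shown in index.html and the net/http, strconv, and time imports) is:
func saveNote(w http.ResponseWriter, r *http.Request) {
	// Parse the posted form values.
	r.ParseForm()
	title := r.PostFormValue("title")
	description := r.PostFormValue("description")
	note := Note{Title: title, Description: description, CreatedOn: time.Now()}
	// Generate a new key and store the Note.
	id++
	k := strconv.Itoa(id)
	noteStore[k] = note
	// Redirect to the Index page to show the newly added Note.
	http.Redirect(w, r, "/", http.StatusFound)
}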
if note, ok := noteStore[k]; ok {
viewModel = EditNote{note, k}
} else {
http.Error(w, "Could not find the resource to edit.", http.StatusBadRequest)
}
renderTemplate(w, "edit", "base", viewModel)
}
An EditNote struct instance is provided as the data object to the template definition. The string "edit"
is provided as the key of the map object for getting the parsed template, which was parsed by using template
definition files edit.html and base.html, for rendering the Edit page.
Figure 5-4 shows the Edit page that provides the user interface to edit an existing Note.
Summary
This chapter showed how to work with Go templates by developing a web application. When you work
with data-driven web applications, you have to leverage templates to render HTML pages in which you can
combine static contents with dynamic contents by applying a data object to the templates.
The html/template package is used to render HTML pages. It also provides a security mechanism
against various code injections while rendering the HTML output. Both html/template and text/template
provide the same interface for the template authors, in which html/template generates HTML output,
and text/template generates textual output. By leveraging the standard library packages net/http and
html/template, you can build full-fledged web applications in Go.
Chapter 6
HTTP Middleware
The last two chapters explored various aspects of building web applications and web APIs. This chapter
takes a look at HTTP middleware, which simplifies development efforts when real-world web applications
are built. The Go developer community has not been very interested in adopting full-fledged web application
frameworks for building web applications. Instead, they prefer to use the standard library packages as the
fundamental block, along with a few essential third-party libraries such as Gorilla mux. Writing and using
HTTP middleware is an essential part of this strategy. You can implement many cross-cutting behaviors, such
as security, HTTP request and response logging, compression of HTTP responses, and caching, as middleware
components; and these middleware components can be applied to many application handlers or across all
application handlers.
invokes the given handler (the application handler or another wrapper handler) by calling the ServeHTTP
method:
if p := strings.TrimPrefix(r.URL.Path, prefix); len(p) < len(r.URL.Path) {
r.URL.Path = p
h.ServeHTTP(w, r)
} else {
NotFound(w, r)
}
You can easily write HTTP middleware in the same way the http package implements wrapper
handler functions. You write middleware by implementing functions with the signature
func(http.Handler) http.Handler. If you want to pass any values into a middleware function, you can
provide them as additional function parameters along with the http.Handler, as the StripPrefix function
does in the http package.
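The general shape of such a middleware function, sketched with an illustrative name, is:
// myMiddleware wraps an http.Handler and runs logic before and
// after the wrapped handler executes.
func myMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// logic before executing the next handler
		next.ServeHTTP(w, r)
		// logic after executing the next handler
	})
}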
19:40:10 Started GET /
19:40:10 Executing index handler
19:40:10 Completed / in 0
19:40:18 Started GET /about
19:40:18 Executing about handler
19:40:18 Completed /about in 1.0005ms
In the preceding program, loggingHandler is the HTTP middleware that wraps the application
handlers for the "/" and "/about" routes. Middleware allows you to reuse shared behavior across multiple
handlers. In the logging middleware handler, log messages are written before and after executing the
application handler. The middleware function logs the HTTP method and URL path of the request before
invoking the application handler, and logs the time it took to execute the application handler afterward.
By calling next.ServeHTTP(w, r), the middleware function executes the application handler function:
func loggingHandler(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
start := time.Now()
log.Printf("Started %s %s", r.Method, r.URL.Path)
next.ServeHTTP(w, r)
log.Printf("Completed %s in %v", r.URL.Path, time.Since(start))
})
}
The application handler functions are converted to the http.HandlerFunc type and passed into the
logging middleware handler function:
indexHandler := http.HandlerFunc(index)
aboutHandler := http.HandlerFunc(about)
http.Handle("/", loggingHandler(indexHandler))
http.Handle("/about", loggingHandler(aboutHandler))
When you make a request to "http://localhost:8080/message" with the wrong value for
the querystring variable password (let's say you request
"http://localhost:8080/message?password=wrongpass"), you should get the following output:
2015/04/30 11:11:06 MiddlewareFirst - Before Handler
2015/04/30 11:11:06 MiddlewareSecond - Before Handler
2015/04/30 11:11:06 Failed to authorize to the system
2015/04/30 11:11:06 MiddlewareFirst - After Handler
When you make a request for "http://localhost:8080/message" by providing the correct value for the
querystring variable password ("http://localhost:8080/message?password=pass123"), you should get the
following output:
2015/04/30 11:11:35 MiddlewareFirst - Before Handler
2015/04/30 11:11:35 MiddlewareSecond - Before Handler
2015/04/30 11:11:35 Authorized to the system
2015/04/30 11:11:35 Executing message Handler
2015/04/30 11:11:35 MiddlewareSecond - After Handler
2015/04/30 11:11:35 MiddlewareFirst - After Handler
You can easily understand the control flow of the middleware handlers by looking at the log messages
generated by the program. Here middlewareFirst and middlewareSecond are the wrapper handlers that are
applied to the application handlers. In the middleware function middlewareSecond, the value of the
querystring variable password is validated if the request URL path is "/message".
Here is the control flow that happens when the program runs and you make requests to the "/" and
"/message" routes:
1. The control flow goes to the middlewareFirst middleware function.
2. After a log message is written (before executing the next handler) in the
middlewareFirst function, the control flow goes to the middlewareSecond
middleware function when next.ServeHTTP(w, r) is called.
3. After a log message is written (before executing the next handler) in the
middlewareSecond function, the control flow goes to the application handler
when next.ServeHTTP(w, r) is called:
a. If the request URL path is "/", the index application handler is invoked
without any authorization.
b. If the request URL path is "/message", the message application handler is
invoked only if the querystring variable password has the correct value;
otherwise the middleware returns without calling the handler.
4. After invoking the application handler from the middlewareSecond function (if
the request is validated), the control flow goes back to the middlewareSecond
function and invokes the logic after the next.ServeHTTP(w, r) code block.
5. After returning from the middlewareSecond handler, the control flow goes
back to the middlewareFirst function and invokes the logic after the
next.ServeHTTP(w, r) code block.
You can exit from the middleware handler chain at any time, as the middlewareSecond handler does when the
request is not valid. In that case, the control flow goes back to the previous handler in the request-handling
cycle, if there is one. When "/message" is requested without a valid querystring value, the application handler
is never invoked because the middleware handler returns early. The key point is that you can execute logic
both before and after invoking the next handler in the chain.
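A sketch of middlewareFirst and middlewareSecond consistent with the log output shown above (an approximation, not necessarily the book's exact code, assuming the log and net/http imports) is:
func middlewareFirst(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Println("MiddlewareFirst - Before Handler")
		next.ServeHTTP(w, r)
		log.Println("MiddlewareFirst - After Handler")
	})
}

func middlewareSecond(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Println("MiddlewareSecond - Before Handler")
		if r.URL.Path == "/message" {
			// Validate the password querystring value before
			// allowing the request to reach the handler.
			if r.URL.Query().Get("password") != "pass123" {
				log.Println("Failed to authorize to the system")
				return
			}
			log.Println("Authorized to the system")
		}
		next.ServeHTTP(w, r)
		log.Println("MiddlewareSecond - After Handler")
	})
}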
Installing Alice
To install Alice, run the following command in the terminal:
$ go get github.com/justinas/alice
log.Println("Listening...")
server.ListenAndServe()
}
In this program, two Gorilla handlers are used for logging requests and compressing responses:
LoggingHandler and CompressHandler. An HTML string is provided in the responses to verify the
compression of the HTTP responses. Run the program, make requests to "/" and "/about", and
watch the log file and the HTTP responses.
Figure 6-1 shows a screenshot from Fiddler, an HTTP debugging tool, showing that the web responses
are compressed with gzip encoding.
Here commonHandlers is defined as the common handlers for use with multiple application handlers:
indexHandler := http.HandlerFunc(index)
aboutHandler := http.HandlerFunc(about)
commonHandlers := alice.New(loggingHandler, handlers.CompressHandler)
http.Handle("/", commonHandlers.ThenFunc(indexHandler))
http.Handle("/about", commonHandlers.ThenFunc(aboutHandler))
The Alice package provides a fluent API for working with middleware functions that allows you to
chain middleware functions in an elegant way. The Alice package is a very lightweight library that has fewer
than 100 lines of code.
Installing Negroni
To install Negroni, run the following command in the terminal:
$ go get github.com/codegangsta/negroni
To work with the Negroni package, the github.com/codegangsta/negroni package must be added to
the import list:
import "github.com/codegangsta/negroni"
Negroni instances can also be created by calling the negroni.New function, which returns a new
Negroni instance without any middleware preconfigured:
n := negroni.New()
The Run method of a Negroni instance is a convenience function that runs the Negroni stack as an
HTTP server. The address string takes the same format as http.ListenAndServe:
n.Run(":8080")
Listing 6-9. Simple HTTP Server with Negroni and Gorilla mux
package main
import (
"fmt"
"net/http"
"github.com/codegangsta/negroni"
"github.com/gorilla/mux"
)
func index(w http.ResponseWriter, req *http.Request) {
fmt.Fprintf(w, "Welcome!")
}
func main() {
router := mux.NewRouter()
router.HandleFunc("/", index)
n := negroni.Classic()
n.UseHandler(router)
n.Run(":8080")
}
The Gorilla mux.Router object is provided as the handler to be used with Negroni. You can provide any object that implements the http.Handler interface to the UseHandler method of the Negroni instance.
Registering Middleware
Negroni manages middleware flow through the negroni.Handler interface.
Listing 6-10 shows the definition of the negroni.Handler interface:
Listing 6-10. negroni.Handler Interface
type Handler interface {
ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc)
}
Listing 6-11 provides the pattern for writing middleware handler functions for Negroni that work with the negroni.Handler interface.
Listing 6-11. Pattern for a Negroni Middleware Handler Function
func myMiddleware(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
// logic before executing the next handler
next(w, r)
// logic after running the next handler
}
The function signature of a Negroni-compatible middleware function is different from the functions
written in the previous sections. The Negroni middleware stack uses the following signature to write
middleware functions:
func myMiddleware(w http.ResponseWriter, r *http.Request, next http.HandlerFunc)
Here you can call the next handler in the middleware stack by invoking the http.HandlerFunc value, passing it the http.ResponseWriter and *http.Request objects:
// logic before executing the next handler
next(w, r)
// logic after running the next handler
You can map the middleware function to the Negroni handler chain with the Use function, which takes
an argument of negroni.Handler. The Use function adds a negroni.Handler into the middleware stack
(see Listing 6-12). Handlers are invoked in the order in which they are added to a Negroni instance.
Listing 6-12. Registering a Middleware Function with Negroni
n := negroni.New()
n.Use(negroni.HandlerFunc(myMiddleware))
The middleware function is converted into a negroni.HandlerFunc type and added to the Negroni
middleware stack. HandlerFunc is an adapter that allows ordinary functions to be used as Negroni handlers.
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/favicon.ico", iconHandler)
mux.HandleFunc("/", index)
mux.HandleFunc("/message", message)
n := negroni.Classic()
n.Use(negroni.HandlerFunc(middlewareFirst))
n.Use(negroni.HandlerFunc(middlewareSecond))
n.UseHandler(mux)
n.Run(":8080")
}
Run the program and make a request to "/"; then make requests to "/message", first providing a wrong value for the querystring variable password and then providing the value "pass123". You should get log messages similar to these:
[negroni] listening on :8080
[negroni] Started GET /
2015/04/30 14:45:44 MiddlewareFirst - Before Handler
2015/04/30 14:45:44 MiddlewareSecond - Before Handler
2015/04/30 14:45:44 Executing index Handler
2015/04/30 14:45:44 MiddlewareSecond - After Handler
2015/04/30 14:45:44 MiddlewareFirst - After Handler
[negroni] Completed 200 OK in 1.0008ms
[negroni] Started GET /message
2015/04/30 14:45:52 MiddlewareFirst - Before Handler
2015/04/30 14:45:52 MiddlewareSecond - Before Handler
2015/04/30 14:45:52 Failed to authorize to the system
2015/04/30 14:45:52 MiddlewareFirst - After Handler
[negroni] Completed 0 in 1.0008ms
To rewrite an existing program to work with Negroni, the middleware handler functions must be
modified to be compatible with the negroni.Handler interface:
func middlewareFirst(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
//logic before next handler
next(w, r)
//logic after next handler
}
func middlewareSecond(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
//logic before next handler
next(w, r)
//logic after next handler
}
After the middleware handler functions are compatible with the negroni.Handler interface, they need to be added to the Negroni middleware stack by using the Use function of the Negroni instance. Handlers are invoked in the order in which they are added to the Negroni middleware stack:
n := negroni.Classic()
n.Use(negroni.HandlerFunc(middlewareFirst))
n.Use(negroni.HandlerFunc(middlewareSecond))
As the Negroni instance is created with the Classic function, the following built-in middleware will be available on the middleware stack:
negroni.Recovery
negroni.Logger
negroni.Static
You can also add middleware functions when you create Negroni instances by using the New function:
n := negroni.New(
negroni.NewRecovery(),
negroni.HandlerFunc(middlewareFirst),
negroni.HandlerFunc(middlewareSecond),
negroni.NewLogger(),
negroni.NewStatic(http.Dir("public")),
)
Negroni provides a very simple and elegant library for working with HTTP middleware functions. It is a tiny library, but it is really helpful when you build real-world web applications and RESTful services in Go. One of the major advantages of Negroni is that it is fully compatible with the net/http library. If you don't want to use a full-fledged web development framework to build web applications in Go, writing and using HTTP middleware with Negroni is a good choice for building efficient web applications, and it helps you achieve better reusability and maintainability.
Summary
Using HTTP middleware is an important practical approach for building real-world applications in Go. Middleware is a pluggable and self-contained piece of code that wraps application handlers; it can be used for implementing shared behaviors across all application handlers or only in specific application handlers.
HTTP middleware allows you to build applications with pluggable logic, which gives you a greater level of reusability and maintainability. Using HTTP middleware, you can execute some logic before or after HTTP request handlers. Because HTTP middleware handlers are pluggable components, you can add or remove them at any time.
The Alice third-party package allows you to chain middleware handlers with an elegant syntax using its fluent interface. The Negroni third-party package is a great library for handling middleware functions, and it also comes with some default middleware functions. Negroni provides an idiomatic approach to using HTTP middleware in Go.
When you build real-world applications, you may need to share values among various middleware handlers and application handlers. The third-party context package from the Gorilla web toolkit can be used for sharing values during a request lifetime. A great web development stack in Go is to use net/http as the fundamental programming block, Negroni as the handler for working with HTTP middleware, Gorilla mux as the router, and Gorilla context as the mechanism for sharing values during the request lifetime. With this web development stack, you don't need a full-fledged web development framework.
Chapter 7
Authentication Approaches
There are various approaches available for implementing authentication in applications. Typically, user credentials are stored in the application's database. The web server takes the username and password through an HTML form and then validates these credentials against the credentials stored in the database. In modern applications, people also use social identity providers such as Facebook, Twitter, LinkedIn, and Google for authentication, which helps applications avoid maintaining a separate user identity system for each individual application. End users don't need to remember a user ID and password for each individual application; they can use their existing social identities to authenticate to applications.
Modern web development is moving toward an API-based approach in this mobility era, in which APIs are consumed from both mobile clients and web clients. Modern web applications therefore need more reliable security systems. APIs are developed based on a stateless design, which should be considered when authentication systems for APIs are designed. So you can't use the same approach for APIs that you have been using for traditional web applications.
Once users have logged in to the system, they must be able to access web server resources in subsequent HTTP requests without providing user credentials for each HTTP request. There are two kinds of approaches for keeping the user logged in for subsequent HTTP requests. A conventional approach is to use an HTTP session and cookies, and a modern approach is to use an access token generated by the web server. A token-based approach is a convenient solution for web APIs; an HTTP session and cookies are appropriate for traditional web applications.
Cookie-Based Authentication
A cookie-based approach is the most widely used method of implementing authentication in web applications. In this approach, HTTP cookies are used to authenticate users on every HTTP request after they have logged in to the system with user credentials.
Figure 7-1 illustrates the cookie-based authentication workflow.
In cookie-based authentication, the web server first validates the username and password, which are sent through an HTML form. Once the user credentials are validated against the credentials stored in the database, the HTTP server sets a session cookie that typically contains the user information. For each subsequent HTTP request, the web server can validate the HTTP request based on the value contained in the cookie. Some server-side technologies provide a very rich infrastructure for this kind of authentication, in which you can implement cookie-based authentication by simply calling API methods. In other server-side technologies and frameworks, you manually write the code for writing cookies and storing values into session storage to implement the authentication.
A cookie-based approach combined with sessions is a good fit for traditional web applications in which you implement everything on the server side, including the logic for UI rendering, and the web application is accessed from normal desktop browsers.
In Go, you can use packages such as sessions (www.gorillatoolkit.org/pkg/sessions), which is
provided by the Gorilla web toolkit, to implement authentication using cookie-based sessions.
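As a minimal sketch of that approach (the session name, store key, and handlers are assumptions for illustration), a login handler might store an authentication flag in a cookie-backed session and check it on subsequent requests:
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/sessions"
)

// The cookie store key is an assumption; use a strong secret in a real application.
var store = sessions.NewCookieStore([]byte("a-very-secret-key"))

func login(w http.ResponseWriter, r *http.Request) {
	session, _ := store.Get(r, "user-session")
	// Assume the credentials posted from the HTML form have been validated here.
	session.Values["authenticated"] = true
	session.Save(r, w)
	fmt.Fprintln(w, "Logged in")
}

func secret(w http.ResponseWriter, r *http.Request) {
	session, _ := store.Get(r, "user-session")
	if auth, ok := session.Values["authenticated"].(bool); !ok || !auth {
		http.Error(w, "Forbidden", http.StatusForbidden)
		return
	}
	fmt.Fprintln(w, "Protected resource")
}

func main() {
	http.HandleFunc("/login", login)
	http.HandleFunc("/secret", secret)
	http.ListenAndServe(":8080", nil)
}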
Using a cookie-based approach to implement authentication for a web API is not a good idea for several reasons. When you build APIs, a stateless design is the ideal design choice. If you use a cookie-based approach, you need to maintain a session store for your API, which violates the goal of a stateless API. The cookie-based approach also doesn't work well when web server resources are accessed from different domains, due to cross-origin resource sharing (CORS) constraints.
Token-Based Authentication
Methods of developing web applications have changed in the past few years. The era of mobile application
development has also changed the way web-based systems are developed. Modern web development is
moving toward an API-driven approach in which a web API (often a RESTful API) is provided on the server
side, and web applications and mobile applications are built by consuming the web API.
A token-based approach is a modern approach for implementing authentication in web applications and web APIs. In a token-based approach (see Figure 7-2), an access token is used for authentication on every HTTP request. In this approach, you still use a username and password to log in to the system. If the user gets access to the system, the authentication system generates an access token to be used for authentication in subsequent HTTP requests to the web server. The access token is a securely signed string that can be used for accessing HTTP resources on every HTTP request. Typically, the access token is sent in an HTTP Authorization header as a bearer token that can be validated at the web server.
4. The client application provides the access token on every HTTP request to the web server. An HTTP header is used to transmit the access token to the web server.
5. The web server validates the access token provided by the client application and then provides the web server resources if the access token is valid.
A token-based approach is very convenient when you build mobile applications, for which you don't need to leverage cookies on the client. When you use APIs as your server-side implementation, you don't need to maintain session stores, which allows you to build stateless APIs on the server side that can be easily consumed from a variety of client applications without any hurdles. Another benefit of using a token-based approach is that you can easily make AJAX calls to any web server, regardless of domain, because you use an HTTP header to make the HTTP requests.
A token-based approach is an ideal solution for providing security to RESTful APIs. In the web technology space, Go is primarily used for building back-end APIs (often RESTful APIs), so the focus of this chapter is primarily on the token-based approach.
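To make the bearer-token mechanics concrete, here is a minimal client-side sketch; the URL and token value are placeholders, and the server is assumed to validate the Authorization header as described above:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// accessToken would normally be obtained from a login request.
	accessToken := "JWT-or-other-access-token"

	req, err := http.NewRequest("GET", "http://localhost:8080/api/tasks", nil)
	if err != nil {
		panic(err)
	}
	// Send the access token as a bearer token in the Authorization header.
	req.Header.Set("Authorization", "Bearer "+accessToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}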
Understanding OAuth 2
OAuth 2 is an open specification for authentication. The OAuth 2.0 authorization framework enables a
third-party application to obtain limited access to an HTTP service such as Facebook, Twitter, GitHub, and
Google. The most important thing is that OAuth 2 is a specification for authentication flow.
OAuth 2 provides the authorization flow for web applications, desktop applications, and mobile applications. When you build applications, you can delegate user authentication to social identity providers such as Facebook, Twitter, GitHub, and Google. You register your application with an identity provider to authorize it to access the user account. When you register your application with the identity provider, it typically gives you a client ID and a client secret key for obtaining access to the user account at the identity provider. After a successful login, the authorization server gives you an access token that can be used for accessing protected resources on the web server.
Section 7.2.2 discussed the token-based approach workflow for authentication. The use of a bearer
token is a specification defined in the OAuth 2 authorization framework, which defines how to use bearer
tokens in HTTP requests to access protected resources in OAuth 2.
Several OAuth 2 service providers are available for authentication. Because OAuth 2 is an open standard
for authorization, you can implement these standards for your web APIs as an authentication mechanism for
various client applications, including mobile and web applications.
Note OAuth 2.0 is the next evolution of the OAuth protocol, which was originally created in 2006. OAuth 2.0 focuses on client developer simplicity while providing specific authorization flows for web applications, desktop applications, and mobile applications. The final version of the OAuth 2 specification can be found at http://tools.ietf.org/html/rfc6749.
if err != nil {
log.Fatal(err)
}
}
func callbackAuthHandler(res http.ResponseWriter, req *http.Request) {
user, err := gothic.CompleteUserAuth(res, req)
if err != nil {
fmt.Fprintln(res, err)
return
}
t, _ := template.New("userinfo").Parse(userTemplate)
t.Execute(res, user)
}
func indexHandler(res http.ResponseWriter, req *http.Request) {
t, _ := template.New("index").Parse(indexTemplate)
t.Execute(res, nil)
}
func main() {
//Register providers with Goth
goth.UseProviders(
twitter.New(config.TwitterKey, config.TwitterSecret, "http://localhost:8080/auth/twitter/callback"),
facebook.New(config.FacebookKey, config.FacebookSecret, "http://localhost:8080/auth/facebook/callback"),
)
//Routing using Pat package
r := pat.New()
r.Get("/auth/{provider}/callback", callbackAuthHandler)
r.Get("/auth/{provider}", gothic.BeginAuthHandler)
r.Get("/", indexHandler)
server := &http.Server{
	Addr:    ":8080",
	Handler: r,
}
log.Println("Listening...")
server.ListenAndServe()
}
//View templates
var indexTemplate = `
<p><a href="/auth/twitter">Log in with Twitter</a></p>
<p><a href="/auth/facebook">Log in with Facebook</a></p>
`
var userTemplate = `
<p>Name: {{.Name}}</p>
<p>Email: {{.Email}}</p>
<p>NickName: {{.NickName}}</p>
<p>Location: {{.Location}}</p>
<p>AvatarURL: {{.AvatarURL}} <img src="{{.AvatarURL}}"></p>
<p>Description: {{.Description}}</p>
<p>UserID: {{.UserID}}</p>
<p>AccessToken: {{.AccessToken}}</p>
`
Twitter and Facebook are used to log in to the example application. To do this, you register the application with the corresponding identity provider. When you register an application with an identity provider, you get a client ID and a secret key. The Twitter and Facebook providers are registered with the Goth package by providing a client ID, client secret key, and callback URL.
After a successful login with an OAuth 2 service provider, the server redirects to the callback URL:
//Register OAuth2 providers with Goth
goth.UseProviders(
twitter.New(config.TwitterKey, config.TwitterSecret, "http://localhost:8080/auth/twitter/callback"),
facebook.New(config.FacebookKey, config.FacebookSecret, "http://localhost:8080/auth/facebook/callback"),
)
The client ID and client secret key are read from a configuration file in the init function.
Run the program and navigate to http://localhost:8080/. Figure 7-3 shows the home page of the application, which provides authentication with Twitter and Facebook.
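The configuration struct and init function are not shown in the fragment above; here is a minimal sketch of what they might look like, assuming the keys are kept in a JSON file named config.json (the file name and field names are assumptions):
package main

import (
	"encoding/json"
	"log"
	"os"
)

// configuration holds the OAuth 2 client credentials read from config.json.
type configuration struct {
	TwitterKey, TwitterSecret   string
	FacebookKey, FacebookSecret string
}

var config configuration

// init reads config.json and decodes it into the config variable before main runs.
func init() {
	file, err := os.Open("config.json")
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()
	if err := json.NewDecoder(file).Decode(&config); err != nil {
		log.Fatal(err)
	}
}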
Here is the definition of User struct from the source of the Goth package:
type User struct {
	RawData           map[string]interface{}
	Email             string
	Name              string
	NickName          string
	Description       string
	UserID            string
	AvatarURL         string
	Location          string
	AccessToken       string
	AccessTokenSecret string
}
Finally, the view template is rendered by providing the User struct. Here is the view template used to
render the UI to display user information:
var userTemplate = `
<p>Name: {{.Name}}</p>
<p>Email: {{.Email}}</p>
<p>NickName: {{.NickName}}</p>
<p>Location: {{.Location}}</p>
<p>AvatarURL: {{.AvatarURL}} <img src="{{.AvatarURL}}"></p>
<p>Description: {{.Description}}</p>
<p>UserID: {{.UserID}}</p>
<p>AccessToken: {{.AccessToken}}</p>
`
Figure 7-5 shows the user information obtained from Twitter.
The jwt-go library supports signing algorithms including RS256 (RSA with SHA-256) and HS256 (HMAC with SHA-256).
To install the jwt-go package, enter the following command in the terminal:
go get github.com/dgrijalva/jwt-go
To work with the jwt-go package, you must add github.com/dgrijalva/jwt-go to the import list:
import "github.com/dgrijalva/jwt-go"
Let's write an example API that works with JWT tokens using the jwt-go package. The steps of the authentication flow in the example program are the following:
1. The API server validates the user credentials (username and password) provided by the client application.
2. If the login credentials are valid, the API server generates a JWT and sends it to the client application as an access token.
3. The client application can store the JWT in client storage. HTML5 local storage is generally used for storing JWTs.
4. To obtain access to protected resources of the API server, the client application sends the access token as a bearer token in the HTTP Authorization header (Authorization: Bearer "Access_Token") on every HTTP request.
Before starting the example program, let's generate the RSA keys that the application will use to sign the tokens. The RSA keys can be generated by using the openssl command-line tool.
Run the following commands:
openssl genrsa -out app.rsa 1024
openssl rsa -in app.rsa -pubout > app.rsa.pub
These commands generate a private key and its public counterpart; 1024 is the key size in bits.
Listing 7-2 shows the example program.
Listing 7-2. JWT Token-based Authentication with the jwt-go Package
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"log"
"net/http"
"time"
jwt "github.com/dgrijalva/jwt-go"
"github.com/gorilla/mux"
)
// using asymmetric crypto/RSA keys
// location of the files used for signing and verification
const (
	privKeyPath = "keys/app.rsa"     // openssl genrsa -out app.rsa 1024
	pubKeyPath  = "keys/app.rsa.pub" // openssl rsa -in app.rsa -pubout > app.rsa.pub
)
// verify key and sign key
var (
verifyKey, signKey []byte
)
//struct User for parsing login credentials
type User struct {
UserName string `json:"username"`
Password string `json:"password"`
}
// read the key files before starting http handlers
func init() {
var err error
signKey, err = ioutil.ReadFile(privKeyPath)
if err != nil {
log.Fatal("Error reading private key")
return
}
The ParseFromRequest function automatically looks at the HTTP header for the access token and
validates the string with a corresponding verification key. By checking the error, you can see whether the
token has expired. The jwt-go package is very helpful when you work with JWT tokens.
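As a minimal sketch of such a validation handler (the handler name and response messages are assumptions, and the imports and verifyKey variable from the listing above are assumed), the error returned by ParseFromRequest can be inspected to see whether the token has expired:
// authHandler validates the JWT sent with the request before serving the resource.
func authHandler(w http.ResponseWriter, r *http.Request) {
	// validate the token in the Authorization header with the public verification key
	token, err := jwt.ParseFromRequest(r, func(token *jwt.Token) (interface{}, error) {
		return verifyKey, nil
	})
	switch {
	case err == nil && token.Valid:
		fmt.Fprintln(w, "Authorized to the system")
	case err != nil:
		if ve, ok := err.(*jwt.ValidationError); ok && ve.Errors&jwt.ValidationErrorExpired != 0 {
			// the error flags tell you the token has expired
			http.Error(w, "Access token is expired", http.StatusUnauthorized)
			return
		}
		http.Error(w, "Error while parsing the access token", http.StatusUnauthorized)
	default:
		http.Error(w, "Invalid access token", http.StatusUnauthorized)
	}
}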
Figure 7-7. API server sends JWT access token as the response
If the login is valid, the API server sends a JWT to the client as an access token for subsequent HTTP requests.
Let's authenticate to the API server with the access token received from the HTTP POST to "/login". Send a request to "/auth", providing the access token in the HTTP Authorization header in the following format (see Figure 7-8):
Authorization: Bearer "JWT Access Token"
Figure 7-8. Authenticating into the API server with a JWT access token
Figure 7-9 displays the response that shows a successful authentication into the system.
Summary
When you build web applications and RESTful APIs, security is one of the most important factors for a successful application. If you can't protect your application from unauthorized access, the entire application doesn't make sense, no matter how good the user experience is.
You usually use one of two kinds of authentication models for authorizing HTTP requests to access the protected resources of a server: cookie-based authentication or token-based authentication. Cookie-based authentication is the conventional approach; it works well for traditional, stand-alone web applications in which the server-side implementation provides everything, including UI rendering.
A token-based approach is great for adding authentication to RESTful APIs that can be accessed from various client applications, including mobile and web applications. A token-based approach is a very convenient model for mobile applications because tokens can be sent through an HTTP header as bearer tokens. When a token-based approach is used, Ajax requests can be sent to any API server, regardless of domain constraints, thus avoiding CORS issues.
OAuth 2 is an open standard that allows applications to delegate authentication to various OAuth 2 service providers such as LinkedIn, GitHub, Twitter, Facebook, and Google. Several OAuth 2 service providers are available as identity providers. The Go third-party package Goth allows you to implement authentication with various OAuth 2 service providers. In the OAuth 2 authentication flow, an access token is used to obtain access to protected resources.
JSON Web Token (JWT) is an open standard for generating and using bearer tokens for authentication between two parties. JWT is a compact, URL-safe way to represent claims to be transferred between two parties. The third-party Go package jwt-go provides various utility functions for working with JWTs. It allows you to easily generate JWTs and verify tokens for authentication. When you use token-based authentication, you might have to apply authentication logic to multiple application handlers. In this context, you can use HTTP middleware to implement the authentication logic, which can then decorate multiple application handlers.
Chapter 8
Introduction to MongoDB
MongoDB is a popular NoSQL database that is widely used for modern web applications. MongoDB is an open-source document database that provides high performance, high availability, and automatic scaling. MongoDB is a nonrelational database that stores data as documents in a binary representation called Binary JSON (BSON). In short, MongoDB is a data store for BSON documents. Go structs can be easily serialized into BSON documents.
For more details on MongoDB and for download and installation instructions, check out the MongoDB web site at www.mongodb.org/.
Note A NoSQL (often interpreted as Not Only SQL) database provides a mechanism for data storage and
retrieval. It provides an alternative approach to the tabular relations used in relational databases to design
data models. A NoSQL database is designed to cope with modern application development challenges such
as dealing with large volumes of data with easier scalability and performance. When compared with relational
databases, the NoSQL database can provide high performance, better scalability, and cheaper storage.
NoSQL databases are available in different types: document databases, graph stores, key-value stores, and
wide-column stores. MongoDB is a popular document database.
A MongoDB database holds a set of collections, each of which consists of documents. A document comprises one or more fields in which you store data as a set of key-value pairs. In MongoDB, you persist documents into collections, which are analogous to tables in a relational database. Documents are analogous to rows, and fields within a document are analogous to columns.
Unlike relational databases, MongoDB provides a greater level of flexibility for the schema of documents, which enables you to change the document schema whenever the data model evolves. When you work with MongoDB from Go, your Go structs can become the schema of your documents; you can change the schema any time you want to alter the structure of your application data. Documents in a collection don't need to have the same set of fields or schema, and common fields in a collection's documents can hold different types of data. This dynamic schema feature is very useful when you build applications in an evolving way with many development iterations.
Installing mgo
To install the mgo package, run the following command in the terminal:
go get gopkg.in/mgo.v2
To work with mgo, you must add gopkg.in/mgo.v2 and its subpackage gopkg.in/mgo.v2/bson to the
import list:
import (
"gopkg.in/mgo.v2"
"gopkg.in/mgo.v2/bson"
)
Connecting to MongoDB
To get started working with MongoDB, you have to obtain a MongoDB Session using the Dial function
(see Listing 8-1). The Dial function establishes the connection with the MongoDB server defined by the url
parameter.
Listing 8-1. Connecting to MongoDB Server and Obtaining a Session
session, err := mgo.Dial("localhost")
The Dial function can also be used to connect with a cluster of servers (see Listing 8-2). This is useful
when you scale a MongoDB database into a cluster of servers.
Listing 8-2. Connecting to a Cluster of MongoDB Servers and Obtaining a Session
session, err := mgo.Dial("server1.mongolab.com,server2.mongolab.com")
You can also use the DialWithInfo function to establish a connection to one server or a cluster of servers (see Listing 8-3). The difference from the Dial function is that you can provide extra information to the cluster by using a value of the DialInfo type. DialWithInfo establishes a new Session to the cluster of MongoDB servers identified by the DialInfo value. The DialWithInfo function lets you customize values when you establish a connection to a server. When you establish a connection using the Dial function, the default timeout value is 10 seconds, so Dial fails if it can't reach a server within 10 seconds. When you establish a connection using the DialWithInfo function, you can specify the value of the Timeout property.
Listing 8-3. Connecting to a Cluster of MongoDB Servers Using DialWithInfo
mongoDialInfo := &mgo.DialInfo{
	Addrs:    []string{"localhost"},
	Timeout:  60 * time.Second,
	Database: "taskdb",
	Username: "shijuvar",
	Password: "password@123",
}
session, err := mgo.DialWithInfo(mongoDialInfo)
The mgo.Session object handles a pool of connections to MongoDB. Once you obtain a Session object,
you can perform write and read operations with MongoDB. MongoDB servers are queried with multiple
consistency rules. SetMode of the Session object changes the consistency mode for the session. Three types
of consistency modes are available: Eventual, Monotonic, and Strong.
Listing 8-4 establishes a Session object and sets a consistency mode.
Listing 8-4. Establishing a Session Object and a Consistency Mode
session, err := mgo.Dial("localhost")
if err != nil {
panic(err)
}
defer session.Close()
//Switch the session to a monotonic behavior.
session.SetMode(mgo.Monotonic, true)
It is important to close the Session object at the end of its lifetime by calling the Close method. In the previous listing, the Close method is called by using the defer statement.
Accessing Collections
To perform CRUD operations on MongoDB, you create an object of *mgo.Collection, which represents a MongoDB collection. You can create a *mgo.Collection object by calling the C method of *mgo.Database. The mgo.Database type represents a named database and is obtained by calling the DB method of *mgo.Session.
Listing 8-5 accesses the MongoDB collection named "categories".
Listing 8-5. Accessing a MongoDB Collection
c := session.DB("taskdb").C("categories")
The DB method returns a value representing a database named taskdb, which is an instance of the Database type. The C method of the Database type returns a value representing the collection named "categories". The Collection object can be used for performing CRUD operations.
Inserting Documents
You can insert documents into MongoDB using the Insert method of mgo.Collection. The Insert method inserts one or more documents into the collection. Using this method, you can insert values from structs, maps, and document slices.
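Here is a minimal sketch of such an insert program, consistent with the discussion that follows (the Category struct and the inserted values mirror the index example later in this chapter):
package main

import (
	"fmt"
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

type Category struct {
	Id          bson.ObjectId `bson:"_id,omitempty"`
	Name        string
	Description string
}

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		panic(err)
	}
	defer session.Close()
	session.SetMode(mgo.Monotonic, true)
	c := session.DB("taskdb").C("categories")

	// Insert one document, and then two more documents, into the collection.
	err = c.Insert(&Category{bson.NewObjectId(), "Open-Source", "Tasks for open-source projects"})
	if err != nil {
		log.Fatal(err)
	}
	err = c.Insert(
		&Category{bson.NewObjectId(), "R & D", "R & D Tasks"},
		&Category{bson.NewObjectId(), "Project", "Project Tasks"},
	)
	if err != nil {
		log.Fatal(err)
	}

	// Count the documents in the collection and print the count.
	count, err := c.Count()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d records inserted\n", count)
}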
When you insert a new document, provide an _id field with a unique ObjectId. If an _id field isn't provided, MongoDB adds an _id field that holds an ObjectId. When you insert records into a MongoDB collection, you can call bson.NewObjectId() to generate a unique value for bson.ObjectId. Tag the Id field so that it is serialized as _id when the mgo driver serializes the values into a BSON document, and also specify the omitempty tag to omit the value when serializing into BSON if it is empty.
The Insert method of mgo.Collection is used for persisting values into MongoDB. The Collection object is created for the collection named "categories", and values are inserted into that collection by calling the Insert method. The Insert method inserts one or more documents into the collection. First, one document with the values of the Category struct is inserted, and then two more documents are inserted into the collection. The mgo driver automatically serializes the struct values into their BSON representation and inserts them into the collection.
In this example program, three documents are inserted. The Count method of the Collection object is
called to get the number of records in the collection and finally print the count.
When you run the program for the first time, you should get the following output:
3 records inserted
Inserting Map and Document Slice
Go structs are an idiomatic way of defining a data model and persisting data into MongoDB. But in some contexts, you may need to persist values from maps and slices.
Listing 8-7 shows how to persist values of map objects and document slices (bson.D).
Listing 8-7. Inserting Values from Map and Document Slices (bson.D)
package main
import (
"log"
"gopkg.in/mgo.v2"
"gopkg.in/mgo.v2/bson"
)
func main() {
session, err := mgo.Dial("localhost")
if err != nil {
panic(err)
}
defer session.Close()
session.SetMode(mgo.Monotonic, true)
//get collection
c := session.DB("taskdb").C("categories")
docM := map[string]string{
"name": "Open Source",
"description": "Tasks for open-source projects",
}
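	// The remainder of Listing 8-7 is not reproduced here. A minimal sketch of
	// what it likely contains, based on the surrounding text (inserting the map
	// value and a bson.D document slice), follows; the field values are assumptions.
	docD := bson.D{
		{"name", "Project"},
		{"description", "Project tasks"},
	}
	err = c.Insert(docM, docD)
	if err != nil {
		log.Fatal(err)
	}
}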
Note In Chapter 9, you will learn how to make relationships using a reference among documents. Check
out the MongoDB documentation for more information on data model concepts: https://docs.mongodb.org/
manual/data-modeling/
Listing 8-8 is an example that shows a data model that uses embedded documents to describe
relationships among connected data. In this one-to-many relationship between Category and Tasks data,
Category has multiple Task entities.
A Category struct is created in which a slice of the Task struct is specified for the Tasks field to embed the child documents. Embedding documents enables you to get the parent document and its associated child documents with a single query (you don't need to execute another query to get the child details).
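Listing 8-8 is not reproduced here; a minimal sketch of such an embedded data model (field names follow the update example later in this chapter, and the time and bson packages are assumed to be imported) looks like this:
// Task is embedded in Category as a child document.
type Task struct {
	Description string
	Due         time.Time
}

// Category embeds its Tasks, so a single query returns the parent and children.
type Category struct {
	Id          bson.ObjectId `bson:"_id,omitempty"`
	Name        string
	Description string
	Tasks       []Task
}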
Reading Documents
The Find method of Collection allows you to query MongoDB collections. When you call the Find method, you can provide a document to filter the collection data. The Find method prepares a query using the provided document. To provide the document for querying the collection, you can use a map or a struct value capable of being marshalled with BSON. You can use a generic map such as bson.M to provide the document for querying the data. To query all documents in the collection, you can use nil as the argument, which is equivalent to an empty document such as bson.M{}. The Find method returns a value of the mgo.Query type, from which you can retrieve results using methods such as One, For, Iter, or Tail.
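For example, the following fragment (a minimal sketch that assumes the Category type, the collection c, and the imports used above) queries a single document by name and then retrieves all documents:
// Find a single category by its Name field (stored as "name" in BSON).
result := Category{}
err = c.Find(bson.M{"name": "Open-Source"}).One(&result)
if err != nil {
	log.Fatal(err)
}
fmt.Println("Description:", result.Description)

// Find all documents in the collection.
var categories []Category
err = c.Find(nil).All(&categories)
if err != nil {
	log.Fatal(err)
}
fmt.Println("Total categories:", len(categories))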
Sorting Records
Documents can be sorted using the Sort method provided by the Query type. The Sort method prepares the
query to order returned documents according to the provided field names.
Listing 8-10 sorts the collection documents.
Listing 8-10. Ordering Documents Using the Sort Method
iter := c.Find(nil).Sort("name").Iter()
result := Category{}
for iter.Next(&result) {
	fmt.Printf("Category:%s, Description:%s\n", result.Name, result.Description)
	tasks := result.Tasks
	// iterate over the embedded Task documents
	for _, t := range tasks {
		fmt.Printf("Task:%s, Due:%v\n", t.Description, t.Due)
	}
}
Updating Documents
The Update method of the Collection type allows you to update existing documents.
Here is the method signature of the Update method:
func (c *Collection) Update(selector interface{}, update interface{}) error
The Update method finds a single document from the collection, matches it with the provided selector
document, and modifies it based on the provided update document. A partial update can be done by using
the keyword "$set" in the update document.
Listing 8-14 updates an existing document.
Listing 8-14. Updating a Document
err := c.Update(bson.M{"_id": id},
bson.M{"$set": bson.M{
"description": "Create open-source projects",
"tasks": []Task{
Task{"Evaluate Negroni Project", time.Date(2015, time.August, 15, 0, 0, 0,
0, time.UTC)},
Task{"Explore mgo Project", time.Date(2015, time.August, 10, 0, 0, 0, 0,
time.UTC)},
Task{"Explore Gorilla Toolkit", time.Date(2015, time.August, 10, 0, 0, 0, 0,
time.UTC)},
},
}})
A partial update is performed on the description and tasks fields. The Update method finds the document with the provided _id value and modifies the fields based on the provided update document.
Deleting Documents
The Remove method of the Collection type allows you to remove a single document.
Here is the method signature of the Remove method:
func (c *Collection) Remove(selector interface{}) error
Remove finds a single document from the collection, matches it with the provided selector document,
and removes it from the database.
Listing 8-15 removes a single document from the collection.
Listing 8-15. Deleting a Single Document
err := c.Remove(bson.M{"_id": id})
The single document matching the _id field is removed.
The RemoveAll method of the Collection type allows you to remove multiple documents.
Here is the method signature of the RemoveAll method:
func (c *Collection) RemoveAll(selector interface{}) (info *ChangeInfo, err error)
RemoveAll finds all documents in the collection matching the provided selector document and removes them from the database. If you want to remove all documents from a collection, pass nil as the selector document.
Listing 8-16 removes all documents from a collection.
Listing 8-16. Removing All Documents from a Collection
c.RemoveAll(nil)
Indexes in MongoDB
MongoDB databases can provide high performance on read operations as compared with relational
databases. In addition to the default performance behavior of MongoDB, you can further improve
performance by adding indexes to MongoDB collections. Indexes in collections provide high performance
on read operations for frequently used queries. MongoDB defines indexes at the collection level and
supports indexes on any field or subfield of the documents in a MongoDB collection.
All MongoDB collections have an index on the _id field by default. If you don't provide an _id field, the MongoDB process (mongod) creates an _id field with a unique ObjectId value. The _id index is unique (you can think of it as a primary key). In addition to the default index on the _id field, you can add an index to any field.
If you frequently query collections by filtering with a specific field, you should apply an index to ensure
better performance for read operations. The mgo driver provides support for creating indexes using the
EnsureIndex method of Collection, in which you can add the mgo.Index type as the argument.
Listing 8-17 applies a unique index to the field name and later queries with the Name field.
Listing 8-17. Creating an Index in a MongoDB Collection
package main
import (
"fmt"
"log"
"gopkg.in/mgo.v2"
"gopkg.in/mgo.v2/bson"
)
type Category struct {
	Id          bson.ObjectId `bson:"_id,omitempty"`
	Name        string
	Description string
}
func main() {
session, err := mgo.Dial("localhost")
if err != nil {
panic(err)
}
defer session.Close()
session.SetMode(mgo.Monotonic, true)
c := session.DB("taskdb").C("categories")
c.RemoveAll(nil)
// Index
index := mgo.Index{
	Key:        []string{"name"},
	Unique:     true,
	DropDups:   true,
	Background: true,
	Sparse:     true,
}
//Create Index
err = c.EnsureIndex(index)
if err != nil {
panic(err)
}
//insert three category objects
err = c.Insert(
&Category{bson.NewObjectId(), "Open-Source", "Tasks for open-source projects"},
&Category{bson.NewObjectId(), "R & D", "R & D Tasks"},
&Category{bson.NewObjectId(), "Project", "Project Tasks"},
)
if err != nil {
panic(err)
}
result := Category{}
err = c.Find(bson.M{"name": "Open-Source"}).One(&result)
if err != nil {
log.Fatal(err)
} else {
fmt.Println("Description:", result.Description)
}
}
An instance of mgo.Index type is created, and the EnsureIndex function is called by providing an Index
type instance as the argument:
index := mgo.Index{
	Key:        []string{"name"},
	Unique:     true,
	DropDups:   true,
	Background: true,
	Sparse:     true,
}
err = c.EnsureIndex(index)
if err != nil {
panic(err)
}
The Key property of the Index type allows you to specify a slice of field names to be applied as the index on the collection. Here, the Name field is specified as the index key. Because field names are provided as a slice, you can specify multiple fields with a single instance of the Index type. The Unique property of the Index type prevents two documents from having the same value for the index key.
By default, the index is in ascending order. To create an index in descending order, prefix the field name with a dash (-), as shown here:
Key: []string{"-name"}
Managing Sessions
The Dial function of the mgo package establishes a connection to a cluster of MongoDB servers and returns an mgo.Session object. You can manage all CRUD operations using the Session object, which manages the pool of connections to the MongoDB servers. A connection pool is a cache of database connections, so connections can be reused when new connections to the database are required. When you develop web applications, using a single global Session object for all CRUD operations is a really bad practice.
A recommended process for managing the Session object in web applications is shown here:
1. Obtain a Session object using the Dial function.
2. Create a new Session object during the lifetime of each individual HTTP request by using the New, Copy, or Clone method on the Session object obtained from Dial. This approach enables the Session object to use the connection pool appropriately.
3. Use the newly obtained Session object to perform all CRUD operations during the lifetime of that HTTP request.
The New method creates a new Session with the same parameters as the original Session. The Copy
method works just like New, but the copied Session preserves the exact authentication information from the
original Session. The Clone method works just like Copy, but it also reuses the same socket (connection to
the server) as the original Session.
Listing 8-18 is an example HTTP server that uses a copied Session object during the lifetime of an
HTTP request. In this example, a struct type is created to hold the Session object for easily managing
database operations from the application handlers.
Listing 8-18. HTTP Server Using a New Session for Each HTTP Request
package main
import (
"encoding/json"
"log"
"net/http"
"github.com/gorilla/mux"
"gopkg.in/mgo.v2"
"gopkg.in/mgo.v2/bson"
)
var session *mgo.Session
type (
	Category struct {
		Id          bson.ObjectId `bson:"_id,omitempty"`
		Name        string
		Description string
	}
	DataStore struct {
		session *mgo.Session
	}
)
//Close mgo.Session
func (d *DataStore) Close() {
d.session.Close()
}
//Returns a collection from the database.
func (d *DataStore) C(name string) *mgo.Collection {
return d.session.DB("taskdb").C(name)
}
//Create a new DataStore object for each HTTP request
func NewDataStore() *DataStore {
ds := &DataStore{
session: session.Copy(),
}
return ds
}
//Insert a record
func PostCategory(w http.ResponseWriter, r *http.Request) {
var category Category
// Decode the incoming Category json
err := json.NewDecoder(r.Body).Decode(&category)
if err != nil {
panic(err)
}
ds := NewDataStore()
defer ds.Close()
//Getting the mgo.Collection
c := ds.C("categories")
//Insert record
err = c.Insert(&category)
if err != nil {
panic(err)
}
w.WriteHeader(http.StatusCreated)
}
//Close mgo.Session
func (d *DataStore) Close() {
d.session.Close()
}
//Returns a collection from the database.
func (d *DataStore) C(name string) *mgo.Collection {
return d.session.DB("taskdb").C(name)
}
The NewDataStore function creates a new DataStore object by providing a new Session object using
the Copy method of the Session obtained from the Dial method:
func NewDataStore() *DataStore {
ds := &DataStore{
session: session.Copy(),
}
return ds
}
The NewDataStore function is called to create a DataStore object that provides a copied Session object to be used during the lifetime of an HTTP request. For each route handler, a new Session object is used through the DataStore type. In short, using a global Session object is not a good practice; it is recommended to use a copied Session object for the lifetime of each HTTP request. This approach also allows you to have multiple Session objects if required.
Summary
When working with web applications, it is important to persist application data into persistent storage. In this chapter, you learned how to persist data into MongoDB using the mgo package, a MongoDB driver for Go.
MongoDB is an open-source document database that provides high performance, high availability, and automatic scaling. MongoDB is the most popular NoSQL database and is widely used by modern web applications. The mgo driver provides a simple API that follows Go idioms.
You can add indexes to fields of MongoDB collections, which provides high performance for read operations. When developing web applications in Go, using a global mgo.Session object is not a good practice. The right approach is to create a new Session object by calling Copy, Clone, or New on the Session object obtained from the Dial function of the mgo package.
The next chapter will show you more about working with the MongoDB database using the mgo package.
Chapter 9
Uniform interface
Stateless
Cacheable
Client-server
Layered system
Note For a better understanding of the REST architectural style, read Roy Fielding's doctoral dissertation at http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm.
The greatest advantage of the REST architectural style is that it uses the basic components of web programming. If you have basic knowledge of HTTP programming, you can easily adopt the REST architectural style for your applications. It uses HTTP as the transport system to communicate with remote servers. You can use XML and JSON as the data formats for communication between clients and servers. Resource is a key concept in RESTful systems, as described by Fielding in his dissertation:
The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service (e.g. today's weather in Los Angeles), a collection of other resources, a non-virtual object (e.g. a person), and so on. In other words, any concept that might be the target of an author's hypertext reference must fit within the definition of a resource. A resource is a conceptual mapping to a set of entities, not the entity that corresponds to the mapping at any particular point in time.
By using uniform resource identifiers (URIs) and HTTP verbs, you can perform actions against resources. Let's say you define the resource "/api/employees". You can then retrieve information about an employee resource using the HTTP verb Get, create a new employee resource using the HTTP verb Post, update an existing employee resource using the HTTP verb Put, and delete an employee resource using the HTTP verb Delete.
Web service APIs that follow the REST architectural style are known as RESTful APIs. When you build RESTful APIs, you use XML or JSON as the data format for exchanging data between client applications and API servers. Some APIs support both XML and JSON formats; others support a single data format, either XML or JSON. When you build APIs targeted at mobile applications, the most commonly used data format is JSON because it is lightweight and easier to consume than XML. In this chapter, you will build a JSON-based RESTful API.
applications, but I strongly believe that Go is a good choice for building highly scalable RESTful API systems
to power the back end for modern applications. Among the technology stacks available for building RESTful
APIs, I highly recommend Go, which gives you high performance, great concurrency support, and simplicity for building scalable back-end systems. Go's simplicity and package ecosystem are excellent for writing loosely coupled, maintainable RESTful services.
The standard library package http provides a great foundation for building scalable web systems. It comes with built-in concurrency support that executes each HTTP request in a separate goroutine. The http package also offers a number of extensibility points that work well with the third-party package ecosystem. If you want to extend or replace any functionality of the http package, you can easily build your own package without compromising the design goals of the http package. Several useful third-party packages extend the functionality of the http package, enabling you to build highly scalable web applications and RESTful APIs in Go.
When you build web-based systems, the ecosystem is very important, and there are many mature
database driver packages available for Go. Chapter 8 discussed how to work with MongoDB using the mgo
package. Database driver packages are available for major relational databases and NoSQL databases. If your
back-end systems need to work with middleware messaging systems, you can find client library packages
to work with those systems. NSQ, a real-time distributed messaging platform, is built with Go. In short, the
Go ecosystem, including third-party packages and tools, provides everything required for building scalable
RESTful APIs and other web-based, back-end systems using Go.
Third-Party Packages
The third-party packages used for building the TaskManager application include the Gorilla mux package for routing, Negroni for HTTP middleware, jwt-go for working with JSON Web Tokens, and the mgo driver for MongoDB.
Application Structure
Figure 9-1 illustrates the high-level folder structure of the RESTful API application.
Figure 9-2 illustrates the folder structure and associated files of the completed version of the RESTful API application.
The RESTful API application has been divided into the following packages:
common: Implements some utility functions and provides initialization logic for the
application
routers: Implements the HTTP request routers for the RESTful API
Data Model
The application provides the API for managing tasks. A user can add tasks and can provide updates and notes for individual tasks. Let's define the data model for this application to be used with the MongoDB database.
Listing 9-1 defines the data model for the RESTful API application.
Listing 9-1. Data Model for the Application in models.go
package models
import (
"time"
"gopkg.in/mgo.v2/bson"
)
type (
	User struct {
		Id           bson.ObjectId `bson:"_id,omitempty" json:"id"`
		FirstName    string        `json:"firstname"`
		LastName     string        `json:"lastname"`
		Email        string        `json:"email"`
		Password     string        `json:"password,omitempty"`
		HashPassword []byte        `json:"hashpassword,omitempty"`
	}
	Task struct {
		Id          bson.ObjectId `bson:"_id,omitempty" json:"id"`
		CreatedBy   string        `json:"createdby"`
		Name        string        `json:"name"`
		Description string        `json:"description"`
		CreatedOn   time.Time     `json:"createdon,omitempty"`
		Due         time.Time     `json:"due,omitempty"`
		Status      string        `json:"status,omitempty"`
		Tags        []string      `json:"tags,omitempty"`
	}
	TaskNote struct {
		Id          bson.ObjectId `bson:"_id,omitempty" json:"id"`
		TaskId      bson.ObjectId `json:"taskid"`
		Description string        `json:"description"`
		CreatedOn   time.Time     `json:"createdon,omitempty"`
	}
)
Three structs are created: User, Task, and TaskNote. The User struct represents the users of the application. A user must register with the application to create tasks. An authenticated user can add tasks, which are represented by the Task struct. A user can add notes against each task, which are represented by the TaskNote struct. The TaskNote entity holds the child details of its parent entity, Task.
Chapter 8 showed how to make a parent-child relationship by embedding child documents in the parent document. That approach is good for some scenarios, but document references are also appropriate in other contexts. Here, the parent-child relationship is made with document references. Whenever a TaskNote object is created, a reference to the parent Task document is stored by specifying the TaskId in the TaskNote object. In this approach, you have to execute separate queries against the MongoDB database to get the Task documents and the TaskNote documents. When you use embedded documents for a one-to-many relationship, you can retrieve the information for both parent and child entities by executing a single query because the child documents are embedded in the parent entity. A small sketch of the reference-based approach follows.
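For illustration only (the variable and collection names are assumptions; mgo lowercases struct field names by default, so TaskId is stored as taskid), creating a note that references its parent task and querying the notes for that task might look like this:
// Create a note that references its parent task through the TaskId field.
note := models.TaskNote{
	Id:          bson.NewObjectId(),
	TaskId:      task.Id, // reference to the parent Task document
	Description: "Discussed the API design",
	CreatedOn:   time.Now(),
}
err := noteCollection.Insert(&note)

// A separate query is needed to read the notes belonging to a task.
var notes []models.TaskNote
err = noteCollection.Find(bson.M{"taskid": task.Id}).All(&notes)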
The following table lists the resources of the TaskManager API, the HTTP verb used for each, and its functionality.

URI                  HTTP Verb   Functionality
/users/register      Post        Registers a new user.
/users/login         Post        Logs in a registered user and returns a JWT access token.
/tasks               Post        Creates a new task.
/tasks/{id}          Put         Updates an existing task for a given ID. The value of the ID comes from the route parameter.
/tasks               Get         Gets all tasks.
/tasks/{id}          Get         Gets a single task for a given ID. The value of the ID comes from the route parameter.
/tasks/users/{id}    Get         Gets all tasks associated with a user. The value of the user ID comes from the route parameter.
/tasks/{id}          Delete      Deletes an existing task for a given ID. The value of the ID comes from the route parameter.
/notes               Post        Creates a new task note.
/notes/{id}          Put         Updates an existing note for a given ID. The value of the ID comes from the route parameter.
/notes               Get         Gets all notes.
/notes/{id}          Get         Gets a single note for a given ID. The value of the ID comes from the route parameter.
/notes/tasks/{id}    Get         Gets all task notes for a given task ID. The value of the ID comes from the route parameter.
/notes/{id}          Delete      Deletes an existing note for a given ID. The value of the ID comes from the route parameter.
The third-party package Negroni is used for handling HTTP middleware (refer to Chapter 6). A middleware handler function named Authorize is defined in the common package and is used to authorize HTTP requests with the JWT. In the RESTful API application, it isn't necessary to use the authorization middleware across all routes of the application; when the User Register and Login resources are accessed, the middleware function should not be invoked. The authorization middleware is applied to the routes of the Task and TaskNote entities. Here, the resources of the Task entity are mapped to the URI "/tasks", so the authorization middleware has to be added for the "/tasks" URL path. The Negroni package allows you to add middleware for specific URL paths.
Listing 9-3 provides the routes specified for the Tasks resource.
Listing 9-3. Routes for the Tasks Resource in task.go
package routers
import (
"github.com/codegangsta/negroni"
"github.com/gorilla/mux"
"github.com/shijuvar/go-web/taskmanager/common"
"github.com/shijuvar/go-web/taskmanager/controllers"
)
func SetTaskRoutes(router *mux.Router) *mux.Router {
taskRouter := mux.NewRouter()
taskRouter.HandleFunc("/tasks", controllers.CreateTask).Methods("POST")
taskRouter.HandleFunc("/tasks/{id}", controllers.UpdateTask).Methods("PUT")
taskRouter.HandleFunc("/tasks", controllers.GetTasks).Methods("GET")
taskRouter.HandleFunc("/tasks/{id}", controllers.GetTaskById).Methods("GET")
taskRouter.HandleFunc("/tasks/users/{id}", controllers.GetTasksByUser).Methods("GET")
taskRouter.HandleFunc("/tasks/{id}", controllers.DeleteTask).Methods("DELETE")
router.PathPrefix("/tasks").Handler(negroni.New(
negroni.HandlerFunc(common.Authorize),
negroni.Wrap(taskRouter),
))
return router
}
router = SetTaskRoutes(router)
// Routes for the TaskNote entity
router = SetNoteRoutes(router)
return router
}
The InitRoutes function is called from main.go when the HTTP server starts, as discussed in a later section of this chapter.
Listing 9-6 decodes the JSON string from config.json and puts the values into AppConfig.
Listing 9-6. Initializing AppConfig in utils.go
package common
import (
"encoding/json"
"log"
"os"
)
type configuration struct {
Server, MongoDBHost, DBUser, DBPwd, Database string
}
// AppConfig holds the configuration values from config.json file
var AppConfig configuration
// Initialize AppConfig
func initConfig() {
loadAppConfig()
}
// Reads config.json and decode into AppConfig
func loadAppConfig() {
file, err := os.Open("common/config.json")
if err != nil {
	log.Fatalf("[loadAppConfig]: %s\n", err)
}
defer file.Close()
decoder := json.NewDecoder(file)
AppConfig = configuration{}
err = decoder.Decode(&AppConfig)
if err != nil {
log.Fatalf("[loadAppConfig]: %s\n", err)
}
}
This command generates a 1024-bit private key named app.rsa. To generate the counterpart public key for the private key, run the following command in the terminal:
openssl rsa -in app.rsa -pubout > app.rsa.pub
This command generates the counterpart public key, named app.rsa.pub. The RSA keys are stored in the keys directory.
Listing 9-7 loads the private/public RSA keys from the keys folder and stores them into two variables.
Listing 9-7. Initializing Private/Public Keys in auth.go
package common
import (
	"io/ioutil"
	"log"
)
// using asymmetric crypto/RSA keys
const (
// openssl genrsa -out app.rsa 1024
privKeyPath = "keys/app.rsa"
// openssl rsa -in app.rsa -pubout > app.rsa.pub
pubKeyPath = "keys/app.rsa.pub"
)
// private key for signing and public key for verification
var (
verifyKey, signKey []byte
)
// Read the key files before starting http handlers
func initKeys() {
var err error
signKey, err = ioutil.ReadFile(privKeyPath)
if err != nil {
log.Fatalf("[initKeys]: %s\n", err)
}
verifyKey, err = ioutil.ReadFile(pubKeyPath)
if err != nil {
	log.Fatalf("[initKeys]: %s\n", err)
}
}
The private key is used for signing the JWT; the public key verifies the JWT in HTTP requests to access
the resources of the RESTful API. You can use the OpenSSL tool to generate the RSA keys.
server := &http.Server{
	Addr:    common.AppConfig.Server,
	Handler: n,
}
log.Println("Listening...")
server.ListenAndServe()
}
The HTTP server is created in main.go, in which the StartUp function of the common package is called to
execute the initialization logic for the RESTful API application. The InitRoutes function of the routers package
is then called to get the *mux.Router, which is used for creating the Negroni handler. The http.Server
object is created with the Negroni handler, and the HTTP server is then started. The host address of the
HTTP server is read from common.AppConfig.
Authentication
Here is the authentication workflow defined in the application:
1. Users register into the system by sending HTTP requests to the resource "/users/register".
2. Registered users can log in to the system by sending HTTP requests to the resource "/users/login". The server validates the login credential and generates a JWT as an access token for accessing the protected resources of the RESTful API server.
3. Users can use the JWT to access the protected resources of the RESTful API. Users must send this token as a bearer token with the "Authorization" HTTP header.
The authentication workflow is described in more detail in the following sections.
Generating JWT
The GenerateJWT function generates the JWT by using the private key. Various claims are set onto the JWT,
including expiration information for the token. The go-jwt package is used for signing the encoded security
token. The GenerateJWT function is invoked from the application handler for the request "/users/login"
and is called only if the login process is successful.
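As a rough sketch, and assuming the older jwt-go v2-style API in which the RSA signing key is passed as the PEM bytes loaded by initKeys, a GenerateJWT function looks something like the following; the claim names mirror the decoded token shown below, and the token lifetime is an assumption.
// Sketch of GenerateJWT; the exact implementation in auth.go may differ.
func GenerateJWT(name, role string) (string, error) {
	t := jwt.New(jwt.SigningMethodRS256)
	// Set custom and registered claims, including the expiration time
	t.Claims["UserInfo"] = struct {
		Name string
		Role string
	}{name, role}
	t.Claims["iss"] = "admin"
	t.Claims["exp"] = time.Now().Add(time.Minute * 20).Unix() // assumed lifetime
	// Sign the encoded token with the private RSA key loaded by initKeys
	tokenString, err := t.SignedString(signKey)
	if err != nil {
		return "", err
	}
	return tokenString, nil
}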
When a user logs in to the system, the server sends back a JSON response, as shown here:
{"data":
{"user":{"id":"55b9f7e13f06221910000001","firstname":"Shiju","lastname":"Varghese",
"email":"[email protected]"},
"token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJVc2VySW5mbyI6eyJOYW1lIjoic2hpanVAZ21haWwu
Y29tIiwiUm9sZSI6Im1lbWJlciJ9LCJleHAiOjE0MzgyNTI5MzgsImlzcyI6ImFkbWluIn0.WtdM55KE0cNlj5c2
VYwtIUQS8L6UI_ViLiwe0wH_0cpDj0dKkMTMtZ6LSHoIxtZyt92z19WX5gQCi3z-7Mly4kPe5Yvp3IXuDNdgJvB
kQvEd_xg0-Vx9bhm_ztf0Hb2CInsVgux49EIxgjFoinwdzrxmM9ZbY7msBYSKutcRKLU"}
}
From the JSON response, users can take the JWT from the JSON field "token", which can be used for
authorizing the HTTP requests to access protected resources of the RESTful API.
A JWT has three parts, separated by a . (period):
Header
Payload
Signature
A JWT is a JSON-based security encoding that can be decoded to get the JSON representation of these parts.
Figure 9-5 shows the decoded representation of the Header and Payload sections.
The Header section contains the algorithm used for generating the token, which is RS256, and the type,
which is JWT. Payload carries the JWT claims in which you can provide user information, expiration, and
other information about the JWT.
Authorizing JWT
The Authorize middleware handler function authorizes HTTP requests, which validate whether the HTTP
request has a valid JWT in the "Authorization" header as a bearer token. The ParseFromRequest helper
function of the go-jwt package is used to verify the token with a public key. In this application, one private
key is used to sign the tokens, so its public counterpart is also used to verify the token:
token, err := jwt.ParseFromRequest(r, func(token *jwt.Token) (interface{}, error) {
return verifyKey, nil
})
If the request has a valid token, the middleware function calls the next handler in the middleware stack.
If the token is invalid, the DisplayAppError helper function is called to write the HTTP error in JSON format.
When DisplayAppError is called, the error value is passed along with a custom message and an HTTP status
code. The HTTP status code 401 represents the HTTP status "Unauthorized":
if token.Valid {
next(w, r)
} else {
w.WriteHeader(http.StatusUnauthorized)
DisplayAppError(
w,
err,
"Invalid Access Token",
401,
)
}
When next(w, r) is called, the next handler function in the stack is invoked. The Negroni package is used for
the middleware stack, and middleware handler functions written for Negroni use the signature
func(http.ResponseWriter, *http.Request, http.HandlerFunc). All HTTP requests to
the "/tasks" and "/notes" URL paths must be authorized with a valid access token, so the middleware
function Authorize is added to the Negroni middleware stack.
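Assembled from the snippets above, the Authorize handler looks roughly like the following sketch; treating any token-parsing failure as an unauthorized request is an assumption here, not a detail copied from the book's auth.go.
// Authorize validates the JWT sent in the "Authorization" header (sketch).
func Authorize(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
	// Verify the bearer token with the public counterpart of the signing key
	token, err := jwt.ParseFromRequest(r, func(token *jwt.Token) (interface{}, error) {
		return verifyKey, nil
	})
	if err != nil { // assumption: any parse failure is treated as unauthorized
		w.WriteHeader(http.StatusUnauthorized)
		DisplayAppError(w, err, "Invalid Access Token", 401)
		return
	}
	if token.Valid {
		next(w, r)
	} else {
		w.WriteHeader(http.StatusUnauthorized)
		DisplayAppError(w, err, "Invalid Access Token", 401)
	}
}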
Here is the code block in task.go of the routers package to add authentication middleware function to
the "/tasks" path:
func SetTaskRoutes(router *mux.Router) *mux.Router {
taskRouter := mux.NewRouter()
taskRouter.HandleFunc("/tasks", controllers.CreateTask).Methods("POST")
taskRouter.HandleFunc("/tasks/{id}", controllers.UpdateTask).Methods("PUT")
taskRouter.HandleFunc("/tasks", controllers.GetTasks).Methods("GET")
taskRouter.HandleFunc("/tasks/{id}", controllers.GetTaskById).Methods("GET")
taskRouter.HandleFunc("/tasks/users/{id}", controllers.GetTasksByUser).Methods("GET")
taskRouter.HandleFunc("/tasks/{id}", controllers.DeleteTask).Methods("DELETE")
router.PathPrefix("/tasks").Handler(negroni.New(
negroni.HandlerFunc(common.Authorize),
negroni.Wrap(taskRouter),
))
return router
}
You can add middleware functions to specific routes by using the PathPrefix function of the router
instance.
Here is the code block in note.go of the routers package to add authentication middleware function to
the "/notes" path:
func SetNoteRoutes(router *mux.Router) *mux.Router {
noteRouter := mux.NewRouter()
noteRouter.HandleFunc("/notes", controllers.CreateNote).Methods("POST")
noteRouter.HandleFunc("/notes/{id}", controllers.UpdateNote).Methods("PUT")
noteRouter.HandleFunc("/notes/{id}", controllers.GetNoteById).Methods("GET")
noteRouter.HandleFunc("/notes", controllers.GetNotes).Methods("GET")
noteRouter.HandleFunc("/notes/tasks/{id}", controllers.GetNotesByTask).Methods("GET")
noteRouter.HandleFunc("/notes/{id}", controllers.DeleteNote).Methods("DELETE")
router.PathPrefix("/notes").Handler(negroni.New(
negroni.HandlerFunc(common.Authorize),
negroni.Wrap(noteRouter),
))
return router
}
Middleware functions are great for implementing shared functionality across application handlers.
Middleware functions can also be applied to specific routes, as was done for the URL paths "/tasks" and
"/notes".
Application Handlers
Previous sections looked at the application data models, RESTful API resource modeling and mapping it to the
application's HTTP routes, setting up the HTTP server with essential initialization logic, and authentication
of the RESTful API. Let's now take a look at the application handlers that serve HTTP requests for each route.
Application handlers are organized in the controllers package. Figure 9-6 shows the source files
contained in the controllers package.
type (
	appError struct {
		Error      string `json:"error"`
		Message    string `json:"message"`
		HttpStatus int    `json:"status"`
	}
	errorResource struct {
		Data appError `json:"data"`
	}
)

func DisplayAppError(w http.ResponseWriter, handlerError error, message string, code int) {
	errObj := appError{
		Error:      handlerError.Error(),
		Message:    message,
		HttpStatus: code,
	}
	log.Printf("[AppError]: %s\n", handlerError)
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	w.WriteHeader(code)
	if j, err := json.Marshal(errorResource{Data: errObj}); err == nil {
		w.Write(j)
	}
}
A helper function named DisplayAppError is written to provide error messages in JSON as the
HTTP response. Client applications can check the HTTP status code to verify whether the HTTP request was
successful. A struct type named appError is used to create the model object for providing error messages.
In the appError type, the Error property holds the string value of the error object, the Message
property holds a custom message about the error, and the HttpStatus property holds the
HTTP status code. An instance of the errorResource type is created by providing the value of appError to
encode the response as JSON.
Figure 9-7 shows the error response of an invalid HTTP request that contains an expired access token.
The DisplayAppError function is a simple helper function for providing HTTP errors. You can also write
the error-handling logic in HTTP middleware that wraps the application handlers, which is a more elegant
approach to error handling in Go web applications. If an error occurs in an application handler, you can
return a model object holding the error data, and within the middleware function you can check whether
the error model contains any value and write the HTTP error response if it does. This approach is not
discussed in this chapter, but you can try it when you build your own web applications.
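As a minimal sketch of that idea, assuming a hypothetical appHandler type whose handlers return the appError model shown earlier (none of these names come from the TaskManager application):
// appHandler is a hypothetical handler type that returns an error model
// instead of writing the error response itself.
type appHandler func(w http.ResponseWriter, r *http.Request) *appError

// errorHandler wraps an appHandler and writes the JSON error response
// whenever the wrapped handler returns a non-nil appError value.
func errorHandler(h appHandler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if e := h(w, r); e != nil {
			w.Header().Set("Content-Type", "application/json; charset=utf-8")
			w.WriteHeader(e.HttpStatus)
			json.NewEncoder(w).Encode(errorResource{Data: *e})
		}
	})
}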
This source file provides a NewContext function that returns an instance of the Context type by providing a
copied version of the MongoDB Session object. The GetSession function of the common package is called to
get the Session object and take a copy of it. Within the application handlers, the NewContext function is
called to get an instance of the Context type, whose MongoDB Session object is used
for performing CRUD operations against the MongoDB database.
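A minimal sketch of that source file, reconstructed from the usage shown in the handlers (NewContext, Close, and DbCollection) and assuming the common package exposes a GetSession function and the AppConfig.Database value described earlier, looks like this; it is not a copy of the book's code.
// Context stores a per-request copy of the MongoDB session (sketch).
type Context struct {
	MongoSession *mgo.Session
}

// Close releases the copied session at the end of the request.
func (c *Context) Close() {
	c.MongoSession.Close()
}

// DbCollection returns a collection from the configured database.
func (c *Context) DbCollection(name string) *mgo.Collection {
	return c.MongoSession.DB(common.AppConfig.Database).C(name)
}

// NewContext copies the global session created at server startup.
func NewContext() *Context {
	return &Context{MongoSession: common.GetSession().Copy()}
}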
In the Context type, you are simply storing the MongoDB Session object, but you can keep any kind of
data in the Context type to be used within the lifecycle of an HTTP request. In many use cases, you may need
to share this data among various middleware handler functions and application handler functions. In
short, you need to share data among different handler functions.
In this scenario, you can use a mechanism to store objects to work with the HTTP request context.
The context package from the Gorilla web toolkit (www.gorillatoolkit.org/pkg/context) provides
the functionality for putting data into an HTTP request context and holding it during the lifecycle of
the request. You can put data into the HTTP context in one handler and access it from other
handlers. In the RESTful API example, the data is used only within the application handler and doesn't need
to be shared among handlers, so the Context struct isn't put into the HTTP request context.
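For completeness, here is a small, self-contained illustration of the Gorilla context package; this example is not part of the TaskManager application, and the "user" key and handler names are made up for the demonstration.
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/context"
)

// setUser stores a value in the request context and then calls another
// handler that reads it back during the same request.
func setUser(w http.ResponseWriter, r *http.Request) {
	context.Set(r, "user", "shiju")
	readUser(w, r)
}

func readUser(w http.ResponseWriter, r *http.Request) {
	if v, ok := context.GetOk(r, "user"); ok {
		fmt.Fprintf(w, "user from context: %v", v)
	}
}

func main() {
	// ClearHandler removes the stored request data after each request.
	http.ListenAndServe(":8080", context.ClearHandler(http.HandlerFunc(setUser)))
}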
return
}
user := &dataResource.Data
context := NewContext()
defer context.Close()
c := context.DbCollection("users")
repo := &data.UserRepository{c}
// Insert User document
repo.CreateUser(user)
// Clean-up the hashpassword to eliminate it from response
user.HashPassword = nil
if j, err := json.Marshal(UserResource{Data: *user}); err != nil {
common.DisplayAppError(
w,
err,
"An unexpected error has occurred",
500,
)
return
} else {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
w.Write(j)
}
}
// Handler for HTTP Post - "/users/login"
// Authenticate with username and password
func Login(w http.ResponseWriter, r *http.Request) {
var dataResource LoginResource
var token string
// Decode the incoming Login json
err := json.NewDecoder(r.Body).Decode(&dataResource)
if err != nil {
common.DisplayAppError(
w,
err,
"Invalid Login data",
500,
)
return
}
loginModel := dataResource.Data
loginUser := models.User{
	Email:    loginModel.Email,
	Password: loginModel.Password,
}
context := NewContext()
defer context.Close()
c := context.DbCollection("users")
repo := &data.UserRepository{c}
The Context type is used to access the MongoDB Session object (mgo.Session) in application handlers.
So an instance of Context type and the MongoDB Collection object (mgo.Collection) are created by
calling the DbCollection method:
context := NewContext()
defer context.Close()
c := context.DbCollection("users")
The Close method of the Context type is added to a defer statement to close the MongoDB Session
object, which is a copied version of the Session object. In every HTTP handler function, a copied version of
the MongoDB Session object is created, the same instance is used throughout a single HTTP request
lifecycle, and the resources are released through the defer statement.
In the Register handler function, an instance of UserRepository is created by providing the MongoDB
Collection object. The CreateUser method of the UserRepository struct is called to persist the User
object into the MongoDB database. All data persistence logic is written in the data package:
repo := &data.UserRepository{c}
// Insert User document
repo.CreateUser(user)
UserRepository provides all CRUD operations against the User entity. (This topic is discussed in the
next section of the chapter.) The Register handler function sends back a response as a JSON representation
of the newly created User entity.
Let's test the functionality of the Users resource by using the API client tool Postman
(www.getpostman.com/), which is very useful for testing RESTful APIs.
Figure 9-8 shows the HTTP Post request sent to the URI endpoint "users/register" using Postman.
Figure 9-9 shows the response from the RESTful API server, indicating that a new resource has been created.
w.Header().Set("Content-Type", "application/json")
user.HashPassword = nil
authUser := AuthUserModel{
User: user,
Token: token,
}
j, err := json.Marshal(AuthUserResource{Data: authUser})
if err != nil {
common.DisplayAppError(
w,
err,
"An unexpected error has occurred",
500,
)
return
}
w.WriteHeader(http.StatusOK)
w.Write(j)
}
Figure 9-10 shows login credentials being sent to the system in an HTTP Post to the URI
endpoint "users/login" to get the JWT.
Figure 9-11 shows the HTTP response from the RESTful API server after the successful login to the system.
Requests to the URL path "/tasks" are decorated with the authorization middleware handler named
Authorize, which is added to the Negroni middleware stack. Authenticated users can create Tasks by providing the
JWT in their HTTP requests as an access token to the server.
Figure 9-15 shows the response from the RESTful API server for the HTTP Post to "/tasks".
Figure 9-17 shows the response from the RESTful API server for the HTTP Get to "/tasks/{id}".
Note The source code of the completed version of the TaskManager application is available at
github.com/shijuvar/go-web/tree/master/taskmanager
Introduction to Docker
Docker is a platform for developers and SAs to develop, ship, and run applications in the container
virtualization environment. An application container is a lightweight Linux environment that you can
leverage to deploy and run an independently deployable unit of code. This chapter briefly discussed
Microservice architecture, in which independently deployable service units are composed to build larger
applications. A Docker container (application container) is a perfect fit for running a microservice in the
Microservices architecture. Docker is not just a technology; it is an ecosystem that lets you quickly compose
applications from components (analogous to a microservice in the Microservice architecture).
In traditional computing environments, you develop applications for virtual machines (VMs), in
which you target an idealized hardware environment, including the OS, network infrastructure layers, and
so on. The greatest advantage of using an application container is that it separates applications from the
infrastructure they run on, which opens up great opportunities for application developers. With Docker,
developers can develop applications against an idealized OS, a static Linux environment, and ship
their applications as quickly as possible.
Docker eliminates complexities and hurdles that can occur when applications are deployed and run.
It also provides a great abstraction on the top of Linux container technology to easily work on container
virtualization, which is becoming a big revolution in the IT industry, thanks to the Docker ecosystem.
Docker was developed with the Go language, which is becoming a language of choice for building
many innovative systems. On the container ecosystem, the majority of systems are being developed with
Go. Kubernetes is an example of a technology developed with Go that is used for clustering application
containers. You can also use Kubernetes to cluster Docker applications.
Although Docker is a Linux technology, you can also run it on both Mac and Windows by using the
Docker Toolbox (www.docker.com/docker-toolbox). To get more information about Docker, check out its
documentation at https://docs.docker.com/.
The Docker ecosystem consists of the following:
In Docker, images and containers are important concepts. A container is a stripped-to-basics version
of a Linux OS in which an image is loaded. A container is created from an image, which is an immutable file
that provides a snapshot of a container. You can make changes to an existing image, but you persist it as a
new image. Images are created with the docker build command. When these images run using the docker
run command, a container is produced. Images are stored in Docker registry systems such as Docker Hub
(provided by Docker) and private registry systems.
Writing Dockerfile
In this chapter, the objective will be to create a Dockerfile to automate the build process of the TaskManager
application to run on Dockerized containers.
When you work with Docker, you create application containers by manually making changes to a base
image or by building from a Dockerfile, a text document that contains all the instructions and commands needed
to create a desired image automatically, without running commands manually. A Dockerfile lets you automate your
build to execute several command-line instructions. The docker build command builds an image from a
Dockerfile and a build context.
You create a Dockerfile by naming a text file as a Dockerfile without a file extension. Typically, you put
this file onto the root directory of your project repository.
Listing 9-23 provides a Dockerfile for the TaskManager application.
Listing 9-23. Dockerfile for the TaskManager Application
# golang image where workspace (GOPATH) configured at /go.
FROM golang
# Copy the local package files to the container's workspace.
ADD . /go/src/github.com/shijuvar/go-web/taskmanager
# Setting up working directory
WORKDIR /go/src/github.com/shijuvar/go-web/taskmanager
# Get godep for managing and restoring dependencies
RUN go get github.com/tools/godep
# Restore godep dependencies
RUN godep restore
# Build the taskmanager command inside the container.
RUN go install github.com/shijuvar/go-web/taskmanager
# Run the taskmanager command when the container starts.
ENTRYPOINT /go/bin/taskmanager
# Service listens on port 8080.
EXPOSE 8080
The command FROM golang instructs Docker to start from the official golang Docker image, which is a
Debian image with the latest version of Go installed and a workspace (GOPATH) configured at /go.
The ADD command copies the local code to the container's workspace.
The RUN command runs commands within the container. Using the RUN command, the godep tool is
installed and the dependencies are restored, and the taskmanager command is built inside the container.
The ENTRYPOINT command instructs Docker to run the taskmanager command from the /go/bin
location when the container is started.
The EXPOSE command instructs Docker that the container listens on the specified network ports at
runtime; here it exposes port 8080. The EXPOSE command doesn't open up the container's ports to the
public. To do that, the publish flag (--publish) is used to open up the ports by mapping them to external
HTTP ports.
The Dockerfile for building and running the TaskManager API server in an application container using
Docker is complete. Now let's build the image from the Dockerfile. Run the following command from the
root directory of the TaskManager application:
docker build -t taskmanager .
A local image is built by executing the instructions defined in the Dockerfile. The resulting image is tagged as
taskmanager.
An image named taskmanager (the resulting image from the Docker build) is created. Use the following
to run a container from this image:
docker run --publish 80:8080 --name taskmanager_api --rm taskmanager
Let's explore the flags used in the docker run command:
The --publish flag instructs Docker to publish the container's exposed port 8080 on the external port 80.
The --name flag gives a name to the container created from the taskmanager image. Here, the name
taskmanager_api is given to the container.
The --rm flag instructs Docker to remove the container when it exits. Otherwise, the stopped container
remains on the system even after it exits.
The docker run command runs the container by exposing external port 80. You can access the server
application by navigating to http://localhost:80.
The Dockerfile for the TaskManager application provides everything needed to run the HTTP server inside
an application container. When you move the TaskManager application into a production environment
with Docker, the remaining work is running MongoDB in an application container. The
TaskManager application uses MongoDB as the persistence store, so you have to run the MongoDB database
in another container: one container for running the HTTP server and another for running the
MongoDB database.
Applications running in containers are sometimes referred to as jailed services running in a jail. This
kind of virtualization isolates containers from each other. But when you build real-world applications, you
have to compose multiple containers into an application. In this case, you need to compose the
HTTP server container and the MongoDB database container.
In this scenario, you can use Docker Compose (https://docs.docker.com/compose/) to define and
run multicontainer applications on the Docker platform. The design philosophy of Docker is to build
independently deployable microservices into containers and compose these services to build larger
applications.
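A docker-compose.yml for this scenario might look like the following sketch; the service names, the mongo image, and the port mappings are assumptions for illustration, not part of the book's project.
# Compose the TaskManager API container with a MongoDB container (sketch)
api:
  build: .
  ports:
    - "80:8080"
  links:
    - mongo
mongo:
  image: mongo
  ports:
    - "27017:27017"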
Go Web Frameworks
In this chapter, you learned how to develop a RESTful API application from scratch without leveraging
a Go web framework. In the Go web development stack, using a full-fledged web framework is not very
popular within the Go developer community. Go developers prefer to use Go standard library packages
as the foundation for building their web applications and web APIs. On top of standard library packages,
developers do use a few third-party library packages for extending the functionality of standard library
packages and getting some utility functions to speed up web development.
This chapter used the same approach: developing a full-fledged application using the net/http
standard library package and a few third-party packages. In my opinion, a web framework is not necessary
for developing RESTful APIs. You can build highly scalable RESTful APIs in Go without using any web
framework. Having said that, using a web framework might be helpful in some contexts, especially when you
develop conventional web applications.
Here are some Go web frameworks to use if you want a full-fledged web framework:
Summary
In this chapter, you learned how to build a production-ready RESTful API with Go. You used MongoDB as
the data store for the RESTful API application, in which you organized the application logic into multiple
packages to easily maintain the application. (Some business validations and best practices were ignored
in the application due to the constraints of a book chapter, but a few best practices were included in the
application.)
Negroni was used to handle the middleware stack, and middleware was added to specific routes.
A struct type for holding the values during the lifecycle of an HTTP request was created. A MongoDB
Session object was created before the HTTP server was started, and a copied version of the MongoDB
Session object was taken and closed for each HTTP request after executing the application handler. Go
is a great technology stack for building highly scalable RESTful APIs and is a perfect technology stack for
developing applications with the Microservice architecture pattern.
Go doesn't provide a centralized repository for managing third-party packages, so managing external
dependencies is a bit difficult. The godep third-party tool was used to manage the dependencies of the RESTful
API application; it allows you to restore the dependencies into the GOPATH location.
Docker is a revolutionary ecosystem for containerizing applications that enables developers and SAs to
develop, ship, and run applications in the container virtualization environment. An application container is
a lightweight Linux environment. A Dockerfile was created for the RESTful API application to automate the
build process of the application with Docker.
You can build web-based, scalable back-end systems in Go without using any web framework. In this
chapter, a production-ready RESTful API application was created by using the standard library package
net/http and a few third-party libraries. When you build RESTful APIs, you might not need a web
framework, but when you develop conventional web applications, using one might be helpful. Beego is a
fully featured web framework that provides everything, including an ORM.
References
https://docs.docker.com/
https://github.com/tools/godep
Chapter 10
Testing Go Applications
Automated testing is an important practice in software engineering that ensures the quality of your
applications. If you are concerned about application quality, you should write automated tests to verify
the behavior of the components of your applications. In your Go applications, automated tests can ensure
that the Go packages behave the way they were designed to work. Go provides the fundamental testing
capabilities through its standard library packages and tooling support. In this chapter, you will learn how to
write unit tests using standard library packages and third-party packages.
Unit Testing
It isn't easy to develop reliable software systems and maintain them long term. When you develop
applications, you must ensure that they work as intended every time. A good software system should be
maintainable so you can modify the functionality of the applications at any time without breaking any parts
of the application. When you modify some parts of an application, it should not destroy the functionality
of other parts. So you need to adopt good software engineering practices to ensure the quality of your
applications. Unit testing is an important software development practice that can be used for ensuring this
quality.
Unit testing is a kind of automated testing process in which the smallest pieces of testable software in
the application, called units, are individually and independently tested to determine whether they behave
exactly as designed. When you write unit tests, it is important to isolate the units (the smallest testable parts)
from the remaining parts of the code to individually and independently test the application code. When you
write unit tests, you should identify the units to be tested.
In OOP, the smallest unit might be a method that belongs to a class. In procedural programming, the
smallest unit might be a single function. In Go programs, a function or a method might be considered a
unit, but a function is not necessarily a unit by itself: the context of the application functionality and the
software design decide which part can be treated as a unit.
In TDD, you first write a unit test that defines a newly identified requirement or a desired improvement
before writing the production code, which gives you an understanding about what you will develop. You
write a unit test against a newly identified functional requirement, and the development process starts with
the newly added unit test. A developer must clearly understand the functional requirement in order to write
a new unit test. In a pure TDD approach, you run all unit tests along with the newly added test before writing
the implementation and see the newly added test fail. After writing a unit test, you write just enough code
to pass the test. Once your test is successful, you can fearlessly refactor the application code because it is
covered by the unit tests.
The TDD approach is highly recommended by those who are using agile development methodologies
for their software delivery. Agile methodologies emphasize developing software based on an evolutionary
approach rather than following an upfront design. When you develop software based on an evolutionary
design approach, TDD gives you a lot of value as you continuously refactor application code for a newly
identified requirement or desired improvement. Because your application code is covered by unit tests, you
can run the suite of unit tests at any time to ensure that your application is working as you designed. Unit
tests define the design of your application.
Here are the steps involved in TDD:
1. Add a unit test to define a new functional requirement.
2. Run all tests and see whether the new unit test fails.
3. Write some code to pass the tests.
4. Run the tests.
5. Refactor the code.
These steps will continue for the entire evolution of the software development process.
This chapter doesn't use the test-first approach or TDD as the design approach for the examples. It
focuses more on writing automated unit tests. TDD is an advanced technique of using automated unit tests,
so you can easily practice TDD once you know how to write automated unit tests.
Within the test suite (the source file ends with _test.go), write functions with
signature func TestXxx(*testing.T).
You can put the test suite files in the same package that is being tested. These test files are excluded
when you build the packages, but are included when you run the go test command, which recompiles each
package along with any files with names matching the file pattern "*_test.go".
To get help with go test, run the following command:
go help test
To get help with the various flags used by the go test command, run the following command:
go help testflag
If the test results don't match the expected results, Error, Fail, or related functions can be called
to signal failure of the test cases. The Error and Fail functions signal the failure of a test case, but it will
continue the execution for the rest of the test cases. If you want to stop the execution when any test case
fails, you can call the FailNow or Fatal functions. The FailNow function calls the Fail function and stops the
execution. Fatal is equivalent to Log followed by FailNow. In these test cases, the Errorf function is called to
signal their failure:
if result != expected {
t.Errorf("SwapCase(%q) == %q, expected %q", input, result, expected)
}
Let's run the test cases. Navigate to the package directory and then run the following command on the
command-line window:
go test
The go test command will execute all _test.go files in the package directory, and you should see
output something like this:
PASS
ok      github.com/shijuvar/go-web/chapter-10/stringutils      0.524s
The output of the previous test result is not very descriptive. The verbose (-v) flag provides
descriptive information about the test cases. Let's run the go test command with the verbose flag:
go test -v
When you run go test with the verbose flag, you should see output something like this:
=== RUN   TestSwapCase
--- PASS: TestSwapCase (0.00s)
=== RUN   TestReverse
--- PASS: TestReverse (0.00s)
PASS
ok      github.com/shijuvar/go-web/chapter-10/stringutils      0.466s
=== RUN   TestSwapCase
--- PASS: TestSwapCase (0.00s)
=== RUN   TestReverse
--- PASS: TestReverse (0.00s)
PASS
coverage: 100.0% of statements
This output shows that there is 100% test coverage for the code written in the stringutils package.
The stringutils package contains two utility functions, both of which are covered in the utils_test.go
test suite file.
For the sake of demonstration, let's comment out the test function TestSwapCase and run the go
test command with the coverage flag. You should see output something like this:
=== RUN   TestReverse
--- PASS: TestReverse (0.00s)
PASS
coverage: 40.0% of statements
ok      github.com/shijuvar/go-web/chapter-10/stringutils      0.371s
This output shows that the test coverage is 40% because the SwapCase function is not covered
(the TestSwapCase function was commented out) in the test suite file.
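The testing package also supports benchmark tests. Benchmark functions for SwapCase and Reverse follow the standard func BenchmarkXxx(*testing.B) pattern; a minimal sketch, assuming the same "Hello, World" input used by the unit tests and placement in the utils_test.go suite, looks like this:
// Benchmark sketch for the SwapCase function
func BenchmarkSwapCase(b *testing.B) {
	for i := 0; i < b.N; i++ {
		SwapCase("Hello, World")
	}
}

// Benchmark sketch for the Reverse function
func BenchmarkReverse(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Reverse("Hello, World")
	}
}
Running go test with the benchmark flag (for example, go test -bench=.) produces output along these lines: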
BenchmarkSwapCase    3000000    562 ns/op
BenchmarkReverse     3000000    404 ns/op
ok      github.com/shijuvar/go-web/chapter-10/stringutils      4.471s
This output shows that the loop within the BenchmarkSwapCase benchmark function ran 3,000,000
times at a speed of 562 ns per loop. The loop within the BenchmarkReverse function ran 3,000,000 times at
a speed of 404 ns per loop. In the output, the BenchmarkReverse function performed a bit better than the
BenchmarkSwapCase function.
Example()       // Example test for package
ExampleF()      // Example test for function F
ExampleT()      // Example test for type T
ExampleT_M()    // Example test for M on type T
Each example test function ends with a concluding line comment that begins with "Output:"; when the
tests are run, this comment is compared with the standard output of the function.
Listing 10-4 provides the example code for the Reverse and SwapCase functions.
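Listing 10-4 itself is not reproduced here, but an example test for Reverse that follows this convention looks roughly like the following sketch; the expected output assumes the same input/result pair used by the unit tests.
// Example test sketch for the Reverse function
func ExampleReverse() {
	fmt.Println(Reverse("Hello, World"))
	// Output: dlroW ,olleH
}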
ok      github.com/shijuvar/go-web/chapter-10/stringutils      0.359s
This output shows that the example tests have successfully passed. In addition to verifying the example
code, example tests are available as examples for package documentation. When documentation is
generated with the godoc tool, the example code in the example test functions is available as an example in
the documentation.
Figure 10-1 illustrates the documentation for the Reverse function, showing that the example is taken
from the ExampleReverse function.
Figure 10-1. Documentation for the Reverse function generated by the godoc tool
Figure 10-2 illustrates the documentation for the SwapCase function, showing that the example is taken
from the ExampleSwapCase function.
Figure 10-2. Documentation for the SwapCase function generated by the godoc tool
ok      github.com/shijuvar/go-web/chapter-10/stringutils      5.457s
Here, the tests are run normally without the short flag, so the long-running test function TestLongRun executes
normally and the run takes about 5 seconds to complete. Now let's run the tests by providing the short flag:
go test -v -cover -short
=== RUN   TestSwapCase
--- PASS: TestSwapCase (0.00s)
=== RUN   TestReverse
--- PASS: TestReverse (0.00s)
=== RUN   TestLongRun
--- SKIP: TestLongRun (0.00s)
        utils_test.go:61: Skipping test in short mode
=== RUN   ExampleReverse
--- PASS: ExampleReverse (0.00s)
=== RUN   ExampleSwapCase
--- PASS: ExampleSwapCase (0.00s)
PASS
coverage: 100.0% of statements
ok      github.com/shijuvar/go-web/chapter-10/stringutils      0.449s
This output shows that the test case TestLongRun was skipped during the execution:
--- SKIP: TestLongRun (0.00s)
utils_test.go:61: Skipping test in short mode
=== RUN   TestSwapCaseInParallel
=== RUN   TestReverseInParallel
=== RUN   TestSwapCase
--- PASS: TestSwapCase (0.00s)
=== RUN   TestReverse
--- PASS: TestReverse (0.00s)
=== RUN   TestLongRun
--- SKIP: TestLongRun (0.00s)
        utils_test.go:91: Skipping test in short mode
--- PASS: TestSwapCaseInParallel (1.00s)
--- PASS: TestReverseInParallel (2.00s)
=== RUN   ExampleReverse
--- PASS: ExampleReverse (0.00s)
=== RUN   ExampleSwapCase
--- PASS: ExampleSwapCase (0.00s)
PASS
coverage: 100.0% of statements
ok      github.com/shijuvar/go-web/chapter-10/stringutils      2.345s
This output shows that test cases TestSwapCaseInParallel and TestReverseInParallel ran in parallel:
=== RUN   TestSwapCaseInParallel
=== RUN   TestReverseInParallel
Within these functions, the execution is delayed by using the time.Sleep function for the
sake of this demonstration. The two tests complete in a different order, taking 1 second for
TestSwapCaseInParallel and 2 seconds for TestReverseInParallel:
--- PASS: TestSwapCaseInParallel (1.00s)
--- PASS: TestReverseInParallel (2.00s)
If you look at the logs generated by go test, you can see that the other test cases ran sequentially,
one after another:
=== RUN   TestSwapCase
--- PASS: TestSwapCase (0.00s)
=== RUN   TestReverse
--- PASS: TestReverse (0.00s)
=== RUN   TestLongRun
--- SKIP: TestLongRun (0.00s)
        utils_test.go:91: Skipping test in short mode
=== RUN   ExampleReverse
--- PASS: ExampleReverse (0.00s)
=== RUN   ExampleSwapCase
--- PASS: ExampleSwapCase (0.00s)
The utils.go file is put into the stringutils package directory, and the test suite file utils_test.go is
put into the stringutils_test package directory.
Listing 10-7 provides the combined source version of the utils_test.go file used in previous examples.
Listing 10-7. Source of Test Suite File utils_test.go
package stringutils_test
import (
"fmt"
"testing"
"time"
. "github.com/shijuvar/go-web/chapter-10/stringutils"
)
// Test case for the SwapCase function to execute in parallel
func TestSwapCaseInParallel(t *testing.T) {
t.Parallel()
// Delaying 1 second for the sake of demonstration
time.Sleep(1 * time.Second)
input, expected := "Hello, World", "hELLO, wORLD"
result := SwapCase(input)
if result != expected {
t.Errorf("SwapCase(%q) == %q, expected %q", input, result, expected)
}
}
// Test case for the Reverse function to execute in parallel
func TestReverseInParallel(t *testing.T) {
t.Parallel()
// Delaying 2 seconds for the sake of demonstration
time.Sleep(2 * time.Second)
input, expected := "Hello, World", "dlroW ,olleH"
result := Reverse(input)
if result != expected {
t.Errorf("Reverse(%q) == %q, expected %q", input, result, expected)
}
}
// Benchmark test for the SwapCase function
func BenchmarkSwapCase(b *testing.B) {
	for i := 0; i < b.N; i++ {
		SwapCase("Hello, World")
	}
}

// Benchmark test for the Reverse function
func BenchmarkReverse(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Reverse("Hello, World")
	}
}
=== RUN   TestSwapCaseInParallel
=== RUN   TestReverseInParallel
=== RUN   TestSwapCase
--- PASS: TestSwapCase (0.00s)
=== RUN   TestReverse
--- PASS: TestReverse (0.00s)
=== RUN   TestLongRun
--- SKIP: TestLongRun (0.00s)
        utils_test.go:93: Skipping test in short mode
--- PASS: TestSwapCaseInParallel (1.00s)
--- PASS: TestReverseInParallel (2.00s)
=== RUN   ExampleReverse
--- PASS: ExampleReverse (0.00s)
=== RUN   ExampleSwapCase
--- PASS: ExampleSwapCase (0.00s)
PASS
coverage: 0.0% of statements
ok      github.com/shijuvar/go-web/chapter-10/stringutils_test 2.573s
Although the unit tests were run from a package separate from the one being tested, the go test command
gave the proper output. Here, the only difference is that the test coverage is reported as 0%:
coverage: 0.0% of statements
If you aren't concerned about the test-coverage percentage reported by the go test command,
putting unit tests into a separate package is a recommended approach.
ResponseRecorder
Server

HTTP Verb    Path      Functionality
GET          /users    Gets the list of users
POST         /users    Creates a user
Two HTTP endpoints are written: HTTP Post on "/users" and HTTP Get on "/users". Gorilla mux
is used to configure the request multiplexer. When a new User entity is created, you validate whether the
e-mail ID already exists. For the sake of the example, the User objects are persisted into a
slice named userStore.
In TDD, a developer starts the development cycle by writing unit tests based on the identified
requirements, and those unit tests are normally derived from user stories. Even though this book doesn't follow a
test-first approach or the purest form of TDD, let's write user stories to drive the unit tests:
1. Users should be able to view a list of User entities.
2. Users should be able to create a new User entity.
3. The e-mail ID of a User entity should be unique.
Listing 10-9 provides the unit tests for the HTTP server application (refer to Listing 10-8). The tests are
based on the user stories defined previously.
Listing 10-9. Unit Tests for HTTP API Server using ResponseRecorder in main_test.go
package main
import (
"fmt"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/gorilla/mux"
)
// User Story - Users should be able to view list of User entity
func TestGetUsers(t *testing.T) {
r := mux.NewRouter()
r.HandleFunc("/users", getUsers).Methods("GET")
req, err := http.NewRequest("GET", "/users", nil)
if err != nil {
t.Error(err)
}
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != 200 {
t.Errorf("HTTP Status expected: 200, got: %d", w.Code)
}
}
// User Story - Users should be able to create a User entity
func TestCreateUser(t *testing.T) {
r := mux.NewRouter()
r.HandleFunc("/users", createUser).Methods("POST")
userJson := `{"firstname": "shiju", "lastname": "Varghese", "email":
"[email protected]"}`
if err != nil {
t.Error(err)
}
if res.StatusCode != 200 {
t.Errorf("HTTP Status expected: 200, got: %d", res.StatusCode)
}
}
func TestCreateUserClient(t *testing.T) {
r := mux.NewRouter()
r.HandleFunc("/users", createUser).Methods("POST")
server := httptest.NewServer(r)
defer server.Close()
usersUrl := fmt.Sprintf("%s/users", server.URL)
fmt.Println(usersUrl)
userJson := `{"firstname": "Rosmi", "lastname": "Shiju", "email": "[email protected]"}`
request, err := http.NewRequest("POST", usersUrl, strings.NewReader(userJson))
res, err := http.DefaultClient.Do(request)
if err != nil {
t.Error(err)
}
if res.StatusCode != 201 {
t.Errorf("HTTP Status expected: 201, got: %d", res.StatusCode)
}
}
Three test cases are written against the user stories. Follow these steps to write each test case:
1. Create a router instance using the Gorilla mux package and configure the multiplexer.
2. Create an HTTP request using the http.NewRequest function.
3. Create a ResponseRecorder object using the httptest.NewRecorder function.
4. Send the ResponseRecorder object and Request object to the multiplexer by calling the ServeHTTP method.
5. Inspect the ResponseRecorder object to verify the returned HTTP response.
Let's explore the code of the test function TestGetUsers:
The multiplexer is configured to perform an HTTP Get request on "/users":
r := mux.NewRouter()
r.HandleFunc("/users", getUsers).Methods("GET")
The HTTP request object is created using http.NewRequest to send this object to the multiplexer:
req, err := http.NewRequest("GET", "/users", nil)
if err != nil {
t.Error(err)
}
A ResponseRecorder object is created using the httptest.NewRecorder function to record the returned
HTTP response:
w := httptest.NewRecorder()
The ServeHTTP method of the multiplexer is called by providing the ResponseRecorder and Request
objects to invoke the HTTP Get request on "/users", which invokes the getUsers handler function:
r.ServeHTTP(w, req)
The ResponseRecorder object records the returned response so the behavior of the HTTP response can
be verified. Here, the returned HTTP response of a status code of 200 is verified:
if w.Code != 200 {
t.Errorf("HTTP Status expected: 200, got: %d", w.Code)
}
In the test function TestCreateUser, JSON data is provided to create a User entity. Here, the returned
HTTP response of a status code of 201 is verified:
if w.Code != 201 {
t.Errorf("HTTP Status expected: 201, got: %d", w.Code)
}
The test function TestUniqueEmail verifies that the e-mail ID of a User entity is unique. To test this
behavior, the same JSON data used for the TestCreateUser function is provided. Because test cases run
sequentially, the TestUniqueEmail function runs after the TestCreateUser function has executed.
Because a duplicate e-mail is provided, a status code of 400 should be received:
if w.Code != 400 {
t.Errorf("Bad Request expected, got: %d", w.Code)
}
If the request on "/users" is successful, the HTTP status code 201 should appear. The verification is
shown here:
if res.StatusCode != 201 {
t.Errorf("HTTP Status expected: 201, got: %d", res.StatusCode)
}
BDD Testing in Go
The Go testing and httptest standard library packages provide a great foundation for writing automated
unit tests. The advantage of these packages is that they provide many extensibility points, so you can easily
use these packages with other custom packages.
This section discusses two third-party packages: Ginkgo and Gomega. Ginkgo is a behavior-driven
development (BDD) based testing framework that lets you write expressive tests in Go to specify
application behaviors. If you are practicing BDD for your software development process, Ginkgo is a great
choice of package. Gomega is a matcher library that is best paired with the Ginkgo package. Although Gomega
is the preferred matcher library for Ginkgo, Ginkgo itself is designed to be matcher-agnostic.
Figure 10-4. Directory structure of the refactored application from Listing 10-8
Listing 10-8 implemented everything in a single source file: main.go in the main package. In the
refactored application, code is implemented in two packages: lib and main. The main package contains the
main.go file that provides the entry point of the application, and all application logic is moved into the lib
package.
The lib package contains the following source files:
handlers.go contains the HTTP handlers and provides the implementation for
setting up routes with Gorilla mux.
Other source files in the lib directory provide implementation for automated tests for BDD, but they
are put in the lib_test package. (This topic is discussed later in the chapter.)
Listing 10-11 provides the handlers.go code in the lib package.
Listing 10-11. handlers.go in the lib Package
package lib
import (
"encoding/json"
"net/http"
"github.com/gorilla/mux"
)
func GetUsers(repo UserRepository) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
userStore := repo.GetAll()
users, err := json.Marshal(userStore)
if err != nil {
w.WriteHeader(http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(users)
})
}
An HTTP server is started in main.go. Listing 10-13 provides the source code of main.go in the main
package.
Listing 10-13. main.go in the main Package
package main
import (
"net/http"
"github.com/shijuvar/go-web/chapter-10/httptestbdd/lib"
)
func main() {
routers := lib.SetUserRoutes()
http.ListenAndServe(":8080", routers)
}
The main function in the main package starts the HTTP server.
Bootstrapping a Suite
To write tests with Ginkgo for a package, you must first create a test suite file by running the following
command on the command-line window:
ginkgo bootstrap
Lets navigate to the lib directory and then run the ginkgo bootstrap command. It generates a file
named lib_suite_test.go that contains the code shown in Listing 10-14.
Listing 10-14. Test Suite File lib_suite_test.go in the lib_test Package
package lib_test
import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"testing"
)
func TestLib(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Lib Suite")
}
The generated source file is put into a package named lib_test, which isolates the tests from the
application code in the lib package. Go allows you to put the lib_test package directly inside the
lib package directory. You can also change the package name to lib for the test suite file and tests.
The source of lib_suite_test.go shows that Ginkgo leverages Go's existing testing infrastructure.
You can run the suite by running "go test" or "ginkgo" on the command-line window.
Let's explore the suite file:
The ginkgo and gomega packages are imported with a dot (.) import, which allows
you to call exported identifiers of ginkgo and gomega packages without using a
qualifier.
The RunSpecs(t, "Lib Suite") statement tells Ginkgo to start the test suite. Ginkgo
automatically fails the testing.T if any of the specs fail.
Running the ginkgo generate command generates a test file named users_test.go that contains the code shown in Listing 10-15.
As discussed earlier, the tests are written in the lib_test package under the lib directory, which Go allows.
If you want to use the lib package for the tests, you can also do so.
Listing 10-15. Test File users_test.go Generated by ginkgo
package lib_test
import (
. "lib"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("Users", func() {
})
The generated test file contains the code for importing ginkgo and gomega packages using the dot (.)
import. Because test files are written in the lib_test package, the lib package has to be imported. Because
the dot (.) import is used for packages, the exported identifiers of these packages can be called directly
without needing a qualifier.
In BDD-style tests, specs are written to define code behavior. With Ginkgo, specs are written inside
a top-level Describe container using the Ginkgo Describe function. Ginkgo uses the "var _ =" trick to
evaluate the Describe function at the top level without requiring an init function.
In the application code, a concrete implementation for the UserRepository interface is provided
(InMemoryUserRepository), which provides persistence onto in-memory data of a collection. When you
develop real-world applications, you might persist your application data in a database. When you write
tests, you may want to avoid persisting to the database by providing a mocked implementation. Because
the handler functions expect a concrete implementation of UserRepository as a parameter value, you can
provide a separate version of UserRepository in the tests for invoking the handler functions.
Listing 10-17 provides a concrete implementation of the UserRepository interface for use in the tests.
Listing 10-17. Implementation of UserRepository in users_test.go
type FakeUserRepository struct {
DataStore []User
}
func (repo *FakeUserRepository) GetAll() []User {
return repo.DataStore
}
func (repo *FakeUserRepository) Create(user User) error {
err := repo.Validate(user)
if err != nil {
return err
}
repo.DataStore = append(repo.DataStore, user)
return nil
}
func (repo *FakeUserRepository) Validate(user User) error {
for _, u := range repo.DataStore {
if u.Email == user.Email {
return errors.New("The Email already exists")
}
}
return nil
}
func NewFakeUserRepo() *FakeUserRepository {
return &FakeUserRepository{
DataStore: []User{
User{"Shiju", "Varghese", "[email protected]"},
User{"Rosmi", "Shiju", "[email protected]"},
User{"Irene", "Rose", "[email protected]"},
},
}
}
FakeUserRepository provides an implementation of the UserRepository interface that is written for
use with tests. You can create an instance of FakeUserRepository by calling the NewFakeUserRepo function,
which also provides fake data for three User objects. The FakeUserRepository type is a kind of test double,
a generic term used in unit testing for any circumstance in which a production object is replaced for testing
purposes. Here, InMemoryUserRepository is replaced with FakeUserRepository for testing purposes.
Listing 10-18 provides the completed version of users_test.go, in which all the specs are
implemented.
Individual behaviors are written in the Describe block. Here, behaviors for
"Get Users" and "Post a new User" are defined on "Users".
Within the Describe block, the Context blocks are written to define circumstances
under a behavior.
Individual specs are written in the It block within the Describe and Context
containers.
Within the "Get Users" behavior, a "Getall Users" context is defined, which
maps the functionality of HTTP Get on the "/users" endpoint. Within this context,
an It block is defined as "should get list of Users", which checks to see
whether the returned HTTP response has the status code of 200. Dummy data of
three Users are defined by creating an instance of FakeUserRepository so that the
returned HTTP response shows having three Users.
For the "Post a new User" behavior, two circumstances are defined: "Provide a
valid User data" and "Provide a User data that contains duplicate email
id". This maps the functionality of HTTP Post on the "/users" endpoint. A new
User should be able to be created if valid User data is provided. An error occurs
if a User data with a duplicate e-mail ID is provided. These specs are specified in the
It block.
The Ginkgo preferred matcher library Gomega is used for assertion. Gomega provides a
variety of functions for writing assertion statements. The Expect function is also used
for assertion.
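Putting these pieces together, a minimal sketch of the "Get all Users" spec might look like the following; it is reconstructed from the description above rather than copied from Listing 10-18, and it assumes the net/http and net/http/httptest packages are added to the test file's imports.
var _ = Describe("Users", func() {
	Describe("Get Users", func() {
		Context("Get all Users", func() {
			It("should get list of Users", func() {
				// Use the fake repository that holds three dummy Users
				repo := NewFakeUserRepo()
				req, _ := http.NewRequest("GET", "/users", nil)
				w := httptest.NewRecorder()
				// Invoke the GetUsers handler and assert on the response
				GetUsers(repo).ServeHTTP(w, req)
				Expect(w.Code).To(Equal(200))
			})
		})
	})
})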
Running Specs
You can run the test suite using the go test or ginkgo commands.
Let's run the suite using the go test command:
go test -v
The go test command generates the output shown in Figure 10-5.
Summary
Automated testing is an important practice in software engineering that ensures application quality.
Unit testing is a kind of automated testing process in which the smallest pieces of testable software in the
application, called units, are individually and independently tested to determine whether they behave
exactly as designed.
Test-driven development (TDD) is a software development process that follows a test-first development
approach, in which unit tests are written before the production code. TDD is a design approach that
encourages developers to think about their implementation before writing the code.
Go provides the core functionality to write automated unit tests through its testing standard library
package. The testing package provides all the essential functionality required for writing automated tests
with tooling support. It is intended to be used with the go test command. Besides the testing package, the
Go standard library provides two more packages: httptest provides utilities for HTTP testing, and quick
provides utility functions to help with black box testing.
The following naming conventions and patterns are used to write a test suite:
Within the test suite (the source file ends in _test.go), write functions with the
signature func TestXxx(*testing.T).
Test functions are run sequentially when tests are run using the go test command. In addition to
providing support for code testing behavior, the testing package can also be used for benchmarking tests
and testing example code.
You can test HTTP applications using the httptest package. When you test HTTP applications, you can
use the ResponseRecorder and Server struct types provided by the httptest package. ResponseRecorder
records the response of a returned HTTP response so it can be inspected. A Server is an HTTP server for
testing to perform end-to-end HTTP tests.
This chapter showed you the third-party packages Ginkgo and Gomega for testing. Ginkgo is a behavior-driven
development (BDD) style testing framework that lets you write expressive tests in Go to specify application behavior.
BDD is an extension of TDD, with an emphasis on behavior instead of tests. In BDD, you specify behaviors in
an expressive way in your automated tests, and you write code based on those behaviors.
Chapter 11
Container as a Service
Container as a Service (CaaS) is an evolutionary computing model from both IaaS and PaaS. It is a relatively
new model that brings you the best of both IaaS and PaaS. This model allows you to run software containers
on the Cloud platform by using popular container technologies such as Docker and Kubernetes. Application
containers are gradually becoming a standard for deploying and running applications due to their benefits.
When you run application containers on Cloud platforms, this model gives you many capabilities. Google
Container Engine is a CaaS platform provided by Google Cloud.
Figure 11-1 shows an infographic of the various services provided by the Google Cloud platform.
Python
Java
PHP
Go
User Authentication
You can use User Authentication services to sign on users with a Google account or OpenID.
Cloud Datastore
Cloud Datastore is a schema-less NoSQL database that can be used for persisting data of your App Engine
applications.
Cloud Bigtable
Cloud Bigtable is a fast, fully managed, massively scalable NoSQL database that is ideal as the data
store for large-scale web, mobile, Big Data, and IoT applications that deal with large volumes of data. If an
App Engine application requires a massively scalable data store with high performance, Cloud Bigtable is a
better choice than Cloud Datastore.
Memcache
When you develop high-performance applications, caching your application data is an important strategy
for improving application performance. Memcache is a distributed, in-memory data cache that can be used
to cache application data to improve the performance of App Engine applications.
Search
The Search service allows you to perform Google-like searches over structured data such as plain text,
HTML, atom, numbers, dates, and geographic locations.
Traffic Splitting
The Traffic Splitting service allows you to route incoming requests to different application versions, run A/B
tests, and do incremental feature rollouts.
Logging
The Logging service allows App Engine applications to collect and store logs.
Task Queues
The Task Queues service enables App Engine applications to perform work outside of user requests by using
small discrete tasks that are executed later.
Security Scanning
The Security Scanning service scans applications for security vulnerabilities such as XSS attacks.
Go Development Environment
You can develop, test, and deploy Go applications on App Engine using the App Engine SDK for Go, which
provides the tools and APIs for developing, testing, and running applications on Google Cloud. The App
Engine Go SDK includes a development web server that allows you to run an App Engine application on a
local computer to test Go applications before uploading them into the Cloud.
The development web server simulates many Cloud services in the development environment so that
you can test App Engine applications on your local computer. This is very useful because you don't need
to deploy applications into the Cloud whenever you want to test an App Engine application during the
development cycle. You can test your App Engine application locally, and whenever you want to deploy the
application into the Cloud platform's production environment, you can do so by using the tools provided by
App Engine. The development server simulates the App Engine environment, including local versions of
the datastore and Google Accounts, and gives you the ability to fetch URLs and send e-mail directly from a
local computer using the App Engine APIs.
The Go SDK uses a modified version of the development tools from the Python SDK and runs on Mac
OS X, Linux, and Windows computers with Python 2.7, so download and install Python 2.7 for your
platform from the Python web site if it isn't already present. Most Mac OS X users already have Python 2.7
installed. To develop Go applications on App Engine, download and install the App Engine SDK for Go for your OS.
The App Engine SDK for Go provides a command-line tool named goapp, which includes the following
command:
goapp deploy: The goapp deploy command uploads a Go application into
the App Engine production environment.
You can find the goapp tool in the go_appengine directory of the zip archive of App Engine SDK for Go.
To invoke the goapp tool from a command line, add the go_appengine directory to the PATH environment
variable. This command adds the go_appengine directory to the PATH environment variable:
export PATH=$HOME/go_appengine:$PATH
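With goapp on your PATH, you can run an application locally on the development web server and later deploy the same application directory; a hedged example, assuming the application lives in a directory named gae-demo that contains app.yaml:
goapp serve gae-demo/
goapp deploy gae-demo/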
The application identifier is gae-demo. When you deploy an application into App
Engine, you must specify a unique identifier as the application identifier. When
you run the application in the development server, you can set any value as the
application identifier. Here, it is set to gae-demo while the application runs
on the development server.
The version number of the application code is 1. If you properly update the version
before uploading a new version of the application into App Engine, you can
roll back to a previous version using the administrative console.
This Go program runs in the Go runtime environment with the API version go1.
Every request to a URL whose path matches the regular expression /.* (all URLs)
should be handled by the Go program. The _go_app value is recognized by the
development web server and ignored by the production App Engine servers.
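Taken together, these settings correspond to an app.yaml along the lines of the following sketch, reconstructed from the values just described:
application: gae-demo
version: 1
runtime: go
api_version: go1

handlers:
- url: /.*
  script: _go_app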
Note You can get information about App Engine instances on the admin web server here:
http://localhost:8000/.
Figure 11-5 shows the admin web server providing information about App Engine instances
running in the development web server.
Figure 11-7 shows details about the newly created project from the Google Developers Console.
Figure 11-9. Response page of the Task form in the App Engine app
Note A build constraint, also known as a build tag, specifies the conditions under which a file should be
included in the package. A build constraint must appear near the top of the source file as a line comment that
begins with // +build. Build constraints can appear in any kind of source file, not only Go
source files.
The App Engine SDK provides a new build constraint, appengine, that can be used to differentiate
the source code in stand-alone and App Engine environments when compiling the source code. Using the
appengine build constraint, you can exclude some source files during the build process based on the build
environment. For example, when you build the application source using the App Engine SDK, you can
ignore the source code written in the main package.
If you want a source file to be built by the App Engine SDK but ignored by the Go tool, add the
following to the top of the source file:
// +build appengine
The following build constraint specifies that the file is built with the Go tool and will
not be compiled by the App Engine SDK:
// +build !appengine
Let's rewrite the web application from Listing 11-1 to make it a hybrid application for both App Engine
and stand-alone environments. In this hybrid implementation, you'll create separate source files, marked
with build constraints, for the App Engine and stand-alone applications, and put the common HTTP
handler logic in a shared library.
Figure 11-10 illustrates the directory structure of the hybrid application.
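The handler.go source file of the shared library begins with the package clause, imports, and the Task type. The following is a minimal sketch of that header, assuming the package is named hybridapplib after its directory and that Task carries just the two form fields:
package hybridapplib

import (
	"fmt"
	"html/template"
	"net/http"
)

// Task holds the values submitted from the task form.
type Task struct {
	Name        string
	Description string
}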
const taskForm = `
<html>
<body>
<form action="/task" method="post">
<p>Task Name: <input type="text" name="taskname" ></p>
<p> Description: <input type="text" name="description" ></p>
<p><input type="submit" value="Submit"></p>
</form>
</body>
</html>
`
const taskTemplateHTML = `
<html>
<body>
<p>New Task has been created:</p>
<div>Task: {{.Name}}</div>
<div>Description: {{.Description}}</div>
</body>
</html>
`
var taskTemplate = template.Must(template.New("task").Parse(taskTemplateHTML))
func init() {
	http.HandleFunc("/", index)
	http.HandleFunc("/task", task)
}

func index(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, taskForm)
}

func task(w http.ResponseWriter, r *http.Request) {
	task := Task{
		Name:        r.FormValue("taskname"),
		Description: r.FormValue("description"),
	}
	err := taskTemplate.Execute(w, task)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}
The HTTP routes are registered in the init function of handler.go. The code in the handler.go source file is
common to both the stand-alone and App Engine applications.
Listing 11-5 shows the main.go source file of the main package, which contains the main function. It is
built with the Go tool.
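A sketch of what main.go looks like for the stand-alone build; the import path of the shared hybridapplib package is an assumption and depends on your own GOPATH layout:
// +build !appengine

package main

import (
	"log"
	"net/http"

	// Blank import runs the init function of the shared library, which
	// registers the HTTP handlers on the default ServeMux. The import
	// path here is an assumption.
	_ "github.com/user/hybridapp/hybridapplib"
)

func main() {
	log.Fatal(http.ListenAndServe(":8080", nil))
}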
Google Cloud Datastore: A NoSQL data store that provides scalable storage.
Google Cloud Bigtable: A NoSQL database that can scale to billions of rows and
thousands of columns, allowing you to store petabytes of data. It is an ideal data store for Big
Data solutions.
Google Cloud provides both NoSQL and relational databases. Google Cloud offers two options for
NoSQL: Google Cloud Datastore and Google Cloud Bigtable. Both Datastore and Bigtable are designed to
provide a massively scalable data store.
Even though both are designed to be massively scalable, Bigtable is a better choice when you deal
with terabytes of data. Bigtable is designed for HBase compatibility and is accessible through extensions to
the HBase 1.0 API, so it is compatible with the Big Data ecosystem. In Big Data solutions, you can store
massive volumes of data in Bigtable and analyze them with analytics tools in the Hadoop ecosystem.
Cloud Datastore is built on top of Bigtable. Datastore provides high availability with replication and
data synchronization; Bigtable doesn't replicate data and runs in a single datacenter region. Datastore provides
support for ACID transactions and for queries using GQL, a SQL-like language for retrieving
entities from Datastore. Cloud Datastore is a great data store choice for App Engine web applications.
The following section describes how to use the Google Cloud Datastore with App Engine applications.
Cloud Datastore provides a number of features, including the following:
No planned downtime
Atomic transactions
The most important thing about Datastore is that it replicates data across multiple datacenter regions,
providing a high level of availability for reads and writes. When you compare Datastore with Bigtable, note
that Bigtable runs in a single datacenter.
Entities
The Cloud Datastore holds data objects known as entities. The values of a Go struct are persisted into
entities, which hold one or more properties. The Go struct's fields become the properties of the entity, and
the types of the property values are taken from the struct's fields.
Like many NoSQL databases, Cloud Datastore is a schema-less database, which means that data objects
of the same entity kind can have different properties, and properties with the same name can have different
value types.
Cloud Datastore is an evolutionary NoSQL database that has a lot of advantages over traditional NoSQL
databases. Cloud Datastore allows you to store hierarchically structured data using ancestor paths in a
tree-like structure.
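For example, persisting a Task value as an entity of the "tasks" kind from inside an HTTP handler might look like the following sketch, using the google.golang.org/appengine and google.golang.org/appengine/datastore packages and the Task type from the example that follows:
c := appengine.NewContext(r)
t := Task{
	Name:        "GAE",
	Description: "Persist data with Cloud Datastore",
	CreatedOn:   time.Now(),
}
// An incomplete key asks Datastore to assign a numeric ID to the new entity.
key := datastore.NewIncompleteKey(c, "tasks", nil)
if _, err := datastore.Put(c, key, &t); err != nil {
	http.Error(w, err.Error(), http.StatusInternalServerError)
	return
}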
Users can create a new task by choosing the Create task option, which displays a
form for creating a new task.
Listing 11-7 provides an example App Engine application using Cloud Datastore.
Listing 11-7. App Engine Application with Cloud Datastore
package task
import (
	"fmt"
	"html/template"
	"net/http"
	"time"

	"google.golang.org/appengine"
	"google.golang.org/appengine/datastore"
)

type Task struct {
	Name        string
	Description string
	CreatedOn   time.Time
}
const taskForm = `
<html>
<body>
<form action="/save" method="post">
<p>Task Name: <input type="text" name="taskname" ></p>
<p> Description: <input type="text" name="description" ></p>
<p><input type="submit" value="Submit"></p>
</form>
</body>
</html>
`
const taskListTmplHTML = `
<html>
<body>
<p>Task List</p>
{{range .}}
<p>{{.Name}} - {{.Description}}</p>
{{end}}
<p><a href="/create">Create task</a> </p>
</body>
</html>
`
The example uses the following packages, which provide the Go APIs for App Engine and Cloud Datastore:
google.golang.org/appengine
google.golang.org/appengine/datastore
Zero or more filters based on the entity's property values, keys, and ancestors
In the preceding query, "tasks" was specified as the entity kind, but no filter conditions were specified.
You can specify filter conditions using the Filter method:
q := datastore.NewQuery("tasks").
	Filter("Name =", "GAE").
	Order("-CreatedOn")
The Order call sorts the results by the CreatedOn field; the - prefix specifies descending order.
A Query value only prepares the query; you need to call one of the Query methods to execute it and
fill the results into Go values. The GetAll method executes the query and appends the results to a slice
of Task values:
c := appengine.NewContext(r)
var tasks []Task
_, err := q.GetAll(c, &tasks)
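To complete the handler, you would typically check the error and render the results with the task list template; a sketch, assuming taskListTmpl holds the parsed form of the taskListTmplHTML constant:
c := appengine.NewContext(r)
var tasks []Task
if _, err := q.GetAll(c, &tasks); err != nil {
	http.Error(w, err.Error(), http.StatusInternalServerError)
	return
}
// Render the task list template defined in Listing 11-7 with the query results.
if err := taskListTmpl.Execute(w, tasks); err != nil {
	http.Error(w, err.Error(), http.StatusInternalServerError)
}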
You can query Datastore data from the web interface of the Google Developers Console. Figure 11-13
displays the data of the "tasks" entity from the Google Developers Console.
A struct type named Task is created to describe application data for the Cloud Endpoints application.
The value of the Task struct is persisted into Datastore, and the necessary tags are provided to the fields of
the Task struct to work with JSON encoding and the Datastore.
Two API methods are provided in the example application: the List method provides a list of Task data
from the Datastore, and the Add method creates a new Task entity in the Datastore. Let's write
these methods on a struct type that can be registered with Endpoints later.
Listing 11-9 provides the Go source file that contains the struct type and API methods to be exposed as
methods of the API provided by Cloud Endpoints.
Listing 11-9. TaskService Exposing Methods for the API
package cloudendpoint
import (
	"time"

	"golang.org/x/net/context"
	"google.golang.org/appengine/datastore"
)

// Task is a datastore entity
type Task struct {
	Key         *datastore.Key `json:"id" datastore:"-"`
	Name        string         `json:"name" endpoints:"req"`
	Description string         `json:"description" datastore:",noindex" endpoints:"req"`
	CreatedOn   time.Time      `json:"createdon,omitempty"`
}
This information is provided for the methods of the API service and is used to generate the discovery
documentation for the back-end APIs:
// Get ServiceMethod's MethodInfo for List method
info := api.MethodByName("List").Info()
// Provide values to MethodInfo - name, HTTP method, and path.
info.Name, info.HTTPMethod, info.Path = "listTasks", "GET", "List Tasks"
// Get ServiceMethod's MethodInfo for Add method
info = api.MethodByName("Add").Info()
info.Name, info.HTTPMethod, info.Path = "addTask", "POST", "Add a new Task"
Finally, the HandleHTTP function of the endpoints package is called, which calls the DefaultServer's
HandleHTTP method using the default http.ServeMux:
// Calls DefaultServer's HandleHttp method using default serve mux
endpoints.HandleHTTP()
Now the TaskService methods are made available as HTTP endpoints of a RESTful API, which can be
used for building web and mobile client applications. Because the Cloud Endpoints application is an App
Engine application, let's add an app.yaml file, which is used by the goapp tool as the configuration for the
App Engine application.
Listing 11-11 provides the app.yaml file for the App Engine application.
Listing 11-11. app.yaml file for the App Engine Application
application: go-endpoints
version: v1
threadsafe: true
runtime: go
api_version: go1
handlers:
- url: /.*
  script: _go_app
The Cloud Endpoints application is now ready for running on both a development web server and an
App Engine production environment.
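For example, you can start the development web server from the application's root directory and explore the generated API; the explorer URL below assumes the default development server port:
goapp serve
Then browse to http://localhost:8080/_ah/api/explorer, which lists the API's operations (listTasks and addTask) and lets you invoke them from the browser.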
When you click any operation, you navigate to a window in which you can test the API operation by
providing request data in an input window and clicking the Execute button.
Figure 11-17 shows the input window for testing the addTask operation.
Figure 11-18. HTTP Request and Response from the addTask operation
The addTask operation is an HTTP POST request to the URI endpoint http://localhost:8080/_ah/api/tasks/v1/tasks.
Figure 11-19 shows the HTTP request and response for the listTasks operation. The listTasks
operation is executed after the addTask operation has been executed three times, so you can see three records.
Figure 11-19. HTTP Request and Response for the listTasks operation
The listTasks operation is an HTTP GET request to the URI endpoint http://localhost:8080/_ah/api/tasks/v1/tasks.
The application has been tested in the local development web server provided by the Google App Engine
SDK for Go. You can deploy the Cloud Endpoints application into the production environment of App
Engine using the goapp tool. Deploying a Cloud Endpoints application into a production environment is
exactly the same process as for a normal App Engine application. You must provide the application ID
in the app.yaml file to deploy the application. Section 11.4.4 provides instructions for creating a project in the
Google Developers Console and getting an application ID. To deploy the application into production, run the
goapp tool from the root directory of the application:
goapp deploy
This command deploys the Cloud Endpoints application into the App Engine production environment,
powered by the Google Cloud platform.
The endpointscfg.py tool is run from the go_appengine directory (the directory of the Google App
Engine SDK for Go) to generate the Java library for Android applications. It should generate a zip file named
tasks.rest.zip.
Summary
Go is a programming language designed to work with modern hardware and IT infrastructures to build
massively scalable applications. Go is a perfect language choice for building scalable applications in the
Cloud computing era. With the Google Cloud platform, Go developers can easily build massively scalable
applications on the Cloud by leveraging the tools and APIs provided by the Google Cloud platform.
Google Cloud provides App Engine, a Platform as a Service (PaaS) offering that lets Go developers
build highly scalable and reliable web applications using the App Engine SDK for Go and Go APIs to various
Cloud services. When you build App Engine applications, you don't need to spend time managing the IT
infrastructure. Instead, you can fully focus on building your application, which scales automatically
whenever more computing resources are needed. With App Engine, developers are freed
from system administration, load balancing, scaling, and server maintenance.
App Engine SDK for Go provides the goapp tool that allows you to test the App Engine application in a
development web server. The goapp tool can also be used for deploying App Engine web applications into
the Google Cloud production environment, where they run in the App Engine sandbox environment.
The App Engine environment provides a main package, so you can't use a package called main. Instead,
you write your HTTP handler logic in a package of your choice. App Engine applications can easily access many
Cloud services provided by the Google Cloud platform. When you leverage these Cloud services from
your App Engine application, you don't need to configure them to be used with App Engine because
you can directly access them from your application by using the corresponding Go API.
The Google Cloud platform provides various managed services for persisting structured data, including
Google Cloud SQL, Google Cloud Datastore, and Google Cloud Bigtable. When you build App Engine applications
with large volumes of data, the Google Cloud Datastore is the best database choice. You can build massively
scalable web applications using App Engine and Cloud Datastore without spending time managing the IT
infrastructure.
Google Cloud Endpoints is an App Engine service for building back-end APIs for mobile client
applications and web client applications. It provides tools, libraries, and capabilities for quickly building
back-end APIs and generating native client libraries from App Engine applications. Although you can create
back-end APIs using a normal App Engine application, Cloud Endpoints makes the development process
easier by providing extra capabilities. Cloud Endpoints allows you to generate native client libraries for iOS,
Android, JavaScript, and Dart, so client application developers are freed from writing wrappers to access
back-end APIs.
References
gcloud-golang, the Google Cloud client libraries for Go: https://github.com/GoogleCloudPlatform/gcloud-golang
Google Cloud platform: https://cloud.google.com
Google Cloud Services infographic: https://cloud.google.com/
Endpoints architecture diagram: https://cloud.google.com/appengine/docs/python/endpoints/
Index
A
Alice package, 108, 110
App Engine applications
configuration file, 258–259
goapp deploy command, 262
Google Developers Console
project creation, 261
project details, 262
HTTP server, 257–258
task form, 263
testing, 259–260
Arrays, 24–25
Authentication
API-based approach, 122
and authorization, 121
cookie-based approach, 122–123
JWT (see JSON Web Token (JWT))
social identities, 121
token-based approach, 123–125
user credentials, 121
B
Behavior-driven development (BDD)
definition, 236
Ginkgo (see Ginkgo, BDD-style testing)
TDD, 236
Blank identifier, 17
BSON (Binary JSON), 141, 144–146, 149
Buffered channels, 54, 57
C
CaaS. See Container as a Service (CaaS)
Cloud computing
advantage, 251
autoscaling capabilities, 251
CaaS, 252
Google Cloud platform services, 253
host and run applications, 251
IaaS, 252
PaaS, 252
service-based consumption model, 251
Concurrency
description, 50
Go language, 50
goroutines, 50, 52–53
Container as a Service (CaaS), 252
Cookie-based authentication, 122–123
CRUD operations, MongoDB
Find method, 149
handler functions, 74–77
Insert method
embedded documents, 147, 149
map objects and document
slices, 146–147
struct values, 144–145
Query object, documents, 149
Remove method, 151
single record, 150
Sort method, 149
Update method, 151
Custom handlers
implementation, 63
messageHandler, 64
type, 63
D
Data collections
array, 24–25
map, 29–31
slices (see Slices)
DefaultServeMux, 66–67
Docker
Dockerfile, TaskManager
application, 206–208
Engine, 205
flags and commands, 207
Hub, 205
Linux containers, 205
E
Embedded type method, overriding, 43–44
Error handling, 33–34
F
Full-fledged web framework, 208
Function
defer, 31
panic, 32
recover, 32–33
G
GAE. See Google App Engine (GAE)
Ginkgo, BDD-style testing
bootstrapping, suite file, 241
Gomega installation, 240
HTTP API Server
directory structure, refactored
application, 236
lib package, 237–239
main package, 237, 240
specs
containers, 242–243
FakeUserRepository, 244
HTTP handler functions, 243
running, 248
test suite, 241–242
UserRepository interface, 244
users_tests.go, lib_test
Package, 244–245, 247
Godeps
dependency management system, 202
installation, 203
restore command, 204
TaskManager application, 203–204
godoc tool, 23
Go documentation, 23
Go ecosystem
Go tools, 4–7
language, 4
libraries, 4
GOMAXPROCS, 53
Go Mobile project, 13
Google App Engine (GAE)
Cloud Bigtable, 254
Cloud Datastore, 254
Google Cloud SQL, 255
Go SDK, 256
logging service, 255
memcache, 255
PaaS, 254
sandbox environment, 255
H
Handlers
CRUD operations, 74–75, 77
definition, 61
ServeHTTP method, 61
writing response headers and bodies, 61
html/template package
add page, 90–92
data structure and data store, 86
edit page, 92–97
folder structure, web application, 85
helper functions, 87–88
index page, 88–89
main function, 86
script injection, 84
views and template definition files, 87
HTTP applications
ResponseRecorder
HTTP API Server, 228–232
NewRecorder function, 228
ServeHTTP method, 233
TDD, 230
TestGetUsers, 232–233
server
HTTP API Server, 234
httptest.NewServer function, 235
TestCreateUserClient, 234–235
TestGetUsersClient, 234
http.HandlerFunc type, 64–66
HTTP middleware
components, 99
control flow, 103–106
Gorilla context, 118–119
logging, 99
Negroni (see Negroni)
scenarios, 99
third-party libraries
Alice package, 108, 110
Gorilla handlers, 106–107
writing
logging, 101–102
pattern, 101
steps of, 101
StripPrefix function, 100
HTTP requests
Handler, 61
request-response paradigm, 60
ServeMux multiplexor, 61
http.Server Struct, 67–68
Hybrid stand-alone/App Engine applications
App Engine SDK, 263
directory structure, 264
Go tool, 266
hybridapplib, 264–265
task package, 266
I
Infrastructure as a Service (IaaS) model, 252
Interfaces
composition and method overriding, 47–49
concrete implementations, 50
example program with, 45–46
PrintName and PrintDetails
methods, 46, 49
types, defined, 45
J, K
JSON
API operations, 199–201
data persistence, 191–192
error handling, 181–182
handler functions, 185
HTTP request lifecycle, 183–184
login resource, 189–191
notes resource, 201
Register handler function, 187–188
resource models, 184–185, 193
RESTful API, 70–72
taskController.go source file, 193–197
taskRepository.go, 197–198
tasks resource, 192
JSON Web Token (JWT)
API server, 135
DisplayAppError function, 179
encoded and decoded
security token, 178
generating and verifying, 175–177
Header and Payload sections, 178–179
HTML5 Web Storage/web cookies, 179
HTTP middleware, 139
JSON object, 131
jwt-go package, 131–135
L
ListenAndServe Signature, 62–63
M
Maps, data collections, 29–31
Microservice architecture, 14, 161
MongoDB
BSON, 141
collections, 144
createDbSession function, 172
CRUD operations (see CRUD operations,
MongoDB)
GetSession, 172
indexes, 152–154
mgo driver
battle-tested library, 142
connection, 142–143
installation, 142
NoSQL database, 141
Session object
DataStore struct type, 156
HTTP server, 154–156
web applications, 154
TaskNote collection, 173
Monolithic architecture approach, 13
Multiplexer configuration, 73
N
Negroni
definition, 111
installation, 111–112
middleware functions, specific routes, 114
negroni.Handler interface, 113–114
routing, 112–113
stack middleware, 115–117
net/http package
composability and extensibility, 59
full-fledged web applications, 59
standard library, 59
O
OAuth 2
mobile and web applications, 125
social identity providers, 125, 140
Twitter and Facebook, 126, 128–130
Object-oriented programming (OOP) language, 2
P, Q
PaaS. See Platform as a Service (PaaS) model
Packages
alias, 16
blank identifier, 17
executable program, 15
GOPATH directory, 15
GOROOT directory, 15
import, 18
init function, 17
library package, 19
main Function, 16
shared libraries, 15
strconv, 20
third-party, installation, 18
Parallelism, 53
Platform as a Service (PaaS) model, 252, 254
Pointer method receivers
ampersand (&) operator, 39
calling methods, 39
ChangeLocation function, 38
description, 38
person struct with, 39–40
PrintDetails method, 43, 49
PrintName method, 49
R
RESTful APIs
configuration values, 169–170
data model, 164–165
digital transformation, 160
Dockerfile, 205–208
front-end/back-end applications, 160
Godep, 202–204
JSON, 184, 186–187
JWT authentication (see JSON Web Token (JWT))
microservice architecture, 161
MongoDB Session object (see MongoDB)
private/public RSA keys, 170–171
resource modeling, 165
routers
package directory, 166
TaskNote resource, 168
Tasks resource, 166–167
users resource, 166
settings, 169
stack, 160
StartUp function, 174–175
TaskManager application
structure, 162–163
third-party packages, 162
URIs, 160
web and mobile applications, 160
XML, 160
S
ServeMux.HandleFunc Function, 66
Single Page Application (SPA) architecture, 13
Slices
functions, 27
iterate over slice, 29
length and capacity, 28
make function, 26
nil slice, 26
Static type language, 2
Static web server
access, 63
FileServer function, 62
folder structure of, 61
ListenAndServe function, 62
ServeMux.Handle function, 62
Structs. See also Pointer method receivers
calling methods, 38
classes, 35
declaring, with group of fields, 36
fields specification, 37
instances, 36
struct literal, 36–37
type with behaviors, 37
T
Test-driven development (TDD), 212, 230, 236, 249
Text/template package. See also html/template
package
collection object, 81–82
data structure, 79
definitions, 83
pipes, 84
struct fields, 79–80
variable declaration, 83
U, V
Unbuffered channels, 54, 56–57
Uniform resource identifiers
(URIs), 160, 165, 169
Unit testing
BDD (see Behavior-driven
development (BDD))
benchmark, 216–217
coverage flag, 215–216
definition, 211
Parallel method, 222–224
Reverse and SwapCase
functions, 217–220
separate packages, 224–227
Skip method, 221–222
software development, 211
string utility functions, 213–214
TDD, 211
third-party packages, 212
web applications (see HTTP applications)
URIs. See Uniform resource identifiers (URIs)
W, X, Y, Z
Web and microservices
HTTP package, 13
monolithic application, 13
RESTful APIs, 14
SPA architecture, 13