
AN IN-DEPTH LOOK AT ZFS vs. BTRFS

Since 1994: The Original Magazine of the Linux Community
SEPTEMBER 2014 | ISSUE 245 | www.linuxjournal.com

HOW-TOs

Ease the Pain of Provisioning X.509 Certificates
Synchronize Your Life with ownCloud
+ Check Your Exchange Inbox from the Command Line

12-Factor, Scalable, Maintainable Web Apps

pi-web-agent: A Desktop Environment for the Raspberry Pi
DNSMasq: A Hero for Servers
OpenAxiom: A Computer System for Algebra
[Advertisement: Silicon Mechanics]

Are you tired of dealing with proprietary storage?

ZFS Unified Storage: zStax StorCore from Silicon Mechanics. From modest data storage needs to a multi-tiered production storage environment: zStax StorCore.

zStax StorCore 64: utilizes the latest in dual-processor Intel Xeon platforms and fast SAS SSDs for caching. The zStax StorCore 64 platform is perfect for: small/medium office file servers, streaming video hosts, small data archives.

zStax StorCore 104: the flagship of the zStax product line. With its highly available configurations and scalable architecture, the zStax StorCore 104 platform is ideal for: backend storage for virtualized environments, mission-critical database applications, always-available active archives.

Talk with an expert today: 866-352-1173, http://www.siliconmechanics.com/zstax

CONTENTS SEPTEMBER 2014
ISSUE 245

FEATURES: HOW-TOs

62 Provisioning X.509 Certificates Using RFC 7030
   Enrollment over Secure Transport, easing the pain of provisioning X.509 certificates.
   John Foley

74 Synchronize Your Life with ownCloud
   Access your data from anywhere with ownCloud.
   Mike Diehl

ON THE COVER
• An In-Depth Look at ZFS and BTRFS, p. 84
• Ease the Pain of Provisioning X.509 Certificates, p. 62
• Synchronize Your Life with ownCloud, p. 74
• Check Your Exchange Inbox from the Command Line, p. 46
• pi-web-agent: a Desktop Environment for the Raspberry Pi, p. 94
• DNSMasq: a Hero for Servers, p. 50
• OpenAxiom: a Computer System for Algebra

Cover Image: © Can Stock Photo Inc. / alexaldo

4 / SEPTEMBER 2014 / WWW.LINUXJOURNAL.COM


INDEPTH

84 ZFS and BTRFS
   Comparing BTRFS and ZFS, the two choices for filesystem integrity in Linux.
   Russell Coker

94 Introducing pi-web-agent, a Raspberry Pi Web App
   A Web-based replacement for the mainstream desktop environment.
   Vasilis Nicolaou, Angelos Georgiadis, Georgios Chairepetis and Andreas Galazis

COLUMNS

30 Reuven M. Lerner's At the Forge
   12-Factor Apps

40 Dave Taylor's Work the Shell
   Days Between Dates: a Smarter Way

46 Kyle Rankin's Hack and /
   Check Exchange from the Command Line

50 Shawn Powers' The Open-Source Classroom
   DNSMasq, the Pint-Sized Super Dæmon!

106 Doc Searls' EOF
    Stuff That Matters

IN EVERY ISSUE

8   Current_Issue.tar.gz
10  Letters
16  UPFRONT
28  Editors' Choice
58  New Products
109 Advertisers Index

LINUX JOURNAL (ISSN 1075-3583) is published monthly by Belltown Media, Inc., 2121 Sage Road, Ste. 395, Houston, TX 77056 USA. Subscription rate is $29.50/year. Subscriptions start with the next issue.



Executive Editor Jill Franklin
[email protected]
Senior Editor Doc Searls
[email protected]
Associate Editor Shawn Powers
[email protected]
Art Director Garrick Antikajian
[email protected]
Products Editor James Gray
[email protected]
Editor Emeritus Don Marti
[email protected]
Technical Editor Michael Baxter
[email protected]
Senior Columnist Reuven Lerner
[email protected]
Security Editor Mick Bauer
[email protected]
Hack Editor Kyle Rankin
[email protected]
Virtual Editor Bill Childers
[email protected]

Contributing Editors
Ibrahim Haddad • Robert Love • Zack Brown • Dave Phillips • Marco Fioretti • Ludovic Marcotte
Paul Barry • Paul McKenney • Dave Taylor • Dirk Elmendorf • Justin Ryan • Adam Monsen

Publisher Carlie Fairchild


[email protected]

Director of Sales John Grogan


[email protected]

Associate Publisher Mark Irgang


[email protected]

Webmistress Katherine Druckman


[email protected]

Accountant Candy Beauchamp


[email protected]

Linux Journal is published by, and is a registered trade name of,


Belltown Media, Inc.
PO Box 980985, Houston, TX 77098 USA

Editorial Advisory Panel
Brad Abram Baillio • Nick Baronian • Hari Boukis • Steve Case
Kalyana Krishna Chadalavada • Brian Conner • Caleb S. Cullen
Keir Davis • Michael Eager • Nick Faltys • Dennis Franklin Frey
Victor Gregorio • Philip Jacob • Jay Kruizenga • David A. Lane
Steve Marquez • Dave McAllister • Carson McDonald • Craig Oda
Jeffrey D. Parent • Charnell Pugsley • Thomas Quinlan • Mike Roberts
Kristin Shoemaker • Chris D. Stark • Patrick Swartz • James Walker

Advertising
E-MAIL: [email protected]
URL: www.linuxjournal.com/advertising
PHONE: +1 713-344-1956 ext. 2

Subscriptions
E-MAIL: [email protected]
URL: www.linuxjournal.com/subscribe
MAIL: PO Box 980985, Houston, TX 77098 USA

LINUX is a registered trademark of Linus Torvalds.


[Advertisement: Flagg Management]

11th Annual 2014 High Performance Computing for Wall Street — Show and Conference
September 22, 2014 (Monday), Roosevelt Hotel, NYC, Madison Ave and 45th St, next to Grand Central Station.
Register for the free Show. Save $100 on the conference: $295 in advance, $395 on site.

Big Data, Cloud, Linux, Low Latency, Networks, Data Centers, Cost Savings. Wall Street IT professionals, code writers and programmers will assemble at this 2014 HPC Show and Conference, Sept. 22. New for 2014: Database Month code writers and programmers to speak on the program.

This 11th Annual HPC networking opportunity will assemble 600 Wall Street IT professionals, code writers and programmers at one time and one place in New York. This HPC for Wall Street conference is focused on High Put-through, Low Latency, Networks, Data Centers, lowering costs of operation. Our Show is an efficient one-day showcase and networking opportunity. Leading companies will be showing their newest systems live on the show floor. Register in advance for the full conference program, which includes general sessions, drill-down sessions, a new code writers track, an industry luncheon, coffee breaks, exclusive viewing times in the exhibits, and more.

Among the featured speakers: John Ramsay, Chief Market Policy & Regulatory Officer, IEX Group, formerly with the SEC Regulatory Division of Trading & Markets; Brian Bulkowski, CTO & Founder, Aerospike, speaker on the Code Writing Panel; Phil Albinus, Editor, Traders Magazine, SourceMedia, moderator on the Low Latency Session; Jeffrey M. Birnbaum, Founder & CEO, 60East Technologies, Inc., moderator on the Code Writing Panel; Jeffrey Kutler, Editor-in-Chief, GARP Risk Professional, moderator of HPC Session; Dino Vitale (invited), Director, Morgan Stanley, Quality Assurance and Production Management; Harvey Stein, Head of Credit Risk Modeling, Bloomberg; Cory Isaacson, CEO / Chief Technology Officer, CodeFutures, speaker on the Code Writing Panel.

Don't have time for the full Conference? Attend the free Show. Register in advance at: http://www.flaggmgmt.com/hpc
Show Hours: Mon, Sept 22, 8:00 - 4:00. Conference Hours: 8:30 - 4:50.
Show & Conference: Flagg Management Inc, 353 Lexington Avenue, New York 10016. (212) 286 0333, fax: (212) 286 0086, [email protected]. Visit: http://www.flaggmgmt.com/hpc
Current_Issue.tar.gz

How'd You Do That?
SHAWN POWERS

Open-source advocates tend to make for rotten magicians. Whereas most illusionists insist on taking their secrets to the grave, we tend to give away the secret sauce to anyone who'll listen. Heck, sometimes we create things just so we can explain to others how they work! And, that is how this issue was born. We love the How-To concept. Heck, our entire magazine is based on the idea of spreading knowledge, and this month, we specifically go out of our way to show not only the result, but the "How" as well.

Reuven M. Lerner starts us off with a discussion of 12-Factor Apps. He basically describes a process for developing scalable, maintainable Web applications. As someone who recently started creating Web apps, I can attest that they get unmanageable quickly! Dave Taylor follows with some smarter code for solving the "how many days have passed" script he's been working with for the past few months.

All too often, our perfectly crafted solutions get ruined by someone changing something outside our control. Kyle Rankin recently had an issue with his fetchmail setup, and he walks through the process of troubleshooting and problem solving when someone changes the rules. If, like me, you're a fan of Kyle's passion for the command line, you'll appreciate his efforts to maintain his ASCII lifestyle.

I made true on my promise to get a little more serious this month and wrote about DNSMasq. It's not a new program, but it's so powerful and so simple, it's often overlooked as a viable option for serving DNS and DHCP. Although most people are fine with running DNS and DHCP from their off-the-shelf routers, there are times when you need to run one or both of the services on a server. That's

VIDEO: Shawn Powers runs through the latest issue.


just what I needed to do, and I was pleasantly surprised at how powerful the little dæmon can be!

Much like car alarms, self-signed SSL certificates are all too often just accepted, especially on systems we're familiar with using. The problem is that if there is a compromise on one of our trusted systems, an invalid certificate might be the only warning we get. John Foley walks through the entire process for using PKI certificates. Whether you are an old hand at creating certs for VPNs or just copy/paste something from Google whenever you need to create a Web cert, his article is interesting and educational.

Last year you may recall I mentioned ownCloud as an alternative to Dropbox for those willing and able to host such a service on their own. Mike Diehl takes it to the next level with an incredible how-to on setting up, configuring and using ownCloud for all your cloud-based needs. At its core, ownCloud does indeed sync files, but it does so much more, it's worth taking a look at. And when it comes to file storage on your server, Russell Coker addresses another extremely important topic: data corruption. Using ZFS or BTRFS filesystems can protect your data, but which is better? Which should you choose? Russell answers all your questions and more.

If there's one product that fits into our How-To issue, it's the Raspberry Pi. It's the heart of just about every Linux-based DIY project on the Internet, and those little beauties run half the projects around my house as well. A quartet of authors (Vasilis Nicolaou, Angelos Georgiadis, Georgios Chairepetis and Andreas Galazis) give us a great description of pi-web-agent. Although the little RPi devices are incredible, they also are a bit intimidating for new Linux users. pi-web-agent changes that by providing a complete Web front end for managing and controlling the Raspberry Pi, making the RPi accessible to everyone!

If you've ever wanted to work with Linux, but weren't sure "how to" get started, this issue is for you. And if you're an old hat who wants to add more skills to your tech quiver? Again, for you too. To be honest, the entire Linux community is based on sharing information and collaborating ideas. There's no illusion at play, but there's plenty of magic!

Shawn Powers is the Associate Editor for Linux Journal. He's also the Gadget Guy for LinuxJournal.com, and he has an interesting collection of vintage Garfield coffee mugs. Don't let his silly hairdo fool you, he's a pretty ordinary guy and can be reached via e-mail at [email protected]. Or, swing by the #linuxjournal IRC channel on Freenode.net.
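Shawn's point about DNSMasq serving both DNS and DHCP from one small dæmon is easy to illustrate with how little configuration it needs. The following /etc/dnsmasq.conf sketch is illustrative only; the interface name and address ranges are invented here, not taken from the column:

```
# /etc/dnsmasq.conf -- minimal DNS + DHCP for a small LAN (illustrative)
interface=eth0                               # listen only on the LAN-facing interface
domain=lan                                   # local domain handed to DHCP clients
dhcp-range=192.168.1.100,192.168.1.200,12h   # DHCP pool and lease time
dhcp-option=option:router,192.168.1.1        # default gateway for clients
server=8.8.8.8                               # upstream resolver for everything else
```

With just these lines, dnsmasq answers local DNS queries (caching upstream answers) and leases addresses on the LAN, which is the "pint-sized super dæmon" behavior the column describes.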


letters

Linux Journal—NSA's "Extremist Forum"
Just came across: "If you visit the forum page for the popular Linux Journal, dedicated to the open-source operating system Linux, you could be fingerprinted regardless of where you live because the XKEYSCORE source code designates the Linux Journal as an 'extremist forum'" (http://www.defenseone.com/technology/2014/07/if-you-do-nsa-will-spy-you/88054/?oref=govexec_today_nl).
—Andy Bach

Crazy, isn't it? Although it's frustrating to draw the attention of the NSA, we don't plan on changing how we do things. We're advocates of freedom, and if anything, this makes us more so!—Shawn Powers

Vim Macros
Regarding Kyle Rankin's "The Only Mac I Use" article in the July 2014 issue: I knew all about macros. What I didn't know was what <ctrl>-a did. That is going to save me so much time in the future.
—Rick

Extremist Bonus Points?
After the latest news regarding the XKEYSCORE program, I felt it was a great time to subscribe. Now that I subscribed, do I get a free upgrade to extremist financier?

Chris at http://jupiterbroadcasting.com brought me—LAS ep320.
—Sean Rodriguez

Although we haven't seen any "upgrades" yet ourselves, I must admit I'm a little more nervous than usual when I go through airport security now!—Shawn Powers

Leap Year Problems
Regarding Dave Taylor's "Days Between Dates" article in the July


[ LETTERS ]

2014 issue, I think he has a big problem with the valid_date.sh way of checking for leap year:

harrie@v1:~> ./lj243_valid_date.sh 2 29 1776
checking for feb 29 : was 1776 a leap year?
Yes 1776 was a leapyear so February 29 1776 is a valid date
harrie@v1:~> ./lj243_valid_date.sh 2 29 1929
checking for feb 29 : was 1929 a leap year?
Yes 1929 was a leapyear so February 29 1929 is a valid date

Well, 1929 was not a leap year. Using grep -w solves this:

harrie@v1:~> ./lj243_valid_date-w.sh 2 29 1929
checking for feb 29 : was 1929 a leap year?
Oops 1929 wasn't a leapyear so February only had 28 days

harrie@v1:~> cat lj243_valid_date-w.sh
mon=$1 day=$2 year=$3
if [ $mon -eq 2 -a $day -eq 29 ]; then
    echo checking for feb 29 : was $3 a leap year?
    leapyear=$(cal 2 $year | grep -w 29)
    if [ ! -z "$leapyear" ]; then
        echo "Yes $year was a leapyear so February 29 $year \
is a valid date"
    else
        echo "Oops $year wasn't a leapyear so February only \
had 28 days"
    fi
fi

—Harrie Wijnans

Dave Taylor replies: Yup, someone else also pointed out the flaw in my leap year test. Got a much better one in the next column. Thanks for writing in, Harrie!

Harrie Wijnans replies: Yeah, my solution, of course, fails for the year 29. Adding tail solves that (and the -w flag for grep has no real meaning anymore):

leapyear=$(cal 2 $year | tail +3 | grep -w 29)

Dave Taylor replies: Nah, the solution is to take a sharp left turn and think about it differently. The question isn't "is there a Feb 29", but "how many days are in the year". So, with help from GNU date:

$ date -d 12/31/1929 +%j
365
$ date -d 12/31/1932 +%j
366

With this test, you can see that 1932 was a leap year, but 1929 wasn't.

Harrie Wijnans replies: That one fails for years < 1901:

harrie@v1:~> date -d 1901/12/31
Tue Dec 31 00:00:00 AMT 1901
harrie@v1:~> date -d 1900/12/31
date: invalid date `1900/12/31'

That is, for the date on my Linux (openSUSE 12.1), and it also fails


for 12/31/1900:

harrie@v1:~> date --version
date (GNU coreutils) 8.14
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Written by David MacKenzie.

Dave Taylor replies: Actually, here's the cleanest version of all of this:

function isleap
{
    # If you have GNU date on your Linux system this is superior:
    # leapyear=$(date -d 12/31/$1 +%j | grep 366)

    # If you don't have GNU date (Mac OS X doesn't for example)
    # use this:
    leapyear=$(cal -j 12 $1 | grep -E '[^12]366')
}

But, there's something else going on then, because I also have date 8.14, and it works fine:

$ date -d 12/31/1844 +%j
366
$ date -d 12/31/1810 +%j
365

Hmm...

Harrie Wijnans replies: The date.c from the 8.14 that I got from
http://rpmfind.net/linux/sourceforge/m/ma/magiclinux-plus/Source26/coreutils-8.14-1mgc25.src.rpm
uses parse_datetime to fill a struct tm, which has:

/* Used by other time functions. */
struct tm
{
    int tm_sec;    /* Seconds. [0-60] (1 leap second) */
    int tm_min;    /* Minutes. [0-59] */
    int tm_hour;   /* Hours. [0-23] */
    int tm_mday;   /* Day. [1-31] */
    int tm_mon;    /* Month. [0-11] */
    int tm_year;   /* Year - 1900. */
    int tm_wday;   /* Day of week. [0-6] */
    int tm_yday;   /* Days in year. [0-365] */
    int tm_isdst;  /* DST. [-1/0/1] */
#ifdef __USE_BSD
    long int tm_gmtoff;       /* Seconds east of UTC. */
    __const char *tm_zone;    /* Timezone abbreviation. */
#else
    long int __tm_gmtoff;     /* Seconds east of UTC. */
    __const char *__tm_zone;  /* Timezone abbreviation. */
#endif
};

Apparently, openSUSE library does not accept the year field to be <= 0. Sorry, I'm lost here. I'm not that experienced


in Linux C programming, and lib/parse-datetime.y even uses bison, which I hardly ever used.

Beware of the cal -j you gave as an example. Try it for 2000, which is missed as a leap year. (Are we lucky it was a leap year? Otherwise, the "millennium bug" would have been much more severe.)

This is because there, 366 is at the start of the line, so there are no characters in front of it, hence it does not match [^12]366. An extra -e ^366 would solve that and still not see 1366 or 2366 as leap years; 366 still would be considered a leap year.

I thought about the cal 2 $year, and grep -v February, but that is language-dependent. (Try LANG=nl_NL.UTF8 cal 2 2012 or fr_FR.UTF8; no February in there.)

I'd say, the person who claims creating scripts for everyone and every environment is simple, never tried it.

Looking forward to reading your next column.

Letters?
It is 19.48 GMT on 9 July 2014. I only say this in case at this point in time there is a problem with your system. Reading my LJ, I followed the link to the new Letters "page" to find just two letters there. There always were more when they were included in the magazine (print and of course digital). Only two! I cannot believe that LJ readers have been that quiet.
—Roy Read

The past couple months, we've been experimenting with moving Letters to the Editor to the Web, or at least partially to the Web. We received a lot of feedback, and so we have put things back the way they used to


be. We appreciate the feedback, and really do listen!—Shawn Powers

Dave Taylor's June 2014 Work the Shell Column
The first attempt at running Dave's script on Solaris 8 failed because the default Solaris shell in Solaris 10 and before does not support the $() syntax.

The issue after switching to /bin/bash appears to be likely a cut-and-paste error. The script works as is for me, but if you modify the first sed replacement so that the space is removed:

myPATH="$(echo $PATH | sed -e 's//~~/g' -e 's/:/ /g')"

Then the output is a sed error, which matches your published results:

First RE may not be null
0 commands, and 0 entries that weren't marked executable

—Brian

Dave Taylor replies: Thanks for the update. It's been a while since I had access to a Solaris system!

Bug in Dave Taylor's Days Between Dates Script
Dave's leap year test in the July 2014 issue contains a little bug. He uses:

leapyear=$(cal 2 $year | grep '29')

which looks for 29 in a calendar output. If 29 is present, it is supposed to be a leap year. However, if the user tries to find out if 2029 is a leap year, the script will fail, because it does contain 29 in the output, even though it can never be a leap year. A better version of this test would be:

leapyear=$(cal -h 2 $year | tail -n 2 | grep '29')

This will look for 29 only in the last two lines of the calendar. And, it also will omit the highlighting of today's date, just in case it is February 29 today. A highlighted 29 would not be found by grep.
—San Bergmans

Dave Taylor replies: Another good solution to a problem that a number of people have pointed out, San! Here's my latest solution to that problem, one requiring GNU date:

leapyear=$(date -d 12/31/$1 +%j | grep 366)

A different approach, for sure! Thanks for writing.
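Every fix traded in these letters parses human-oriented output from cal or date, which is exactly what keeps tripping over headers, locales, highlighted dates and year ranges. A plain-arithmetic alternative (a sketch of my own; the function name is not from any of the letters) applies the Gregorian rule directly and works for any year in any locale:

```shell
#!/bin/sh
# Leap-year test with no cal/date parsing at all: a year is a leap
# year if divisible by 4, except centuries, which must also be
# divisible by 400.
is_leap() {
    year=$1
    if [ $((year % 400)) -eq 0 ]; then
        echo "yes"
    elif [ $((year % 100)) -eq 0 ]; then
        echo "no"
    elif [ $((year % 4)) -eq 0 ]; then
        echo "yes"
    else
        echo "no"
    fi
}

is_leap 1900   # no  (century not divisible by 400)
is_leap 1929   # no
is_leap 1932   # yes
is_leap 2000   # yes (divisible by 400)
```

Because it never shells out, it is immune to the year-29 header match, the LANG=nl_NL.UTF8 problem and the pre-1901 date(1) limit discussed above.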


Digital Encryption and a Random Bit Generator
A printable poster of an unbreakable encryption algorithm declassified by the NSA can be downloaded from http://www.tag.md/public. In addition, there is a POSIX C implementation of a digital non-deterministic random bit generator.
—Mark Knight

Cool stuff, thanks Mark!
—Shawn Powers

Comments on Dave Taylor's June 2014 Work the Shell Column
Solaris 8 was actually released in 2000, not 2004.
—Peter Schow

Dave Taylor replies: Bah, and I looked it up on-line too. Thanks for the update. Now for the real question: are you still running it, Peter?

Peter Schow replies: No, I run the latest-and-greatest Solaris, but there are stories about Solaris 7 (~1998) still in production out there. Enjoy your column!

WRITE LJ A LETTER
We love hearing from our readers. Please send us your comments and feedback via http://www.linuxjournal.com/contact.

PHOTO OF THE MONTH
Remember, send your Linux-related photos to [email protected]!

At Your Service

SUBSCRIPTIONS: Linux Journal is available in a variety of digital formats, including PDF, .epub, .mobi and an on-line digital edition, as well as apps for iOS and Android devices. Renewing your subscription, changing your e-mail address for issue delivery, paying your invoice, viewing your account details or other subscription inquiries can be done instantly on-line: http://www.linuxjournal.com/subs. E-mail us at [email protected] or reach us via postal mail at Linux Journal, PO Box 980985, Houston, TX 77098 USA. Please remember to include your complete name and address when contacting us.

ACCESSING THE DIGITAL ARCHIVE: Your monthly download notifications will have links to the various formats and to the digital archive. To access the digital archive at any time, log in at http://www.linuxjournal.com/digital.

LETTERS TO THE EDITOR: We welcome your letters and encourage you to submit them at http://www.linuxjournal.com/contact or mail them to Linux Journal, PO Box 980985, Houston, TX 77098 USA. Letters may be edited for space and clarity.

WRITING FOR US: We always are looking for contributed articles, tutorials and real-world stories for the magazine. An author's guide, a list of topics and due dates can be found on-line: http://www.linuxjournal.com/author.

FREE e-NEWSLETTERS: Linux Journal editors publish newsletters on both a weekly and monthly basis. Receive late-breaking news, technical tips and tricks, an inside look at upcoming issues and links to in-depth stories featured on http://www.linuxjournal.com. Subscribe for free today: http://www.linuxjournal.com/enewsletters.

ADVERTISING: Linux Journal is a great resource for readers and advertisers alike. Request a media kit, view our current editorial calendar and advertising due dates, or learn more about other advertising and marketing opportunities by visiting us on-line: http://www.linuxjournal.com/advertising. Contact us directly for further information: [email protected] or +1 713-344-1956 ext. 2.


UPFRONT NEWS + FUN

diff -u
WHAT'S NEW IN KERNEL DEVELOPMENT

Sometimes a new piece of code turns out to be more useful than its author suspected. Alejandra Morales recently came out with the Cryogenic Project as part of his Master's thesis, supervised by Christian Grothoff. The idea was to reduce energy consumption by scheduling input/output operations in batches.

This idea turned out to be so good that H. Peter Anvin didn't want Cryogenic to be a regular driver, he wanted it to be part of the core Linux input/output system. On the other hand, he also felt that the programmer interface needed to be cleaned up and made a bit sleeker.

Pavel Machek also was highly impressed and remarked that this could save power on devices like phones and tablets that were always running low. And, Christian confirmed that this was one of the main goals of the code. Christian added that power savings seemed to be on the order of 10%, though that number could be tweaked up or down by increasing or decreasing the amount of delay that was tolerable for each data packet.

David Lang also liked Cryogenic and agreed that it should go into the core input/output system. He added that a lot of other folks had attempted to accomplish similar things. It was a highly desired feature in the kernel. David also pointed out that in order to get into the core input/output system, the Cryogenic code would have to demonstrate that it had no performance impact on code that did not use its features, or that the impact would be minimal.

Luis R. Rodriguez recently pointed out that a lot of drivers were routinely backported to a large array of older kernels, all the way down to version 2.6.24. And although he acknowledged that this was currently manageable, he expected the number of drivers and other backportable features to continue to increase, making the situation progressively more difficult to sustain.

Luis said the kernel folks should do more to educate users about the need to upgrade. But, he also wrote up a recommendation that


[ UPFRONT ]

the kernel folks use Coccinelle to automate the backporting process (http://www.do-not-panic.com/2014/04/automatic-linux-kernel-backporting-with-coccinelle.html). Coccinelle is a tool used to transform source code programmatically. It can be used to generate changes to earlier kernel code to match the functionality provided by newer patches. That's so crazy, it just might work!

But to get started, Luis wanted to draw a line between kernels that would receive backports and kernels that would not. Hopefully, that line would only move forward. So he asked the linux-kernel mailing list members in general to tell him which were the earliest kernels they really needed to keep using.

As it turned out, Arend van Spriel knew of Broadcom WLAN testers that still relied on Fedora 15, running the 2.6.38 kernel. He said he was working with them to upgrade to Fedora 19 and the 3.13 kernel, but that this hadn't happened yet.

So it appears that a certain amount of backporting will become automated, but of course, the Coccinelle transformations still would need to be written and maintained by someone, which is why Luis wanted to limit the number of target kernels.

It turns out Windows does certain things better than Linux—for example, in the area of rebooting. Apparently, there are several techniques that can be done in software to cause a system to reboot. But in some cases, the Linux system will go down successfully, and then not come up again. This is a problem, for example, in server farms with minimal human staff. If 20 systems are acting up and you want to reboot them all, it's easier to give a single command from a remote terminal than to send a human out into the noise and the cold to press each reset button by hand.

One rebooting technique involves sending certain values to the 0xCF9 port on the system. Another is to use the EFI (Extensible Firmware Interface) BIOS replacement from Intel. Depending on the circumstances, one or the other rebooting technique is preferred, but the logic behind that selection can be tricky. In particular, changing the state of various pieces of hardware can change the appropriate reboot technique. So, if you run through a series of reboot attempts, and somehow change hardware state along the way, you can find that none of the attempts can succeed.


The cool thing about this particular bug is the straightforward way Linus Torvalds said that Windows must be doing something right, and that the Linux people needed to figure out what that was so Linux could do it right too. Steven Rostedt pointed out the boot failure in one of his systems, and this triggered the bug hunt. Part of the problem is that it's very difficult to understand exactly what's going on with a system when it boots up. Strange magical forces are apparently invoked.

During the course of a somewhat heated debate, Matthew Garrett summed up what he felt was the underlying issue, and why the problem was so difficult to solve. In response to any of the various bootups attempted, he said, "for all we know the firmware is running huge quantities of code in response to any of those register accesses. We don't know what other hardware that code touches. We don't know what expectations it has. We don't know whether it was written by humans or written by some sort of simulated annealing mechanism that finally collapsed into a state where Windows rebooted."

Matthew was in favor of ditching the 0xCF9 bootup technique entirely. He argued, "We know that CF9 fixes some machines. We know that it breaks some machines. We don't know how many machines it fixes or how many machines it breaks. We don't know how many machines are flipped from a working state to a broken state whenever we fiddle with the order or introduce new heuristics. We don't know how many go from broken to working. The only way we can be reasonably certain that hardware will work is to duplicate precisely what Windows does, because that's all that most vendors will ever have tested."

But, Linus Torvalds felt that ditching CF9 was equivalent to flailing at the problem. In the course of discussion he said, "It would be interesting if somebody can figure out exactly what Windows does, because the fact that a lot of Dell machines need quirks almost certainly means that it's us doing something wrong. Dell doesn't generally do lots of fancy odd things. I pretty much guarantee it's because we've done something odd that Windows doesn't do."

The discussion had no resolution—probably because it's a really tough problem that hits only a relatively small number of systems. Apparently the bug hunt—and the debate—will continue. —ZACK BROWN


[ UPFRONT ]

Lucidchart

I am a visual learner. When I try to teach something, I naturally like to use visual examples. That usually involves me working for hours to create flowcharts in Google Docs using the drawing program. Yes, it works, but it’s a very cumbersome way to create a flowchart. Thankfully, I recently discovered Lucidchart (http://www.lucidchart.com).

Lucidchart is an on-line service that provides a free way to create flowcharts quickly and easily. Once the flowchart is created, items can be dragged around, resized and modified while still staying connected. I used Lucidchart for this month’s Open-Source Classroom column, and if you need to create a quick flowchart, I recommend you give it a try as well.

Lucidchart provides its service for free, but there also are paid tiers that give you more options, such as importing and exporting Visio documents and so on. There certainly are other more powerful charting tools available, but few are as simple and quick to use. —SHAWN POWERS

One Charger to Rule Them All

If you’re anything like me, your nightstand is full of electronic devices that need to be charged regularly. Every night I have:

■ Nexus 7 tablet.

■ Cell phone.

■ Kindle Paperwhite.

■ iPad Air.

■ Fitbit.

Granted they don’t all need a daily charge, but the two tablets and cell phone certainly do. Although many of you are probably tsk’ing me for buying an iPad, for this purpose, it’s a fine example of a device that is finicky about being charged. Many tablets, the iPad especially, require a lot of amperage to charge properly. Enter the Anker 40W, five-port USB charger.

Before buying the Anker, I had to get a power strip in order to plug in all the wall-warts required to charge my devices. Two of those devices (the Fitbit and Kindle) didn’t even come with power adapters, just USB cables to plug in to a computer for charging. With the Anker USB charger, I’m able to use a single, regular-sized power cord to charge all my devices. Because it’s designed specifically to charge, it has some great features as well:

■ Dynamic, intelligently assigned amperage, up to 2.4 amps per port (8 amps max for all ports combined).

■ Compact size (about the size of a deck of playing cards).

■ Supports Apple, Android and other USB-based charging.

I’ve been using the Anker charger for several weeks and absolutely love it. There also is a 25-watt version if you don’t need the full 40 watts, but I highly recommend getting the larger version, just in case you need more power in the future. I purchased the charger on Amazon for $26, and although that’s more than I’d normally pay for a USB charger, it’s more like getting five chargers in one. Check it out at http://www.ianker.com/support-c7-g345.html. —SHAWN POWERS

They Said It

It is necessary to try to surpass oneself always; this occupation ought to last as long as life. —Queen Christina

We go where our vision is. —Joseph Murphy

I didn’t mind getting old when I was young. It’s the being old now that’s getting to me. —John Scalzi

There’s no such thing as quitting. Just sometimes there’s a longer pause between relapses. —Alan Moore

It is better to offer no excuse than a bad one. —George Washington

OpenAxiom
Several computer algebra systems are available to Linux users. I even have looked at a few of them in this column, but for this issue, I discuss OpenAxiom (http://www.open-axiom.org). OpenAxiom actually is a fork of Axiom. Axiom originally was developed at IBM under the name ScratchPad. Development started in 1971, so Axiom is as old as I am, and almost as smart. In the 1990s, it was sold off to the Numerical Algorithms Group (NAG). In 2001, NAG removed it from commercial sale and released it as free software. Since then, it has forked into OpenAxiom and FriCAS. Axiom still is available. The system is specified in the book AXIOM: the Scientific Computation System by Richard Jenks and Robert Sutor. This book is available on-line at http://wiki.axiom-developer.org/axiom-website/hyperdoc/axbook/book-contents.xhtml, and it makes up the core documentation for OpenAxiom.

Most Linux distributions should have a package for OpenAxiom. For example, with Debian-based distributions, you can install OpenAxiom with:

sudo apt-get install openaxiom

If you want to build OpenAxiom from source, you need to have a Lisp engine installed. There are several to choose from on Linux, such as CLisp or GNU Common Lisp. Building is straightforward:

./configure; make; make install

To use OpenAxiom, simply execute open-axiom on the command line. This will give you an interactive OpenAxiom session. If you have a script of commands you want to run as a complete unit, you can do so with:

open-axiom --script myfile.input

where the file “myfile.input” contains the OpenAxiom commands to be executed.

So, what can you actually do with OpenAxiom? OpenAxiom has many different data types. There are algebraic ones (like polynomials, matrices and power series) and data structures (like lists and dictionaries).

You can combine them into any reasonable combinations, like polynomials of matrices or matrices of polynomials. These data types are defined by programs in OpenAxiom. These data type programs also include the operations that can be applied to the particular data type. The entire system is polymorphic by design. You also can extend the entire data type system by writing your own data type programs. There are a large number of different numeric types to handle almost any type of operation as well.

The simplest use of OpenAxiom is as a calculator. For example, you can find the cosine of 1.2 with:

cos(1.2)

This will give you the result with 20 digits, by default. You can change the number of digits being used with the digits() function. OpenAxiom also will give you the type of this answer. This is useful when you are doing more experimental calculations in order to check your work. In the above example, the type would be Float. If you try this:

4/6

the result is 2/3, and you will see a new type, Fraction Integer. If you have used a commercial system like Maple before, this should be familiar. OpenAxiom has data types to try to keep results as exact values. If you have a reason to use a particular type, you can do a conversion with the :: operator. So, you could redo the above division and get the answer as a float with:

(4/6)::Float

It even can go backward and calculate the closest fraction that matches a given float with the command:

%::Fraction Integer

The % character refers to the most recent result that you calculated. The answer you get from this command may not match the original fraction, due to various rounding errors.

There are functions that allow you to work with various parts of numbers. You can round() or truncate() floating-point numbers. You even can get just the fractional part with fractionPart().

One slightly unique thing in OpenAxiom is a set of test functions. You can check for oddness and evenness with the functions odd?()


and even?(). You even can check whether a number is prime with prime?(). And, of course, you still have all of the standard functions, like the trigonometric ones, and the standard operators, like addition and multiplication.

OpenAxiom handles general expressions too. In order to use them, you need to assign them to a variable name. The assignment operator is :=. One thing to keep in mind is that this operator will execute whatever is on the right-hand side and assign the result to the name on the left-hand side. This may not be what you want to have happen. If so, you can use the delayed assignment operator ==. Let’s say you want to calculate the square of some numbers. You can create an expression with:

xSquared := x**2

In order to use this expression, you need to use the eval function:

eval(xSquared, x=4)

You also can have multiple parameters in your expression. Say you wanted to calculate area. You could use something like this:

xyArea := x * y
eval(xyArea, [x=2, y=10])

The last feature I want to look at in this article is how OpenAxiom handles data structures. The most basic data structure is a list. Lists in OpenAxiom are homogeneous, so all of the elements need to be the same data type. You define a list directly by putting a comma-separated group in square brackets—for example:

[1,2,3,4]

This can be done equivalently with the list function:

list(1,2,3,4)

You can put two lists together with the append function:

append([1,2],[3,4])

If you want to add a single element to the front of a list, you can use the cons function:

cons(1, [2,3,4])

List addressing is borrowed from the concepts in Lisp. So the most basic addressing functions to get elements are the functions first and rest. Using the basic list from above, the function:


first([1,2,3,4])

will return the number 1, and the function:

rest([1,2,3,4])

will return the list [2,3,4]. Using these functions and creative use of loops, you can get any element in a given list. But, this is very inconvenient, so OpenAxiom provides a simpler interface. If you had assigned the above list to the variable mylist, you could get the third element with:

mylist.3

or, equivalently:

mylist(3)

These index values are 1-based, as opposed to 0-based indexing in languages like C.

A really unique type of list available is the infinite list. Say you want to have a list of all integers. You can do that with:

myints := [i for i in 1..]

This list will contain all possible integers, and they are calculated only when you need the value in question. You can have more complicated examples, like a list of prime numbers:

myprimes := [i for i in 1.. | prime?(i)]

One issue with lists is that access times depend on how big the list is. Accessing the last element of a list varies, depending on how big said list is. This is because lists can vary in length. If you have a piece of code that deals with lists that won’t change in length, you can improve performance by using an array. You can create a one-dimensional array with the function:

oneDimensionalArray([2,3,4,5])

This assigns a fixed area of memory to store the data, and access time now becomes uniform, regardless of the size of the list. Arrays also are used as the base data structure for strings and vectors. You even can create a bits data structure. You could create a group of eight 1-bits with:

bits(8,true)

In this way, you can begin to do some rather abstract computational work. As you have seen, OpenAxiom and its variants are very powerful systems


for doing scientific computations. I covered only the very basic functionality available here, but it should give you a feeling for what you can do. In the near future, I plan to take another look at OpenAxiom and see what more advanced techniques are possible, including writing your own functions and programs. —JOEY BERNARD

Non-Linux FOSS: AutoHotkey

Text expansion and hotkey automation are the sort of things you don’t realize you need until you try them. Those of you who ever have played with system settings in order to change the function of a keystroke on your system will understand the value of custom hotkeys.

For Windows users, the customization of keystrokes is pretty limited with the system tools. Thankfully, the folks at http://www.autohotkey.com have created not only an incredible tool for creating scripted hotkeys, but they’ve also included automatic text expansion/replacement for speed boosts on the fly.

Programming the hotkeys and text replacements is pretty straightforward, and the Web site offers plenty of tutorials for making complex scripts for elaborate automation. Even if you just want to do a few simple hotkeys, however, AutoHotkey is a tool you really will want to check out. Best of all, it’s completely open source, so there’s no reason not to go download it today! —SHAWN POWERS

[ EDITORS' CHOICE ]

Android Candy: Quit Thumbing Your Passwords!

I use my phone more often to log in to on-line accounts than I use a computer. I can assure you it’s not because typing passwords on a tiny keyboard is fun. For most of us, we just have instant access to our phones at any given time during the day. The big problem with always using a tiny phone is that it means logging in to tiny Web sites (especially if there is no mobile version of the site) with tiny virtual keys and a long, complex password. It makes for real frustration.

With PasswordBox, you not only can store your user names and passwords, but also log in to those Web
sites with a single click. Once you authenticate with your master password to the PasswordBox app, it will allow you to create login profiles for dozens of sites and give you the ability to add entries for your own personal sites. If you want to log in to your bank with your phone, but don’t want anyone to see you type in your banking credentials, PasswordBox is the perfect tool for you.

With great power comes great responsibility, and it’s important to understand what PasswordBox allows you to do. When you initially launch it, you’ll be prompted to choose when the application locks your data and requires you to retype the master password. Ideally, this would be “immediately after you quit the app”, but PasswordBox allows you to sacrifice security for convenience and will stay unlocked anywhere from 30 seconds to several hours. It even will let you rely on your Android lock screen for security and never prompt you for your master password!

Even with its potential for insecurity, PasswordBox is a powerful and convenient tool that makes using your phone much less burdensome when logging in to on-line services. In fact, it can greatly improve security, as you won’t need to type in your banking information in plain sight of the guy next to you at McDonald’s. For those reasons, PasswordBox gets this month’s Editors’ Choice Award. Check it out today at http://www.passwordbox.com. —SHAWN POWERS

COLUMNS
AT THE FORGE

12-Factor Apps
REUVEN M. LERNER

Reuven describes an interesting perspective on scalable, maintainable Web apps.

I often tell the students in my programming classes that back in the 1960s and 1970s, it was enough for a program to run. In the 1980s and 1990s, we needed programs not only to run, but also to run quickly. In the modern era, programs need to run, and run quickly, but even more crucial is that they be maintainable.

Of course, we don’t talk much about “programs” any more. Now we have “applications”, or as Steve Jobs taught us to say, “apps”. This is especially true for Web applications, which are a combination of many different programs, and often different languages as well—a server-side language plus HTML, CSS and JavaScript. And, don’t forget the configuration files, which can be in XML, YAML or other formats entirely.

Modern Web frameworks have tried to reduce the potential for clutter and chaos in Web applications. Ruby on Rails was the most prominent framework to suggest that we prefer “convention over configuration”, meaning that developers should sacrifice some freedom in naming conventions and directory locations, if it means easier maintenance. And indeed, when I take over someone else’s Rails codebase, the fact that the framework dictates the names and locations of many parts of the program reduces the time it takes for me to understand and begin improving the program.

Even in a Rails application though, we can expect to see many different files and programs. Heroku, a well-known hosting company for Web apps, looked at thousands of apps and tried to extract from them the factors that made it more likely that they would succeed. Their recommendations, written up by then-CTO Adam Wiggins, are known as the “12-factor app”, and they describe practices that Heroku believes



will make your app more maintainable and more likely to succeed.

In this article, I take a look at each of the factors of a 12-factor app, describing what they mean and how you can learn from them. I should note that not every aspect of a 12-factor app is unique to this set of recommendations; some of these practices have been advisable for some time. Moreover, it’s important to see these as recommendations and food for thought, not religious doctrine. After all, these recommendations come from examining a large number of applications, but that doesn’t mean they’re a perfect match for your specific needs.

1. Codebase
A 12-factor app has “one codebase tracked in revision control, with many deploys”. I remember the days before version control, in which we would modify Web applications on the production server. Obviously, things have improved a great deal since then, and many (most?) developers now understand the importance of keeping their code inside a Git repository.

So, why would it be important to state this as part of a 12-factor app? It would seem that the reason is two-fold: keep everything you need for the application inside a single repository, and don’t use the same repository for more than one app. In other words, there should be a one-to-one correspondence between your app and the repository in which it sits.

Following this advice means, among other things, that you can distribute your app multiple times. I recently finished a project for a client that originally had developed the software in Lotus Notes. Now, I don’t know much about Notes, but the fact is that you cannot easily distribute an application written in Notes to new servers, let alone to your laptop. A 12-factor app puts everything inside


the repository, dramatically reducing the work needed to deploy to a new server or new environment.

The growth of Vagrant and Docker, two open-source systems that allow for easy virtualization and containers, means that we might see this aspect of the 12-factor app change, looking at “containers” rather than “repositories”. Indeed, Discourse already has moved in this direction, encouraging users to deploy within a Docker container, rather than installing the software themselves. However, the idea would be the same—have one configured version of the application and then deploy it many times.

2. Dependencies
Every program has dependencies; if nothing else, software depends on the language in which it was written, as well as the core libraries of that language. But if you are using an open-source language and framework, you likely are using numerous packages. A 12-factor app, according to the definition, explicitly declares and isolates dependencies. That is, the application should indicate what external libraries it uses and make it possible to change or remove those dependencies.

This factor does raise the question, at least for me, of “As opposed to what?” In Rails, for example, I cannot easily use a package (Ruby gem) without explicitly mentioning it in my application’s Gemfile. In Python, I need to import packages explicitly in files that use them. To what is the 12-factor author referring when he says that we shouldn’t implicitly use external resources?

The author writes that apps should not “rely on the implicit existence of any system tools”, going so far as to say that they should not “shell out” to run external programs. As a general rule, that’s certainly a good idea; the days in which it was acceptable to open a subshell to run external UNIX utilities are long gone. And yet, there are times when it is necessary to do so.

So I have to wonder about the advice given in the 12-factor app, saying that all external programs should be bundled with the application, in its repository. It’s a good idea to have everything in one place, but I recently worked on a project that needed to use the open-source PdfTk program. Did I really want to include PdfTk in my app’s repository? I expect it wouldn’t even work to do that, given the mix of Windows, Macintosh and Linux boxes among the developers, servers and


project managers working on the site.

Another aspect of this factor has to do with making not only the library dependencies explicit, but also their versions. It’s very easy and tempting simply to use whatever version of a Ruby gem or Python package is installed on a system. But without version numbers, this can cause huge problems—the application might behave differently on various computers, depending on what versions they have installed. Explicitly marking versions that are known to work reduces the chance of such trouble.

3. Configuration
It used to be that we would, in our code, explicitly connect to a database or to other servers from within our application. One of the many risks associated with that practice was that if others got a hold of our codebase, they then would have our credentials to log in to databases and third-party services. A solution has been to put such information in configuration files that aren’t committed into the code repository. How do such files then get to the system? That’s actually a bit tricky, and there are a number of solutions to the problem.

One solution that has become popular, and is encouraged at http://12factor.net, is the use of environment variables. That is, all of the configuration is set in environment variables, in the deployment user’s shell or elsewhere in the system (for example, the HTTP server’s configuration file). If you don’t want to set environment variables yourself, you can use a system like “dotenv” to provide them for you.

But it gets even better. By putting configuration settings in environment variables, you also ensure that you can run the same code in different environments. Want to change the credentials for a third-party service? You don’t have to touch or redeploy the code itself. Want to connect to a different database when running your tests? Just change an environment variable.

Of all of the suggestions at 12factor.net, this is the one that I believe offers the most bang for the buck. It increases security and increases the flexibility of your application. The trick is to reduce, as much as possible, the number of “if” statements in your code that test for such values. You want to be using whatever the environment provides, not making conditional statements that change the behavior based on them.


4. Backing Services
The next factor at 12factor.net says that we should use external services as resources. This is a shift that has been happening more and more. For example, if you ran a Web app on a standalone server ten years ago, you would have just sent e-mail to your customers directly from the app, using the built-in “sendmail” (or equivalent) program.

However, the rise of spam filters and the intricacies of e-mail delivery, as well as the move toward service-oriented architectures (SOA), has encouraged people to move away from using their own systems and toward using separate, third-party systems. Sending e-mail, for example, can be done via Google’s Gmail servers or even through a specialized company, such as Sendgrid.

Such third-party services have become increasingly common. This is true because networks have become faster and servers have become more reliable, but also because it’s less and less justifiable for a company to spend money on an entire IT staff, when those functions can be outsourced to a third party.

In my opinion, such third-party services are an excellent way to create an application. It allows you to focus on the parts of your application that are special to your organization and not spend time or effort on the configuration and tuning of external services. However, this doesn’t mean I’d want to outsource everything; in particular, I’m not sold on the idea of third-party database servers. Perhaps this is something I’ll just have to get used to, but for now, there are certain aspects of my apps that I’ll continue to keep in-house or at least on servers I’ve bought or rented.

5. Build, Release, Run
This factor says you first should build your app and then run it—something that I would think we normally do anyway. Certainly, deployment tools, such as Capistrano, have changed forever the way that I think about deploying apps. As I’ve started to experiment with such technologies as Chef, Puppet, Vagrant and Docker, I believe it’s only a matter of time before we see an app as a self-contained, almost-compiled binary image that we then distribute to one or more servers. I have heard of a growing number of companies that not only use this approach, but that also deploy an entirely new Amazon EC2 server with each new release. If there is a problem with the code on a server, you just shut down the server and replace it with another one.
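The e-mail example above can be sketched with Python’s standard library. Treating the SMTP relay as an attached resource means the host, port and credentials come from configuration rather than code; the environment-variable names below are assumptions for illustration:

```python
import os
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    # Composing the message is the app's job; delivering it is the
    # backing service's job.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_backing_service(msg):
    # Swap Gmail for Sendgrid, or for a local test relay, by changing
    # these (hypothetical) environment variables, not the code.
    host = os.environ.get("SMTP_HOST", "smtp.example.com")
    port = int(os.environ.get("SMTP_PORT", "587"))
    with smtplib.SMTP(host, port) as server:
        server.starttls()
        server.login(os.environ["SMTP_USER"], os.environ["SMTP_PASSWORD"])
        server.send_message(msg)
```

Nothing in the application cares which provider sits behind the SMTP host name, which is exactly the point of treating the service as a resource.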


I’d generally agree that it’s a bad idea to modify production code. However, I’ve certainly been in situations when it was necessary to do so, with particularly hard-to-track-down bugs that even a seemingly identical staging system could not change. Yes, that might point to problems with the testing environment, but when there is a crisis, my clients generally don’t want to hear clever suggestions about the development and build process. Rather, they first want to fix the problem and then worry about how they can avoid the same problem in the future.

Nevertheless, if you aren’t using a deployment tool to put your sites out on their servers, you might want to consider that.

6. Processes
The next factor says that the application should be one or more stateless processes. The main implication is that the application should be stateless, something I truly would hope has been the case for most Web apps for some time. And yet, when I was speaking with folks at a Fortune 500 company several days ago, asking about the scalability of an application that I’m building for them, they seemed genuinely surprised to hear that we could add as many Web servers as we wanted, because of the “share nothing” architecture.

Now, you do sometimes want to have cached memory or state for users or other resources. In such cases, it’s best to use something like Memcached or Redis—or even a full-fledged relational database—to keep the state. This has the advantage of not only keeping it separate from your application, but also of sharing the data across all the Web servers you deploy.

7. Port Binding
This factor suggests that the application should be a self-contained system and, thus, export itself and its services via HTTP on a particular port. The idea here seems to be that every application should include an HTTP server of some sort and then start that server on a port. In this way, every Web application becomes a resource on the Internet, available via a URL and a port.

I must admit that this strikes me as a bit odd. Do I really want to see my HTTP server as part of my Web app? Probably not, so I don’t see a good reason to include them together. At the same time, I do see a strong advantage of considering each Web application as a small SOA-style


resource, which can be queried and used from other applications. The entire Web is becoming an operating system, and the API calls of that operating system are growing all the time, as new URLs expose new services. By exposing your application as a resource, you are contributing to this new operating system and increasing the richness of the apps that run on it. However, I’m not convinced that where the HTTP server lies, and how closely it is bound to the app, really affects that mindset.

8. Concurrency
Those of us in the Linux world are fully familiar with the idea of creating new processes to take care of tasks. UNIX has a long history and tradition of using processes for multitasking, and while threads certainly exist, they’re not the default way of scaling up. The “concurrency” section of 12factor.net says that we should indeed use processes and not be afraid to spin up processes that will handle different aspects of our application. Each process then can run a specialized program, which communicates with the other processes using an API—be it a named pipe, socket, a database or even just the filesystem. True, we could use threads for some of these things. But as 12factor.net says, threads cannot be moved to a different server or VM, whereas processes (especially if they don’t contain state and save things to a common storage facility) can.

9. Disposability
This aspect of 12factor.net says that we can start up or shut down an app at any time, on any number of servers. To be honest, I’m not sure how this is different from the existing world of Web applications. For as long as I can remember, I was able to start up a Web app without too much fuss and take it down with even less trouble.

“Fast startup” is a good goal to have, but it can be hard to achieve, particularly with the overhead of many modern frameworks and languages. If you’re using a language that sits on top of the Java virtual machine (JVM), you’re likely to experience slow startup time but very fast execution time. That said, I’m not sure how important it is to be able to start up a new copy of your application quickly, relative to other issues and constraints. True, it’s frustrating to have slow startup times, particularly if those affect your ability to run a test suite. But most of the time, your application will be running, not


starting up—thus, I’d downplay the importance of this particular factor.

10. Dev/Prod Parity
The idea behind this factor is extremely simple but also important: keep your development, staging and production environments as similar as possible.

It’s hard to exaggerate the number of times I have experienced problems because of this. The system worked one way on my own development machine, another way on another programmer’s machine, a third way on the staging server and a fourth way on the production machine. Such a situation is asking for trouble. It means that even if you have excellent test coverage of your application, you’re likely to experience hard-to-debug problems that have to do with the configuration or the inherent differences between operating-system versions.

Once again, an increasingly popular solution to this problem is to use a virtual machine and/or a container. With a Vagrant VM, for example, you can share the same machine, not
COLUMNS
AT THE FORGE

just the same environment, among tail -f on a textual logfile or grep


all developers and servers. Working on a file that’s of interest to me. But
in this way saves time and reliability, I have used some third-party logging
although it does admittedly have some solutions, such as Papertrail, and have
negative effects on the performance come away impressed. There also
of the system. are open-source solutions, such as
Greylog2, which some of my clients
11. Logs have used to great satisfaction.
I love logs and sending data to them. In
a small application, it’s enough to have 12. Admin Processes
a single logfile on the filesystem. But if The final factor in a 12-factor app is
you’re using a read-only system (such that of administrative processes. Now,
as Heroku), or if you are going to create I often compare a Web app to a hotel,
and remove servers on a regular basis in that the end user sees only the
with Chef or Puppet, or if you have minority of the actual workings. Just as
multiple servers, you will likely want to guests never see the kitchen, laundry
have logs as an external service. or administrative offices of a hotel,
Now, old-time UNIX users might say users of a Web app never see the
that syslog is a good solution for this. administrative pages, which often can
And indeed, syslog is fairly flexible be extensive, powerful and important.
and allows you to use one system as However, the 12-factor app
the logging server, with the others prescription for admin processes isn’t
acting as clients. about administrative parts of the site.
The 12-factor suggestion is to go Rather, it’s about the administrative
one step further than this, treating tasks you need to do, such as
a log as a writable event stream to updating the database. This factor
which you send all of your data. says that you really should have a
Where does it go? It might be syslog, REPL (that is, a read-eval-print loop,
but it’s more likely going to be a aka an interactive shell), and that you
third-party service, which will allow can put many administrative tasks into
you to search and filter through small programs that execute.
the logs more easily than would be I agree that an REPL is one of the
possible in simple text files. most powerful and important aspects
I must admit there’s still some of the languages I typically use. And I
comfort in my being able to run a love database migrations, as opposed

38 / SEPTEMBER 2014 / WWW.LINUXJOURNAL.COM


AT THE FORGE

to manual tinkering with the database in a way that all can agree on.
that always can lead to problems. However, as with design patterns,
However, I’m not sure if this warrants it’s important to see this as a tool,
inclusion as a crucial component of a not a religion. Consider your needs,
maintainable Web application. take the 12-factor app prescriptions
into account, and apply as necessary.
Conclusion If all goes well, your app will end
I see the 12-factor app as a great up being more scalable, reliable and
way to think about Web applications maintainable. Q
and often to increase their stability
and maintainability. In some ways, I Reuven M. Lerner is a Web developer, consultant and trainer.
see it as similar to design patterns, He recently completed his PhD in Learning Sciences from
in that we have gained a language Northwestern University. You can read his blog, Twitter feed
that allows us to communicate with and newsletter at http://lerner.co.il. Reuven lives with his wife
others about our technical design and three children in Modi’in, Israel.

COLUMNS
WORK THE SHELL

Days Between Dates: a Smarter Way
DAVE TAYLOR
How many days have elapsed? Our intrepid shell script
programmer Dave Taylor looks at how to solve date-related
calculations and demonstrates a script that calculates
elapsed days between any specified day in the past and
the current date.

In case you haven’t been reading it’s not working.


my past few columns or, perhaps, Here’s the state of things:
are working for the NSA and
scanning this content to identify $ date

key phrases, like “back door” and Mon Jul 7 09:14:37 PDT 2014

“low-level vulnerability”, we’re $ sh daysago sh 7 4 2012

working on a shell script that The date you specified -- 7-4-2012 -- is valid Continuing

calculates the number of days 0 days transpired between end of 2012 and beginning of this year

between a specified date in the past calculated 153 days left in the specified year

and the current date. Calculated that 7/4/2012 was 341 days ago

When last we scripted, the basic


functionality was coded so that The script correctly ascertains that
the script would calculate days the current date, July 7, 2014, is 153
from the beginning date to the end days into the year, but the rest of it’s
of that particular year, then the a hopeless muddle. Let’s dig in to the
number of years until the current code and see what’s going on!
year (accounting for leap years),
followed by the current number of Two Versions of e = Not Good
days into the year. The problem is, The code in my last article was fairly




convoluted in terms of calculating the number of days left in the starting year subsequent to the date specified (July 4, 2012, in the above example), but there's a far easier way, highlighted in this interaction:

$ date -j 0803120010
Tue Aug 3 12:00:00 PDT 2010
$ date -j 0803120010 +%j
215

In other words, modern date commands let you specify a date (in MON DAY HOUR MIN YEAR format) and then use the %j format notation to get the day-of-the-year for that specific date. August 3, 2010, was the 215th day of the year.

Did you try that and find that date complained about the -j flag? Good. That means you're likely using GNU date, which is far superior and is actually something we'll need for the full script to work. Test which version you have by using the --version flag:

$ date --version
date (GNU coreutils) 8.4
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Written by David MacKenzie.

How many days were in a given year? That's also easily done with a shortcut, checking the day of the year of December 31st. For example:

$ date -d 12/31/2010 +%j
365

But, 2012 was a leap year. So:

$ date -d 12/31/2012 +%j
366

Therefore, the days-left-in-year calculation is simply days-in-year – day-of-the-year. The next calculation is days/year * years between the specified date and the current year.

Days Left in Year
The first step of calculating the days left in the starting year is to create the correct date format string for the date command. Fortunately, with
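The shortcut above can be sketched end to end with GNU date. This is a standalone bash illustration with the date hard-coded (July 4, 2012) rather than taken from the script's arguments; the 10# prefix forces base-10 arithmetic so that zero-padded %j values such as "035" aren't treated as octal.

```shell
# Day-of-year for the target date and for December 31 of the same year
dayofyear=$(date -d "7/4/2012" +%j)      # 186
daysinyear=$(date -d "12/31/2012" +%j)   # 366 (2012 was a leap year)

# days-left-in-year = days-in-year - day-of-the-year
daysleft=$(( 10#$daysinyear - 10#$dayofyear ))
echo "$daysleft"   # 180
```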




GNU date, it’s easily done: me, so let’s do this:

# build date string format for the specified starting date daysbetweenyears=0

startdatefmt="$startmon/$startday/$startyear" tempyear=$(( $startyear + 1 ))

calculate="$(date -d "12/31/$startyear" +%j) - $(date -d while [ $tempyear -lt $thisyear ] ; do

´$startdatefmt +%j)" echo "intervening year: $tempyear"

echo "Calculating $calculate" daysbetweenyears=$(( $daysbetweenyears + $(date -d

daysleftinyear=$(( $calculate )) ´"12/31/$tempyear" +%j) ))

tempyear=$(( $tempyear + 1 ))

When run as part of the script, the done

debugging echo statement offers useful echo "Intervening days in years between $startyear and

information to help debug things: ´$thisyear = $daysbetweenyears"

$ sh daysago sh 2 4 2012 Again, I’m adding a debugging


The date you specified -- 2-4-2012 -- is valid Continuing echo statement to clarify what’s going
Calculating 366 - 035 on, but we’re getting pretty close:
There were 337 days left in the starting year

$ sh daysago sh 2 4 2013 $ sh daysago sh 2 4 2010

The date you specified -- 2-4-2013 -- is valid Continuing The date you specified -- 2-4-2010 -- is valid Continuing

Calculating 365 - 035 Calculating 365 - 035

There were 336 days left in the starting year Calculated that there were 336 days left in the starting year

intervening year: 2011

Notice that when we specified intervening year: 2012

Feb 4, 2012, a leap year, there are intervening year: 2013

337 days left in the year, but when intervening days in years between 2010 and 2014 = 1096

we specify the same date on the


following non-leapyear, there are 336 That seems to pass the reasonable
days. Sounds right! test, as there are three years between
2010 and 2014 (namely 2011, 2012
Days in Intervening Years and 2013) and the back-of-envelope
The next portion of the calculation calculation of 3*365 = 1095.
is to calculate the number of days in If you’re using non-GNU date, you
each year between the start year and already have realized that the string
the current year, not counting either format is different and that you need to
of those. Sounds like a while loop to use the -j flag instead of the -d flag.
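The intervening-years loop can be exercised on its own. Here's a bash sketch with the start and current years hard-coded to 2010 and 2014 to match the sample run above (requires GNU date); it is an illustration, not the column's exact script.

```shell
startyear=2010
thisyear=2014
daysbetweenyears=0
tempyear=$(( startyear + 1 ))

# add up the length of each full year strictly between the two dates;
# 10# forces base-10 in case a %j value ever carries a leading zero
while [ $tempyear -lt $thisyear ] ; do
  daysbetweenyears=$(( daysbetweenyears + 10#$(date -d "12/31/$tempyear" +%j) ))
  tempyear=$(( tempyear + 1 ))
done
echo "$daysbetweenyears"   # 365 + 366 + 365 = 1096
```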




The problem is, the older date command also works differently, because 1969 is the beginning of Epoch time. Look:

$ date -j 0204120060    # last two digits are year, so '60
Wed Feb 4 12:00:00 PST 2060

It interprets the two-digit "60" as 2060, not 1960. Boo! If you're not running GNU date, you're going to have a really big problem for dates prior to 1969, and I'm just going to say "get GNU date, dude" to solve it.

Total Days That Have Passed
We have everything we need to do the math now. We can calculate the number of days left in the start year, the number of days in intervening years, and the day-of-year number of the current date. Let's put it all together:

### DAYS IN CURRENT YEAR
dayofyear=$(date +%j)   # that's easy!

### NOW LET'S ADD IT ALL UP
totaldays=$(( $daysleftinyear + $daysbetweenyears + $dayofyear ))
echo "$totaldays days have elapsed between $startmon/$startday/$startyear and today, day $dayofyear of $thisyear."

That's it. Now, stripping out the debug echo statements, here's what we can ascertain:

$ sh daysago.sh 2 4 1949
23900 days have elapsed between 2/4/1949 and today, day 188 of 2014.
$ sh daysago.sh 2 4 1998
6003 days have elapsed between 2/4/1998 and today, day 188 of 2014.
$ sh daysago.sh 2 4 2013
524 days have elapsed between 2/4/2013 and today, day 188 of 2014.

But look, there's still a lurking problem when it's the same year that we're calculating:

$ sh daysago.sh 2 4 2014
524 days have elapsed between 2/4/2014 and today, day 188 of 2014.

That's a pretty easy edge case to debug, however, so I'm going to wrap things up here and let you enjoy trying to figure out what's not quite right in the resultant script. Stumped? Send me an e-mail via http://www.linuxjournal.com/contact. ■
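If you want a nudge on that edge case before peeking at the listing: when the start date falls in the current year, there are no "days left" in the start year and no intervening years to add, so the answer is just the difference between the two day-of-year values. A hedged bash sketch with both dates hard-coded (this is one possible fix, not the column's official answer):

```shell
startdoy=$(date -d "2/4/2014" +%j)   # 035
nowdoy=$(date -d "7/7/2014" +%j)     # 188

# 10# keeps the zero-padded "035" from being read as octal
elapsed=$(( 10#$nowdoy - 10#$startdoy ))
echo "$elapsed"   # 153
```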




Listing 1. Full daysago.sh Script

#!/bin/sh

# valid-date lite

function daysInMonth
{
   case $1 in
      1|3|5|7|8|10|12 ) dim=31 ;;   # most common value
      4|6|9|11        ) dim=30 ;;
      2               ) dim=29 ;;   # depending on if it's a leap year
      *               ) dim=-1 ;;   # unknown month
   esac
}

function isleap
{
   leapyear=$(cal 2 $1 | grep '29')
}

if [ $# -ne 3 ] ; then
   echo "Usage: $(basename $0) mon day year"
   echo "  with just numerical values (ex: 7 7 1776)"
   exit 1
fi

eval $(date "+thismon=%m;thisday=%d;thisyear=%Y;dayofyear=%j")

startmon=$1; startday=$2; startyear=$3

daysInMonth $startmon   # sets global var dim

if [ $startday -lt 0 -o $startday -gt $dim ] ; then
   echo "Invalid date: Month #$startmon has $dim days, so day $startday is impossible."
   exit 1
fi

if [ $startmon -eq 2 -a $startday -eq 29 ] ; then
   isleap $startyear
   if [ -z "$leapyear" ] ; then
      echo "$startyear wasn't a leapyear, so February only had 28 days."
      exit 1
   fi
fi

###################################
## Now part two: the number of days since the day you specify.
###################################

### FIRST, DAYS LEFT IN START YEAR
# calculate the date string format for the specified starting date

startdatefmt="$startmon/$startday/$startyear"

calculate="$(date -d "12/31/$startyear" +%j) - $(date -d $startdatefmt +%j)"

daysleftinyear=$(( $calculate ))

### DAYS IN INTERVENING YEARS

daysbetweenyears=0
tempyear=$(( $startyear + 1 ))

while [ $tempyear -lt $thisyear ] ; do
   daysbetweenyears=$(( $daysbetweenyears + $(date -d "12/31/$tempyear" +%j) ))
   tempyear=$(( $tempyear + 1 ))
done

### DAYS IN CURRENT YEAR

dayofyear=$(date +%j)   # that's easy!

### NOW LET'S ADD IT ALL UP

totaldays=$(( $daysleftinyear + $daysbetweenyears + $dayofyear ))

echo "$totaldays days have elapsed between $startmon/$startday/$startyear and today, day $dayofyear of $thisyear."

exit 0

Dave Taylor has been hacking shell scripts for more than 30 years. Really. He's the author of the popular Wicked Cool Shell Scripts and can be found on Twitter as @DaveTaylor and more generally at his tech site http://www.AskDaveTaylor.com.

Send comments or feedback via http://www.linuxjournal.com/contact or to [email protected].



COLUMNS
HACK AND /

Check Exchange from the Command Line
KYLE RANKIN
When fetchmail can’t fetch mail, it’s time to fall back to raw
command-line commands.

Through the years, you tend to accumulate a suite of tools, practices and settings as you use Linux. In my case, this has meant a Mutt configuration that fits me like a tailored suit and a screen session that at home reconnects me to my IRC session and at work provides me with quick access to e-mail with notifications along the bottom of the terminal for Nagios alerts and incoming e-mail. I've written about all of these different tools in this column through the years, but in this article, I talk about how I adapted when one of my scripts no longer worked.

My e-mail notification script is relatively straightforward. I configure fetchmail on my local machine, but instead of actually grabbing e-mail, I just run fetchmail -c, which returns each mailbox along with how many messages are unseen. I parse that, and if I have any unread mail, I display it in the notification area in screen. I've written about that before in my February 2011 Hack and / column "Status Messages in Screen" (http://www.linuxjournal.com/article/10950), and up until now, it has worked well for me. Whenever I set up my computer for a new job, I just configure fetchmail and reuse the same script.

Recently, however, we switched our mail servers at work to a central Exchange setup, which by itself wouldn't be too much of an issue—in the past I just configured Mutt and



fetchmail to treat it like any other IMAP host—but in this case, the Exchange server was configured with security in mind. So in addition to using IMAPS, each client was given a client certificate to present to the server during authentication. Mutt was able to handle this just fine with a few configuration tweaks, but fetchmail didn't fare so well. It turns out that fetchmail has what some would call a configuration quirk and others would call a bug. When you configure fetchmail to use a client certificate, it overrides whatever user name you have configured in favor of the user specified inside the client certificate. In my case, the two didn't match, so fetchmail wasn't able to log in to the Exchange server, and I no longer got new mail notifications inside my screen session.

I put up with this for a week or so, until I realized I really missed knowing when I had new e-mail while I was working. I decided there must be some other way to get a count of unread messages from the command line, so I started doing research. In the end, what worked for me was to use OpenSSL's s_client mode to handle the SSL session between me and the Exchange server (including the client certificate), and then once that session was established, I was able to send raw IMAP commands to authenticate and then check for unread messages.

OpenSSL s_client
The first step was to set up an OpenSSL s_client connection. Most people probably interact with OpenSSL on the command line only when they need to generate new self-signed certificates or read data from inside a certificate, but the tool also provides an s_client mode that you can use to troubleshoot SSL-enabled services like HTTPS. With s_client, you initiate an SSL connection, and after it outputs relevant information about that SSL connection, you are presented with a prompt just as though you used Telnet or Netcat to connect to a remote port. From there, you can type in raw HTTP, SMTP or IMAP commands depending on your service.

The syntax for s_client is relatively straightforward, and here is how I connected to my Exchange server over IMAPS:

$ openssl s_client -cert /home/kyle/.mutt/imaps_cert.pem -crlf -connect imaps.example.com:993

The -cert argument takes a full path to my client certificate file, which I store with the rest of my Mutt configuration. The -crlf option makes sure that I send the right line feed characters each time I press Enter—important for




some touchy IMAPS servers. Finally, the -connect argument lets me specify the hostname and port to which to connect.

Once you connect, you will see a lot of SSL output, including the certificate the server presents, and finally, you will see a prompt like the following:

* OK The Microsoft Exchange IMAP4 service is ready.

From here, you use the tag login IMAP command followed by your user name and password to log in, and you should get back some sort of confirmation if login succeeded:

tag login kyle.rankin supersecretpassword
tag OK LOGIN completed.

Now that you're logged in, you can send whatever other IMAP commands you want, including some that would show you a list of mailboxes, e-mail headers or even the full contents of messages. In my case though, I just want to see the number of unseen messages in my INBOX, so I use the tag STATUS command followed by the mailbox and then (UNSEEN) to tell it to return the number of unseen messages:

tag STATUS INBOX (UNSEEN)
* STATUS INBOX (UNSEEN 1)
tag OK STATUS completed.

In this example, I have one unread message in my INBOX. Now that I have that information, I can type tag LOGOUT to log out.

expect
Now this is great, except I'm not going to go through all of those steps every time I want to check for new mail. What I need to do is automate this. Unfortunately, my attempts just to pass the commands I wanted as input didn't work so well, because I needed to pause between commands for the remote server to accept the previous command. When you are in a situation like this, a tool like expect is one of the common ways to handle it. expect allows you to construct incredibly complicated programs that look for certain output and then send




your input. In my case, I just needed a few simple commands: 1) confirm Exchange was ready; 2) send my login; 3) once I was authenticated, send the tag STATUS command; 4) then finally log out. The expect script turned into the following:

set timeout 10
spawn openssl s_client -cert /home/kyle/.mutt/imaps_cert.pem -crlf -connect imaps.example.com:993
expect "* OK"
send "tag login kyle.rankin supersecretpassword\n"
expect "tag OK LOGIN completed."
sleep 1
send "tag STATUS INBOX (UNSEEN)\n"
expect "tag OK"
send "tag LOGOUT\n"

I saved that to a local file (and made sure only my user could read it) and then called it as the sole argument to expect:

$ expect .imapsexpectscript

Of course, since this script runs through the whole IMAPS session, it also outputs my authentication information to the screen. I need only the INBOX status output anyway, so I just grep for that:

$ expect ~/.imapsexpectscript | egrep '\(UNSEEN [0-9]'
* STATUS INBOX (UNSEEN 1)

For my screen session, I just want the name of the mailbox and the number of unread messages (and no output if there are no unread messages), so I modify my egrep slightly and pipe the whole thing to a quick Perl one-liner to strip output I don't want. The final script looks like this:

#!/bin/bash

MAILCOUNT=`expect ~/.imapsexpectscript | egrep '\(UNSEEN [1-9]' | perl -pe 's/.*STATUS \w+ .*?(\d+)\).*?$/$1/'`
if [ "$MAILCOUNT" != "" ]; then
   echo INBOX:${MAILCOUNT}
fi

Now I can just update my .screenrc to load the output of that script into one of my backtick fields instead of fetchmail (more on that in my previous column about screen), and I'm back in business. ■

Kyle Rankin is a Sr. Systems Administrator in the San Francisco Bay Area and the author of a number of books, including The Official Ubuntu Server Book, Knoppix Hacks and Ubuntu Hacks. He is currently the president of the North Bay Linux Users' Group.

Send comments or feedback via http://www.linuxjournal.com/contact or to [email protected].



COLUMNS
THE OPEN-SOURCE CLASSROOM

DNSMasq, the Pint-Sized Super Dæmon!
SHAWN POWERS
Super Dæmon!
What’s better than a DNS server, a DHCP server and a TFTP
server? A single dæmon that does it all!

I’ve always been a fan of putting serve my network. I like having


aftermarket firmware on consumer- those two services tied to the router,
grade routers. Whether it’s DD-WRT, because if every other server on my
Tomato, OpenWRT or whatever your network fails, I still can get on-line. I
favorite flavor of “better than stock” figure the next best thing is to have
firmware might be, it just makes a Raspberry Pi dedicated to those
economic sense. Unfortunately, services. Because all my RPi devices
my routing needs have surpassed
my trusty Linksys router. Although
I could certainly buy a several-
hundred-dollar, business-class
router, I really don’t like spending
money like that. Thankfully, I found
an incredible little router (the
EdgeRouter Lite) that can route a
million packets per second and has
three gigabit Ethernet ports. So far,
it’s an incredible router, but that’s all
it does—route. Which brings me to
the point of this article. Figure 1. The Cubox is more powerful
I’ve always used the DHCP and than a Raspberry Pi, but even an RPi is
DNS server built in to DD-WRT to more power than DNSMasq requires!




currently are attached to televisions around the house (running XBMC), I decided to enlist the Cubox computer I reviewed in November 2013 (Figure 1). It's been sitting on my shelf collecting dust, and I'd rather have it do something useful.

Although the Cubox certainly is powerful enough to run BIND and the ISC DHCP server, that's really overkill for my network. Plus, BIND really annoys me with its serial-number incrementation and such whenever an update is made. It wasn't until I started to research alternate DNS servers that I realized just how powerful DNSMasq can be. Plus, the way it works is simplicity at its finest. First, let's look at its features:

■ Extremely small memory and CPU footprint: I knew this was the case, because it's the program that runs on Linux-based consumer routers where memory and CPU are at a premium.

■ DNS server: DNSMasq approaches DNS in a different way from the traditional BIND dæmon. It doesn't offer the complexity of domain transfers, master/slave relationships and so on. It does offer extremely simple and highly configurable options that are, in my opinion, far more useful in a small- to medium-size network. It even does reverse DNS (PTR records) automatically! (More on those details later.)

■ DHCP server: where the DNS portion of DNSMasq lacks certain advanced features, the DHCP services offered actually are extremely robust. Most routers running firmware like DD-WRT don't offer a Web interface to the advanced features DNSMasq provides, but it rivals and even surpasses some of the standalone DHCP servers.

■ TFTP server: working in perfect tandem with the advanced features of DHCP, DNSMasq even offers a built-in TFTP server for things like booting thin clients or sending configuration files.

■ A single configuration file: it's possible to use multiple configuration files, and I even recommend it for clarity's sake. In the end, however, DNSMasq requires you to edit only a single configuration file to manage all of its powerful services. That configuration file also is very well commented, which makes using it much nicer.




Installation
DNSMasq has been around for a very long time. Installing it on any Linux operating system should be as simple as searching for it in your distribution's package management system. On Debian-based systems, that would mean something like:

sudo apt-get install dnsmasq

Or, on a Red Hat/CentOS system (as root):

yum install dnsmasq

The configuration file (there's just one!) is usually stored at /etc/dnsmasq.conf, and as I mentioned earlier, it is very well commented. Figuring out even the most advanced features is usually as easy as reading the configuration file and un-commenting those directives you want to enable. There are even examples for those directives that require you to enter information specific to your environment.

After the dnsmasq package is installed, it most likely will get started automatically. From that point on, any time you make changes to the configuration (or make changes to the /etc/hosts file), you'll need to restart the service or send an HUP signal to the dæmon. I recommend using the init script to do that:

sudo service dnsmasq restart

But, if your system doesn't have the init service set up for DNSMasq, you can issue an HUP signal by typing something like this:

sudo kill -HUP $(pidof dnsmasq)

This will find the PID (process ID) and send the signal to reload its configuration files. Either way should work, but the init script will give you more feedback if there are errors.

First Up: DNS
Of all the features DNSMasq offers, I find its DNS services to be the most useful and awesome. You get the full functionality of your upstream DNS server (usually provided by your ISP), while seamlessly integrating DNS records for your own network.

To accomplish that "split DNS"-type setup with BIND, you need to create a fake DNS master file, and even then you run into problems if you are missing a DNS name in your local master file, because BIND won't query another server by default for records it thinks it's in charge of serving. DNSMasq, on the other hand, follows a very simple procedure




Figure 2. DNSMasq makes DNS queries simple, flexible and highly configurable.

when it receives a request. Figure 2 shows that process.

For my purposes, this means I can put a single entry into my server's /etc/hosts file for something like "server.brainofshawn.com", and DNSMasq will return the IP address in the /etc/hosts file. If a host queries DNSMasq for an entry not in the server's /etc/hosts file, www.brainofshawn.com for instance, it will query the upstream DNS server and return the live IP for my Web host. DNSMasq makes a split-DNS scenario extremely easy to maintain, and because it uses the server's /etc/hosts file, it's simple to modify entries.

My personal favorite feature of DNSMasq's DNS service, however, is that it supports round-robin load balancing. This isn't something that normally works with an /etc/hosts file




entry, but with DNSMasq, it does. Say you have two entries in your /etc/hosts file like this:

192.168.1.10 webserver.example.com
192.168.1.11 webserver.example.com

On a regular system (that is, if you put it in your client's /etc/hosts file), the DNS query always will return 192.168.1.10 first. DNSMasq, on the other hand, will see those two entries and mix up their order every time it's queried. So instead of 192.168.1.10 being the first IP, half of the time, it will return 192.168.1.11 as the first IP. It's a very rudimentary form of load balancing, but it's a feature most people don't know exists!

Finally, DNSMasq automatically will create and serve reverse DNS responses based on entries found in the server's /etc/hosts file. In the previous example, running the command:

dig -x 192.168.1.10

would get the response "webserver.example.com" as the reverse DNS lookup. If you have multiple DNS entries for a single IP address, DNSMasq uses the first entry as the reverse DNS response. So if you have a line like this in your server's /etc/hosts file:

192.168.1.15 www.example.com mail.example.com ftp.example.com

Any regular DNS queries for www.example.com, mail.example.com or ftp.example.com will get answered with "192.168.1.15", but a reverse DNS lookup on 192.168.1.15 will get the single response "www.example.com".

DNSMasq is so flexible and feature-rich, it's hard to find a reason not to use it. Sure, there are valid reasons for using a more robust DNS server like BIND, but for most small to medium networks, DNSMasq is far more appropriate and much, much simpler.

Serving DHCP
It's possible to use DNSMasq for DNS only and disable the DHCP services it offers. Much like DNS, however, the simplicity and power offered by DNSMasq makes it a perfect candidate for small- to medium-sized networks. It supports both dynamic ranges for automatic IP assignment and static reservations based on the MAC address of computers on your network. Plus, since it also acts as the DNS server for your network, it has really great hostname-DNS integration for computers connected to your network that may not have a DNS entry. How does that work?




Figure 3 shows the modified method used when the DNS server receives a query if it’s also serving as a DHCP server. (The extra step is shown as the orange-colored diamond in the flowchart.)

Figure 3. If you use DHCP, it automatically integrates into your DNS system—awesome
for finding dynamically assigned IPs!
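This integration needs no extra wiring in the configuration; enabling the DHCP side is enough. A minimal dnsmasq.conf sketch (the address pool, lease time and domain name here are illustrative assumptions, not values from the article):

```
# /etc/dnsmasq.conf -- illustrative fragment
domain=example.lan        # domain appended to DHCP-learned hostnames
expand-hosts              # also qualify bare names from /etc/hosts
dhcp-range=192.168.1.100,192.168.1.200,12h   # dynamic pool, 12-hour leases
```

With a fragment like this in place, a client that announces the hostname “laptop” during its DHCP exchange becomes resolvable as laptop (and laptop.example.lan) for the lifetime of its lease.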




Figure 4. There are no DNS entries anywhere for my Hackintosh, but thanks to
DNSMasq, it’s pingable via its hostname.

Basically, if your friend brings a laptop to your house and connects to your network, when it requests a DHCP address, it tells the DNSMasq server its hostname. From that point on, until the lease expires, any DNS queries the server receives for that hostname will be returned as the IP it assigned via DHCP. This is very convenient if you have a computer connected to your network whose hostname you know, but it gets a dynamically assigned IP address. In my house, I have a Hackintosh computer that just gets a random IP assigned via DNSMasq’s DHCP server. Figure 4 shows what happens when I ping the name “hackintosh” on my network.
Even though it isn’t listed in any of the server’s configuration files, since it handles DHCP, it creates a DNS entry on the fly.
Static DHCP entries can be entered in the single configuration file using this format:

dhcp-host=90:fb:a6:86:0d:60,xbmc-livingroom,192.168.1.20
dhcp-host=b8:27:eb:e3:4c:5f,xbmc-familyroom,192.168.1.21
dhcp-host=b8:27:eb:16:d9:08,xbmc-masterbedroom,192.168.1.22
dhcp-host=00:1b:a9:fa:98:a9,officelaser,192.168.1.100
dhcp-host=04:46:65:d4:e8:c9,birdcam,192.168.1.201

It’s also valid to leave the hostname out of the static declaration, but adding it to the DHCP reservation




adds it to the DNS server’s list of known addresses, even if the client itself doesn’t tell the DHCP server its hostname. You also could just add the hostname to your DNSMasq server’s /etc/hosts file, but I prefer to make my static DHCP entries with hostnames, so I can tell at a glance what computer the reservation is for.

And If That’s Not Enough...
The above scenarios are all I use DNSMasq for on my local network. It’s more incredible than any DHCP/DNS combination I’ve ever used before, including the Windows and OS X server-based services I’ve used in large networks. It does provide even more services, however, for those folks needing them. The TFTP server can be activated via the configuration file to serve boot files, configuration files or any other TFTP files you might need served on your network. The service integrates flawlessly with the DHCP server to provide boot filenames, PXE/BOOTP information, and custom DHCP options needed for booting even the most finicky devices. Even if you need TFTP services for a non-boot-related reason, DNSMasq’s server is just a standard TFTP service that will work for any computer or device requiring it.
If you’ve read Kyle Rankin’s recent articles on DNSSEC and want to make sure your DNS information is secure, there’s no need to install BIND. DNSMasq supports DNSSEC, and once again provides configuration examples in the configuration file.
Truly, DNSMasq is the unsung hero for consumer-grade Internet routers. It allows those tiny devices to provide DNS and DHCP for your entire network. If you install the program on a regular server (or teeny tiny Raspberry Pi or Cubox), however, it can become an extremely robust platform for all your network needs. If it weren’t for my need to get a more powerful and reliable router, I never would have learned about just how amazing DNSMasq is. If you’ve ever been frustrated by BIND, or if you’d just like to have more control over the DNS and DHCP services on your network, I urge you to give DNSMasq a closer look. It’s for more than just your DD-WRT router!

Shawn Powers is the Associate Editor for Linux Journal. He’s also the Gadget Guy for LinuxJournal.com, and he has an interesting collection of vintage Garfield coffee mugs. Don’t let his silly hairdo fool you, he’s a pretty ordinary guy and can be reached via e-mail at [email protected]. Or, swing by the #linuxjournal IRC channel on Freenode.net.

Send comments or feedback via http://www.linuxjournal.com/contact or to [email protected].



NEW PRODUCTS

CoreOS Managed Linux


The team at CoreOS recently announced three big, concurrent news
items. First, CoreOS rolled out its two flagship products: CoreOS
Managed Linux, which it bills as “the world’s first OS as a Service”,
and CoreUpdate, which provides a dashboard for full control of
rolling updates. Second, CoreOS announced the receipt of $8 million
in funding from the venture capital firm Kleiner Perkins Caufield & Byers, meaning you
are sure to hear more about CoreOS in the future. Capital is following this development
because companies like Google, Facebook, Twitter and others must run their services
at scale with high resilience. The solution is CoreOS, a new Linux OS that has been re-
architected to provide the foundation of warehouse-scale computing. CoreOS customers
receive a continuous stream of updates and patches (via CoreUpdate), as well as a high
level of commercial support, eliminating the need for major OS migrations every few years.
The goal is to make the migration to CoreOS’s products the last migration they ever need.
Included platforms are Bare Metal, Amazon, Google and Rackspace, among others.
http://www.coreos.com

Open-E JupiterDSS
The latest release of the Open-E JupiterDSS—
or Defined Data Storage Software—is a
result of three years of development, testing,
working closely with partners and integrating
customer feedback, reported maker Open-E.
The firm added that Open-E JupiterDSS
provides enterprise users the highest level
of performance with unlimited capacity and
volume size. Delivered through Open-E certified partners as a software-defined storage
system, Open-E JupiterDSS comes complete with advanced features, including thin
provisioning, compression and de-duplication. This milestone release of the company’s
flagship application comes in response to customers demanding ever larger storage
environments while maintaining high benchmarks for quality, reliability, performance and
price. Open-E JupiterDSS features a ZFS- and Linux-based storage operating system.
http://www.open-e.com




Brian Ward’s How Linux Works (No Starch Press)


Though there now exists a seemingly limitless list of great Linux books,
those like Brian Ward’s How Linux Works: What Every Superuser Should
Know are the kind of books that should go into the “Linux Beginner’s
Canon”. Ward’s book contains the essentials that new enthusiasts should
know as they embark on their journey of Linux discovery. To master Linux
and avoid obstacles, one needs to understand Linux internals like how the
system boots, how networking works and what the kernel actually does.
In this completely revised second edition, author Ward makes the concepts
behind Linux internals accessible to anyone who wants to understand them. Inside these
information-packed pages, readers will find the kind of knowledge that normally comes from
years of experience doing things the hard way, including essential topics like Linux booting;
how the kernel manages devices, device drivers and processes; how networking, interfaces,
firewalls and servers work; and how development tools, shared libraries and shell scripts
work. Publisher No Starch Press notes that the book’s combination of background, theory,
real-world examples and patient explanations will teach readers to understand and customize
their systems, solve pesky problems and take control of their OS.
http://www.nostarch.com

QLogic’s NetXtreme II 10Gb


Ethernet Adapters
As part of its mission to be the most comprehensive supplier of
network infrastructure across IBM’s entire server portfolio, QLogic
recently released the NetXtreme II 10Gb Ethernet adapter line, which the firm claims is
the first for IBM Power Systems. Available in 10GBASE-T or SFP+ versions, QLogic 10GbE
adapters are critical for achieving desired levels of performance and consolidation for
cloud computing, convergence and data-intensive application environments, such as
video and social media content. Migration to the new adapters is seamless, says QLogic,
because of the full backward-compatibility with an installed base of millions of 1GbE
twisted-pair switch ports. For applications requiring leading-edge networking performance,
QLogic NetXtreme II SFP+ low-profile adapters combine advanced multiprotocol offload
technologies with standard Ethernet functionality.
http://www.qlogic.com




The Icinga Project’s


Icinga 2
Although the Icinga 2 monitoring solution is
backward-compatible to Nagios and Icinga 1,
the Icinga Project considers its latest masterpiece
“a league apart from its predecessors”. Icinga 2
is a next-generation open-source monitoring solution designed to meet modern IT infrastructure
needs. Built from scratch and based on C++, Icinga 2 boasts multi-threaded architecture for
high-performance monitoring, a new dynamic configuration format and distributed monitoring
out of the box. Version 2 also features multiple back ends for easy integration of popular
add-ons. In contrast to its predecessors, core features and their related libraries are shipped
with the application and can be activated as needed via “icinga2-enable-feature” soft link
commands, easing installation and extension, enabling multi-functionality and improving
performance. Additional new advancements include cluster stacks, a new object-based
template-driven configuration format and extensive documentation, as well as configuration
validation and syntax highlighting to support troubleshooting.
http://www.icinga.org

Zentyal Server
What’s special about the upgraded Zentyal
Server 3.5 is that it integrates both
the complex Samba and OpenChange
technologies, making it easy to integrate
Zentyal into an existing Windows environment and carry out phased, transparent migration
to Linux. In other words, the Zentyal Linux Small Business Server offers a native drop-in
replacement for Windows Small Business Server and Microsoft Exchange Server that can
be set up in less than 30 minutes and is both easy to use and affordable. Because Zentyal
Server’s 3.5 release focuses on providing a stable server edition with simplified architecture,
it comes with a single LDAP implementation based on Samba4, helping solve a number of
synchronization issues caused by having two LDAP implementations in the earlier editions.
In addition, a number of modules have been removed in order to advance the core goal of
offering the aforementioned drop-in Windows server replacement capabilities.
http://www.zentyal.com




Dave Seff’s Mastering 3D Printing


LiveLessons—Video Training
(Addison-Wesley Professional)
The conditions are riper than ever to learn about 3-D
printing, especially thanks to a new video training program
called Mastering 3D Printing LiveLessons from Addison-
Wesley Professional. What’s special for our readers is that
the instructor, Dave Seff, is a modern-day Goethe-esque
geek, a Linux/UNIX engineer who also has mastered topics
as varied as computer and mechanical design, machining, electronics and computer
programming. Seff has even built several of his own 3-D printers and other CNC machines.
In the videos, Seff not only teaches the fundamentals of 3-D printing but also does so
utilizing the open-source modeling software, Blender. Seff explores how to create a 3-D
model (beginner and advanced lessons), how to slice (prepare for printing) and then how
to print a 3-D model. Seff also covers troubleshooting problems when they arise.
http://www.informit.com/livelessons

Nevercenter’s
Silo 3D Modeler
In response to its number-one request,
Nevercenter has ported version 2.3 of its Silo 3D
Modeler application to Linux, complementing
the existing Mac OS and Windows editions. Silo,
popular with designers of video games, movies
and 3-D architectural applications, can be used
either as a standalone tool or as a versatile element of a multi-application 3-D graphics
workflow. Nevercenter is finding ever more studios and individuals in this space moving to
Linux. Silo’s internals also have received significant updates, including an updated windowing
system and bug fixes across all platforms, as well as added support for .stl import.
http://www.nevercenter.com/silo

Please send information about releases of Linux-related products to [email protected] or


New Products c/o Linux Journal, PO Box 980985, Houston, TX 77098. Submissions are edited for length and content.



FEATURE Provisioning X.509 Certificates Using RFC 7030

Provisioning X.509 Certificates Using RFC 7030

Learn how to use libest to deploy X.509 certificates across your enterprise.

JOHN FOLEY



Have you ever found yourself in need of an X.509 certificate when configuring a Linux service? Perhaps you’re enabling HTTPS using Apache or NGINX. Maybe you’re configuring IPsec between two Linux hosts. It’s not uncommon to use on-line forums or developer blogs to find setup instructions to meet these goals quickly. Often these sources of information direct the reader to use a self-signed certificate. Such shortcuts overlook good security practices that should be followed as part of a sound Public Key Infrastructure (PKI) deployment strategy. This article proposes a solution to the problem of widespread deployment of X.509 certificates using Enrollment over Secure Transport (EST). First, I provide a brief primer on PKI. Then, I give an overview of using EST to provision a certificate, demonstrated using both curl and libest. Finally, I show a brief example of an OpenVPN deployment using certificates provisioned with EST.

PKI Primer
Public Key Infrastructure consists of several building blocks, including X.509 certificates, Certificate Authorities, Registration Authorities, public/private key pairs and certificate revocation lists. X.509 certificates are provisioned by end-entities from either an RA or a CA. An end-entity will use the X.509 certificate to identify itself to a peer when establishing a secure communication channel. The X.509 certificate contains a public key that another entity can use to verify the identity of the entity presenting the X.509 certificate. The owner of an X.509 certificate retains the private key associated with the public key in its X.509 certificate. The private key is used only by the end-entity that owns the X.509 certificate and must remain confidential. Leakage of the private key compromises the security model.
X.509 certificates are the most commonly used approach for verifying peer identity on the Internet today. X.509 certificates are used with TLS, IPsec, SSH and other protocols. For example, a Web browser will use the X.509 certificate from a Web server to verify the identity of the server. This is achieved by using the public key in the CA certificate and the signature in the Web server certificate.
PKI allows for multiple layers of trust, with the root CA at the top of the trust chain. The root CA also is called a trust anchor. The root CA can delegate authority to a sub-CA or an RA. The sub-CA or RA can provision X.509 certificates on behalf of an end-entity, such as a



Web browser or a Web server. For simplicity, this article is limited to showing a single layer of trust where the root CA generates certificates directly for the end-entities.
The multiple layers of trust in the PKI hierarchy are implemented using asymmetric cryptographic algorithms. RSA is the most common asymmetric algorithm used today. RSA is named after its inventors: Ron Rivest, Adi Shamir and Leonard Adleman. RSA uses a public/private key pair. The X.509 certificate generated by a CA contains a digital signature. This signature is the encrypted output from the RSA algorithm. The CA will calculate the hash (for example, SHA1) of the certificate contents. The hash value is then encrypted using the CA’s private RSA key. The CA also will include information about itself in the new certificate in the Issuer field, which allows another entity to know which CA generated the certificate (Figure 1). This becomes

Figure 1. Anatomy of an X.509 Certificate



important when another entity needs to verify the authenticity of another entity’s certificate.
When another entity needs to verify the identity and authenticity of an X.509 certificate, the public key from the CA certificate is used. The verifying entity calculates the hash (for example, SHA1) of the certificate, similar to how the CA did this when generating the certificate. Instead of using the CA private key, the verifying entity uses the CA public key to check the signature. If the result of applying the public key to the signature matches the hash value the verifying entity calculated, the verifying entity is assured that the certificate is valid and was issued by the CA.
Astute readers will observe that the verifying entity needs to have the CA public key to perform this verification process. This problem is commonly solved by deploying the public keys of well-known certificate authorities with the software that will be performing the verification process. This is known as out-of-band trust anchor deployment. Commonly used Web browsers (such as Firefox, Chrome and IE) follow this model. When you install the Web browser on your computer, the well-known CA certificates are installed with the software.

Elliptic Curve Digital Signature Algorithm

The Elliptic Curve Digital Signature Algorithm (ECDSA) is an asymmetric cryptographic algorithm that can be used similarly to RSA. ECDSA uses a public/private key pair similar to RSA. However, ECDSA provides an equivalent security level to RSA using much smaller key sizes. As of 2014, the NIST minimum recommended RSA key size is 2048 bits. Using an ECDSA 256-bit key provides better security than RSA 2048, which significantly reduces the amount of data that needs to be transmitted between peers during the key exchange process of protocols like TLS, DTLS and IKE. This bandwidth savings is desirable for IoT devices or other end-entities that need to minimize bandwidth consumption over radio interfaces.

When you browse to a secure Web site, the Web browser uses this trust anchor of pre-installed CA certificates




(containing public keys) to verify the X.509 certificate presented by the Web server.

FQDN

X.509 certificates should contain the Fully Qualified DNS Domain Name (FQDN) of the entity that owns the certificate. RFC 6125 provides guidance on how FQDN should be implemented in a PKI deployment. The FQDN should be in either the Subject Common Name or the Subject Alternative Name of the X.509 certificate. When an entity, such as a Web browser, verifies the authenticity of the peer certificate, the FQDN should be checked to ensure that it matches the hostname used to initiate the IP connection. For example, when you browse to https://www.foobar.org, the X.509 certificate presented by the Web server should contain www.foobar.org in either the Subject Common Name or Subject Alternative Name. Checking the FQDN helps mitigate an MITM (Man In The Middle) attack.

Enrollment over Secure Transport
Enrollment over Secure Transport (EST) is defined in RFC 7030. This protocol solves the challenge of PKI deployment across a large infrastructure. For example, RFC 7030 defines methods for both provisioning end-entity certificates and deploying CA public keys, which are required for end-entities to verify each other. This article focuses on the client side of solving these two challenges. Let’s use the interop test server hosted at http://testrfc7030.cisco.com as the EST server for the examples. Note: the certificates generated by this test server are for demonstration purposes only and should not be used for production deployments.
RFC 7030 defines a REST interface for various PKI operations. The first operation is the /cacerts method, which is used by an end-entity to retrieve the current CA certificates. This set of CA certificates is called the explicit trust anchor. The /cacerts method is the first step invoked by the end-entity to ensure that the latest trust anchor is used for subsequent EST operations. The following steps show how to use curl as an EST client to issue the /cacerts operation.
Step 1: Retrieve the public CA certificate used by testrfc7030.cisco.com. Note: this step emulates the out-of-band deployment of the implicit



trust anchor:

wget http://testrfc7030.cisco.com/DST_Root_CA_X3.pem

Step 2: Use curl to retrieve the latest explicit trust anchor from the test server:

curl https://testrfc7030.cisco.com:8443/.well-known/est/cacerts \
    -o cacerts.p7 --cacert ./DST_Root_CA_X3.pem

Note: you can use a Web browser instead of wget/curl for steps 1 and 2. Enter the URL shown in step 2 into your browser. Then save the result from the Web server to your local filesystem using the filename cacerts.p7.
Step 3: Use the OpenSSL command line to convert the trust anchor to PEM format. This is necessary, because the EST specification requires the /cacerts response to be base64-encoded PKCS7. However, the PEM format is more commonly used:

openssl base64 -d -in cacerts.p7 | \
    openssl pkcs7 -inform DER -outform PEM \
    -print_certs -out cacerts.pem

The certificate in cacerts.pem is the explicit trust anchor. You will use this certificate later to establish a secure communication channel between two entities. However, first you need to provision a certificate for each entity. You’ll use OpenSSL to create a certificate request. The certificate request is called a CSR, defined by the PKCS #10 specification. You’ll also create your public/private RSA key pair when creating the CSR. You’ll use OpenSSL to create both the CSR and the key pair.
Step 4: Generate an RSA public/private key pair for the end-entity and create the CSR. The CSR will become the X.509 certificate after it has been signed by the CA. You will be prompted to supply the values for the Subject Name field to be placed in the X.509 certificate. These values include the country name, state/province, locality, organization name, common name and your e-mail address. The common name should contain the FQDN of the entity that will use the certificate. The challenge password and company name are not required and can be left blank:

openssl req -new -sha256 -newkey rsa:2048 \
    -keyout privatekey.pem -keyform PEM -out csr.p10

Generating a 2048 bit RSA private key
........+++
....................................................
....................................................
.................+++
writing new private key to 'privatekey.pem'




Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Colorado
Locality Name (eg, city) []:Aspen
Organization Name (eg, company) [Internet Widgits Pty Ltd]:ACME, Inc.
Organizational Unit Name (eg, section) []:Mfg
Common Name (e.g. server FQDN or YOUR name) []:mfgserver1.acme.org
Email Address []:[email protected]

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Now that you have the CSR and key pair, you need to send the CSR to the CA to have it signed and returned to you as an X.509 certificate. The EST /simpleenroll REST method is used for this purpose. Curl can be used again to send the CSR to the CA as a RESTful EST operation.
Step 5: Use curl to enroll a new certificate from the test server using the CSR you just generated. Normally the explicit trust anchor is used for this step. However, the test server doesn’t use the explicit trust anchor for HTTPS services. Therefore, you’ll continue to use the implicit trust anchor (DST_Root_CA_X3.pem) for this step (note: the test CA at testrfc7030.cisco.com uses a well-known user name/password of estuser/estpwd):

curl https://testrfc7030.cisco.com:8443/.well-known/est/simpleenroll \
    --anyauth -u estuser:estpwd -o cert.p7 \
    --cacert ./DST_Root_CA_X3.pem --data-binary @csr.p10 \
    -H "Content-Type: application/pkcs10"

Step 6: If successful, the curl command should place the new certificate in the cert.p7 file. The EST specification requires the certificate to be base64-encoded PKCS7. Because PEM is a more commonly used format, you’ll use OpenSSL to convert the new certificate to PEM format:

openssl base64 -d -in cert.p7 | openssl pkcs7 -inform DER \
    -outform PEM -print_certs -out cert.pem

Step 7: Finally, use OpenSSL again to confirm the content in the certificate. The Subject Name should contain the values you used to create



the CSR earlier:

openssl x509 -text -in cert.pem

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 42 (0x2a)
    Signature Algorithm: ecdsa-with-SHA1
        Issuer: CN=estExampleCA
        Validity
            Not Before: Jun  4 18:42:56 2014 GMT
            Not After : Jun  4 18:42:56 2015 GMT
        Subject: CN=mfgserver1.acme.org
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c0:4c:65:d1:6c:d2:8b:7d:37:b9:a1:67:da:7a:
                    a1:6c:4f:b9:9f:68:e0:9a:44:24:a0:aa:54:55:19:
                    c0:fc:6b:35:c5:a7:14:ed:70:e9:99:32:6a:21:19:
                    49:2b:8e:42:89:eb:9f:ec:3d:69:75:49:2f:f7:18:
                    f6:14:ed:d5:71:54:b5:0a:d0:f3:7b:8e:36:19:f1:
                    45:07:37:b9:aa:73:7c:60:bb:e1:f1:ac:b2:75:74:
                    22:9e:5d:b5:ee:13:7c:b8:31:61:c5:9a:ef:7e:07:
                    24:8d:c8:50:44:89:6d:fe:dd:e0:28:fd:80:1c:b9:
                    61:94:8d:63:cd:54:2c:a9:86:7a:3b:35:62:e9:c6:
                    76:58:fb:27:c1:bf:db:c2:03:66:e5:dd:cb:75:bc:
                    72:6c:ca:27:76:2a:f7:48:d5:3b:42:de:85:8e:3b:
                    15:f1:7a:e4:37:3c:96:b2:91:70:6f:97:22:15:c6:
                    82:ea:74:8b:f2:80:39:c1:c2:10:78:6e:70:11:78:
                    31:2f:4a:c3:c4:2b:ab:2f:4d:f2:87:15:59:88:b3:
                    17:12:1d:92:b2:6d:a6:8a:94:3f:b3:76:18:53:f9:
                    59:29:e1:9b:8c:81:41:7e:8c:a2:a7:34:c9:b4:07:
                    32:77:57:37:59:dd:fb:36:02:59:74:bb:96:6e:e7:
                    3f:b7
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            X509v3 Key Usage:
                Digital Signature
            X509v3 Subject Key Identifier:
                E2:C5:FC:55:42:D9:52:D9:81:F7:CC:6C:01:56:BF:10:35:41:7A:D8
            X509v3 Authority Key Identifier:
                keyid:EE:DE:AA:C0:5B:AC:38:7D:F3:08:26:33:73:00:3F:F3:2B:63:41:F8

    Signature Algorithm: ecdsa-with-SHA1
        30:45:02:20:1e:b6:b6:32:fa:79:de:26:c0:34:0d:a5:5c:70:
        cb:27:a3:8f:fc:9f:d2:1f:ca:5c:99:fd:d0:ff:bf:7f:51:e8:
        02:21:00:be:1f:36:b3:f6:46:65:58:eb:57:05:c3:af:4c:4a:
        0e:d1:28:e9:0b:58:e3:ac:3f:db:27:36:33:98:3f:b1:9e

At this point, you have the new certificate and associated private key. The certificate is in the file cert.pem. The private key is in the file privatekey.pem. These can be used for a variety of security protocols including TLS, IPsec, SSH and so on.

libest
Curl provides a primitive method to issue the RESTful operations of the EST enrollment process. However, the curl command-line options required to enroll a new certificate securely are cumbersome and error-prone. Additionally, curl is unable to perform the TLS channel binding




requirements defined in RFC 7030 section 3.5. There is an open-source alternative called libest. This library supports client-side EST operations required to provision a certificate. The libest library comes with a client-side command-line tool to replace the curl commands described earlier. Additionally, libest exposes an API when EST operations need to be embedded into another application. Next, I demonstrate how to use the libest CLI to enroll a certificate from the test server.
libest is available at https://github.com/cisco/libest. It’s known to work on popular Linux distributions, such as Ubuntu and CentOS. You will need to download, configure, compile and install libest to follow along. The default installation location is /usr/local/est. libest requires that you install the OpenSSL devel package prior to configuration. OpenSSL 1.0.1 or newer is required.
You’ll use the same implicit trust anchor and CSR that you used earlier when using curl as the EST client. The implicit trust anchor is located in DST_Root_CA_X3.pem, and the CSR is in csr.p10.

Heartbleed

The Heartbleed bug was a severe vulnerability in OpenSSL that was publicly announced in April 2014. The severity of this bug was due to the potential for leaking the X.509 private key of the TLS server. When a private key is leaked, previously recorded communications using the key can be compromised. For example, a compromised TLS server may have had all communications revealed since the private key was issued to it. It’s not uncommon for certificates to be issued for a year or longer. Imagine every transaction you conducted with your on-line bank during the past year being compromised. The Heartbleed bug provides motivation to use shorter validity periods when issuing X.509 certificates. EST can be used to automate the certificate renewal process to avoid interruptions in service due to certificate expiry.

Step 1: Configure the trust anchor to use with libest:

export EST_OPENSSL_CACERT=DST_Root_CA_X3.pem

Step 2: Using the same CSR used earlier with curl in the csr.p10 file, provision a new X.509 certificate for the CSR:

estclient -e -s testrfc7030.cisco.com -p 8443 \
    -o . -y csr.p10 -u estuser -h estpwd
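If you script enrollment with estclient (or curl), the interactive prompts from the earlier openssl req step get in the way. The Subject Name can be supplied on the command line instead; a sketch reusing the earlier example values (the -nodes flag is a convenience assumption here, as it leaves the private key unencrypted, so protect the key file accordingly):

```shell
# Generate the RSA key pair and CSR non-interactively
# (subject values mirror the earlier interactive example).
openssl req -new -sha256 -newkey rsa:2048 -nodes \
    -keyout privatekey.pem -out csr.p10 \
    -subj "/C=US/ST=Colorado/L=Aspen/O=ACME, Inc./OU=Mfg/CN=mfgserver1.acme.org"

# Sanity-check the request before handing it to the EST server:
openssl req -in csr.p10 -noout -verify -subject
```

The resulting csr.p10 can then be passed to estclient with -y exactly as shown above.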



Step 3: Similar to the curl example shown earlier, use OpenSSL to convert the new certificate to PEM format and confirm the contents of the certificate:

openssl base64 -d -in ./cert-0-0.pkcs7 | \
    openssl pkcs7 -inform DER -print_certs -out cert.pem
openssl x509 -text -in cert.pem

This enrollment procedure can be used on any number of end-entities. These end-entities then can use their certificate along with the explicit trust anchor to verify each other when establishing secure communications. This eliminates the need to generate self-signed certificates and manually copy those certificates among end-entities. The enrollment process could be automated using curl or libest. EST can be used for certificate renewal as well, which can automate the process of renewing certificates that are about to expire. Automating the process can facilitate the shortening of certificate validity periods, which improves the overall security posture of a PKI deployment.

Using the New Certificates
Now that you have provisioned a new certificate for the end-entity, the certificate can be used with a variety of protocols. Next, I show how the certificate can be used with OpenVPN to establish a secure communication channel between two Linux hosts. The certificate enrollment process described earlier needs to be completed on each Linux host.

OpenVPN supports a wide variety of configurations. In this example, let's use the TLS client/server model. Two Linux hosts are required. OpenVPN should be installed on both systems. One host will operate as the TLS server for VPN services. The other host will operate as the TLS client. In this example, the IP address for the physical Ethernet interface on the server is 192.168.1.35.

Listing 1. TLS Server OpenVPN Config

dev tap
ifconfig 10.3.0.1 255.255.255.0
tls-server
dh dh2048.pem
ca cacerts.pem
cert cert.pem
key privatekey.pem

Listing 2. TLS Client OpenVPN Config

remote 192.168.1.35
dev tap
ifconfig 10.3.0.2 255.255.255.0
tls-client
ca cacerts.pem
cert cert.pem
key privatekey.pem
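Both listings reference their credential files by bare name, so a quick pre-flight check on each host can confirm the enrolled files are present in OpenVPN's working directory before the daemon starts. This is just a sketch; check_config_files is an invented helper, and dh2048.pem is only needed on the server:

```shell
# Report whether each file referenced by the OpenVPN configs exists
# and is non-empty before starting the daemon.
check_config_files() {
    for f in "$@"; do
        if [ -s "$f" ]; then
            echo "ok: $f"
        else
            echo "missing or empty: $f"
        fi
    done
}

# The server side (Listing 1) also needs the DH parameters file:
check_config_files dh2048.pem cacerts.pem cert.pem privatekey.pem
```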



FEATURE Provisioning X.509 Certificates Using RFC 7030

The TAP interface is used on both the client and the server for the VPN tunnel. The server TAP interface uses the address 10.3.0.1, while the client uses 10.3.0.2. The certificates provisioned using EST are configured in the OpenVPN configuration file on each system (see Listings 1 and 2).

Step 1: Generate DH parameters for the OpenVPN server:

openssl gendh -out dh2048.pem 2048

Step 2: Start the OpenVPN server:

sudo openvpn vpnserver.conf

Step 3: Start the OpenVPN client:

sudo openvpn vpnclient.conf

Step 4: Ping across the tunnel to ensure that the VPN is working:

ping 10.3.0.1

Summary
This article has focused on the minimal client-side EST operations required to establish an explicit trust anchor and provision a new certificate. EST provides additional capabilities including certificate renewal, CSR attributes and server-side key generation. EST also provides for various client authentication methods other than using a user name/password. The client can be authenticated using SRP or an existing X.509 certificate. For example, an existing certificate on the EST client should be used when renewing a certificate prior to expiration. A good source of information for learning more about these concepts is the EST specification (RFC 7030).

This article has focused on the client side. In the future, I will look at implementing EST on the server side to front a CA. The EST protocol is a new protocol and not widely adopted at this time. Several well-known commercial CA vendors are currently implementing EST. If you use a commercial CA as part of your PKI infrastructure today, you may want to ask your CA vendor about its plans to support EST in the future.

John Foley is a Technical Leader in product development at Cisco Systems. John has worked on a variety of projects during the past 14 years at Cisco, including VoIP systems, embedded development, security and multicast. John has spent the past three years working with security protocols, including TLS, SRTP and EST. Prior to this, John worked for seven years in the IT industry doing application development and RDBMS design. John is an active contributor to the libest and libsrtp projects on GitHub.

Send comments or feedback via http://www.linuxjournal.com/contact or to [email protected].




Synchronize Your Life with ownCloud

How to build a synchronization server and keep your data where you want it.

MIKE DIEHL



Like most families these days, our family is extremely busy. We have four boys who have activities and appointments. My wife and I both have our own businesses as well as outside activities. For years, we've been using eGroupware to help coordinate our schedules and manage contacts. The eGroupware system has served us well for a long time. However, it is starting to show its age. As a Web-based groupware system, it's pretty well polished, but it doesn't hold a candle to Kontact or Thunderbird. Also, my wife finds that she needs to access her calendar from her Android phone, and eGroupware just isn't very mobile-friendly. Sure, we can set up calendar synchronization, but eGroupware seems to have added synchronization as an afterthought, and it really doesn't work as well as we'd like.

So, I started looking for a new groupware system that would allow us to access our calendars and contacts seamlessly from our smartphones, a Web browser or our favorite desktop PIM. Sure, we simply could have uploaded all of our information to a Google server. However, I may be paranoid, but I just don't want an outside corporation having personal information like who my friends are, my wife's recipe for cornbread or what I'm doing next Tuesday at 3:00pm; it's just none of their business. By hosting my own groupware server, I maintain my privacy and don't have to worry about arbitrary changes in service.

The ownCloud system has a calendar, address book, task manager, bookmark manager and file manager, among other features. These services can be accessed from any Web browser. However, ownCloud also supports the calDAV, cardDAV and webDAV standards, so synchronization with other clients should be pretty straightforward. In practice, there was a slight learning curve, but synchronization works very well. The ownCloud system also allows you to integrate third-party modules (apps) in order to add features. Apps are available that provide music and video streaming, file encryption, e-mail and feature enhancements for existing functions.

In order to install ownCloud, you need PHP, a Web server and a database server. The installation documentation walks you through configuring the Apache, Lighttpd, Nginx, Yaws or Hiawatha Web servers for use with ownCloud. For a database server, you can choose from MySQL, PostgreSQL or SQLite. It's pretty hard to have a system that doesn't meet those requirements.




The installation process is well documented, so I won't go into too much detail here. Essentially, you download and extract the tarball into a subdirectory under your Web server's htdocs directory. Then you make the Web server configuration changes indicated in the manual and restart the Web server. Basically, you're setting permissions and enabling cgi execution. Once this is done, you point a Web browser at the new installation and follow the installation wizard. I purposely neglected to make some of the file permission changes, and the wizard notified me that the permissions weren't right. The installation is really pretty straightforward.

After all of the installation is complete, you won't be able to access your new ownCloud installation. To resolve this problem, you have to edit ./config/config.php and comment out the trusted_domains line. This is a security setting that determines which domains clients are able to connect from, and by default, limits access only to localhost. I happen to think the default values are a bit strict.

After the installation is complete, point a Web browser at your ownCloud server and log in. You will be greeted with a page resembling what is shown in Figure 1. As you can see, the interface is simple. From here, you can access the calendar, contact manager, task list and so on.

Figure 1. ownCloud Web Interface
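Rather than commenting the trusted_domains line out entirely, the setting also can be extended to list every name clients will use to reach the server, which keeps the security check in place. A sketch of the relevant part of ./config/config.php; the hostnames and address below are examples:

```php
<?php
$CONFIG = array (
  // ... other settings written by the installation wizard ...

  // Every hostname or address that clients may use to reach this
  // ownCloud instance; requests for names not listed here are refused.
  'trusted_domains' => array (
    0 => 'localhost',
    1 => 'server.example.com',
    2 => '192.168.1.10',
  ),
);
```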



All of the tools are intuitive to use, but not polished enough that you'd want to use them every day. The intent is that you'd point your favorite PIM at the server and use it as an interface to your shared information.

The initial configuration should be done in a particular order. Since my initial intent was simply to test this system, I managed to do everything in the wrong order. If I had known I would be using the system as a permanent solution, I would have put more thought into its initial implementation. I still ended up with a usable system, but I've made things more complicated and harder to manage than they should have been. Let me share what I did wrong and how it could have been done better.

As soon as you get logged in as the administrator user, you should start creating users and groups. First, I would create the groups. You'll want to create a group for every group of users who need unique access capabilities. For example, I created a group for our family as a whole, and a separate group for each of our businesses. This way, when I create a calendar or address book, I can share it to just my company group, and my wife doesn't have to look at it on her PIM. I initially made the mistake of simply creating a family group and sharing everything to it. But when I created a chore list for the kids, I discovered that they also were able to see my company's calendar, which isn't what I wanted. The moral of the story is to spend the time and keep your groups as granular as possible, because users in the same group can see everything shared to it. Once you've got your groups created, you can create users and assign them to the appropriate group(s) from the pick list. In my case, I created the users first, then I had to go back and assign them to groups, which was tedious.

Next, you should start creating calendars. I thought I'd be clever




and log in as the administrative user and create a family calendar, a calendar for each of our businesses and a private calendar for each family member. This sounds reasonable until you discover that each user gets created with his or her own default calendar, which is now redundant. So, use the administrative account to create entity calendars and address books, but let each of your users share their assets themselves.

Then, create a shared documents folder. This is a pretty straightforward process. However, I would recommend that once you've created the shared space, you also create as much of the directory structure as you can reasonably foresee. Otherwise, you end up with a hodge-podge, and users won't be able to find anything when they need it, and that defeats the purpose of a shared file space.

One of the goals of this project is to be able to access the system from the LAN or from the Internet. To make this work from the LAN side, I logged in to my router, which is running OpenWRT, and configured a static hostname, which it was happy to serve to every DNS client on the network. Then, I went to my DNS registrar and configured the same FQDN, but with my router's outside IP address. Then, it was simply a matter of configuring iptables to port-forward TCP/80 to the machine that's hosting ownCloud. A reverse proxy might be more secure, but this works quite well.

I have successfully synchronized my ownCloud with Kontact, Thunderbird, Evolution, my Android phone and our iPad.

Kontact is the easiest to set up. In order to configure address book synchronization, you simply create a new cardDAV address book and point it at http://server.example.com/owncloud/remote.php/carddav/. Kontact happily will discover every shared address book to which your login has access. Similarly, by creating a calDAV calendar and pointing it at http://server.example.com/owncloud/remote.php/caldav/, you'll be able to get all of your calendars configured in one step.

Thunderbird and Evolution are the next easiest clients to configure. However, in these cases, you have to point the client to each individual asset. For example, if you have a calendar named "family", you have to point these clients to http://server.example.com/owncloud/remote.php/caldav/calendars/username/family/. You have to do this for each calendar and address book with which you want to synchronize.



To make matters a bit worse, the structure of the URL changes if the asset was shared by another user. Fortunately, ownCloud will tell you what the correct URL is for each asset. To get this information, simply edit the asset. You will see an icon that looks like a globe. If you click on that, you will be provided with the correct URL.

In order to get the iPad to synchronize, you simply create an account under settings, where it says "Mail, Contact, Calendars", and point it to the same URL mentioned above. This is pretty easy to get working even for a non-Apple user like myself. I don't have an iPhone, but I'm assuming the process is the same.

Synchronizing to the Android device requires additional software. For contact synchronization, I used "CardDAV-Sync free beta". For calendar synchronization, I used "Caldav Sync Free Beta". Once the software is installed, you simply create a corresponding account for each application under Setup. However, you have to point the software to the individual assets, just as you do for Thunderbird and Evolution. There are two potential gotchas though. Automatic synchronization isn't turned on by default, so you have to turn it on and perform an initial synchronization before you will see your calendars and contacts. Also, the Android calendar application supports multiple calendars, but you have to select which ones will be displayed. It doesn't do any good to have a perfectly functioning synchronization system that simply isn't turned on, and don't ask me how I know.

The ownCloud Web site indicates that there is a custom client available that costs $.99. I installed it to see how it works. I was a little disappointed to find that it was simply a webDAV client. I guess I was hoping it would be an integrated calendar, contacts and file client. However, once it was configured, I was able to share files from my Android directly to my file space on my ownCloud server. I did find that the client occasionally lost its configuration and had to be reconfigured, which is a bit tedious. Otherwise, the ownCloud client rounds out almost all of the synchronization features of ownCloud.

I say "almost" because ownCloud also offers a Firefox browser synchronization function. This function is supposed to allow you to synchronize your bookmarks and browser history across multiple machines. However, with the latest version of Firefox, there is no way to point Firefox to the ownCloud server. Perhaps this will be fixed with the next upgrade.




Once everything is configured, there are some operational issues. The obvious issue stems from making concurrent changes to an asset. This results in a conflict, and the various clients handle conflicts differently. To avoid problems, simply synchronize the asset before you modify it, and then re-synchronize when your changes are complete. This will ensure that everyone has the same version of each asset on their client.

I also discovered that it is very difficult to move assets from one calendar or address book to another. The various clients don't seem to do a very good job of this. So far, my attempts at organizing my contacts have resulted in duplicate contacts in different address books. I think the solution is going to involve adding the assets in question to a category, exporting the assets in that category, deleting the assets in that category and then re-importing the assets into the appropriate calendar or address book. This seems like the long way around the block, so I'm going to hold off on doing it this way until I know for sure there isn't an easier way to do it.

The rest of the difficulties involve file security. The first problem is that when a user uploads a file into his or her cloud space, that file will be owned by the Web server user. This is okay as long as you don't want to access the file from the filesystem directly or via a Samba share. In those cases, you either have to change the user name that the Web server runs as or the name that the Samba server uses to access the files. Either way, you still won't be able to access the files directly. I've not yet decided on if or how I intend to fix this. I'll probably just access the files via a Samba share or NFS mount.

The ownCloud system supports server-side encryption that can be turned on and off on a per-user basis. This leads to more problems than it's worth, in my opinion. For example, what happens when a user encrypts his or her files and then shares a directory with a user who does not? I happen to know that you get a warning from ownCloud, but I didn't spend the time to find out what actually happens, because I stumbled upon another problem. Server-side encryption pretty much breaks any possible means of file access besides webDAV. I guess that's the point of server-side encryption, but it doesn't work for the way I want/need to access my files. I ended up turning off encryption and decrypting my existing files, which was done seamlessly for me by ownCloud. The better solution might be to use an encrypted filesystem like Encfs to protect your files. With this solution, you still will be able to use Samba and


NFS to access the plain-text files on the filesystem. Also, you'll be able to upload the encrypted files to another cloud provider, such as Dropbox, as a means of backing up your files without giving up your privacy.

I have found ownCloud to be a very capable and easy-to-manage synchronization server. The actual installation process is pretty simple, so I've spent most of this article pointing out as many of the potential pitfalls as I could. Now that I have it properly configured, I am able to share calendars, contacts and files with other members of my family, no matter where they are or what client they choose to use...and I maintain complete control over my information.

Mike Diehl is an uber-nerd who has been using Linux since the days when Slackware came on 14 5.25" floppy disks and installed kernel version 0.83. He currently operates an Internet telephone company and lives in Blythewood, South Carolina, with his wife and four sons.

Send comments or feedback via http://www.linuxjournal.com/contact or to [email protected].


INDEPTH
ZFS and BTRFS
BTRFS and ZFS are two options for protecting against data corruption. Which one should you use, and how should you use it?
RUSSELL COKER

For a long time, the software RAID implementation in the Linux kernel has worked well to protect data against drive failure. It provides great protection against a drive totally failing and against the situation where a drive returns read errors. But what it doesn't offer is protection against silent data corruption (where a disk returns corrupt data and claims it to be good). It also doesn't have good support for the possibility of drive failure during RAID reconstruction. Drives have been increasing in size significantly, without comparable increases in speed. Modern drives have contiguous read speeds 300 times faster than drives from 1988 but are 40,000 times larger (I'm comparing a recent 4TB SATA disk with a 100M ST-506 disk that can sustain 500K/s reads). So the RAID rebuild time is steadily increasing, while the larger storage probably increases the risk of data corruption.

Currently, there are two filesystems available on Linux that support internal RAID with checksums on all data to prevent silent corruption: ZFS and BTRFS. ZFS is from Sun and has some license issues, so it isn't included in most Linux distributions. It is available from the ZFS On Linux Web site (http://zfsonlinux.org). BTRFS has no license problems and is included in most recent distributions, but it is at an earlier stage of development. When discussing BTRFS in this article, I concentrate on the theoretical issues of data integrity and not the practical issues of kernel panics (which happen regularly to me but don't lose any data).

Do Drives Totally Fail?
For a drive totally to fail (that is, be unable to read any data successfully at all), the most common problem used to be "stiction". That is when the heads stick to the platters, and the drive motor is unable to spin the




disk. This seems to be very uncommon in recent times. I presume that drive manufacturers fixed most of the problems that caused it.

In my experience, the most common reason for a drive to become totally unavailable is due to motherboard problems, cabling or connectors—that is, problems outside the drive. Such problems usually can be fixed but may cause some downtime, and the RAID array needs to keep working with a disk missing.

Serious physical damage (for example, falling on concrete) can cause a drive to become totally unreadable. But, that isn't a problem that generally happens to a running RAID array. Even when I've seen drives fail due to being in an uncooled room in an Australian summer, the result has been many bad sectors, not a drive that's totally unreadable. It seems that most drive "failures" are really a matter of an increasing number of bad sectors.

There aren't a lot of people who can do research on drive failure. An individual can't just buy a statistically significant number of disks and run them in servers for a few years. I couldn't find any research on the incidence of excessive bad sectors vs. total drive failure. It's widely regarded that the best research on the incidence of hard drive "failure" is the Google Research paper "Failure Trends in a Large Disk Drive Population" (http://research.google.com/pubs/pub32774.html), which although very informative, gives no information on the causes of "failure". Google defines "failure" as anything other than an upgrade that causes a drive to be replaced. Not only do they not tell us the number of drives that totally died vs. the number that had some bad sectors, but they also don't tell us how many bad sectors would be cause for drive replacement.

Lakshmi N. Bairavasundaram, Garth R. Goodson, Bianca Schroeder, Andrea C. Arpaci-Dusseau and Remzi H. Arpaci-Dusseau from the University of Wisconsin-Madison wrote a paper titled "An Analysis of Data Corruption in the Storage Stack" (http://research.cs.wisc.edu/adsl/Publications/corruption-fast08.html). That paper gives a lot of information about when drives have corrupt data, but it doesn't provide much information about the case of major failure (tens of thousands of errors), as distinct from cases where there are dozens or hundreds of errors. One thing it does say is that the 80th percentile of latent sector




errors per disk with errors is "about 50", and the 80th percentile of checksum mismatches for disks with errors is "about 100". So most disks with errors have only a very small number of errors. It's worth noting that this research was performed with data that NetApp obtained by analyzing the operation of its hardware in the field. NetApp has a long history of supporting a large number of disks in many sites with checksums on all stored data.

I think this research indicates that the main risks of data loss are corruption on disk or a small number of read errors, and that total drive failure is an unusual case.

Redundancy on a Single Disk
By default, a BTRFS filesystem that is created for a single device that's not an SSD will use "dup" mode for metadata. This means that every metadata block will be written to two parts of the disk. In practice, this can allow for recovering data from drives with many errors. I recently had a 3TB disk develop about 14,000 errors. In spite of such a large number of errors, the duplication of metadata meant that there was little data loss. About 2,000 errors in metadata blocks were corrected with the duplicates, and the 12,000 errors in data blocks (something less than 48M of data) was a small fraction of a 3TB disk. If an older filesystem was used in that situation, a metadata error could corrupt a directory and send all its entries to lost+found.

ZFS supports even greater redundancy via the copies= option. If you specify copies=2 for a filesystem, then every data block will be written to two different parts of the disk. The number of copies of metadata will be one greater than the number of copies of data, so copies=2 means that there will be three copies of every metadata block. The maximum number of copies for data blocks in ZFS is three, which means that the maximum number of copies of metadata is four.

The paper "An Analysis of Data Corruption in the Storage Stack" shows that for "nearline" disks (that is, anything that will be in a typical PC or laptop), you can expect a 9.5% probability of read errors (latent sector errors) and a 0.466% probability of silent data corruption (checksum mismatches). Typical Linux Journal readers probably can expect to see data loss from hard drive read errors on an annual basis from the PCs owned by their friends and relatives. The probability of silent data corruption is low enough that all users have a




less than 50% chance of seeing it on their own PCs during their lives—unless they purchased one of the disks with a firmware bug that corrupts data.

If you run BTRFS on a system with a single disk (for example, a laptop), you can expect that if the disk develops any errors, they will result in no metadata loss due to duplicate metadata, and any file data that is lost will be reported to the application by a file read error. If you run ZFS on a single disk, you can set copies=2 or copies=3 for the filesystem that contains your most important data (such as /home on a workstation) to decrease significantly the probability that anything less than total disk failure will lose data. This option of providing extra protection for data is a significant benefit for ZFS when compared to BTRFS.

If given a choice between a RAID-1 array with Linux software RAID (or any other RAID implementation that doesn't support checksums) and a single disk using BTRFS, I'd choose the single disk with BTRFS in most cases. That is because on a single disk with BTRFS, the default configuration is to use "dup" for metadata. This means that a small number of disk errors will be unlikely to lose any metadata, and a scrub will tell you which file data has been lost due to errors. Duplicate metadata alone can make the difference between a server failing and continuing to run. It is possible to run with "dup" for data as well, but this isn't a well supported configuration (it requires mixed data and metadata chunks that require you to create a very small filesystem and grow it).

It is possible to run RAID-1 on two partitions on a single disk if you are willing to accept the performance loss. I have a 2TB disk running as a 1TB BTRFS RAID-1, which has about 200 bad sectors and no data loss.

Finally, it's worth noting that a "single disk" from the filesystem perspective can mean a RAID array. There's nothing wrong with running BTRFS or ZFS over a RAID-5 array. The metadata duplication that both those filesystems offer will reduce the damage if a RAID-5 array suffers

WWW.LINUXJOURNAL.COM / SEPTEMBER 2014 / 87


INDEPTH

When you replace a disk in Linux software RAID,


the old disk will be marked as faulty first, and all
the data will be reconstructed from other disks.

a read error while replacing a failed array from the contents of the old
disk. A hardware RAID array can disk and from the other disks in a
offer features that ZFS doesn’t offer redundant set. BTRFS supports the
(such as converting from RAID-1 to same thing with the btrfs replace
RAID-5 and then RAID-6 by adding command. In the most common error
more disks), and hardware RAID situations (where a disk has about 50
arrays often include a write-back disk bad sectors), this will give you the
cache that can improve performance effect of having an extra redundant
for RAID-5/6 significantly. There’s disk in the array. So a RAID-5 array
also nothing stopping you from using in BTRFS or in ZFS (which they call
BTRFS or ZFS RAID-1 over a pair of a RAID-Z) should give as much
hardware RAID-5/6 arrays. protection as a RAID-6 array in a
RAID implementation that requires
Drive Replacement removing the old disk before adding
When you replace a disk in Linux a new disk. At this time, RAID-5
software RAID, the old disk will be and RAID-6 support in BTRFS is
marked as faulty first, and all the still fairly new, and I don’t expect
data will be reconstructed from other it to be ready to use seriously by
disks. This is fine if the other disks the time this article is published.
are all good, but if the other disks But the design of RAID-5 in BTRFS
have read errors or corrupt data, is comparable to RAID-Z in ZFS,
you will lose data. What you really and they should work equally well
need is to have the new disk directly when BTRFS RAID-5 code has been
replace the old disk, so the data for adequately tested and debugged.
the new disk can be read from the Hot-spare disks are commonly used
old disk or from redundancy in the to allow replacing a disk more quickly
array, whichever works. than someone can get to the server.
ZFS has a zpool replace The idea is that the RAID array might
command that will rebuild the be reconstructed before anyone even

88 / SEPTEMBER 2014 / WWW.LINUXJOURNAL.COM


INDEPTH

can get to it. But it seems to me that How Much Redundancy


the real benefit of a hot-spare when Is Necessary?
used with a modern filesystem, such The way ZFS works is that the
as ZFS or BTRFS, is that the system copies= option (and the related
has the ability to read from the disk metadata duplication) is applied on
with errors as well as the rest of the top of the RAID level that’s used for
array while constructing the new disk. the storage “pool”. So if you use
If you have a server where every disk copies=2 on a ZFS filesystem that
bay contains an active disk (which runs on a RAID-1, there will be two
is a very common configuration in copies of the data on each of the
my experience), it is unreasonably disks. The allocation of the copies is
difficult to support a disk replacement arranged such that it covers different
operation that reads from the failing potential failures to the RAID level,
disk (using an eSATA device for the so if you had copies=3 for data
rebuild isn’t easy). Note that BTRFS stored on a three-disk RAID-Z pool,
doesn’t have automatic hot-spare each disk in the pool would have
support yet, but it presumably will a copy of the data (and parity to
get it eventually. In the meantime, a help regenerate two other copies).
sysadmin has to instruct it to replace The amount of space required for
the disk manually. some of these RAID configurations
As modern RAID systems (which on is impractical for most users. For
Linux servers means ZFS as the only example, a RAID-Z3 array composed
fully functional example at this time) of six 1TB disks would have 3TB of
support higher levels of redundancy, RAID-Z3 capacity. If you then made
one might as well use RAID-Z2 a ZFS filesystem with copies=3 , you
(the ZFS version of RAID-6) instead would get 1TB of usable capacity
of RAID-5 with a hot-spare, or a out of 6TB of disks. 5/6 disks is more
RAID-Z3 instead of a RAID-6 with redundancy than most users need.
a hot-spare. When a disk is being If data is duplicated in a RAID-1
replaced in a RAID-6/RAID-Z2 array array, the probability of two disks
with no hot-spare, you are down having errors on matching blocks
to a RAID-5/RAID-Z array, so there’s from independent random errors
no reason to use a disk as a hot- is going to be very low. The paper
spare instead of using it for extra from the University of W isconsin-
redundancy in the array. Madison notes that firmware bugs

WWW.LINUXJOURNAL.COM / SEPTEMBER 2014 / 89


INDEPTH

In many deployments, the probability of the


server being stolen or the building catching
on fire will be greater than the probability of
a RAID-Z2 losing data.

can increase the probability of catching on fire will be greater than


corrupt data on matching blocks the probability of a RAID-Z2 losing
and suggests using staggered stripes data. So it’s worth considering when
to cover that case. ZFS does stagger to spend more money on extra disks
some of its data allocation to deal and when to spend money on better
with that problem. Also, it’s fairly off-site backups.
common for people to buy disks In 2007, Val Bercovici of NetApp
from two different companies for a suggested in a StorageMojo interview
RAID-1 array to prevent a firmware that “protecting online data only via
bug or common manufacturing RAID-5 today verges on professional
defect from corrupting data on two malpractice” (http://storagemojo.com/
identical drives. The probability of 2007/02/26/netapp-weighs-in-on-disks).
both disks in a BTRFS RAID-1 array During the past seven years, drives
having enough errors that data have become bigger, and the
is lost is very low. W ith ZFS, the difficulties we face in protecting data
probability is even lower due to the have increased. While Val’s claim is
mandatory duplication of metadata hyperbolic, it does have a basis in fact.
on top of the RAID-1 configuration If you have only the RAID-5 protection
and the option of duplication of (a single parity block protecting each
data. At this time, BTRFS doesn’t stripe), there is a risk of having a
support duplicate metadata on a second error before the replacement
RAID array. disk is brought on-line. However, if
The probability of hitting a failure you use RAID-Z (the ZFS equivalent
case that can’t be handled by of RAID-5), every metadata block is
RAID-Z2 but that can be handled by stored at least twice in addition to the
RAID-Z3 is probably very low. In many RAID-5 type protection, so if a RAID-Z
deployments, the probability of the array entirely loses a disk and then
server being stolen or the building has a read error on one of the other

90 / SEPTEMBER 2014 / WWW.LINUXJOURNAL.COM


INDEPTH

disks, you might lose some data but working or not?”, they are part of
won’t lose metadata. For metadata the same issue.
to be lost on a RAID-Z array, you
need to have one disk die entirely Comparing BTRFS and ZFS
and then have matching read errors For a single disk in a default
on two other disks. If disk failures configuration, both BTRFS and
are independent, it’s a very unlikely ZFS will store two copies of each
scenario. If, however, the disk failures metadata block. They also use
are not independent, you could have checksums to detect when data is
a problem with all disks (and lose no corrupted, which is much better
matter what type of RAID you use). than just providing corrupt data to
an application and allowing errors
Snapshots to propagate. ZFS supports storing
One nice feature of BTRFS and ZFS as many as three copies of data
is the ability to make snapshots blocks on a single disk, which is a
of BTRFS subvolumes and ZFS significant benefit.
filesystems. It’s not difficult to write For a basic RAID-1 installation,
a cron job that makes a snapshot of BTRFS and ZFS offer similar features
your important data every hour or by default (storing data on both
even every few minutes. Then when devices with checksums to cover
you accidentally delete an important silent corruption). ZFS offers duplicate
file, you easily can get it back. Both metadata as a mandatory feature and
BTRFS and ZFS can be configured the option of duplicate data on top of
such that files can be restored from the RAID configuration.
snapshots without root access so BTRFS supports RAID-0, which is a
users can recover their own files good option to have when you are
without involving the sysadmin. working with data that is backed
Snapshots aren’t strictly related up well. The combination of the
to the the topic of data integrity, use of BTRFS checksums to avoid
but they solve the case of accidental data corruption and RAID-0 for
deletion, which is the main reason performance would be good for a
for using backups. From a sysadmin build server or any other system
perspective, snapshots and RAID that needs large amounts of
are entirely separate issues. From temporary file storage for repeatable
the CEO perspective, “is the system jobs but for which avoiding data

WWW.LINUXJOURNAL.COM / SEPTEMBER 2014 / 91


INDEPTH

corruption is important. for future development of extreme


BTRFS supports dynamically levels of redundancy in BTRFS at
increasing or decreasing the size of some future time, but it probably
the filesystem. Also, the filesystem won’t happen soon.
can be rebalanced to use a different Generally, it seems that ZFS is
RAID level (for example, migrating designed to offer significantly greater
between RAID-1 and RAID-5). ZFS, redundancy than BTRFS supports,
however, has a very rigid way of while BTRFS is designed to be easier
managing storage. For example, if to manage for smaller systems.
you have a RAID-1 array in a pool, Currently, BTRFS doesn’t give
you can never remove it, and you good performance. It lacks read
can grow it only by replacing all the optimization for RAID-1 arrays and
disks with larger ones. Changing doesn’t have any built-in support
between RAID-1 and RAID-Z in ZFS for using SSDs to cache data
requires a backup/format/restore from hard drives. ZFS has many
operation, while on BTRFS, you can performance features and is as fast
just add new disks and rebalance. as a filesystem that uses so much
ZFS supports different redundancy redundancy can be.
levels (via the copies= setting) on Finally, BTRFS is a new filesystem,
different “filesystems” within the and people are still finding bugs in
same “pool” (where a “pool” is group it—usually not data loss bugs but
of one or more RAID sets). BTRFS often bugs that interrupt service. I
“subvolumes” are equivalent in design haven’t yet deployed BTRFS on any
to ZFS “filesystems”, but BTRFS doesn’t server where I don’t have access to
support different RAID parameters for the console, but I have Linux servers
subvolumes at this time. running ZFS in another country. Q
ZFS supports RAID-Z and RAID-Z2,
which are equivalent to BTRFS Russell Coker has been working on NSA Security Enhanced
RAID-5, RAID-6—except that RAID-5 Linux since 2001 and has been working on the Bonnie++
and RAID-6 are new on BTRFS, and benchmark suite since 1999.
many people aren’t ready to trust
important data to them. There is
no feature in BTRFS or planned for Send comments or feedback via
the near future that compares with http://www.linuxjournal.com/contact
RAID-Z3 on ZFS. There are plans or to [email protected].
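The capacity arithmetic behind the RAID-Z levels and the copies= setting discussed above can be checked with a short sketch. This is a back-of-the-envelope model of my own, not ZFS code: it ignores metadata overhead and assumes equal-size disks.

```python
def raidz_capacity(disks, size_tb, parity):
    # RAID-Z loses one disk's worth of space per parity level:
    # RAID-Z -> 1 parity disk, RAID-Z2 -> 2, RAID-Z3 -> 3.
    return (disks - parity) * size_tb

def with_copies(capacity_tb, copies):
    # The ZFS copies= setting stores each block N times,
    # dividing the usable capacity accordingly.
    return capacity_tb / copies

# The article's example: six 1TB disks as RAID-Z3, then copies=3.
raw = raidz_capacity(6, 1, parity=3)    # 3TB of RAID-Z3 capacity
usable = with_copies(raw, copies=3)     # 1TB usable out of 6TB of disks
print(raw, usable)
```

The same two functions reproduce the simpler cases too, for example RAID-Z2 over the same six disks leaving 4TB before any copies= reduction.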

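The claim that matching-block errors on a RAID-1 pair are very unlikely, and that duplicate copies multiply per-block loss probabilities together, follows from a simple independence model. The block size and error counts below are illustrative assumptions, not figures from the article.

```python
def block_loss_probability(p, copies):
    # A block is lost only if every copy is bad; with independent
    # per-copy loss probability p, that is p ** copies.
    return p ** copies

def expected_matching_bad_blocks(errors_a, errors_b, total_blocks):
    # Expected number of positions where both disks of a RAID-1 pair
    # have a bad block, assuming errors fall uniformly and independently.
    return errors_a * errors_b / total_blocks

# Illustrative assumptions: 4KB blocks on a 2TB disk, 200 bad sectors per disk.
blocks = (2 * 10**12) // 4096
print(block_loss_probability(1e-6, 2))
print(expected_matching_bad_blocks(200, 200, blocks))
```

As the article notes, the model breaks down when failures are correlated (shared firmware bugs, common manufacturing defects), which is exactly why staggered allocation and mixed-vendor arrays are suggested.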



Introducing pi-web-agent, a Raspberry Pi Web App

A Web application allowing everyday users to control the Raspberry Pi.

VASILIS NICOLAOU, ANGELOS GEORGIADIS, GEORGIOS CHAIREPETIS and ANDREAS GALAZIS

The pi-web-agent is a Web application that aims to give new users experience with the Raspberry Pi and potentially with any Linux distribution on any architecture. Raspberry Pi has introduced Linux to a lot of people, and we want to enhance their experience further. This project also demonstrates the extensibility capabilities of Linux. What we provide enables you to install the pi-web-agent as soon as you have Raspbian (http://www.raspbian.org) installed. You'll simply be able to open your usual browser from your everyday machine and start interacting with your Pi.

The Product

The idea behind pi-web-agent is to support desktop environment functionality on browsers and provide extensions that behave similarly to mainstream desktop applications. If you have used GNOME or KDE, you will have noticed that each provides its own set of applications. pi-web-agent is similar to Webmin (http://www.webmin.com), with the difference being that pi-web-agent targets everyday users. We want to give users a desktop experience through their browsers.

Main Features

The moment you launch pi-web-agent (after changing your password of course), you will realize that it already provides a lot of functionality despite its youth. On the left-hand side, you'll see some information concerning your Pi, such as its temperature, kernel version, update notifications, disk and memory usage.

The home screen (Figure 1) will check whether an update for the pi-web-agent is available. If it is, a button will appear that will download and install the new version of the application for you.

Figure 1. pi-web-agent Home Screen

The navigation menu reveals the standard extensions that every desktop environment provides for users to interact with the underlying operating system, including power management for shutting down and restarting, and a package management section where we provide some of our favourite and useful applications with the capability to install them, enabling you to get started fast. One of the most important extensions is VNC, which provides the TightVNC server and gives you access to your real Pi desktop with the Glavsoft TightVNC client Java applet.

The firewall management, despite its early stage, enables you to avoid the fuss of many complicated options that iptables provides from the command line (until you become an expert). You can set rules for various chains, block certain IP addresses or allow connections through different protocols (Figure 2).

Figure 2. Firewall View When Clicking on a Certain Chain

Clicking the "Other" tab will reveal more extensions. The camera controller enables you to take snapshots and even begin your own live stream!

We also provide a media player (tagged as radio, since it started as such), where you can provide a URI of an audio file or a streaming radio channel. Your Raspberry Pi will start playing the audio, so get ready to attach your HD speakers!

You also can play an audio file straight from your Pi, but don't bother typing the URI in the text box. Find it through the file browser we provide. Although it's simple in functionality for now, it enables you to browse through your files, and download or choose to play audio files with the pi-web-agent's media player extension. (Note: this functionality is available only from the development branch on GitHub, but it will be available in version 0.3.)

The media player extension is an Mplayer port that also enables you to control the sound with an equalizer (Figure 3).

Figure 3. Media Player View When Listening to an Audio File or Streaming

If you want to be more hard-core and play with some wires and LEDs, we provide an extension for controlling the GPIO pins on your Pi (check out the interface in Figure 4). You also can check for updates or update your system and turn services on or off. There is more to come, so keep yourself posted via our Facebook page (https://www.facebook.com/piwebagent). We want you to forget you are using a Web browser and not bother clicking the VNC option.

Figure 4. The GPIO module gives you control over the Raspberry Pi GPIO pins.

How It Works

The pi-web-agent is served by Apache (http://www.apache.org). Some claim that Apache is big and heavy, but we believe it has the most chances of incorporating cutting-edge technology. If you don't know what this means, check out mod_spdy with the SPDY protocol from Google (https://code.google.com/p/mod-spdy). We also can choose among a variety of modules that can increase pi-web-agent's potential. For those of you who want a lighter HTTP server, we are going to release a pi-web-agent modification after the API is complete.

The core is written in Python and interacts via the Common Gateway Interface (CGI) to provide dynamic content. Our goal is also to provide an API, so almost every module is able to generate JSON data, making the application highly extensible both for us and for third-party developers. This will give us a lot of flexibility for future products we plan to deliver. The styling currently is achieved with bootstrap (http://getbootstrap.com) using the Flatly (http://bootswatch.com/flatly) theme. User interaction mainly is achieved via JavaScript calls and rendering the document on the client side (most of the time). Our rule of thumb is to move as much processing from the Raspberry (server side) to the more powerful machine that runs the browser (client side).

Current Status

pi-web-agent is in a very early stage. It started in October 2013 at HackManchester (http://www.hackmanchester.com), winning the University challenge prize. The first version (0.1) was released on December 27, 2013, and the second release (0.2) followed in April 2014.

The current stage of development has three phases:

- The development of version 0.3, which will introduce framework improvements. This also involves better extension management. As the number of provided extensions grows, we want users to choose what they want, and install and uninstall them at will. This will significantly reduce long installation times caused by dependencies (dependencies of the camera controller introduced 100MB of dependency packages).

- The development of the pi-web-agent for Android, which also includes the optimization of the API.

- The design and development of the pi-web-agent version 1.0. We want this to look like a real desktop environment, and we also want to achieve this in no more than a year.

pi-web-agent is open source, and it's easy for you to get involved—just fork our main git repository (http://www.github.com/vaslabs/pi-web-agent), and send us your changes through pull requests.

Using pi-web-agent

Imagine your Raspberry Pi has just arrived and you have installed Raspbian on your SD card. Even if you don't have much experience with Linux and the command line, worry no more. You can connect to your Pi with SSH and install the pi-web-agent, which will help you in your first steps. While you become more experienced with the Pi and Linux, the pi-web-agent will grow with you, giving you more powerful capabilities and making your interaction with your Pi more enjoyable.

The most difficult task you'll face is the installation process, especially if you run a headless Debian distribution on your Raspberry Pi. You won't be able to avoid executing commands (until we release a Raspbian mod with the pi-web-agent included). You need to connect with your Pi via SSH. There are two ways to install the pi-web-agent, which are described below.

Installing through pistore

If you are using a Linux machine, it's easy. Just do:

ssh -X pi@raspberrypi

The -X will enable you to execute graphical applications. Provide your password to the prompt, and then launch the pistore by typing the following and then pressing Enter:

pistore

When pistore opens, just register and search for pi-web-agent. Everything else is straightforward.

Installing via the Command Line

If you are not on a Linux machine, or if your distribution is headless, you still can install pi-web-agent easily. The following commands will fetch and install pi-web-agent:

wget https://github.com/vaslabs/\
pi-web-agent/archive/0.2-rc-1.zip
unzip 0.2-rc-1.zip
cd pi-web-agent-0.2-rc-1
./install.sh
./run.sh

Troubleshooting

We've started a discussion on Reddit that covers a lot of troubleshooting, thanks to users' questions (http://www.reddit.com/r/raspberry_pi/comments/249j4r/piwebagent_control_your_pi_from_the_ease_of_your). You can find guidelines on how to install under various circumstances and how to resolve problems that others already have faced. All the issues identified in this discussion have been resolved, but if you face a new one, just post a new comment.

Supported Platforms

The pi-web-agent framework is based on the micro-CernVM (Scientific Linux) appliance agent framework developed at CERN in summer 2013 (https://github.com/cernvm/cernvm-appliance-agent). We developed pi-web-agent based on that framework. We've modified it to work on Raspbian and cover the needs of the Raspberry Pi users. However, it is possible to use it on various Linux distributions with minor modifications concerning Apache configuration and replacing Raspberry Pi-specific modules, such as the Update and the GPIO. We plan to release pi-web-agent version 1.0 for Raspbian, Pidora and Arch. Until then, the only officially supported platform is Raspbian.

Developing on pi-web-agent

It is possible to extend pi-web-agent by adding new Python modules. Upon creating a Python module, you'll find that the best way to work is to follow the structure below:

import os
import sys

# Point MY_HOME at the pi-web-agent installation and make the
# framework package importable before using it.
if 'MY_HOME' not in os.environ:
    os.environ['MY_HOME'] = '/usr/libexec/pi-web-agent'
sys.path.append(os.environ['MY_HOME'] + '/etc/config')
from framework import output

def main():
    output('Title', 'Hello my first module')

if __name__ == "__main__":
    main()

There are a lot of modules and methods you can use to get the most out of the framework. The most important is the output, which takes care of what's appearing on your browser. You can give two arguments, the title and the HTML, as the main content of your extension.

Framework Overview

The framework is composed of various Python modules and configuration files. The configuration files initially were in XML format, but they have been converted to JSON format, which reduced the codebase by a significant amount.

The core module is framework.py, placed in the /usr/libexec/pi-web-agent/etc/config directory. This module uses view.py and menu.py, which also use HTMLPageGenerator.py and BlueprintDesigner.py to construct the Web site skeleton. All the information about which modules are in use is in config.cfg in the same directory with the framework.py.

Adding Features

When you create your first module, you'll need to register it to the config.cfg file in order to be placed in the navigation menu. You'll also find that you can declare its version as Alpha or Beta. More help options will be added soon, such as dependencies (next version 0.3) of a corresponding feature.

When a feature is added to the configuration file, the framework places it in the navigation menu with the URL provided in that file. There are two types of URL links: one for reloading the whole page and one for updating just the extension view (append ?type=js). Since version 0.2, we started using the second format, and the first format is deprecated.

When clicking to select a feature, a JavaScript routine is triggered that loads the content of the extension in the appropriate area, where the user can interact with it. It is important to note that all user interface renderings will be performed on the client side exclusively by version 1.0.

The Future

Reading through this article, you will have noticed that there are a lot of things pending and even more that can be improved. This is our goal: to develop a solid application, not only to satisfy users, but also to provide a good environment for other developers to extend or build on top of the pi-web-agent. That's why we started multiple spin-off projects.

We have started a bunch of projects in order to make life easier for us, users and third-party developers. We created a benchmark (https://github.com/azardilis/testing-fw) that gives us the loading times of each feature. We also started a plugin for the gedit text editor (the "official" text editor of our dev team) to automate the creation and deployment of pi-web-agent extensions.

Last but not least, we are developing pi-web-agent for Android. Not only have we started this application to increase user satisfaction, but it also is the driver for the pi-web-agent API, which will be given out officially, ready and documented, for third-party developers to build on. In addition, the API will be solely used for the creation of pi-web-agent version 1.0.
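The registration record described under Adding Features can be pictured with a small sketch. The field names and record shape below are guesses for illustration only; the article does not reproduce the actual config.cfg schema, so treat this as a hypothetical example of the kind of JSON entry involved, plus the two URL formats.

```python
import json

def feature_entry(name, url, version="Alpha"):
    # Hypothetical shape of a config.cfg feature record: a name,
    # a URL for the navigation menu and an Alpha/Beta version tag.
    return {"name": name, "url": url, "version": version}

def extension_view_url(url):
    # Since version 0.2, appending ?type=js requests an update of
    # just the extension view instead of reloading the whole page.
    return url + "?type=js"

entry = feature_entry("Hello", "/pi-web-agent/hello")
print(json.dumps(entry))
print(extension_view_url(entry["url"]))
```

The whole-page-reload format would simply be the bare URL, which is the variant the article says is now deprecated.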




What Needs to Be Done

pi-web-agent is in an early stage, but it needs support from special tools to speed up the development process. Now that the framework has started to be more stable in terms of changes, we need to finish the gedit development plugin.

Next, we need to finish the pi-web-agent API very soon, and the pi-web-agent for Android will help us shape a well-defined and documented API. Then, we will extend the framework to encapsulate the API modules both on client and server side. We plan to create the client-side framework using Dart (https://www.dartlang.org).

We also plan to start a new spin-off project that will be a Web site for hosting extensions for pi-web-agent, which users will be able to install via the Web application.

Our Team's Goals

Following the Android model, we want to build a platform to act as a desktop environment for the Raspberry Pi, which will be able to expand via extensions or Activities. A different Web site will act as a market and host those extensions. Developers will be able to register and publish their own extensions.

Get Involved

The project is small—as it should be at this point. We want you, the users, to get involved before we start over-engineering things and making the pi-web-agent a big ugly piece of code that doesn't meet your needs.

There are a lot of ideas that need to be implemented. The pi-web-agent is already a product that is released in approximately a three-month cycle. You can become involved by sending e-mail with recommendations, following us on GitHub, and forking and contributing to the repository. If you are new to any of these, don't worry; just send us your questions, and we'll get you started.

If visualizing a desktop environment in a Web browser is not so exciting for you, you can contribute to any of the spin-off projects that aim to boost the pi-web-agent development process. Here is a list of all the projects and their repositories:

- pi-web-agent: http://www.github.com/vaslabs/pi-web-agent
- Benchmark for testing the framework: https://github.com/azardilis/testing-fw
- pi-web-agent for Android: https://github.com/vaslabs/pi-android-agent
- pi-web-agent gedit development plugin

Conclusion

The pi-web-agent is a product that aims to replace the desktop environment with a Web-based alternative. HTML5 and CSS3 technology has made this possible. Along with the Linux extensibility, we chose to start with the Raspberry Pi, as it's the perfect platform for educational and experimental purposes. The Raspberry Pi also has the resource limitations we need, with the idea that if it runs fast on a Raspberry Pi, it will be super-fast on mainstream machines.

We already are developing a new design that brings the feel of a mainstream desktop environment. Figure 5, which demonstrates the dock navigation menu and the windowing system, provides a sneak peek of the new design.

Figure 5. Sneak peek of the pi-web-agent version 1.0: a simple window with content inside and a dock as the navigation menu.

As we mentioned already, we also need a lot of help. If you are a Python/CSS/HTML/JavaScript expert with some free time and a passion for open source, don't hesitate to contact us and join our team.

Acknowledgements

We want to give credit and a kind thank you to all the people that helped shape the pi-web-agent:

- Kyriacos Georgiou
- Maria Charalambous
- Argyris Zardylis
- Iliada Eleftheriou
- Theodoros Pertsas

Vasilis Nicolaou, pi-web-agent's founder, is maybe the biggest Linux lover on the team, at his job and maybe in other groups and places. He even knows what Linus Torvalds does daily better than Linus himself. Vasilis loves open source, Python, Java (don't tell that to Linus), Raspberry Pi and new technology in general, which (as Linus says) is what keeps him interested in programming. He graduated from the University of Manchester with an MEng degree in Software Engineering.

Angelos Georgiadis is a Java programmer who became a Python expert for the sake of pi-web-agent. He probably is the best chance of keeping pi-web-agent alive if Vasilis is hit by a bus tomorrow. He is the number-one suspect when something breaks in pi-web-agent and is probably responsible, since he has blown the repository quite a few times (17-ish).

Georgios Chairepetis is a young programming enthusiast, currently studying for an MSc in Software Engineering at the University of Manchester. He got involved with the rest of the pi-web-agent team initially by taking part in a hackathon contest and was lucky enough to win an award in the first competition he ever attended. He enjoys staying inside on Saturdays doing some programming with friends, but he also likes to go outside and enjoy the sunshine, maybe with some beer, when he has the chance.

Andreas Galazis has been a junior Web developer for six months. The fact that his favourite Linux application is Mplayer is somewhat misleading, as he spends most of his time coding rather than watching movies or listening to music, but when he does, he wants to do it the proper way. When he heard about pi-web-agent, he decided to join forces to develop an extension to demonstrate the power of his favourite media player.

Send comments or feedback via http://www.linuxjournal.com/contact or to [email protected].

WWW.LINUXJOURNAL.COM / SEPTEMBER 2014 / 105


EOF

Stuff That Matters

DOC SEARLS

Are we going to get real about privacy for everybody—or just
hunker in our own bunkers?

I'm writing this in a hotel room entered through two doors. The hall
door is the normal kind: you stick a card in a slot, a light turns
green, and the door unlocks. The inner one is three inches thick, has
no lock and serves a single purpose: protection from an explosion.
This grace is typical of many war-zone prophylaxes here in Tel Aviv,
Israel's second-largest city. The attacks come in cycles, and the one
going on now (in mid-July 2014) is at a peak. Sirens go off several
times a day, warning of incoming rockets from Gaza. When that
happens, people stop what they're doing and head for shelters. If
they're driving, they get out of their cars and lie on the ground
with their heads covered.

Since I got here, I have joined those throngs three times in small,
claustrophobia-inducing shelters—once in my hotel, once in an
apartment house where I was visiting friends and once in a bunker
entered through a men's room at the beach. After gathering behind a
heavy door, everyone in the shelter tensely but cheerfully waits to
hear a "boom" or two, then pauses another few minutes to give
shrapnel enough time to finish falling to the ground. Then they go
outside and return to whatever they were doing before the
interruption.

But not everybody bothers with the shelters. Some go outside and look
at the sky. I was one of those when I shot the photo shown in
Figure 1 from the front porch of my hotel, a few moments after
hearing a pair of booms.

The photo tells a story in smoke of two incoming Hamas rockets from
Gaza, intercepted by four Israeli missiles. The round puffs of smoke
mark the exploded rockets. The parallel trails mark the paths of the
interceptors. These are examples of the




Figure 1. Two incoming Hamas rockets from Gaza, intercepted by four Israeli missiles.

"Iron Dome" (http://en.wikipedia.org/wiki/Iron_Dome) at work.

People here monitor rocket attacks using a free mobile app called Red
Alert. The one on my phone tells me there were 27 rocket attacks
fired from Gaza on Israel yesterday, including the long-range ones I
saw intercepted over Tel Aviv. (Most are short-range ones targeted at
cities closer to Gaza.)

Meanwhile, Linux Journal and its readers also are under an attack
(http://daserste.ndr.de/panorama/xkeyscorerules100.txt), of sorts, by
the NSA. Here are the crosshairs at which we sit, in an NSA
surveillance system called XKEYSCORE:

    // START_DEFINITION
    /*
    These variables define terms and websites relating to the
    TAILs (The Amnesic Incognito Live System) software program,
    a comsec mechanism advocated by extremists on extremist forums
    */

    $TAILS_terms=word('tails' or 'Amnesiac Incognito Live System')
        and word('linux' or ' USB ' or ' CD ' or 'secure desktop'
        or ' IRC ' or 'truecrypt' or ' tor')
    $TAILS_websites=('tails.boum.org/') or
        ('linuxjournal.com/content/linux*')
    // END_DEFINITION

    // START_DEFINITION
    /*
    This fingerprint identifies users searching for the TAILs
    (The Amnesic Incognito Live System) software program, viewing
    documents relating to TAILs or viewing websites that
    detail TAILs
    */
    fingerprint('ct_mo/TAILS')=
        fingerprint('documents/comsec/tails_doc') or
        web_search($TAILS_terms) or
        url($TAILS_websites) or html_title($TAILS_websites)
    // END_DEFINITION
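To make the quoted rule concrete, here is a minimal sketch in Python of how such selector matching could work. This is an illustration only; the term and site lists are copied from the published rule, but the function names and the matching logic are my guesses at its semantics, not NSA code.

```python
# Illustrative sketch only: lists come from the published XKEYSCORE rule,
# but the matching semantics below are assumed, not the real implementation.
TAILS_TERMS = ("tails", "amnesiac incognito live system")
TAILS_CONTEXT = ("linux", " usb ", " cd ", "secure desktop",
                 " irc ", "truecrypt", " tor")
TAILS_WEBSITES = ("tails.boum.org/", "linuxjournal.com/content/linux")

def matches_tails_terms(query: str) -> bool:
    """Mimic word(...) and word(...): a TAILS term plus one context term."""
    q = " %s " % query.lower()  # pad so ' usb '-style terms match at edges
    return any(t in q for t in TAILS_TERMS) and any(c in q for c in TAILS_CONTEXT)

def matches_tails_websites(url_or_title: str) -> bool:
    """Mimic url(...) / html_title(...): substring match on listed sites."""
    u = url_or_title.lower()
    return any(site in u for site in TAILS_WEBSITES)

def fingerprint_tails(event_type: str, value: str) -> bool:
    """Mimic fingerprint('ct_mo/TAILS'): flag a search or a page visit."""
    if event_type == "web_search":
        return matches_tails_terms(value)
    if event_type in ("url", "html_title"):
        return matches_tails_websites(value)
    return False
```

Under these assumed semantics, a search such as "tails linux live usb" or a visit to tails.boum.org gets flagged, which is exactly why this magazine's readers end up in the crosshairs.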




Those details come via a story on the German site Tagesschau
(http://www.tagesschau.de/inland/nsa-xkeyscore-100.html). On our Web
site, Kyle Rankin explains what's going on
(http://www.linuxjournal.com/content/nsa-linux-journal-extremist-
forum-and-its-readers-get-flagged-extra-surveillance):

    XKEYSCORE uses specific selectors to flag traffic, and the
    article reveals that Web searches for Tor and Tails—software
    I've covered here in Linux Journal that helps to protect a
    user's anonymity and privacy on the Internet—are among the
    selectors that will flag you as "extremist" and targeted for
    further surveillance. If you just consider how many Linux
    Journal readers have read our Tor and Tails coverage in the
    magazine, that alone would flag quite a few innocent people
    as extremist.

BoingBoing also was targeted. Writes Cory Doctorow
(http://boingboing.net/2014/07/03/if-you-read-boing-boing-the-n.html),
"Tor and Tails have been part of the mainstream discussion of online
security, surveillance and privacy for years. It's nothing short of
bizarre to place people under suspicion for searching for these
terms."

Both kinds of attacks bring defensive responses. In the physical
space above Israel, Iron Dome is fully developed and amazingly
effective. In the cyber space surrounding us all on the Net, we have
little to protect us from unwelcome surveillance. As the XKEYSCORE
story shows, just looking for effective privacy help (such as Tor and
Tails provide) places one under suspicion.

My current road trip began in London, where there are surveillance
cameras in nearly all public spaces. These were used to identify the
perpetrators of the bombings on July 21, 2005
(http://en.wikipedia.org/wiki/21_July_2005_London_bombings). Similar
cameras also identified suspects in the Boston Marathon bombings of
April 15, 2013 (http://en.wikipedia.org/wiki/Boston_bombing). It is
essential to note, however, that these cameras are not in people's
homes, cars and other private spaces (at least not yet).

Our problem with the Net is that it is an entirely public space. In
that space, Tor (https://www.torproject.org) and Tails
(https://tails.boum.org) are invisibility cloaks, rather than the
cyber equivalents of clothing, doors, windows and other
well-understood and widely used ways of creating and maintaining
private spaces.



We do have some degree of privacy on our computers and hard drives
when they are disconnected from the Net, and through public key
cryptography (http://en.wikipedia.org/wiki/Public_key_cryptography)
when we communicate with each other via the Net. But both are
primitive stuff—the cyber equivalents of cave dwellings and sneaking
about at night wearing bear skins.

It should help to recognize that we've been developing privacy
technologies in the physical world for dozens of thousands of years,
while we've had today's version of cyber space only since 1995, when
ISPs, graphical browsers and the commercial Web first came together.

But living in early days is no excuse for not thinking outside the
boxes we've already built for ourselves. Maybe now that wizards are
in the crosshairs—and not just the muggles—we'll do that. Got some
examples? Let's have them.

Doc Searls is Senior Editor of Linux Journal. He is also a fellow
with the Berkman Center for Internet and Society at Harvard
University and the Center for Information Technology and Society at
UC Santa Barbara.

Send comments or feedback via http://www.linuxjournal.com/contact
or to [email protected].

Advertiser Index

Thank you as always for supporting our advertisers by buying their
products!

ADVERTISER                               URL                                PAGE #
All Things Open                          http://allthingsopen.org           2
AnDevCon                                 http://www.AnDevCon.com           45
Drupalize.me                             http://www.drupalize.me           73
EmperorLinux                             http://www.emperorlinux.com       37
High Performance Computing Shows 2014    http://www.flaggmgmt.com/hpc      7
Percona                                  http://www.percona.com/live       93
Silicon Mechanics                        http://www.siliconmechanics.com   3
SPTechCon                                http://www.sptechcon.com          27

ATTENTION ADVERTISERS

The Linux Journal brand's following has grown to a monthly readership
nearly one million strong. Encompassing the magazine, Web site,
newsletters and much more, Linux Journal offers the ideal content
environment to help you reach your marketing objectives. For more
information, please visit http://www.linuxjournal.com/advertising.



